LON01 – EasyXDSL – 21/07/2016

We have been made aware of issues on the BT Wholesale network whereby customers connecting through London may experience drops in connectivity before reconnecting to our network via other gateways.

If you are experiencing issues reconnecting, please power down your equipment for a few minutes to clear any stuck sessions.

BT are investigating the issue and updates will be provided when we have them.

We apologise for any inconvenience this may cause.

UPDATE01 – 09:29

We have discovered that the data centre that experienced power issues for BT yesterday may be suffering issues again this morning.

LON01 – EasyXDSL – 21/07/2016 – 22:30 – *Maintenance*

We have been advised by one of our fibre providers that they will be conducting maintenance at Goswell Road between 23:00 and 03:00 on 21/07/2016. This work will affect one of the fibre waves we use for DSL termination between Goswell Road and Telehouse North. To avoid any possible extended disruption we will be moving traffic away from this link at 22:00 on the day, with the view of bringing it back into service the following day once we have been advised the works are complete.
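Draining traffic off a link ahead of a maintenance window is typically done by raising the link's IGP metric so the routing protocol prefers the alternative paths. A minimal Cisco-style sketch of the idea (the interface name and metric are illustrative, not our actual configuration):

```
! Drain traffic from the Goswell Road - Telehouse North wave
! by making the link unattractive to OSPF before the window.
interface TenGigabitEthernet0/1
 description WAVE-GSW-THN (maintenance 23:00-03:00)
 ip ospf cost 65535        ! maximum cost; traffic shifts to backup paths
!
! Once the works are confirmed complete, restore the original cost:
! ip ospf cost 10
```

Because the link stays up while costed out, traffic can be returned without a further outage.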

UPDATE01 – 22:30

After reviewing the maintenance window from our supplier again, we have not been advised of a reload of any hardware this time. We have therefore agreed to leave traffic on this link. Should a reload occur, our network will act accordingly.

EasyIPT – Interconnect – 10/07/2016 – 10:25

Our network monitoring has alerted us to a loss of peering with one of our primary wholesale SIP carriers. Traffic has re-routed as expected and calls are flowing.

Our engineers are investigating.

UPDATE01 – 13:00
Our engineers have confirmed the fault is not within our network and have logged a priority 2 fault with our fibre provider as well as the affected wholesale SIP carrier.

Once we know more we will provide an update. At the moment this is not service affecting as traffic is automatically re-routing via alternative paths.

UPDATE02 – 14:05
Our network monitoring has alerted us to further failures within this wholesale SIP carrier. The failures now extend to their SIP gateways, resulting in us losing all connectivity with them.

Further redundancy on our side has automatically kicked in to re-route outbound calls around that carrier; however, numbers provisioned via this carrier are offline. We have escalated this to a priority 1 fault with them.

UPDATE03 – 14:20
We are seeing endpoints at our supplier bounce up and down. We can therefore only conclude that there is a serious issue with our supplier.

UPDATE04 – 14:33
We have been provided a notice via our supplier's internal website that they have suffered “multiple failures at one of our data centers (THN)”. Their engineers are onsite trying to resolve the issues.

We will keep pressing for updates and update as we know more. We apologise for any inconvenience this may cause.

UPDATE05 – 15:18
Our network monitoring has advised that our endpoints have now come back into service; however, our private interconnect is still down and traffic is routing via the public internet. We are still awaiting official notification.

UPDATE06 – 15:24
We spoke too soon: endpoints have stopped responding once again.

UPDATE07 – 16:51
We have observed a period of stability to our endpoints, along with our interconnect also coming back into service. Our management team have also been involved due to the lack of support from our supplier, and we will be engaging with them on Monday morning. We have still not had an official confirmation and continue to chase.

EasyIPT – Caller Display – 07/07/2016

We are aware that one of our carriers has been rejecting outbound calls where the caller ID is withheld, resulting in call setup failing where the call is routed via this carrier.

We are working with them to establish why they are rejecting calls sent in this way. Until then, and to ensure calls connect without issue, we have assigned non-callable outbound numbers to each account that has an anonymous setup.

Calls returned to this number will be met with a non-acceptance message.

LON01 – Level3 – 30/06/2016 – 15:30

We are aware of a problem with one of our primary transit interconnects with Level3. This is affecting services across the network that traverse that link.

We are currently looking into this as a priority.

UPDATE01 – 15:40
After reviewing our network we can confirm this is a problem with our Adapt transit link and not the Level3 side, and we are taking steps to shut down this session.

UPDATE02 – 15:45
We have received notice from Adapt that the network issue on their side is a result of a high-level DDoS attack on part of their network.

UPDATE03 – 15:49
Adapt have confirmed the traffic has been mitigated and we have re-enabled the link.

UPDATE04 – 15:50
CORE02 unexpectedly reloaded when this link was re-enabled, due to a full BGP table being received and not filtered correctly; this affected DSL services terminating on this core. DSL services re-routed to CORE01 as expected. We apologise for any inconvenience this may have caused.
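A common safeguard against an unexpected full table overwhelming a router in this way is a prefix limit or inbound filter on the BGP session, so the session is torn down (or a warning raised) before the router's memory is exhausted. A minimal Cisco-style sketch of the technique (the neighbour address, AS numbers and limits are illustrative only, not our live configuration):

```
router bgp 65000
 ! Hypothetical transit neighbour; addresses/ASNs are examples.
 neighbor 192.0.2.1 remote-as 64600
 ! Tear down the session if more than 500,000 prefixes are received,
 ! warning at 80% of the limit, rather than accepting a full table.
 neighbor 192.0.2.1 maximum-prefix 500000 80
 ! Alternatively, filter inbound to accept only a default route:
 ! neighbor 192.0.2.1 prefix-list DEFAULT-ONLY in
!
ip prefix-list DEFAULT-ONLY permit 0.0.0.0/0
```

Either measure keeps a misbehaving peer from pushing a core device past its table capacity.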

UPDATE05 – 15:55
CORE02 has recovered and is accepting connections again.

LON01 – Access Switches – 27/06/2016 – 21:12 *COMPLETE*

We have identified a security bug within the core firmware installed and running on the following access switches within our network:

primary-sw.r02
backup-sw.r02
primary-sw.r03
backup-sw.r03
primary-sw.r04
backup-sw.r04

Due to the nature of this security bug we have had little option but to update these devices immediately. This update required the above switches to be reloaded so the new IOS could be loaded. Services with redundant network links to other parts of our network would have seen no disruption.

Other switches are unaffected and this work is now complete.

We apologise for any inconvenience caused.

LON01 – EasyXDSL – 21/06/2016 – 15:32

We are aware that lns01.dsl.structuredcommunications.co.uk crashed as a result of a kernel panic within the OS. Circuits connected to this LNS would have suffered a drop while they were re-routed to a backup LNS within our network.

We are currently engaged with our hardware vendor as to the root cause.

lns01.dsl.structuredcommunications.co.uk has recovered to an operational state and is accepting connections again.

We apologise for any inconvenience caused.

UPDATE 01 – 11:00 – 22/06/2016
We have followed up with our hardware vendor as to the progress of this crash report.

LON01 – EasyXDSL – 10/06/2016 – 16:38

We are aware that one of our L2TP tunnels to our supplier has reset and dropped several hundred DSL sessions. Affected circuits have re-routed and are back online. Any customers still experiencing problems are advised to power down their hardware for 20 minutes.

Initial reports advise this is / was a BT Wholesale problem.

We apologise for any inconvenience caused.

LON01 – 4th Floor suite / network migration – 01/06/2016 – 03/06/2016 – 20:00 – 03:00

Over the past 4-5 years we have continued to invest in our network, with the continued aim of providing our own services and not simply reselling, as many communication providers do and as we ourselves once did!

Growth over the years has consumed a large amount of physical rack space within our own private suite at Level3, Goswell Road. We have taken steps over the past 6 weeks to secure a larger suite within Goswell Road and have already started the complex task of moving our network to this new area.

Over the past 2 weeks we have been adding additional servers at key strategic network locations in order to replace legacy setups and provide continued service during the upcoming works, which are detailed below.

Our core network is fully redundant, as are many of the services we provide as standard (on more complex products this can be provided at additional cost); however, as we are physically moving core devices and hardware, there will unfortunately be some disruption to services. Works are being done out of hours to minimise this.

Due to various 3rd-party contracts being involved, we have delayed getting this notice out while various orders fell into place, such as new bulk fibre runs, power and infrastructure to name but a few. However, as detailed above, works have been ongoing to ensure we keep services operational where possible during the planned works.

Below is an overview of the disruption to services, with a full plan being published on the 30th (tomorrow). We would like to stress that, due to the works already done, the way in which the works are being carried out, and the times at which they are being conducted, 99% of customers won't see an impact.

Service        Type                       Impact
Connectivity   Copper Broadband           Minor Impact Expected
Connectivity   Fibre Broadband            Minor Impact Expected
Connectivity   Bonded Broadband           Limited Service
Connectivity   Leased Line                No Impact Expected
Connectivity   EFM                        No Impact Expected
Hosting        SMTP, POP, IMAP, HTTP      Limited Service
Hosting        VPS                        Limited Service
Hosting        Co-Location                Limited Service
Voice          Managed “Shared” VoIP      Limited Service
Voice          Managed “Dedicated” VoIP   Limited Service
Voice          SIP Trunking               Limited Service

Again, a full detailed report will be available tomorrow, along with a timeline of planned events. (The above looks worse than it really is.)

Upon arrival on the 01/06/2016, we will conduct a risk assessment to ensure the new area has been correctly handed over and is operational to our standards. Should this fail, or impact our migration to a level that puts our network and thus clients at risk, then we will postpone the planned works.

UPDATE 01 – 30/05/2016

As previously advised, please see below for a full breakdown of events and what impact is to be expected.

Connectivity – Copper & Fibre Broadband

Prior to the works commencing we will be updating our network routing to utilise our backup link into Telehouse East for L2TP DSL. This link is provided via another part of the network (CORE01) that is unaffected by the planned works. We will also be updating our RADIUS session steering to terminate all new and existing DSL sessions on LNS02 via RADIUS02, which are both also connected to CORE01.

Unfortunately our primary SQL server, while redundant, is located within our suite and connected to CORE02, which is affected by the planned works. This will mean that DSL users who drop their PPP session(s) while the works are under way will be unable to re-authenticate until this server is back online.

DSL sessions will be manually moved tonight at 22:00 as advised above to ensure the failover works as expected. DSL customers will see a small outage while their routers re-authenticate following the switchover.
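RADIUS session steering of this kind is typically achieved by returning L2TP tunnel attributes in the Access-Accept, so the wholesale LAC builds the tunnel to the chosen LNS. A FreeRADIUS-style sketch of the idea (the realm, address and comments are illustrative, not our live configuration):

```
# Hypothetical users-file entry steering all DSL realm sessions
# to LNS02. Tunnel-Server-Endpoint is the address the LAC builds
# the L2TP tunnel to; changing it moves new sessions to LNS02.
DEFAULT  Realm == "dsl.example.net"
         Tunnel-Type = L2TP,
         Tunnel-Medium-Type = IP,
         Tunnel-Server-Endpoint = "198.51.100.2"   # LNS02 (via CORE01)
```

Existing sessions are unaffected by the RADIUS change, which is why the move also involves manually clearing sessions so they re-authenticate onto LNS02.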

Connectivity – Bonded Broadband

While we do provide redundant servers for our bonded platform, they are both located within the suite and connected to CORE03, which is affected by the works. Unfortunately our software vendor has not provided a recent software image that comprises all their recent updates (a year's worth). While we had hoped to “simply” be able to move the redundant server to its new network location, as we had done with other platforms, we were advised this was not possible without a re-image and reconfiguration of the software.

This we were happy to undertake; however, due to the volume of updates that would then need to be applied, and the possible outages this would cause due to the requirement of taking the controller offline, we are unable to complete the works without causing greater disruption than simply moving the platform. We are still talking with the vendor to see if other options are available.

At this stage, during the move Bonded Broadband will be unavailable. We have scheduled this platform move to cause as little downtime as possible subject to no unforeseen problems. We apologise for any inconvenience this may cause.

Connectivity – EFM and Fibre Leased Lines

No impact is expected on these services as they are provided from CORE01, which is unaffected by the works.

Hosting – All Services

Due to the physical relocation of these services, they will need to be powered down during the maintenance window. Services will be unavailable during this time. Any email should be queued at remote servers. We apologise for any inconvenience this may cause.

Voice – All Managed Services

Due to the physical relocation of the underlying servers, they will need to be powered down during the maintenance window. Voice services will be unavailable during this time. We advise clients to have alternative methods of communication for an emergency, such as a mobile, and apologise for any inconvenience this may cause.

Voice – SIP Trunking

Due to the physical relocation of our media servers and softswitch, they will need to be powered down during the maintenance window. SIP trunks will be unavailable during this time. We advise clients to have alternative methods of communication for an emergency, such as a mobile, and apologise for any inconvenience this may cause.

Our migration plan factors in 3 days' worth of works, with the 1st day mostly comprising physically moving cables and re-routing parts of the network, ready for the physical move the following day(s). During day 1 the physical movement of cables will cause our network to undergo re-convergence of traffic, links and transit providers.

UPDATE 02 – 03/06/2016 – 06:00

Works went ahead as planned and all services have been restored; however, we encountered problems with our new bulk fibre and were forced to utilise existing spares. As a result, various network peerings are down and traffic is being re-routed. Remedial works are being undertaken over the next week to restore the network to its redundant state.

UPDATE 03 – 03/06/2016 – 09:30

We have observed that one of our private primary voice peering links is flapping due to another fibre problem. This has been taken out of service and traffic is being re-routed.

UPDATE 04 – 03/06/2016 – 17:30

The network has remained stable on its redundant links. Work orders are being raised with Level3 for additional fibre.

UPDATE 05 – 06/06/2016 – 13:00

We have restored our primary voice peering link along with other internal redundant links, and have added a temporary workaround for other down peers. Work is still ongoing with Level3 for additional fibre.

LON01 – EasyIPT – 16/05/2016 – 21:40 till 22:45 *Emergency Maintenance* *Complete*

We have taken action to reload our primary softswitch to clear down some stuck SIP sessions. We are working with our software vendor to try and automate this process without the need for a complete reload of the signalling platform in future.

We have been advised that an update will help limit the need to do this, with a further one planned to resolve this completely.

Calls are routing correctly.