We are aware of an issue affecting leased line services delivered via Telehouse North. We are investigating as a matter of urgency. We are sorry for any inconvenience caused.
Services with backup DSL will have been automatically re-routed.
UPDATE 01 – 12:50
Further investigations have shown this is affecting services delivered from our redundant fibre routes as well. We are continuing to work with our suppliers, as the root cause has been identified as lying outside our network.
UPDATE 02 – 13:02
We are starting to see services recover, but they should still be considered at risk.
UPDATE 03 – 14:00
We have had further feedback from our suppliers to advise that this has been resolved; they believe it was down to a configuration issue on their side. We have raised this as a concern, as it took down multiple failover links intended for redundancy.
FINAL – 10:39
We have been advised this issue was caused by human error within our wholesale provider and a failure to follow strict guidelines when undertaking work. Due to the impact this had on us and the loss of our redundant links with this provider, we are undertaking an internal review to ensure we mitigate against this in the future.
We have seen a number of leased lines and peering sessions go down overnight. This has been caused by a possible fire (fire alarms are going off) in at least one London data centre, affecting a number of providers. We are working to obtain further information.
UPDATE 01 – 8:00
We have been advised the Harbour Exchange building (Equinix LD8) remains evacuated.
UPDATE 02 – 09:15
Equinix have advised that the fire alarm was triggered by the failure of an output static switch on their Galaxy UPS system. This has resulted in a loss of power for multiple customers, and Equinix IBX Engineers are working to resolve the issue and restore power. At this moment in time we do not believe there to have been a fire.
UPDATE 03 – 10:15
Equinix IBX Site Staff report that the root cause of the fire evacuation was the failure of a Galaxy UPS that triggered the fire alarm. The fire system has been reinstated and IBX Staff have been allowed back into the building. We are now awaiting updates on restoring services.
UPDATE 04 – 11:15
Equinix Engineers have advised that their IBX team have begun restoring power to affected devices. Unfortunately, at present there remains no estimated resolution time.
UPDATE 05 – 12:15
Equinix have advised that services are starting to be restored, with equipment being migrated over to other newly installed infrastructure. We have yet to see any of our affected connections restore, but are continuing to check for updates.
UPDATE 06 – 13:15
Equinix IBX Site Staff report that services have been further restored to more customers, and IBX Engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure. Equinix have advised that access to the IBX will be granted, and prioritised, should any customers need it to work on their equipment.
UPDATE 07 – 14:20
IBX Site Staff report that services have been further restored to more customers, and increasing numbers of those affected are now operational, along with the majority of Equinix Network Services. IBX Engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure.
UPDATE 08 – 15:15
We are pleased to advise that we have just seen all affected services restore. Circuits remain at risk due to the ongoing power issues on site; however, we do not expect them to go down again.
We are aware one of our broadband gateways has reloaded and dropped a number of broadband sessions. Traffic was re-routed to other gateways; however, the network will need to be rebalanced in the early hours of the morning.
We are sorry for the impact this will have had on you.
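In essence, rebalancing after a gateway reload means redistributing sessions evenly across the available gateways. A minimal sketch of the idea, with hypothetical gateway and session names (this does not reflect our actual session-steering logic):

```python
from collections import defaultdict

def rebalance(sessions, gateways):
    """Assign sessions to gateways round-robin so that load is spread
    evenly, e.g. after one gateway reloads and sheds its sessions.
    Illustrative only -- real rebalancing is usually staged to avoid
    dropping live PPP sessions all at once."""
    assignment = defaultdict(list)
    for i, session in enumerate(sessions):
        assignment[gateways[i % len(gateways)]].append(session)
    return dict(assignment)

# Five sessions spread across two gateways end up split 3/2.
result = rebalance(["s1", "s2", "s3", "s4", "s5"], ["gw01", "gw02"])
```

In practice the rebalance is scheduled for the early hours precisely because moving a session means briefly dropping it.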
We are aware of issues with customers routed via sip03.easyipt.co.uk for VoIP calls. We are currently looking at this as a matter of urgency.
UPDATE 01 – 13:39
We have discovered an issue with the database running on the media gateway and will be performing a reboot. Any active calls will drop.
UPDATE 02 – 13:45
The reboot has completed. However, we have lost a number of critical services and are working to restore them.
UPDATE 03 – 13:58
We have been able to recover the services; however, we are concerned about stability and about why these services did not start automatically as expected. We are undertaking further reviews, and the platform should still be considered “at risk” until further notice.
UPDATE 04 – 14:47
We have been able to automatically recover a number of services; however, we are still seeing some services fail to load on boot. This is something we need to look into, and we believe it to be caused by a race condition on the server. The media gateway has remained stable and is processing calls as expected.
A full review of the media gateway will take place next week to ensure all startup services recover as expected.
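Startup races of this kind are commonly mitigated by having the dependent service retry a probe of its dependency until it is actually up, rather than assuming it is ready at boot. A minimal illustrative sketch, with a hypothetical probe and timings (not the media gateway's actual startup logic):

```python
import time

def wait_for_dependency(probe, attempts: int = 5, delay: float = 0.01) -> bool:
    """Retry `probe` (a callable returning True once the dependency,
    e.g. a database, is reachable) with a fixed delay between attempts.
    Returns True as soon as the probe succeeds, False if all attempts
    are exhausted -- at which point startup can fail loudly instead of
    racing ahead."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False
```

A service start script would call this before launching anything that needs the database, turning a silent race into either a clean start or an explicit failure.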
We are currently seeing packet loss on many broadband connections. This is being investigated and updates will be provided shortly.
UPDATE 01 – 10:45
We have located a backhaul link within our network, between our core in Horsham (HOR) and London (THN), that is operating with a low level of packet loss, affecting services provided from our Horsham facility. This has been removed from service and traffic is flowing via alternative routes.
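For illustration, the decision to pull a lossy link comes down to comparing measured loss against a threshold. A minimal sketch, with illustrative probe counts and a hypothetical 0.5% threshold (not our monitoring system's actual configuration):

```python
def loss_percent(sent: int, received: int) -> float:
    """Packet loss as a percentage of probes sent."""
    if sent == 0:
        raise ValueError("no probes sent")
    return 100.0 * (sent - received) / sent

def should_remove_from_service(sent: int, received: int,
                               threshold: float = 0.5) -> bool:
    """Flag a link for removal when sustained loss exceeds the
    threshold (in percent). The 0.5% default is illustrative."""
    return loss_percent(sent, received) > threshold

# e.g. 1000 probes sent, 993 answered: 0.7% loss, above a 0.5% threshold
```

Even "low level" loss like this is enough to degrade TCP throughput noticeably, which is why the link was pulled rather than left in service.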
UPDATE 02 – 11:00
The link removal has restored full service to the platforms that operated in part via this link. Investigations are taking place with our fibre backhaul provider to ascertain where the fault lies. However, we suspect this is a common fault further upstream.
UPDATE 03 – 11:29
We are aware some Horsham-based broadband services are still seeing issues. Investigations show these to be based around the same common fibre link and provider as our backhaul link, and investigations have been escalated.
UPDATE 04 – 12:55
We have chased our supplier for an update; they have advised they are seeing issues within their core network affecting other exchanges and services. We have pushed for an escalation due to the severity of the issue.
We apologise for the continued disruption.
UPDATE 05 – 14:10
We have been advised the issue is still ongoing, and we expect an update by 15:00.
UPDATE 06 – 16:01
We have placed further escalations to senior managers at our supplier due to the length of time this has been ongoing.
UPDATE 07 – 17:00
Our investigations, after speaking with other wholesale providers, have concluded that this is the result of a Vodafone wholesale issue: a problem within their core network that they are trying to isolate.
UPDATE 08 – 18:10
We have observed packet loss returning to 0% on affected services. This has been confirmed by our supplier; however, we are continuing to monitor.
UPDATE 09 – 18:50
We have continued to see a good service. Other ISPs using Vodafone backhaul have advised the same; however, we are continuing to monitor.
-Update 08:50- We are starting to see some connections come back into service. We are continuing to monitor the situation and will provide more updates shortly.
We are currently aware of an issue affecting leased line and broadband customers. Some services are currently down; we are working with our suppliers to get this resolved as soon as possible and will provide updates as we get them.
Apologies for any inconvenience caused.
We will be taking LNS01 out of service on 03/01/2019 at 22:00 in order to complete physical maintenance work on 04/01/2019. Connections on LNS01 at that time will experience a drop in PPP and will reconnect to LNS02.
This work is due to start. Sessions will drop and start to move over to LNS02. If your connection does not come back up within 5 minutes please power cycle your router. If this does not work then please turn off your router for 20 minutes to allow the session to fully close down.
LNS01 has been taken out of service and we have seen 95% of connections move across as expected. Work can now start on this device tomorrow as planned, without further impact on services. Updates to follow.
Work has been completed and LNS01 is back in service accepting connections.
Following an unexpected reload of LNS02, and after reviewing the logs, we will shortly be upgrading the firmware on this device under an emergency maintenance window.
UPDATE 01 – 22:10
The emergency works are now complete on LNS02. This device has been reloaded and is accepting connections once again. We have provided the logs to the hardware vendor.
We will need to do the same with LNS01; however, this device currently has a large number of active connections, and work to move these across the network is needed. This will be done at a later date, as we do not believe this device is at risk of the same failure.
We do apologise for the inconvenience caused.
We are currently aware of an issue affecting broadband connections in the following area codes:
We believe this may be an exchange outage and are working with Openreach to resolve the issue as soon as possible. More updates will be provided shortly.
As part of our THN migration we are due to migrate services from C3 to C2 tonight. There will be no impact on major services such as broadband.
Works are only expected to take 10 minutes per service and affect a small number of bonded customers and hosted voice services.
This work is complete. However, during the works a configuration error was made on a 10Gb port that resulted in traffic being re-routed. This was spotted and resolved within minutes.