Broadband Outage 20/01/2023

We are currently aware of a network issue affecting broadband customers. Engineers are already on site in preparation for the works at Telehouse and are currently working on the issue. More updates to follow.

Update 20/01/2023 22:02

Engineers are still working to find the root cause of the issue, we will post more updates as they become available.

Update 20/01/2023 23:04

We can see connections have now come back online. Engineers are still working on the issue and will provide an update shortly.

Update 20/01/2023 23:22

If you are still without service, please power down your router for at least 30 minutes; this should restore your service.

Update 21/01/2023 10:10

If you are without service, please reboot or power down your router for 20 minutes. We are sorry for the disruption caused; a further update will be posted shortly with details of the cause.

Summary 16/02/2023:

A full RFO has been sent to partners and wholesale customers.

On the night of 20/01/2023 (during the second planned Telehouse power works), engineers were on site prepping for the power works and to be on hand should issues arise. We took the opportunity to proactively replace a PDU bar that was showing signs of a failing management interface. This PDU was on “FEED A” (the side Telehouse were working on), so no additional risk was anticipated.

Power feed A was isolated and taken down shortly before the power works were due to start, and the PDU was replaced but not re-powered due to the pending works by Telehouse.

All platforms were operating as expected on a single power feed.

Additionally, a planned line card replacement was due to take place, which involved moving DSL subscribers across the network in a controlled manner. The affected LNS01 and LNS03 were isolated and subscribers were moved across. The isolated LNSs were brought back into service shortly after.

At this point we noticed that new inbound DSL connections were only being routed to LNS02 and LNS04. The migrated configuration was checked and confirmed to be as expected.

At this point LNS02 started to reboot uncontrollably, dropping all connected DSL subscribers in an uncontrolled manner. LNS02 was manually rebooted and returned to service but quickly started to reboot again, so it was taken out of service and powered down.

Services from LNS02 did not reconnect, so the line card migration changes were rolled back; however, this did not make any difference.

Diagnostics on our side did not show the incoming RADIUS proxy requests from our layer 2 provider, so we placed a call to their NOC, who failed to confirm anything was wrong despite several calls. (This has now been confirmed and was the root cause of the extended outage.)
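
To illustrate the kind of check involved, the sketch below simply watches for inbound RADIUS traffic on the authentication and accounting ports. This is an illustrative example rather than the exact tooling used on the night, and the interface name and timeout are assumed values.

    # Illustrative check: count inbound RADIUS packets (UDP 1812/1813) arriving
    # from the layer 2 provider. Interface name and timeout are assumed values.
    from scapy.all import sniff

    packets = sniff(
        iface="eth0",                               # assumed LNS-facing interface
        filter="udp and (port 1812 or port 1813)",  # RADIUS auth/accounting ports
        timeout=60,                                 # observe for one minute
    )
    print(f"Saw {len(packets)} RADIUS packets in 60 seconds")
    # Zero packets while subscribers are attempting to connect points at the
    # upstream proxy rather than local LNS configuration.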

LNS02 was powered back up and diagnostics showed that the 12V power rail on the remaining power supply was low, causing the device to reload. Due to the quick reload times on these devices, this was not being flagged by SNMP, and because of the combined voltage when both PSUs were energised it did not show as low prior to the event. Power was then swapped over to the other working power supply, which had been offline due to the power works. This resulted in a stable device.
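
As a rough illustration of the sort of monitoring that would catch this condition, the sketch below polls a PSU voltage sensor over SNMP and alarms on a low 12V rail. The OID, community string, hostname and threshold are placeholder values; the real sensor OID is vendor specific.

    # Illustrative SNMP poll of a PSU 12V rail sensor (pysnmp 4.x hlapi).
    # OID, community, hostname and threshold below are placeholders.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    PSU_12V_OID = "1.3.6.1.4.1.99999.1.1.1"  # placeholder vendor-specific sensor OID
    LOW_THRESHOLD_MV = 11500                 # placeholder alarm threshold (millivolts)

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                          # placeholder community
        UdpTransportTarget(("lns02.example.net", 161)),   # placeholder hostname
        ContextData(),
        ObjectType(ObjectIdentity(PSU_12V_OID)),
    ))

    if error_indication or error_status:
        print(f"SNMP poll failed: {error_indication or error_status}")
    else:
        millivolts = int(var_binds[0][1])
        status = "ALARM" if millivolts < LOW_THRESHOLD_MV else "OK"
        print(f"12V rail {status}: {millivolts} mV")

Polled frequently enough, a check along these lines would flag a sagging rail even between the short reload cycles described above.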

LNS02 was then brought back into service; however, no DSL circuits were being routed to us.

Further investigations were taking place when we started to see a large volume of inbound DSL connections authenticating.

Since the events took place, our wholesale DSL provider has confirmed they experienced a major outage on one of the access routers we are connected to, but failed to advise us until many hours after the events took place. A formal complaint has been raised and an RFO has since been provided, confirming that a number of devices on their side suffered issues and have since been replaced.

While there was a failure of one of our gateways, these are deployed in redundant pairs and the failure would not have caused a complete outage by itself. The events that took place further upstream with our wholesale provider were the root cause of the extended outage.

This was unfortunate timing, and had we been advised of the issues we would have been able to address the outage in another way. We do apologise for the disruption caused.

Network Outage 13/01/2023

We are aware of downtime due to an issue at Telehouse London, where parts of our network are based. Engineers are already en route and are speaking directly with the data centre to get everyone online again ASAP. Further updates to follow shortly.

Update 14/01/23 00:47am

Services have been restored but remain at risk while engineers continue to work on the issue. Further updates to follow.

Update 14/01/23 6:53am

Engineers are still at the Telehouse data centre replacing failed hardware. Services remain at risk but are online. More updates to follow.

Update 14/01/23 8:42am

We are starting to see the majority of connections come back online. If you are still having issues please power down your router for at least 20 minutes, then power it back on. This should get the connection working again for you.

Update 17/01/2023 @ 12:33pm FINAL

SUMMARY

This outage was caused by a number of unforeseen cascading events arising from the power works undertaken by Telehouse and their effects on our power supplies and PDUs. Service was restored following attendance at site by Structured engineers and the replacement of a large amount of hardware.

Further works are planned by Telehouse for the 20th; however, we will be on site for the duration.

We are also reviewing the events that led up to the issue and putting in place measures to ensure they do not happen again.

27/08/2020 – VoIP MSO [Resolved]

We are currently experiencing an issue with VoIP calls across our network. Engineers are working on this and will provide an update ASAP.

-Update- 27/08/2020 09:48
The issue has been identified with one of our upstream carriers, who are currently working on it and will provide updates shortly.

-Update- 27/08/2020 09:56 Telstra LHC Outage

All powered equipment in Telstra LHC Data Center went offline at 09:17 BST. We have been informed there are multiple fire engines on site and a suspected UPS fire on the 3rd floor, where our comms equipment is located. It seems most likely the fire brigade have ordered the building to be powered down as part of electrical safety procedures.

As far as we are aware, this affects all customers and carriers within LHC, and we have confirmation that other carriers with connections extending outside the building are also showing offline. This is therefore affecting customers who are directly terminated on active circuits at this location. All passive fibre connections remain unaffected, including those passing through the building.

Updates to follow as they arrive from the DC. We sincerely apologise for any inconvenience this may cause.

-Update- 27/08/2020 11:10

Correspondence from LHC DC:
Due to a localised fire in the LHC, we have lost the Green system. This provides power to the lower half of the building. The fire has tripped the breakers supporting the busbar. Engineers are on-site and are working to restore power via generator as we speak.

-Update- 27/08/2020 13:16

We have been made aware by the DC that the services are starting to restore. We are monitoring this carefully and will provide you with an update as soon as we have more information.

-Update- 27/08/2020 14:08

We are seeing call flows return to normal levels but have yet to hear back from the DC and/or Carrier. We will continue to monitor and provide updates as they become available.

-Update- 27/08/2020 16:07

Engineers are still working to improve stability in the network and restore the remaining services; we hope this will be complete within the next 30 minutes. Most customers should have service restored.

-Update- 27/08/2020 16:40

We can see normal call flow across the network but have yet to receive a clear message from the carrier.

-Update- 27/08/2020 17:45

The carrier reports all services are now restored.

-Update- 28/08/2020 9:30

Services are now fully restored; a full RFO will be posted once it is made available.

06/06/2019 10:00 – Broadband Disruption

We are currently seeing packet loss on many broadband connections. This is being investigated and updates will be provided shortly.

UPDATE 01 – 10:45

We have located a backhaul link within our network between our core in Horsham (HOR) and London (THN) that is operating with a low level of packet loss, which is affecting services provided by our Horsham facility. The link has been removed from service and traffic is flowing via alternative routes.
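
For reference, a low level of loss on a link like this can be spotted with a simple burst of probes across it; the sketch below is an illustrative example with a placeholder hostname and probe count, not our monitoring system.

    # Illustrative packet-loss spot check across a backhaul link: send a burst
    # of pings to the far-end router and report the loss percentage that ping
    # itself prints. Hostname and probe count are placeholder values.
    import re
    import subprocess

    FAR_END = "thn-core.example.net"   # assumed far-end address of the HOR-THN link
    COUNT = 200

    result = subprocess.run(
        ["ping", "-c", str(COUNT), "-i", "0.2", FAR_END],
        capture_output=True, text=True,
    )
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    if match:
        print(f"{FAR_END}: {match.group(1)}% loss over {COUNT} probes")
    else:
        print("Could not parse ping output")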

UPDATE 02 – 11:00

The link removal has restored full service to platforms that operated in part via this link. Investigations are taking place with our fibre backhaul provider to ascertain where the fault lies; however, we suspect this is a common fault further upstream.

UPDATE 03 – 11:29

We are aware that some Horsham-based broadband services are still seeing issues. Investigations show these to be based around the same fibre link and provider as our backhaul link, and the matter has been escalated.

UPDATE 04 – 12:55

We have chased our supplier for an update; they have advised they are seeing issues within their core network affecting other exchanges and services. We have pushed for an escalation due to the severity of the issue.

We apologise for the continued disruption.

UPDATE 05 – 14:10

We have been advised the issue is still ongoing and we expect an update by 15:00.

UPDATE 06 – 16:01

We have escalated further to senior managers at our supplier due to the length of time this has been ongoing.

UPDATE 07 – 17:00

After speaking with other wholesale providers, our investigations have concluded that this is the result of a Vodafone wholesale issue: a problem within their core network that they are trying to isolate.

UPDATE 08 – 18:10

We have observed packet loss returning to 0% on affected services. This has been confirmed by our supplier; however, we are continuing to monitor.

UPDATE 09 – 18:50

We have continued to see good service. Other ISPs using Vodafone backhaul have advised the same; however, we are continuing to monitor.

30/05/2019 06:00 – Broadband Disruption

-Update 08:50- We are starting to see some connections come back into service. We are continuing to monitor the situation and will provide more updates shortly.


We are currently aware of an issue affecting leased line and broadband customers. Some services are currently down; we are working with suppliers to get this resolved as soon as possible and will provide updates as we get them.

Apologies for any inconvenience caused.

31/10/2018 15:50 – Broadband Outage

We are currently aware of an issue affecting broadband connections in the following area codes:

  • 01293
  • 01306
  • 01372
  • 01444
  • 01737

We believe this may be an exchange outage and are working with Openreach to resolve the issue ASAP. More updates will be provided shortly.

11/10/2017 – DSL Performance Issues

We are currently aware of an issue affecting DSL connections. We are looking into this and will update shortly.

-Update: 13:24-
We are starting to see connections return to normal speeds. We are continuing to monitor and will update you shortly.

-Update: 13:12-
Engineers are still working on the issue. We will update with more information as it becomes available.

EasyBOND – Connectivity Issue 16/03/2017

We are currently looking into an issue with the bonded connectivity and will update with more information shortly.

Update: 10:58
Connections are starting to reconnect. We are continuing to monitor and will update shortly.

Update: 14:41
Engineers found a “stale” routing issue on C2, where routes were not refreshed until a BGP filter was rebuilt and reset. Traffic routing from C1 was unaffected.

LON01 – EasyIPT – 10/11/2016 – 13:30 *Call Quality*

We are currently aware of an issue affecting call quality across our VoIP network. Engineers are currently working on this and will provide updates as soon as possible.

UPDATE 13:55 10/11/2016
Call quality has returned to normal. Engineers are still working to confirm the issue is fully cleared and will provide updates as soon as possible.

UPDATE 14:35 10/11/2016
We have now been made aware that a specific Tier One carrier is currently reporting a major service outage affecting customers nationwide. We are in communication with the supplier in question and will report any additional information to you as and when it is received.

We apologise for any inconvenience this may be causing.