HOR-DC – *at risk*

Our network monitoring has alerted us to multiple circuit failures within our Horsham facility. Initial diagnostics point to fibre breaks, which we suspect are the result of civil contractor works. Traffic is flowing across redundant paths into the building with no loss of primary peering or transit, but services should be considered “at risk” while operating on redundant links.

Ethernet services that terminate in our Horsham facility will have failed over automatically to backup, where purchased.

Faults have been logged with Openreach and we will keep updating this page as we know more.

UPDATE 01 – 12:01

We have seen all of our “primary” fibre links recover and service has been restored; however, no official update has been provided. We are still awaiting recovery of the other affected fibre links.

UPDATE 02 – 12:10

Openreach engineering teams are en route to our facility.

UPDATE 03 – 14:50

Openreach are on site.

UPDATE 04 – 15:00 *FINAL*

All fibre links have been restored. Contractors working on the Openreach network had trapped one of our fibre tubes running along that route, bending the affected groups of fibres to the point that light was unable to pass.

Openreach have re-run the tubing and fibres in the AG Node and service has been restored.
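
For anyone wondering how a bend alone can take a link down: a fibre link stays up only while the received light level sits above the optic's sensitivity, and a tight macro-bend adds enough attenuation to push it below that floor. A simplified sketch of the power budget involved (all figures below are illustrative, not measurements from this incident):

```python
# Illustrative optical power budget (hypothetical figures, not readings
# from this incident). A macro-bend adds attenuation on top of the normal
# route loss; once received power drops below the receiver's sensitivity,
# no usable light arrives and the link goes down.

TX_POWER_DBM = -3.0         # launch power of the transmit optic
ROUTE_LOSS_DB = 4.5         # normal loss: distance, splices, connectors
RX_SENSITIVITY_DBM = -14.0  # weakest signal the receiver can decode

def link_up(bend_loss_db: float) -> bool:
    """True if the receiver still sees enough light to hold the link."""
    rx_power_dbm = TX_POWER_DBM - ROUTE_LOSS_DB - bend_loss_db
    return rx_power_dbm >= RX_SENSITIVITY_DBM

print(link_up(2.0))   # True:  a mild bend leaves margin to spare
print(link_up(10.0))  # False: a trapped tube can easily add this much
```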

Broadband – 11:55 – 13/10/2021

We are aware of a large number of broadband services dropping across the UK. We are currently investigating this as a matter of urgency.

UPDATE01 – 12:05

Internal investigations have concluded the fault is not within our network and we are working with our wholesale providers.

UPDATE02 – 13:05

We are aware some users are seeing a BT Wholesale landing page or are getting private WAN IPs. This happens where connections are not routing through to our network. We are continuing to work with our suppliers to find the root cause, but we have seen connections restore. Anyone without a connection, please reboot your router by removing power for 5 minutes.
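
As a quick self-check, a private WAN address is the giveaway that a session has landed on the wholesale holding service rather than routing through to us. A minimal sketch using Python's standard ipaddress module (the addresses shown are examples only):

```python
# Check whether the WAN IP a router has picked up is a private (RFC 1918)
# address. A private WAN address means the PPP session has terminated on
# a holding service instead of routing through to our network.
# Example addresses only.
import ipaddress

def on_holding_service(wan_ip: str) -> bool:
    return ipaddress.ip_address(wan_ip).is_private

print(on_holding_service("172.16.15.2"))   # True:  private, not reaching us
print(on_holding_service("81.2.69.142"))   # False: public, session is fine
```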

UPDATE03 – 13:38

We have had an update from wholesale advising that the issue appears to be with their RADIUS proxy servers, which relay authentication credentials to our network. This would account for the BT Wholesale holding page, as requests are not getting through to us.

We have asked for an ETA. In the meantime, we would ask end users to reboot their router to see if their connection restores.
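
For context on what a RADIUS proxy does in this chain: it sits between the wholesale access network and our RADIUS servers, relaying each authentication request and passing the verdict back. A heavily simplified sketch of that relay role (hostnames and ports are placeholders; a real proxy such as FreeRADIUS also handles realm routing, shared secrets and retransmission):

```python
# Heavily simplified sketch of a RADIUS proxy's relay role: accept an
# Access-Request on UDP/1812 and forward it to the ISP's RADIUS server,
# returning the Access-Accept/Reject to the sender. Hostnames are
# placeholders; a production proxy does far more. When this relay fails,
# requests never reach the ISP at all.
import socket

LISTEN = ("0.0.0.0", 1812)
UPSTREAM = ("radius.example-isp.net", 1812)  # hypothetical ISP RADIUS host

proxy = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proxy.bind(LISTEN)

while True:
    request, nas_addr = proxy.recvfrom(4096)   # request from the access network
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as upstream:
        upstream.settimeout(5.0)
        upstream.sendto(request, UPSTREAM)     # relay to the ISP
        try:
            reply, _ = upstream.recvfrom(4096) # Access-Accept / Access-Reject
            proxy.sendto(reply, nas_addr)      # pass the verdict back
        except socket.timeout:
            pass  # upstream unreachable: the failure mode seen today
```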

UPDATE 04 – 14:30

We have been advised the fix has been applied and we are seeing a large number of circuits reconnect. We are awaiting an RFO and will publish it when made available. We apologise for any inconvenience caused.

Any users without connection are advised to power down their hardware for 20 minutes.

Voice Calls – 13/09/2021 – 15:45

We are aware some outbound calls are failing and people are hearing a pre-recorded message advising of a service suspension. This message is not being generated by our network and has been traced further upstream to a third-party carrier. We are working with our carriers to identify the root cause, and updates will be posted shortly.

UPDATE01 – 16:25

This has been resolved. We apologise for any inconvenience caused.

Broadband – 14:00 – 25/03/2021

We are seeing a number of broadband circuits dropping PPP and some intermittent packet loss. We are currently investigating.
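
A loss figure like the one reported here can be confirmed with a simple probe burst. A rough sketch of that kind of probe (the target shown is the TEST-NET-1 documentation address, a placeholder only):

```python
# Rough packet-loss probe: send a burst of ICMP echoes at a test target
# and report the percentage that never came back.
import subprocess

def loss_percent(target: str, count: int = 50) -> float:
    result = subprocess.run(
        ["ping", "-c", str(count), "-i", "0.2", target],
        capture_output=True, text=True,
    )
    received = result.stdout.count("bytes from")  # one line per reply
    return 100.0 * (count - received) / count

print(f"packet loss: {loss_percent('192.0.2.1'):.1f}%")
```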

UPDATE01 – 15:00

We have raised a case with our wholesale supplier and have moved traffic from Telehouse North to Telehouse East to see if this improves matters.

UPDATE02 – 16:00

PPP reconnections have reduced, but we are still seeing packet loss spikes on a large number of connections.

UPDATE03 – 18:00

We are still working with our wholesale supplier to understand the root cause of the issue, as we have eliminated our own network as the source.

UPDATE04 – 22:00

We have chased our wholesale supplier for an update. Traffic remains via Telehouse East. The packet loss we are seeing does not appear to be service-affecting at this stage, but of course it should not be there.

UPDATE05 – 12:00 (26/03/2021)

We have dispatched an engineer to swap out optics on our side of the link.

UPDATE06 – 13:00

We have escalated this within wholesale as the problem remains. We do apologise for the inconvenience this is causing. The optics swap is also still pending.

UPDATE07 – 16:30

We have had a response from our wholesale supplier advising that additional resources have been added to our case. We are also still pending the optics swap at Telehouse.

UPDATE08 – 17:00

We have provided example cases of circuits where we are not seeing the same issue. We also believe this is only affecting a specific type of traffic.

UPDATE09 – 17:30

We have reached out to our hardware vendor to see if additional diagnostic tools can be provided. We apologise for the continued delay in getting this resolved.

UPDATE10 – 20:15

Optics have been changed at Telehouse North. Unfortunately, the Telehouse engineer was not given an update from us before starting the work, which sadly resulted in traffic being dropped. We are continuing to monitor the interface.

UPDATE11 – 20:35 (29/03/2021)

We have observed some stability return to the network over the past 72 hours. However, we remain concerned that a memory leak may be affecting both London core devices and are working towards a maintenance window to eliminate this. Details of the maintenance, its timing and its impact will be posted as new events so they are clearly seen.
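
As an aside on how a suspected leak is spotted: free memory on an affected device trends steadily downwards between reloads rather than fluctuating with load. A toy illustration of that heuristic (sample values are invented):

```python
# Toy leak heuristic (invented sample values): flag a device whose free
# memory has fallen at every successive reading, rather than moving up
# and down with normal traffic churn.
def looks_like_leak(free_mb_samples: list[int]) -> bool:
    """True if free memory strictly decreased between every pair of samples."""
    return all(b < a for a, b in zip(free_mb_samples, free_mb_samples[1:]))

core_a = [8192, 7930, 7655, 7401, 7188]  # steady decline: suspect a leak
core_b = [8192, 8050, 8110, 7990, 8075]  # normal churn: no action needed
print(looks_like_leak(core_a))  # True  -> schedule a maintenance reload
print(looks_like_leak(core_b))  # False
```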

Broadband – 27/01/2021 – London 020

We are aware of a repeat of yesterday's Openreach fault affecting broadband services within the 020 area code. We have raised this with our suppliers and are awaiting an update.

UPDATE01 – 13:55

We have been advised the root cause has been found and a fix is being implemented.

UPDATE02 – 16:00

We have asked for an update from our supplier.

UPDATE03 – 17:10

We have been advised an update is not due before 19:00. We have gone back to advise that this is unacceptable. Our service account manager at wholesale has stepped in to push Openreach for further details and a resolution time.

We apologise for the inconvenience this is causing.

UPDATE04 – 20:20

Our wholesale supplier has advised that, while Openreach have not provided a firm ETA or raw fault detail, they believe the outage is being caused by an aggregated fibre node serving what they refer to as the parent exchange and its secondary exchanges.
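
This topology explains the wide blast radius: every exchange in the area sits behind the same aggregation node. A toy model of that dependency (the topology below is invented for illustration):

```python
# Toy model (invented topology) of why one aggregated fibre node takes
# out several exchanges at once: the parent exchange and its secondary
# exchanges all sit behind the same node, so its failure is the shared
# root cause for every circuit homed on any of them.
TOPOLOGY = {
    "agg-node-1": ["parent-exchange", "secondary-A", "secondary-B"],
}

def blast_radius(failed_node: str) -> list[str]:
    """Exchanges (and hence circuits) affected by a single node failure."""
    return TOPOLOGY.get(failed_node, [])

print(blast_radius("agg-node-1"))  # every exchange behind the node goes down
```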

We are continuing to push for updates and are now receiving proactive updates from wholesale.

UPDATE05 – 02:30

We have been advised the fault has been resolved. We are awaiting an RFO and will publish once provided.

We apologise for the inconvenience.

04/01/2021 22:40 – Broadband Network

Our network monitoring has alerted us to one of our broadband LNS gateways reloading. This resulted in broadband services connected via this device disconnecting and failing over to alternative gateways.

The device has reloaded and returned to service. We are currently investigating the cause internally.
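
For the curious, the alert in question comes from routine reachability checks against each LNS. A bare-bones sketch of such a check (the gateway hostnames are placeholders, not our real ones):

```python
# Bare-bones LNS reachability check of the kind behind this alert: one
# ICMP echo per gateway with a short deadline. Hostnames are placeholders.
import subprocess

LNS_GATEWAYS = ["lns1.example.net", "lns2.example.net"]  # hypothetical names

def alive(host: str) -> bool:
    """Single ping with a 2-second deadline; True if the host answers."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        capture_output=True,
    ).returncode == 0

for lns in LNS_GATEWAYS:
    print(f"{lns}: {'up' if alive(lns) else 'DOWN - sessions failing over'}")
```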

We apologise for any inconvenience this may have caused, but we do not expect a recurrence.

27/08/2020 – VoIP MSO [Resolved]

We are currently experiencing an issue with VoIP calls across our network. Engineers are working on this and will provide an update as soon as possible.

-Update- 27/08/2020 09:48

The issue has been traced to one of our upstream carriers, who are working on it and will provide updates shortly.

-Update- 27/08/2020 09:56 Telstra LHC Outage

All powered equipment in the Telstra LHC data centre went offline at 09:17 BST. We have been informed there are multiple fire engines on site and a suspected UPS fire on the 3rd floor, where our comms equipment is located. It seems most likely that the fire brigade have ordered a building power-down as part of electrical safety procedures.

As far as we are aware, this affects all customers and carriers within LHC, and we have confirmation that other carriers with active connections extending outside the building are also showing offline. This is therefore affecting customers who are terminated on active circuits at this location. All passive fibre connections remain unaffected, including those passing through the building.

Updates to follow as they arrive from the DC. We sincerely apologise for any inconvenience this may cause.

-Update- 27/08/2020 11:10

Correspondence from LHC DC:
Due to a localised fire in the LHC, we have lost the Green system. This provides power to the lower half of the building. The fire has tripped the breakers supporting the busbar. Engineers are on-site and are working to restore power via generator as we speak.

-Update- 27/08/2020 13:16

We have been made aware by the DC that the services are starting to restore. We are monitoring this carefully and will provide you with an update as soon as we have more information.

-Update- 27/08/2020 14:08

We are seeing call flows return to normal levels but have yet to hear back from the DC and/or Carrier. We will continue to monitor and provide updates as they become available.

-Update- 27/08/2020 16:07

Engineers are still working to improve stability in the network and restore the remaining services; we hope this will be complete within the next 30 minutes. Most customers should have service restored.

-Update- 27/08/2020 16:40

We can see normal call flow across the network but have yet to receive a clear message from the carrier.

-Update- 27/08/2020 17:45

The carrier reports all services are now restored.

-Update- 28/08/2020 09:30

Services are now fully restored. A full RFO will be posted once it is made available.

05/08/2020 14:23 – Broadband Disruption

We are aware that a number of broadband services have been dropping PPP sessions over the past several hours. Initial diagnostics show nothing wrong on our side and we have raised this with our suppliers for further investigation.

UPDATE 01 – 14:27

We have received an update advising that there is an issue further upstream and emergency maintenance work is required. Due to the nature of the work, we have been told it will start at 14:30 today. The impact will be further session drops while core devices are potentially reloaded on the carrier side.

We are sorry for the short notice and the impact this will have, and have already requested an RFO for the incident.

UPDATE 02 – 15:54

We have been advised the work is complete. We are awaiting full confirmation of this.