Incident: Broadband – 18/04/2024 – 13:00

We are aware of an MTU issue affecting our wholesale upstream broadband provider.

This is impacting web browsing and other services.

We are working with them to isolate the issue.

We apologise for the inconvenience.

UPDATE 01 – 14:00

We have seen services return to normal after changes made by our wholesale provider. We are awaiting details as to the root cause.

We apologise for the inconvenience.

Incident: Horsham DC CORE – 18/04/2024 – 12:15

We have identified an issue within our Horsham Data Centre that has impacted several services. Service stability has been restored, and we will provide a further update shortly.

We apologise for any inconvenience caused.

UPDATE01 –

At 11:30, our network monitoring system detected a service disruption affecting multiple platforms situated outside our network.

Preliminary diagnostics indicate that the issue was confined to our Horsham data centre.

Subsequent investigations identified the root cause as an MTU (Maximum Transmission Unit) anomaly, implicating either the core infrastructure in Horsham or an interconnect linking our two locations.

Our engineering team pinpointed the issue to a specific interconnect and promptly isolated it from service. This action restored full network operations.
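
For background on how a fault of this type is typically confirmed (a minimal sketch of our own, not the exact procedure used during this incident): an MTU problem on a link shows up as large packets with the DF (don't fragment) bit set being dropped while small packets still pass, so probing for the largest payload that crosses the path reveals the effective path MTU. The Python sketch below assumes a Linux host with the iputils ping binary available; the target hostname is a placeholder, not one of our endpoints.

```python
#!/usr/bin/env python3
"""Minimal path-MTU probe: find the largest ICMP payload that passes
with the DF (don't-fragment) bit set. Assumes Linux iputils ping."""
import subprocess


def df_ping_ok(host: str, payload: int) -> bool:
    """Return True if a single DF-flagged ping of `payload` bytes succeeds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(payload), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def probe_path_mtu(host: str, low: int = 1200, high: int = 1472) -> int:
    """Binary-search the largest passing payload; payload plus 28 bytes of
    IP/ICMP headers gives the effective path MTU (0 if nothing passes)."""
    best = 0
    while low <= high:
        mid = (low + high) // 2
        if df_ping_ok(host, mid):
            best = mid
            low = mid + 1
        else:
            high = mid - 1
    return best + 28 if best else 0


if __name__ == "__main__":
    host = "endpoint.example.net"  # placeholder test target
    print(f"Estimated path MTU towards {host}: {probe_path_mtu(host)} bytes")
```

On a healthy 1500-byte path the probe reports 1500; a lower figure on traffic crossing a specific interconnect is the kind of evidence that narrows the fault to that link.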

Ethernet EAD – 22/07/2022 – 11:30

We are aware that a small number of layer 2 Ethernet services are currently down. Internal investigations show this to be an upstream supplier issue, and we are engaging with them to locate the fault.

UPDATE01: 11:40

We are seeing services down from BT Wholesale, Openreach EAD Direct and TTB, so we suspect this may be a common POP failure between layer 2 providers landing circuits in London. Customers with backup services will have automatically failed over and re-routed.

UPDATE02: 12:00

We are still awaiting an official update from our layer 2 provider as to the root cause of this issue, but they have advised they are seeing circuits down and a large number of calls on hold to their service desks.

We have already escalated to our management contacts to push for information so we can provide detailed updates.

UPDATE03: 12:25

Our layer 2 provider has now declared a “Major Incident”.

They have advised this appears to be related to a “DNS issue”, but we have disputed that. We are continuing to chase for updates. We apologise to affected customers for the inconvenience this is causing.

UPDATE04: 12:41

Our layer 2 provider has now advised of a major internal core network problem affecting more than just layer 2 services. This is currently affecting fewer than 10% of our overall EAD circuits with this layer 2 provider, and services delivered via other layer 2 partners are unaffected.

We have been advised that their internal teams are working to identify the root cause and issue a fix. We again apologise to affected customers for the inconvenience this is causing.

UPDATE05: 12:55

Following further pressure on our account manager, we have been advised this is affecting layer 2 services delivered to us via a core device in their network at Interxion London, with multiple service tunnels flapping.

We have been advised there will be a further update by 13:30.

UPDATE06: 13:40

A further update has been provided to advise they are still working to establish why network tunnels on this device are “flapping”. They have advised a further update by 15:30, but we will keep pushing for information and an ETA on service resolution.

UPDATE07: 15:05

Our network monitoring has shown our endpoints are reachable again on the affected circuits. We have not yet been given an official all-clear, and services should still be classed as at risk.

UPDATE08: 17:00

Our layer 2 provider has advised they have re-routed around the affected device and are currently working with the hardware vendor to establish why the device failed in the way it did. We suspect there may be a short outage at a later date once services are re-routed back via this device, but we will advise at the time.

HOR-DC – *at risk*

Our network monitoring has alerted us to multiple circuit failures within our Horsham facility. Initial diagnostics appear to show fibre breaks, and we suspect this may be the result of civil contractors. Traffic is flowing across redundant paths into the building with no loss of primary peering or transit, but services should be considered “at risk” while operating on redundant links.

Ethernet services that terminate into our Horsham facility will have automatically failed over to backup, where a backup has been purchased.

Faults have been logged with Openreach and we will keep updating this page as we know more.

UPDATE 01 – 12:01

We have seen all our “primary” fibre links recover and service has been restored; however, no official update has been provided. We are still awaiting recovery of the other affected fibre links.

UPDATE 02 – 12:10

Openreach engineering teams are en route to our facility.

UPDATE 03 – 14:50

Openreach are on site.

UPDATE 04 – 15:00 *FINAL*

All fibre links have been restored. Contractors working on the Openreach network had trapped one of our fibre tubes running that route, causing bends in the affected fibre groups to the point that light was unable to pass.

Tubing and fibres have been re-run in the AG Node by Openreach and service has been restored.

Broadband – 13/10/2021 – 11:55

We are aware of a large number of broadband connections dropping across the UK. We are currently investigating as a matter of urgency.

UPDATE01 – 12:05

Internal investigations have concluded the fault is not within our network and we are working with our wholesale providers.

UPDATE02 – 13:05

We are aware some users are seeing a BT Wholesale landing page / getting private WAN IPs. This happens when connections are not routing to our network. We are continuing to work with our suppliers to find the root cause, but we have seen connections restore. Anyone without a connection, please reboot your router by removing power for 5 minutes.

UPDATE03 – 13:38

We have had an update from wholesale to advise the issue appears to be with their RADIUS proxy servers that relay authentication credentials to our network. This would account for the BT Wholesale holding page, as requests are not getting to us.

We have asked for an ETA. In the meantime, we would ask end users to reboot their router to see if their connection restores.
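
For context on why a RADIUS proxy fault produces these symptoms (this is our own simplified illustration, not our supplier's implementation): PPP authentication requests from the wholesale access network are relayed by the proxy to our RADIUS servers; if that relay fails, the subscriber never authenticates onto our network and lands on the wholesale holding page instead. The Python sketch below shows only the relay step at the UDP transport level, with placeholder addresses, and omits the attribute rewriting, shared-secret validation and retransmission handling a real proxy performs.

```python
#!/usr/bin/env python3
"""Toy illustration of what a RADIUS proxy does at the transport level:
accept Access-Request datagrams from the access network and relay them,
unmodified, to the operator's RADIUS server, returning the reply.
Addresses and ports are placeholders for illustration only."""
import socket

LISTEN_ADDR = ("0.0.0.0", 1812)             # where access-side auth requests arrive
HOME_SERVER = ("radius.example.net", 1812)  # placeholder operator RADIUS server


def main() -> None:
    front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    front.bind(LISTEN_ADDR)
    print(f"Relaying RADIUS auth from {LISTEN_ADDR} to {HOME_SERVER}")
    while True:
        request, client = front.recvfrom(4096)     # Access-Request from access side
        upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        upstream.settimeout(5)
        try:
            upstream.sendto(request, HOME_SERVER)  # relay towards the operator
            reply, _ = upstream.recvfrom(4096)     # Access-Accept / Access-Reject
            front.sendto(reply, client)
        except socket.timeout:
            # If the relay path is broken (as during this incident), the
            # subscriber never authenticates onto the operator's network.
            print(f"No reply from {HOME_SERVER}; dropping request from {client}")
        finally:
            upstream.close()


if __name__ == "__main__":
    main()
```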

UPDATE 04 – 14:30

We have been advised the fix has been applied and we are seeing a large number of circuits reconnect. We are awaiting an RFO and will publish it when this is made available. We apologise for any inconvenience caused.

Any users without a connection are advised to power down their hardware for 20 minutes.

Voice Calls – 13/09/2021 – 15:45

We are aware some outbound calls are failing and people are hearing a pre-recorded message advising of a service suspension. This message is not being generated from our network and has been traced further upstream to a 3rd party carrier. We are working with our carriers to identify the root cause and updates will be posted shortly.

UPDATE01 – 16:25

This has been resolved. We apologise for any inconvenience caused.

Broadband – 25/03/2021 – 14:00

We are seeing a number of broadband circuits dropping PPP and some intermittent packet loss. We are currently investigating.

UPDATE01 – 15:00

We have raised a case with our wholesale supplier and have moved traffic from Telehouse North to Telehouse East to see if this improves things.

UPDATE02 – 16:00

PPP reconnections have reduced, but we are still seeing packet loss spikes affecting a large number of connections.

UPDATE03 – 18:00

We are still working with our wholesale supplier to understand the root cause of the issue as we have eliminated our network.

UPDATE04 – 22:00

We have chased our wholesale supplier for an update. Traffic remains via Telehouse East. The packet loss we are seeing does not appear to be service-affecting at this stage, but of course it should not be there.

UPDATE05 – 12:00 (26/03/2021)

We have dispatched an engineer to swap out optics on our side of the link.

UPDATE06 – 13:00

We have escalated this within wholesale as the problem remains. We do apologise for the inconvenience this is causing. We are still pending an optics swap as well.

UPDATE07 – 16:30

We have had a response from our wholesale supplier to advise additional resources have been added to our case. We are also still pending an optics swap from Telehouse.

UPDATE08 – 17:00

We have provided example cases of circuits where we are not seeing the same issue. We also believe this is only affecting a specific type of traffic.

UPDATE09 – 17:30

We have reached out to our hardware vendor to see if additional diagnostic tools can be provided. We apologise for the continued delay in getting this resolved.

UPDATE10 – 20:15

Optics have been changed at Telehouse North. Unfortunately, the Telehouse engineer was not given an update from us prior to starting the work, which sadly resulted in traffic being dropped. We are continuing to monitor the interface.

UPDATE11 – 20:35 (29/03/2021)

We have observed some stability return to the network over the past 72 hours. However, we are still concerned there may be an issue on both London core devices caused by a memory leak, and we are working towards a maintenance window to eliminate this. Further details of the maintenance, timings and impact will be posted as new events/posts so they are clearly seen.

Broadband – 27/01/2021 – London 020

We are aware of a repeat of yesterday's Openreach fault affecting broadband services within the 020 area code. We have raised this with our suppliers and are awaiting an update.

UPDATE01 – 13:55

We have been advised the root cause has been found and a fix is being implemented.

UPDATE02 – 16:00

We have asked for an update from our supplier.

UPDATE03 – 17:10

We have been advised an update is not due before 19:00. We have gone back to advise this is unacceptable. Our service account manager at wholesale has stepped in to push Openreach for further details and a resolution time.

We apologise for the inconvenience this is causing.

UPDATE04 – 20:20

We have had updates from our wholesale supplier to advise that, while Openreach have not provided a firm ETA or raw fault detail, they believe the outage is being caused by an aggregated fibre node serving what they refer to as the parent exchange and the secondary exchanges.

We are continuing to push for updates and are now proactively receiving updates from wholesale.

UPDATE05 – 02:30

We have been advised the fault has been resolved. We are awaiting an RFO and will publish once provided.

We apologise for the inconvenience.