Emergency Works 30/03/2021

An engineer is en route to Telehouse to reboot our two London core devices. Work is due to start at 07:30 today. The devices will be rebooted in succession, pending a successful reboot of the first device, to allow services to re-route across the network where possible. Leased line circuits without backup will see disruption when their associated core reloads.
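For illustration, the gating logic for the back-to-back reboots looks roughly like the sketch below: the second core is only touched once the first is confirmed back in service. The hostnames, timings, and out-of-band reload step are assumptions for the example, not our actual devices or tooling.

```python
"""Rough sketch of the reboot gating described above. Device names and
settle times are illustrative assumptions, not our real estate."""

import subprocess
import time

CORES = ["core1.lon.example.net", "core2.lon.example.net"]  # placeholder names


def is_reachable(host: str) -> bool:
    """True if the host answers a single ICMP echo within 2 seconds."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0


def wait_for_return(host: str, timeout: int = 900) -> bool:
    """Poll until the host responds again or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_reachable(host):
            return True
        time.sleep(10)
    return False


for core in CORES:
    print(f"Reloading {core} (reload issued out of band by the engineer)")
    time.sleep(60)  # allow the reload to actually take the device down
    if not wait_for_return(core):
        print(f"{core} has not returned to service; halting before the next reload")
        break
    time.sleep(300)  # settle time so traffic can re-route before the next core
```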

We do apologise for the short notice.

UPDATE01 – 07:35

This work is about to start.

UPDATE02 – 08:30

This work is complete.

Broadband – 14:00 – 25/03/2021

We are seeing a number of broadband circuits dropping PPP, along with some intermittent packet loss. We are currently investigating.

UPDATE01 – 15:00

We have raised a case with our wholesale supplier and have moved traffic from Telehouse North to Telehouse East to see whether this improves things.

UPDATE02 – 16:00

PPP reconnections have reduced, but we are still seeing packet loss spikes affecting a large number of connections.
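As an aside, the spikes are being quantified with simple ICMP probes along the lines of the sketch below. The target addresses are documentation-range placeholders, not real circuits.

```python
"""Minimal sketch of the loss probe: ping each sample circuit and parse
the loss percentage from the ping summary line."""

import re
import subprocess

TARGETS = ["198.51.100.10", "198.51.100.20"]  # placeholder circuit IPs


def loss_percent(host: str, count: int = 20) -> float:
    """Send `count` pings and extract the '% packet loss' figure."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True,
        text=True,
    ).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    return float(match.group(1)) if match else 100.0


for target in TARGETS:
    print(f"{target}: {loss_percent(target):.1f}% loss")
```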

UPDATE03 – 18:00

We are still working with our wholesale supplier to understand the root cause of the issue, having eliminated our own network.

UPDATE04 – 22:00

We have chased our wholesale supplier for an update. Traffic remains via Telehouse East. The packet loss we are seeing does not appear to be service-affecting at this stage, but of course it should not be there.

UPDATE05 – 12:00 (26/03/2021)

We have dispatched an engineer to swap out optics on our side of the link.

UPDATE06 – 13:00

We have escalated this within wholesale as the problem remains, and we apologise for the inconvenience this is causing. The optics swap is also still pending.

UPDATE07 – 16:30

We have had a response from our wholesale supplier advising that additional resources have been assigned to our case. We are also still awaiting the optics swap at Telehouse.

UPDATE08 – 17:00

We have provided our supplier with examples of circuits that are not exhibiting the issue. We also believe only a specific type of traffic is affected.

UPDATE09 – 17:30

We have reached out to our hardware vendor to see whether additional diagnostic tools can be provided. We apologise for the continued delay in getting this resolved.

UPDATE10 – 20:15

Optics have been changed at Telehouse North. Unfortunately the Telehouse engineer was not given an update from us before starting the work, which sadly resulted in traffic being dropped. We are continuing to monitor the interface.

UPDATE11 – 20:35 (29/03/2021)

We have observed stability return to the network over the past 72 hours. However, we remain concerned there may be an issue on both London core devices caused by a memory leak, and we are working towards a maintenance window to eliminate this. Details of the maintenance, including times and impact, will be posted as new events/posts so they are clearly visible.
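For context, a leak is distinguished from normal memory churn by trending usage samples over time, roughly as in the sketch below. The figures are synthetic; on the real devices they would come from SNMP or the CLI.

```python
"""Sketch of leak detection by trend: fit a least-squares slope to
periodic memory samples. Sustained positive growth with no return to
baseline is what points at a leak. All figures below are synthetic."""


def growth_per_sample(samples: list[float]) -> float:
    """Least-squares slope of memory usage against sample index."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den


# Usage in MB, polled every 15 minutes (synthetic examples).
leaking = [5120, 5180, 5235, 5290, 5355, 5410]
healthy = [5120, 5180, 5135, 5150, 5118, 5142]

print(growth_per_sample(leaking))  # clearly positive slope: investigate
print(growth_per_sample(healthy))  # hovers near zero: normal churn
```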

Broadband – 27/01/2021 – London 020

We are aware of a repeat of yesterday's Openreach fault affecting broadband services within the 020 area code. We have raised this with our suppliers and are awaiting an update.

UPDATE01 – 13:55

We have been advised the root cause has been found and a fix is being implemented.

UPDATE02 – 16:00

We have asked for an update from our supplier.

UPDATE03 – 17:10

We have been advised an update is not due before 19:00. We have gone back to advise that this is unacceptable. Our service account manager at our wholesale supplier has stepped in to push Openreach for further details and a resolution time.

We apologise for the inconvenience this is causing.

UPDATE04 – 20:20

Our wholesale supplier has advised that, while Openreach have not provided a firm ETA or raw fault detail, they believe the outage is being caused by an aggregated fibre node serving what they refer to as the parent exchange and its secondary exchanges.

We are continuing to push for updates and are now receiving proactive updates from wholesale.

UPDATE05 – 02:30

We have been advised the fault has been resolved. We are awaiting an RFO and will publish once provided.

We apologise for the inconvenience.

04/01/2021 22:40 – Broadband Network

Our network monitoring has alerted us to one of our broadband LNS gateways reloading. This resulted in broadband services connected via this device disconnecting and failing over to alternative gateways.

The device has reloaded and returned to service. We are currently investigating the cause internally.

We apologise for any inconvenience this may have caused, but we do not expect a recurrence.

Network Maintenance 19/10/2020

We will be making some changes to our network on 19/10/2020, starting from 12:00 noon. These changes relate to our LINX peering, and no impact is expected, as traffic will be gracefully re-routed via alternative paths before the work starts.

We will also be taking this opportunity to add additional hardware for future broadband capacity. This will also be non-service-affecting.
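For the curious, the graceful re-route is verified before hands touch the hardware by watching the peering interface drain, along the lines of the sketch below. The counter read is a hypothetical stand-in for an SNMP poll of IF-MIB ifHCOutOctets on the LINX-facing port.

```python
"""Sketch of the pre-work drain check: sample the interface output
counter twice and derive a rate. `read_out_octets()` is a hypothetical
placeholder, not a real API."""

import time


def read_out_octets(interface: str) -> int:
    raise NotImplementedError("placeholder: SNMP/API counter read goes here")


def is_drained(interface: str, threshold_bps: float = 1_000_000) -> bool:
    """Treat the link as drained once the 10-second average output rate
    falls below the threshold (1 Mbit/s by default)."""
    first = read_out_octets(interface)
    time.sleep(10)
    second = read_out_octets(interface)
    rate_bps = (second - first) * 8 / 10  # octets to bits, per second
    return rate_bps < threshold_bps
```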

UPDATE 01 – FINAL – 16:50

This work is complete and LINX peering has been restored. No impact was seen to customer traffic.

14/10/2020 12:43 – Leased Lines

We are aware of an issue affecting leased line services delivered via Telehouse North. We are investigating as a matter of urgency. We are sorry for any inconvenience being caused.

Services with backup DSL will have been automatically re-routed.

UPDATE 01 – 12:50

Further investigation has shown this is also affecting services delivered over our redundant fibre routes. We are continuing to work with our suppliers, as the root cause has been identified as being outside our network.

UPDATE 02 – 13:02

We are starting to see services recover but services should still be considered at risk.

UPDATE 03 – 14:00

We have had further feedback from our suppliers advising this has been resolved; they believe it was down to a configuration issue on their side. We have raised this as a concern, as it took down multiple redundant failover links.

FINAL – 10:39

We have been advised this issue was caused by human error within our wholesale provider and a failure to follow strict guidelines when undertaking work. Given the impact this had on us and the loss of our redundant links with this provider, we are undertaking an internal review to ensure we mitigate against this in the future.

14/09/2020 22:00 – Broadband Network

Due to an unexpected reload of a broadband gateway on our network this afternoon, we are seeing a traffic imbalance across part of our broadband network.

We will be taking steps to disconnect sessions gracefully and rebalance the affected gateways.

End users will see a PPP reconnect taking approximately 5–20 seconds. In the rare event that your connection does not restore, you will need to power your router off and on.
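To illustrate the rebalance, the arithmetic is roughly as in the sketch below: work out each gateway's even share and shed the excess from the overloaded ones, letting PPP re-establish on the least-loaded LNS. Gateway names and session counts are hypothetical; in practice the disconnect itself would typically be an L2TP Call-Disconnect-Notify or a RADIUS Disconnect-Request.

```python
"""Sketch of the rebalancing arithmetic: how many sessions each
overloaded gateway should shed to reach an even share. Gateway names
and counts are hypothetical."""


def excess_sessions(counts: dict[str, int]) -> dict[str, int]:
    """Sessions to drop from each gateway carrying more than the even share."""
    target = sum(counts.values()) // len(counts)
    return {gw: n - target for gw, n in counts.items() if n > target}


# After the reload, the surviving gateways absorbed the extra sessions:
counts = {"lns1": 5400, "lns2": 5600, "lns3": 900}
print(excess_sessions(counts))  # {'lns1': 1434, 'lns2': 1634}
```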

UPDATE01 – 22:10

This work is now complete.

27/08/2020 – VoIP MSO [Resolved]

We are currently experiencing an issue with VoIP calls across our network. Engineers are working on this and will provide an update as soon as possible.

-Update- 27/08/2020 09:48
The issue has been traced to one of our upstream carriers, who are working on it and will provide updates shortly.

-Update- 27/08/2020 09:56 Telstra LHC Outage

All powered equipment in the Telstra LHC data centre went offline at 09:17 BST. We have been informed there are multiple fire engines on site and a suspected UPS fire on the 3rd floor, where our comms equipment is located. It seems most likely the fire brigade have ordered a building power-down as part of electrical safety procedure.

As far as we are aware, this affects all customers and carriers within LHC, and we have confirmation that other carriers' circuits extending outside the building are also showing offline. This is therefore affecting customers terminated on active circuits at this location. Passive fibre connections remain unaffected, including those passing through the building.

Updates to follow as they arrive from the DC. We sincerely apologise for any inconvenience this may cause.

-Update- 27/08/2020 11:10

Correspondence from LHC DC:
Due to a localised fire in the LHC, we have lost the Green system. This provides power to the lower half of the building. The fire has tripped the breakers supporting the busbar. Engineers are on-site and are working to restore power via generator as we speak.

-Update- 27/08/2020 13:16

We have been made aware by the DC that the services are starting to restore. We are monitoring this carefully and will provide you with an update as soon as we have more information.

-Update- 27/08/2020 14:08

We are seeing call flows return to normal levels but have yet to hear back from the DC and/or Carrier. We will continue to monitor and provide updates as they become available.

-Update- 27/08/2020 16:07

Engineers are still working to improve stability in the network and restore the remaining services; we hope this will be complete within the next 30 minutes. Most customers should already have service restored.

-Update- 27/08/2020 16:40

We can see normal call flow across the network but have yet to receive a clear message from the carrier.

-Update- 27/08/2020 17:45

The carrier reports all services are now restored.

-Update- 28/08/2020 09:30

Services are now fully restored; a full RFO will be posted once it is made available.