Network Maintenance 19/10/2020

We will be making some changes to our network on 19/10/2020, starting at 12:00 noon. These changes relate to our LINX peering, and no impact is expected as traffic will be gracefully re-routed via alternative paths before the work starts.

We will also be taking this opportunity to install additional hardware for future broadband capacity. This will also be non-service-affecting.

UPDATE 01 – FINAL – 16:50

This work is complete and LINX peering has been restored. No impact was seen to customer traffic.

14/10/2020 12:43– Leased Lines

We are aware of an issue affecting leased line services delivered via Telehouse North. We are investigating as a matter of urgency. We are sorry for any inconvenience being caused.

Services with backup DSL will have been automatically re-routed.

UPDATE 01 – 12:50

Further investigation has shown this is also affecting services delivered via our redundant fibre routes. We are continuing to work with our suppliers, as the root cause has been identified outside of our network.

UPDATE 02 – 13:02

We are starting to see services recover, but they should still be considered at risk.

UPDATE 03 – 14:00

We have had further feedback from our suppliers advising this has been resolved; they believe it was down to a configuration issue on their side. We have raised our concern that this took down multiple failover links intended for redundancy.

FINAL – 10:39

We have been advised this issue was caused by human error within our wholesale provider and a failure to follow strict guidelines when undertaking work. Given the impact this had on us and the loss of our redundant links with this provider, we are undertaking an internal review to ensure we mitigate against this in the future.

14/09/2020 22:00 – Broadband Network

Due to an unexpected reload of a broadband gateway on our network this afternoon, we are seeing a traffic imbalance on part of our broadband network.

We will be taking steps to disconnect sessions gracefully and rebalance the affected gateways.

End users will see a PPP reconnect taking approximately 5-20 seconds. In the rare event your connection does not restore, you will need to power your router off and on.
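For illustration, the rebalancing described above amounts to draining excess sessions off the overloaded gateway so that re-dialling clients land on less-loaded ones. This is only a sketch of the idea (the gateway names and session counts are hypothetical, not our actual tooling):

```python
def plan_rebalance(sessions_by_gateway):
    """Given PPP session counts per gateway, return how many sessions to
    gracefully disconnect from each so that load evens out. Disconnected
    clients re-dial (a 5-20 second PPP reconnect) and land on gateways
    with spare capacity."""
    target = sum(sessions_by_gateway.values()) // len(sessions_by_gateway)
    # Only overloaded gateways shed sessions; the rest keep what they have.
    return {gw: max(count - target, 0)
            for gw, count in sessions_by_gateway.items()}

# After one gateway reloaded, its sessions piled onto another:
plan_rebalance({"gw1": 900, "gw2": 300, "gw3": 300})
# → {"gw1": 400, "gw2": 0, "gw3": 0}
```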

UPDATE 01 – 22:10

This work is now complete.

27/08/2020 – VoIP MSO [Resolved]

We are currently experiencing an issue with VoIP calls across our network. Engineers are working on this and will provide an update as soon as possible.

-Update- 27/08/2020 09:48
The issue has been identified with one of our upstream carriers, who are working on it and will provide updates shortly.

-Update- 27/08/2020 09:56 Telstra LHC Outage

All powered equipment in the Telstra LHC Data Center went offline at 09:17 BST. We have been informed there are multiple fire engines on site and a suspected UPS fire on the 3rd floor, where our comms equipment is located. It seems most likely the fire brigade have ordered a building power-down as part of electrical safety procedures.

As far as we are aware, this affects all customers and carriers within LHC, and we have confirmation that other carriers with passive connections extending outside the building are also showing offline. This is therefore affecting customers who are directly terminated with active circuits at this location. All passive fibre connections remain unaffected, including those passing through the building.

Updates to follow as they arrive from the DC. We sincerely apologise for any inconvenience this may cause.

-Update- 27/08/2020 11:10

Correspondence from LHC DC:
Due to a localised fire in the LHC, we have lost the Green system. This provides power to the lower half of the building. The fire has tripped the breakers supporting the busbar. Engineers are on-site and are working to restore power via generator as we speak.

-Update- 27/08/2020 13:16

We have been made aware by the DC that the services are starting to restore. We are monitoring this carefully and will provide you with an update as soon as we have more information.

-Update- 27/08/2020 14:08

We are seeing call flows return to normal levels but have yet to hear back from the DC and/or Carrier. We will continue to monitor and provide updates as they become available.

-Update- 27/08/2020 16:07

Engineers are still working to improve stability in the network and restore the remaining services; we hope this will be complete within the next 30 minutes. Most customers should now have service restored.

-Update- 27/08/2020 16:40

We can see normal call flow across the network but have yet to receive a clear message from the carrier.

-Update- 27/08/2020 17:45

The carrier reports all services are now restored.

-Update- 28/08/2020 9:30

Services are now fully restored. A full RFO (Reason For Outage) will be posted once made available.

London Data Center Fire – 07:25 – 18/08/2020

We have seen a number of leased lines and peering sessions go down overnight. This has been caused by a possible fire (fire alarms are going off) in at least one London data center, affecting a number of providers. We are working to obtain further information.

UPDATE 01 – 8:00

We have been advised the London Harbour building (Equinix LD8) remains evacuated.

UPDATE 02 – 09:15

Equinix have advised that the fire alarm was triggered by the failure of an output static switch on their Galaxy UPS system. This has resulted in a loss of power for multiple customers, and Equinix IBX Engineers are working to resolve the issue and restore power. At this moment in time we do not believe there to have been a fire.

UPDATE 03 – 10:15

Equinix IBX Site Staff report that the root cause of the fire evacuation was the failure of a Galaxy UPS, which triggered the fire alarm. The fire system has been reinstated and IBX Staff have been allowed back into the building. We are now awaiting updates on restoring services.

UPDATE 04 – 11:15

Equinix Engineers have advised that their IBX team have begun restoring power to affected devices. Unfortunately, at present there is no estimated resolution time.

UPDATE 05 – 12:15

Equinix have advised that services are starting to be restored, with equipment being migrated over to newly installed infrastructure. We have yet to see any of our affected connections restore but will keep checking for updates.

UPDATE 06 – 13:15

Equinix IBX Site Staff report that services have been restored to more customers, and IBX Engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure. Equinix advised that access to the IBX will be granted and prioritized should any customers need to work on their equipment.

UPDATE 07 – 14:20

IBX Site Staff report that services have been further restored and an increasing number of those affected are now operational, along with the majority of Equinix Network Services. IBX Engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure.

UPDATE 08 – 15:15

We are pleased to advise we have just seen all affected services restore. Circuits remain at risk due to the ongoing power issues on site; however, we do not expect them to go down again.

05/08/2020 14:23 – Broadband Disruption

We are aware a number of broadband services have been dropping PPP sessions over the past several hours. Initial diagnostics show nothing wrong on our side, and we have raised this with our suppliers for further investigation.

UPDATE 01 – 14:27

We have received an update to advise there is an issue further upstream and emergency maintenance work is required. Due to the nature of the work, we have been told this will start at 14:30 today. The impact of this will be further session drops while core devices are potentially reloaded carrier side.

We are sorry for the short notice and the impact this will have, and have already requested an RFO for the incident.

UPDATE 02 – 15:54

We have been advised the work is complete. We are awaiting full confirmation of this.

28/07/2020 – 20:00 SMTP relay

We are aware our unauthenticated SMTP relay cluster has been subject to relay abuse by a compromised client. Currently SMTP services are suspended on the cluster.

UPDATE 01 – 22:30

SMTP services on the cluster remain suspended while we review. Further updates will be provided on 29/07/2020.

UPDATE 02 – 09:15 – 29/07/2020

After a full review, and given the age of the platform, end-of-life support on its OS, extremely low usage levels (less than 0.1%) and the lack of support for enhanced security measures such as DKIM and DMARC, we have decided to withdraw the platform from service.

For customers who were using the service, we would advise migrating to authenticated SMTP provided via your web hosting provider, or signing up with a free relay such as https://www.smtp2go.com/
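For customers moving off the open relay, authenticated submission from Python's standard library looks roughly like this (the hostname, port and credentials below are placeholders for whatever your provider issues, not details of any specific service):

```python
import smtplib
from email.message import EmailMessage

def send_via_authenticated_relay(host, user, password, msg, port=587):
    """Submit mail over an authenticated, TLS-protected session instead of
    an unauthenticated relay."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()              # encrypt before sending credentials
        smtp.login(user, password)   # the relay accepts mail only after auth
        smtp.send_message(msg)

msg = EmailMessage()
msg["From"] = "alerts@example.com"
msg["To"] = "ops@example.com"
msg["Subject"] = "Relay migration test"
msg.set_content("Sent via an authenticated SMTP submission service.")
# send_via_authenticated_relay("mail.example.com", "user", "app-password", msg)
```

Authenticated submission also allows the provider to apply protections such as DKIM signing to outbound mail, which the withdrawn platform could not offer.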

We understand change is unwelcome, but after review we feel this is in the best interests of all who still used the platform, protecting both your domain and others.

11/05/2020 14:32 – Broadband Disruption

Our network monitoring has alerted us to a number of BTW-based circuits going offline and prefix withdrawals from suppliers. We are currently investigating.

UPDATE 01 – 14:49

We are seeing reports from other providers that they have experienced similar issues. Initial investigations point to a problem within the “Williams House” Equinix data center in Manchester.

UPDATE 02 – 15:51

Connections are starting to restore. Services affected appear to have been routed via Manchester.

16/04/2020 23:00 – Broadband Maintenance

We will be making some changes to our broadband network tonight in order to isolate two upstream gateways we suspect of adding latency to circuits routed via them.

This will cause existing connections via these gateways to drop and reconnect. Due to the nature of the change, this can take up to 20 minutes.

UPDATE 01 – 23:03

This work is about to start.

UPDATE 02 – 23:06

Tunnels have been terminated and traffic is starting to move across to other gateways.

UPDATE 03 – 23:28

We encountered an issue with the L2TP control messages not being accepted by the upstream gateways, which prevented circuits from being released to other gateways. We have therefore had to revert part of the configuration. Further work will be required at a later date.