Broadband – 27/01/2021 – London 020

We are aware of a repeat Openreach fault from yesterday affecting broadband services within the 020 area code. We have raised this with our suppliers and are awaiting an update.

UPDATE01 – 13:55

We have been advised the root cause has been found and a fix is being implemented.

UPDATE02 – 16:00

We have asked for an update from our supplier.

UPDATE03 – 17:10

We have been advised an update is not due before 19:00. We have gone back to advise that this is unacceptable. Our service account manager at wholesale has stepped in to push Openreach for further details and a resolution time.

We apologise for the inconvenience this is causing.

UPDATE04 – 20:20

Our wholesale supplier has advised that, while Openreach have not provided a firm ETA or raw fault detail, they believe the outage is being caused by an aggregated fibre node serving what they refer to as the parent exchange and its secondary exchanges.

We are continuing to push for updates and are now proactively chasing our wholesale supplier.

UPDATE05 – 02:30

We have been advised the fault has been resolved. We are awaiting an RFO and will publish once provided.

We apologise for the inconvenience.

04/01/2021 22:40 – Broadband Network

Our network monitoring has alerted us that one of our broadband LNS gateways has reloaded. This caused broadband services connected via this device to disconnect and fail over to alternative gateways.

The device has reloaded and returned to service. We are currently investigating the cause internally.

We apologise for any inconvenience this may have caused, but we do not expect a recurrence.

27/08/2020 – VoIP MSO [Resolved]

We are currently experiencing an issue with VoIP calls across our network. Engineers are working on this and will provide an update as soon as possible.

-Update- 27/08/2020 09:48
The issue has been traced to one of our upstream carriers, who are currently working on it and will provide updates shortly.

-Update- 27/08/2020 09:56 Telstra LHC Outage

All powered equipment in the Telstra LHC Data Center went offline at 09:17 BST. We have been informed there are multiple fire engines on site and a suspected UPS fire on the 3rd floor, where our comms equipment is located. It seems most likely that the fire brigade have ordered a building power-down as part of electrical safety procedures.

As far as we are aware, this affects all customers and carriers within LHC, and we have confirmation that other carriers with active equipment in the building are also showing offline. This is therefore affecting customers who are directly terminated on active circuits at this location. All passive fibre connections, including those passing through the building, remain unaffected.

Updates to follow as they arrive from the DC. We sincerely apologise for any inconvenience this may cause.

-Update- 27/08/2020 11:10

Correspondence from LHC DC:
Due to a localised fire in the LHC, we have lost the Green system. This provides power to the lower half of the building. The fire has tripped the breakers supporting the busbar. Engineers are on-site and are working to restore power via generator as we speak.

-Update- 27/08/2020 13:16

We have been made aware by the DC that the services are starting to restore. We are monitoring this carefully and will provide you with an update as soon as we have more information.

-Update- 27/08/2020 14:08

We are seeing call flows return to normal levels but have yet to hear back from the DC and/or Carrier. We will continue to monitor and provide updates as they become available.

-Update- 27/08/2020 16:07

Engineers are still working to improve stability in the network and restore the remaining services; we hope this will be complete within the next 30 minutes. Most customers should already have service restored.

-Update- 27/08/2020 16:40

We can see normal call flow across the network but have yet to receive a clear confirmation from the carrier.

-Update- 27/08/2020 17:45

The carrier reports all services are now restored.

-Update- 28/08/2020 09:30

Services are now fully restored. A full RFO will be posted once made available.

05/08/2020 14:23 – Broadband Disruption

We are aware a number of broadband services have been dropping PPP sessions over the past several hours. Initial diagnostics show nothing wrong on our side, and we have raised this with our suppliers for further investigation.

UPDATE 01 – 14:27

We have received an update advising there is an issue further upstream and that emergency maintenance work is required. Due to the nature of the work, we have been told this will start at 14:30 today. The impact will be further session drops while core devices are potentially reloaded on the carrier side.

We are sorry for the short notice and the impact this will have, and have already requested an RFO for the incident.

UPDATE 02 – 15:54

We have been advised the work is complete and are awaiting final confirmation.

28/07/2020 – 20:00 SMTP relay

We are aware our unauthenticated SMTP relay cluster has been subject to relay abuse by a compromised client. Currently SMTP services are suspended on the cluster.

UPDATE 01 – 22:30

SMTP services on the cluster remain suspended while we review. Further updates will be provided on 29/07/2020.

UPDATE 02 – 09:15 – 29/07/2020

After a full review, and due to the age of the platform, the end-of-life operating system, extremely low usage levels (less than 0.1%) and the lack of support for enhanced security measures such as DKIM and DMARC, we have decided to withdraw the platform from service.

For customers who were using the service, we would advise migrating to authenticated SMTP provided via your web hosting provider or signing up with a free relay such as https://www.smtp2go.com/

We understand change is unwelcome, but after review we feel this is in the interest of all who still used the platform, protecting both your domain and others.
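For customers scripting their own mail submission, the migration described above amounts to switching from an open relay to an authenticated submission connection. As a minimal sketch using Python's standard-library smtplib (the hostname, port and credentials below are placeholders, not our infrastructure):

```python
import smtplib
from email.message import EmailMessage


def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Construct a simple plain-text email."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg


def send_authenticated(msg: EmailMessage, host: str, user: str, password: str,
                       port: int = 587) -> None:
    """Submit via an authenticated relay on the submission port, upgrading to TLS first."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()              # encrypt the session before sending credentials
        smtp.login(user, password)   # authenticated submission -- no open relaying
        smtp.send_message(msg)


# Usage (placeholder host and credentials -- substitute your provider's details):
# msg = build_message("you@example.com", "dest@example.com", "Test", "Hello")
# send_authenticated(msg, "mail.example.com", "you@example.com", "secret")
```

Most hosting providers publish the submission hostname and port alongside mailbox credentials; port 587 with STARTTLS is the common default.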

11/05/2020 14:32 – Broadband Disruption

Our network monitoring has alerted us to a number of BTW based circuits going offline and prefix withdrawals from suppliers. We are currently investigating.

UPDATE 01 – 14:49

We are seeing reports from other providers that they have experienced similar issues. Initial investigations point to a problem within the “Williams House” Equinix data center in Manchester.

UPDATE 02 – 15:51

Connections are starting to restore. Services affected appear to have been routed via Manchester.

03/03/2020 – 16:45 – Voice Issues

We are aware of an issue affecting inbound calls with one of our upstream voice carriers. We have re-routed outbound calls around the affected network and calls should be connecting as expected.

We have raised a priority case with the carrier, who have confirmed there is an issue and that it is being dealt with urgently.

We apologise for the disruption and will update this NOC post as further details become available.

UPDATE 01 – 17:10

We have started to see inbound calls on the affected carrier restore and traffic flowing. We have not had official closure yet, so services should still be considered at risk.

UPDATE 02 – 17:33

The affected upstream carrier has confirmed services have been restored and that this was the result of a data center issue. We have requested an RFO and will publish it once provided.

Re-routing has been removed and all services are back to normal.

Once again, we apologise for the disruption.

FINAL – 04/03/2020 – 14:45

We have been advised the root cause of this incident was a failed network interface on a primary database server within the carrier's network. We are told the database is redundant, but the incident has highlighted the need for additional redundancy, which is already being deployed.

DSL – 14/01/2020 – 08:15

At 08:15 GMT this morning, we were alerted to a number of DSL broadband sessions disconnecting. Initial diagnostics showed there was no fault within our network and this was escalated to our wholesale supplier.

Our wholesale supplier responded to advise that a DSL gateway, “cr2.th-lon”, dropped a number of sessions at 08:15 GMT but had started to recover by 08:23 GMT. At this time the root cause of the outage is unknown, but investigations are continuing. Services should be considered at risk until we ascertain the cause.

UPDATE 01 – 10:50

We have seen a further drop where sessions have had to re-authenticate. We have asked our supplier whether this is related to the issues seen this morning.

09/05/2019 20:50 – Broadband Disruption

We have observed a small number of broadband services suffering from intermittent connection problems between 19:46 and 20:46 this evening.

This issue has been tracked down to one of our wholesale suppliers, who suffered a network outage that has since recovered. The outage left the affected connections unable to reach our RADIUS servers for authentication, so they were simply being terminated on the local BTW RAS with a non-routable IP address.

Users who do not have a connection are advised to reboot or power off their router for 20 minutes to recover any stuck sessions.
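A stuck session of this kind can usually be spotted from the WAN address the router has been given: a session terminated on the local RAS typically holds a private or otherwise non-routable address instead of a public one. As a sketch using Python's ipaddress module (the example addresses are illustrative, not addresses from our network):

```python
import ipaddress


def is_routable(ip: str) -> bool:
    """Return True if the address is globally routable, i.e. a normal public WAN IP.

    Private (RFC 1918), shared/CGN (RFC 6598, 100.64.0.0/10), link-local and
    loopback ranges all report is_global == False, which is the tell-tale sign
    of a session stuck on the local RAS rather than authenticated via RADIUS.
    """
    return ipaddress.ip_address(ip).is_global


# Example WAN addresses (illustrative):
# is_routable("8.8.8.8")      -> True   (public address; session authenticated normally)
# is_routable("172.16.5.20")  -> False  (RFC 1918; likely a stuck session)
```

If the routable check fails, the reboot-and-wait advice above applies; the 20-minute power-off lets the stale session time out on the RAS before the router reconnects.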

We apologise for the inconvenience and are awaiting an RFO.

05/04/2019 – SIP VoIP – Outbound Calls

We are aware that a small percentage of outbound calls being made on our VoIP network are taking longer than normal to connect, and we are investigating.

UPDATE01 – 09:40

Calls have been re-routed over another carrier while we work to understand what is happening with calls on BTIPX.

UPDATE02 – 10:06

Calls are being rejected due to changes made at BT with respect to number formatting. We are in the process of changing how we present numbers.
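The carrier-side fix described in the next update is not public, but a number-presentation change of this kind typically means normalising dialled numbers into E.164 before handing them to the carrier. A minimal sketch, assuming standard UK dialling conventions (the function name and rules are illustrative, not our production script):

```python
def to_e164_uk(raw: str) -> str:
    """Normalise a UK-dialled number to E.164 (+44...) presentation.

    Handles national format (0...), the international access prefix (00...)
    and already-normalised (+...) input; spaces and punctuation are stripped.
    """
    digits = "".join(ch for ch in raw if ch.isdigit() or ch == "+")
    if digits.startswith("+"):
        return digits                     # already E.164
    if digits.startswith("00"):
        return "+" + digits[2:]           # international access prefix
    if digits.startswith("0"):
        return "+44" + digits[1:]         # UK national format
    raise ValueError(f"Cannot normalise number: {raw!r}")


# to_e164_uk("020 7946 0018") -> "+442079460018"
```

Rejections like the ones in this incident usually happen when a carrier starts enforcing one presentation (for example E.164 only) while the switch is still sending national-format numbers.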

UPDATE03 – 13:00

New number-formatting scripts have been designed and put into operation. Outbound calls are now routing correctly once again, and this incident has been marked as closed.

No changes are required on existing customer systems; however, we will be changing how we configure new systems.