27/08/2020 – VoIP MSO [Resolved]

We are currently experiencing an issue with VoIP calls across our network. Engineers are working on this and will provide an update as soon as possible.

-Update- 27/08/2020 09:48
The issue has been identified with one of our upstream carriers, who are currently working on it and will provide updates shortly.

-Update- 27/08/2020 09:56 Telstra LHC Outage

All powered equipment in Telstra LHC Data Center went offline at 09:17 BST. We have been informed there are multiple fire engines on site and a suspected UPS fire on the 3rd floor, where our comms equipment is located. It seems most likely that the fire brigade has ordered the building powered down as part of electrical safety procedures.

As far as we are aware, this affects all customers and carriers within LHC, and we have confirmation that other carriers' active connections extending outside the building are also showing offline. This therefore affects customers terminated directly on active circuits at this location. All passive fibre connections remain unaffected, including those passing through the building.

Updates to follow as they arrive from the DC. We sincerely apologise for any inconvenience this may cause.

-Update- 27/08/2020 11:10

Correspondence from LHC DC:
Due to a localised fire in the LHC, we have lost the Green system. This provides power to the lower half of the building. The fire has tripped the breakers supporting the busbar. Engineers are on-site and are working to restore power via generator as we speak.

-Update- 27/08/2020 13:16

We have been made aware by the DC that the services are starting to restore. We are monitoring this carefully and will provide you with an update as soon as we have more information.

-Update- 27/08/2020 14:08

We are seeing call flows return to normal levels but have yet to hear back from the DC and/or Carrier. We will continue to monitor and provide updates as they become available.

-Update- 27/08/2020 16:07

Engineers are still working to improve network stability and restore the remaining services; we hope this will be complete within the next 30 minutes. Most customers should already have service restored.

-Update- 27/08/2020 16:40

We can see normal call flow across the network but have yet to receive a clear statement from the carrier.

-Update- 27/08/2020 17:45

The carrier reports all services are now restored.

-Update- 28/08/2020 09:30

Services are now fully restored. A full RFO (Reason for Outage) will be posted once it is made available.

05/08/2020 14:23 – Broadband Disruption

We are aware a number of broadband services have been dropping PPP sessions over the past several hours. Initial diagnostics show nothing wrong on our side, and we have raised this with our suppliers for further investigation.

UPDATE 01 – 14:27

We have received an update to advise there is an issue further upstream and emergency maintenance work is required. Due to the nature of the work, we have been told this will start at 14:30 today. The impact of this will be further session drops while core devices are potentially reloaded on the carrier side.

We are sorry for the short notice and the impact this will have, and have already requested an RFO for the incident.

UPDATE 02 – 15:54

We have been advised the work is complete and are awaiting final confirmation.

28/07/2020 – 20:00 SMTP relay

We are aware our unauthenticated SMTP relay cluster has been subject to relay abuse by a compromised client. Currently SMTP services are suspended on the cluster.

UPDATE 01 – 22:30

SMTP services on the cluster remain suspended while we review. Further updates will be provided on 29/07/2020.

UPDATE 02 – 09:15 – 29/07/2020

After a full review, and due to the age of the platform, the end-of-life status of its operating system, extremely low usage levels (less than 0.1%), and the lack of support for enhanced security measures such as DKIM and DMARC, we have decided to withdraw the platform from service.

For customers who were using the service, we would advise migrating to authenticated SMTP provided via your web hosting provider or signing up with a free relay such as https://www.smtp2go.com/
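For illustration only, below is a minimal sketch of sending a message over an authenticated SMTP submission service using Python's standard library. The hostname, port and credentials shown are placeholders, not details of any platform we operate; substitute the values issued by your hosting or relay provider.

import smtplib
from email.message import EmailMessage

# Placeholder details - substitute the values issued by your hosting or relay provider.
SMTP_HOST = "mail.example.com"   # authenticated submission host (placeholder)
SMTP_PORT = 587                  # standard submission port, upgraded to TLS via STARTTLS
SMTP_USER = "user@example.com"   # mailbox or relay username (placeholder)
SMTP_PASS = "app-password"       # password or app-specific credential (placeholder)

msg = EmailMessage()
msg["From"] = SMTP_USER
msg["To"] = "recipient@example.com"
msg["Subject"] = "Test via authenticated SMTP"
msg.set_content("Sent over an authenticated, TLS-protected submission service.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
    smtp.starttls()                   # encrypt the session before sending credentials
    smtp.login(SMTP_USER, SMTP_PASS)  # authenticate - unlike the open relay being withdrawn
    smtp.send_message(msg)

Unlike an open relay, an authenticated submission service ties each message to a credential, which also allows your provider to apply protections such as DKIM and DMARC to your domain.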

We understand change is unwelcome, but after review we feel this is in the best interest of everyone who still uses the platform, protecting both your domain and others.

11/05/2020 14:32 – Broadband Disruption

Our network monitoring has alerted us to a number of BTW-based circuits going offline and to prefix withdrawals from suppliers. We are currently investigating.

UPDATE 01 – 14:49

We are seeing reports from other providers that they have experienced similar issues. Initial investigations appear to show this as a problem within the “Williams House” Equinix data center in Manchester.

UPDATE 02 – 15:51

Connections are starting to restore. Services affected appear to have been routed via Manchester.

03/03/2020 – 16:45 – Voice Issues

We are aware of an issue affecting inbound calls with one of our upstream voice carriers. We have re-routed outbound calls around the affected network and calls should be connecting as expected.

We have raised a priority case with the carrier, who have confirmed there is an issue and that it is being dealt with urgently.

We apologise for the disruption and will update this NOC post once further details become available.

UPDATE 01 – 17:10

We have started to see inbound calls on the affected carrier restore and traffic flowing. We have not yet had official closure, so services should still be considered at risk.

UPDATE 02 – 17:33

The affected upstream carrier has confirmed services have been restored and that this was the result of a data center issue. We have asked for an RFO and this will be provided on request.

Re-routing has been removed and all services are normal.

Once again, we apologise for the disruption.

FINAL – 04/03/2020 – 14:45

We have been advised the root cause of this incident was a failed network interface on a primary database server within the carrier's network. We have been advised the database is redundant, but this has highlighted the need for additional redundancy, which is already being deployed.

DSL – 14/01/2020 – 08:15

At 08:15 GMT this morning, we were alerted to a number of DSL broadband sessions disconnecting. Initial diagnostics showed there was no fault within our network and this was escalated to our wholesale supplier.

Our wholesale supplier responded to advise that a DSL gateway, “cr2.th-lon”, dropped a number of sessions at 08:15 GMT but had started to recover by 08:23 GMT. At this time the root cause of the outage is unknown and investigations are continuing. Services should be considered at risk until we ascertain the cause.

UPDATE 01 – 10:50

We have seen a further drop in sessions, where sessions have had to re-authenticate. We have requested an update from our supplier to enquire whether this is related to the issues seen this morning.

09/05/2019 20:50 – Broadband Disruption

We have observed a small number of broadband services suffering from intermittent connection problems between 19:46 and 20:46 this evening.

This issue has been tracked down to one of our wholesale suppliers, who suffered a network outage that has since recovered. This resulted in the affected connections being unable to reach our RADIUS servers for authentication; they were simply being terminated on the local BTW RAS with a non-routable IP address.

Users who do not have a connection are advised to reboot or power off their router for 20 minutes to recover any stuck sessions.

We apologise for the inconvenience and are awaiting an RFO.

05/04/2019 – SIP VoIP – Outbound Calls

We are aware that a small percentage of outbound calls made on our VoIP network are taking longer than normal to connect. We are investigating.

UPDATE01 – 09:40

Calls have been re-routed over another carrier while we work to understand what is going on with calls on BTIPX.

UPDATE02 – 10:06

Calls are being rejected due to changes made at BT with respect to number formatting. We are in the process of changing how we present numbers.

UPDATE03 – 13:00

New number scripts have been designed and put into operation. Outbound calls are now routing correctly once again and this issue has been marked as closed.

No changes are required on existing customer systems; however, we will be changing how we configure new systems.
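For illustration only, the sketch below shows the general kind of number normalisation involved when reformatting UK dialled numbers into international (E.164-style) presentation. The prefix handling is our own simplified example, not the carrier's specification or our production script.

def normalise_uk_number(dialled: str) -> str:
    """Simplified example: present a dialled UK number in international format."""
    digits = "".join(ch for ch in dialled if ch.isdigit() or ch == "+")
    if digits.startswith("+"):
        return digits                    # already in international format
    if digits.startswith("00"):
        return "+" + digits[2:]          # 00 international prefix becomes +
    if digits.startswith("0"):
        return "+44" + digits[1:]        # UK national format becomes +44...
    return digits                        # anything else is left untouched

# Example: "020 7946 0000" becomes "+442079460000"
print(normalise_uk_number("020 7946 0000"))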

01/01/2019 – 22:20 – LNS01 Broadband Gateway

We are aware of an unexpected reload of LNS02, which has occurred again. We are currently working with our vendor to establish the cause and resolution.

UPDATE 01 – 22:44

Our vendor has responded with some technical information from the resulting crash, and we now have additional logging capability in place. We are unable to advise further at this stage due to ongoing investigations.

22/11/2018 – 09:00 – Outage Vodafone layer2 leased lines

Our network monitoring has alerted us to a small number of metro-based layer 2 Vodafone circuits that have no service. This has also had an impact on broadband as well as on services into our Horsham-based facility.

This was raised with the relevant teams last night, and fibre engineers are already on site working on suspected fibre damage affecting ourselves and a number of other providers.

We have seen some services restore. Affected services that have a backup will be operating on that backup and should see no loss of service.

We will post further updates as they become available.

UPDATE01 – 11:02
Engineers have located the break and this will now progress to splicing.

UPDATE02 – 12:45
Engineers are continuing to work on splicing the damaged cable with spares.

UPDATE03 – 13:00
We are starting to see some services restore.

UPDATE04 – 14:25
As part of ongoing investigation into this issue, Vodafone are sending out additional engineering resources to their data centre.

Vodafone believe the break to be located 43 metres into the fibre handoff, and they are working on this to resume full service.

UPDATE05 – 15:30
Engineers at the data centre are continuing to work with Vodafone fibre teams to resolve the second identified fibre break.

UPDATE06 – 16:45
Vodafone fibre engineers continue to examine the length of the fibre for further issues impacting service.

UPDATE07 – 17:55

Vodafone engineers remain on site tracing the underlying infrastructure so that a permanent fix can be put in place for all affected services. We continue to work with the carrier to expedite the resolution with regular communication.

Currently the agreed next stage is for engineers to work on determining the exact location of the fibre fault.

UPDATE08 – 18:45
All services have been restored and we are awaiting an RFO. This will be provided on request.