Scheduled Core Network Enhancements for Q1/Q2 2024

In the coming months, we’re excited to announce a comprehensive upgrade to our core network infrastructure at Telehouse in London. This enhancement involves the integration of cutting-edge hardware for improved performance and reliability. Detailed information about the schedule and impacted services will be provided as we approach each phase of the network maintenance.

Broadband Drops – 20/01/2024 – 22:00

We are aware of several PPP drops this evening that have caused loss of service for several minutes at a time. This has been traced back to our DSL upstream provider.

We apologise for the time taken to raise this notice; however, we were engaged in finding the root cause.

We have seen service stabilise; however, this is still at risk and we are monitoring closely.

UPDATE 01 – 20/01/2024 – 23:00

We have observed another PPP drop.

UPDATE 02 – 20/01/2024 – 23:15

We have reached out to our wholesale provider for an update and are continuing to monitor. Our next update will be on 21/01/2024.

UPDATE 03 – 21/01/2024 – 07:54

We have seen services restored overnight. We are awaiting an RFO, which will be provided on request once made available.

If you are still without service this morning, please power down your router for 20 minutes.

We do apologise for the service disruption.

Wholesale DSL (Broadband) maintenance 28/03/2023 – 30/03/2023

We have been advised by our wholesale provider they are migrating our layer 2 broadband traffic to new hardware on the dates below:

Change 1: 28/03 23:00 – 29/03 06:00
Change 2: 29/03 23:00 – 30/03 06:00
Change 3: 30/03 23:00 – 31/03 06:00

The plan is to steer all traffic to the new BNGs as a priority, migrate one of the current BNGs each night (in turn dropping the subscribers/L2TP tunnels for that device, making sure they terminate elsewhere), and keep the other current BNG in steering at a lower priority for additional redundancy. This will be staged over the three-night period.
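As an illustrative sketch only (not the provider's actual configuration; the endpoint names and priority values are hypothetical), priority-based steering can be thought of as always handing new L2TP tunnels to the highest-priority BNG that is in service, with the remaining legacy BNG kept in the pool at a lower priority as a fallback:

```python
# Hypothetical illustration of priority-based BNG steering.
# New tunnels go to the highest-priority endpoint still in service;
# the legacy BNG stays in the pool at a lower priority as a fallback.

def select_bng(endpoints):
    """Pick the in-service endpoint with the highest priority (lowest number)."""
    in_service = [e for e in endpoints if e["up"]]
    if not in_service:
        raise RuntimeError("no BNG available to terminate tunnels")
    return min(in_service, key=lambda e: e["priority"])

endpoints = [
    {"name": "new-bng-1", "priority": 10, "up": True},
    {"name": "new-bng-2", "priority": 10, "up": True},
    {"name": "legacy-bng", "priority": 100, "up": True},  # lower-priority fallback
]

# While the new BNGs are up, tunnels land there...
assert select_bng(endpoints)["name"] in ("new-bng-1", "new-bng-2")

# ...and if both new BNGs were withdrawn (e.g. during a migration night),
# the legacy BNG would still terminate tunnels.
for e in endpoints[:2]:
    e["up"] = False
assert select_bng(endpoints)["name"] == "legacy-bng"
```

This is why circuits are expected to drop and reconnect during the window: a subscriber's tunnel is torn down on the device being migrated and re-established on whichever endpoint the steering then selects.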

We have checked our network and the proposed changes are already covered by existing configuration, but we do expect circuits to drop and reconnect during this maintenance window.

If there are any issues, we expect changes to be reverted.

UPDATE01 – 29/03/2023

Change 1 has been completed without issue, and we are seeing traffic via the new BNGs and TEPs.

UPDATE02 – 30/03/2023

Change 2 has been completed without issue, and we are seeing traffic via the new BNGs and TEPs.

Wholesale DSL maintenance 26/01/2023

Following on from the broadband issues that occurred on 20/01/2023 (report still pending), we have been advised by our wholesale supplier that they are replacing the equipment at fault on their side tonight (26/01/2023) from 23:00 – 06:00 under an emergency maintenance window, which would drop DSL connections for a few minutes.

As a pre-emptive measure, we have already moved our traffic away from the device they are working on in Telehouse North over to Telehouse East to avoid the disruption. This has been confirmed by both network teams.

Once the work is complete, we will move traffic back.

We appreciate the hesitation this notice will bring given the recent issues; however, this should not cause disruption, as the circumstances are very different.

UPDATE01 21:30

We have been advised this work will likely be suspended due to a pre-existing network issue within our wholesale provider's network. We have seen the Telehouse North link we removed traffic from this afternoon go out of service, which we have been indirectly advised is down to the same pre-existing network issue. Telehouse East remains online but should be considered "at risk".

UPDATE02 22:46

We have been indirectly advised that the network issues within our wholesale provider's network are starting to settle down. Our primary Telehouse North link remains offline but is actively trying to reconnect and restore redundancy. Services remain "at risk".

UPDATE03 23:29

We have seen our primary Telehouse North link restore and remain stable for over 25 minutes. We will leave Telehouse East (Backup) as “preferred” until after the advised wholesale maintenance window and review Friday morning. Until this point the “at risk” notice will remain.

UPDATE04 – 27/01/2023 – 10:25

We are continuing to monitor.

Broadband Outage 20/01/2023

We are currently aware of a network issue affecting broadband customers. Engineers are already on site in preparation for the works at Telehouse and are currently working on the issue. More updates to follow.

Update 20/01/2023 22:02

Engineers are still working to find the root cause of the issue; we will post more updates as they become available.

Update 20/01/2023 23:04

We can see connections have now come back online. Engineers are still working on the issue and will provide an update shortly.

Update 20/01/2023 23:22

If you are still without service, please power down your router for at least 30 minutes; this should restore your service.

Update 21/01/2023 10:10

If you are still without service, please reboot or power down your router for 20 minutes. We are sorry for the issues caused; a further update with details of the cause will be posted shortly.

Summary 16/02/2023:

A full RFO has been sent to partners and wholesale customers.

On the night of 20/01/2023 (during the second planned Telehouse power works), engineers were on site prepping for the power works and to be on hand should issues arise. We took the opportunity to proactively replace a PDU bar that was showing signs of a failing management interface. This PDU was on "FEED A" (the side Telehouse were working on), so no additional risk was deemed likely.

Power feed A was isolated and taken down shortly before the power works were due to start, and the PDU was replaced but not re-powered due to the pending works by Telehouse.

All platforms were operating as expected on a single power feed.

Additionally, a planned line card replacement was due to take place, which involved moving DSL subscribers across the network in a controlled manner. The affected LNS01 and LNS03 were isolated and subscribers were moved across. The isolated LNSs were brought back into service shortly after.

At this point we noticed that new inbound DSL connections were only being routed to LNS02 and LNS04. The migrated configuration was checked and confirmed to be as expected.

At this point LNS02 started to reboot uncontrollably, which dropped all connected DSL subscribers in an uncontrolled manner. LNS02 was manually rebooted and returned to service but quickly started to reboot again. LNS02 was taken out of service and powered down.

Services from LNS02 did not reconnect, so the line card migration changes were rolled back; however, this did not make any difference.

Diagnostics on our side did not show the incoming RADIUS proxy requests from our layer 2 provider, so we placed a call to their NOC, who failed to confirm anything was wrong despite several calls. (This has now been confirmed and was the root cause of the extended outage.)

LNS02 was powered back up, and diagnostics showed the 12V power rail on the remaining power supply was low and causing the device to reload. However, due to the quick reload times on these devices, this was not being flagged over SNMP, and because of the combined voltage when both PSUs were energised, the rail did not read low prior to the event. Power was then swapped over to the other working power supply, which had been offline due to the power works. This resulted in a stable device.
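As a general illustration of why a brief fault can evade polled monitoring (a hypothetical sketch, not our actual monitoring setup; the interval and threshold values are assumptions): if a voltage dip and the resulting reload complete in less time than the polling interval, every poll can land while the reading looks healthy, so threshold alerting never fires.

```python
# Hypothetical sketch: a fault shorter than the polling interval can fall
# entirely between polls, so threshold-based alerting never fires.

POLL_INTERVAL = 60      # seconds between SNMP polls (assumed)
LOW_THRESHOLD = 11.4    # alert if the 12V rail reads below this (assumed)

def rail_voltage(t):
    """Simulated 12V rail: dips low for 20 seconds starting at t=65."""
    return 10.9 if 65 <= t < 85 else 12.1

# Poll every 60 seconds over 10 minutes and collect the times an alert fires.
alerts = [t for t in range(0, 600, POLL_INTERVAL) if rail_voltage(t) < LOW_THRESHOLD]
print(alerts)  # → [] — the dip at t=65..85 falls between the polls at t=60 and t=120
```

Catching this class of fault generally needs either a much shorter polling interval or event-driven reporting from the device itself rather than periodic sampling.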

LNS02 was then brought back into service; however, no DSL circuits were being routed to us.

Further investigations were under way when a large volume of inbound DSL connections was seen authenticating.

Since the events took place, our wholesale DSL provider has confirmed they experienced a major outage on one of the access routers we are connected to, but failed to advise us until many hours after the events took place. A formal complaint has been raised, and an RFO has since been provided confirming that a number of devices on their side suffered issues and have since been replaced.

While there was a failure of one of our gateways, these are in redundant pairs, and it would not have caused a complete outage by itself. The events further upstream with our wholesale provider were the root cause of the extended outage.

This was unfortunate timing; had we been advised of the issues, we would have been able to address the outage in another way. We do apologise for the issues seen.

Telehouse – UPS Replacement *At Risk* 20/01/2023 – 21/01/2023

We have been advised by Telehouse that they are undertaking power works to replace both UPS systems feeding the colocation suite where one of our racks is located, as part of their hardware upgrade programme.

During these enabling works, Telehouse will be isolating one UPS extension switchboard at a time across two separate dates. This ensures that one UPS system remains supporting our customers' rack power load on each date, avoiding total loss of power.

All of our hardware at this location is diversely fed by dual redundant power supplies, and we don't expect any interruption to power or services, but this should be classed as *At Risk*. Telehouse have provided a detailed scope of works that we have been asked not to share, but it is very comprehensive and details that power should not be disrupted for any great length of time.

Due to the unforeseen issues that arose last Friday, Structured engineers will be on site for the duration of the works.

We have also decided to replace the PSUs in our core devices prior to the works taking place, and to manually transfer power away from the feed being worked on, to better manage any unforeseen outages this time.

Structured works will start at 19:30 to replace power supplies

Telehouse works will commence from 20:00

UPDATE01 – 18:00

Structured engineers are on site and prepping / reviewing works.

UPDATE02 – 18:20

Engineers have identified the need to proactively move DSL subscribers away from LNS01 and LNS03. This is being done by gracefully dropping PPP connections.

Network Outage 13/01/2023

We are aware of downtime due to an issue at Telehouse London, where parts of our network are based. Engineers are already en route and are speaking directly with the data centre to get everyone online again ASAP. Further updates to follow shortly.

Update 14/01/23 00:47

Services have been restored but remain at risk while engineers continue to work on the issue. Further updates to follow.

Update 14/01/23 6:53am

Engineers are still at the Telehouse data centre replacing failed hardware. Services are online but still at risk. More updates to follow.

Update 14/01/23 8:42am

We are starting to see the majority of connections come back online. If you are still having issues, please power down your router for at least 20 minutes, then power it back on. This should get the connection working again for you.

Update 17/01/2023 @ 12:33pm FINAL

SUMMARY

This outage was caused by a number of unforeseen cascading events arising from the power works undertaken by Telehouse and their effects on our power supplies and PDUs. Service was restored once Structured engineers attended site and replaced a large amount of hardware.

Further works are planned by Telehouse for the 20th; however, we will be on site for the duration.

We are also reviewing the events that led up to the issue and putting measures in place to ensure they do not happen again.

Telehouse – UPS Replacement *At Risk* 13/01/2023 – 14/01/2023

We have been advised by Telehouse that they are undertaking power works to replace both UPS systems feeding the colocation suite where one of our racks is located, as part of their hardware upgrade programme.

During these enabling works, Telehouse will be isolating one UPS extension switchboard at a time across two separate dates. This ensures that one UPS system remains supporting our customers' rack power load on each date, avoiding total loss of power.

All of our hardware at this location is diversely fed by dual redundant power supplies, and we don't expect any interruption to power or services, but this should be classed as *At Risk*. Telehouse have provided a detailed scope of works that we have been asked not to share, but it is very comprehensive and details that power should not be disrupted for any great length of time.

Telehouse staff will be on hand in the event of any issues, and we will be monitoring off-site, attending in person if required.

Broadband – 27/10/2022 – IPv6

We are aware of an IPv6 issue on the network following on from a firmware upgrade within our core last night.

We have been working with the hardware vendor to resolve the issue; while this is ongoing, we have been reverting to a previous firmware version and moving connections between gateways.

Some users will experience a graceful PPP drop of around 5 seconds while their connection re-authenticates.

We do apologise for any inconvenience.

UPDATE01

LNS02, LNS03, LNS04 have been reverted and IPv6 connectivity has been restored.

LNS01 has been isolated for testing. Anyone experiencing slow DNS lookups or applications not loading is advised to reboot their router, which will then be routed to one of the other gateways with the fix applied.

UPDATE02

Further issues were identified with IPv6 within our core network (broadband facing). Work to resolve this has now been completed, and IPv6 should be fully operational again across all gateways.