Wholesale DSL Maintenance – 26/01/2023

Following on from the broadband issues that occurred on 20/01/2023 (report still pending), we have been advised by our wholesale supplier that they are replacing the equipment at fault on their side tonight (26/01/2023) between 23:00 and 06:00 under an emergency maintenance window, which will drop DSL connections for a few minutes.

As a pre-emptive measure, we have already moved our traffic away from the device being worked on in Telehouse North over to Telehouse East to avoid the disruption. This has been confirmed by both network teams.

Once the work is complete, we will move traffic back.

We appreciate the hesitation this notice may bring given the recent issues; however, this should not cause disruption as the circumstances are very different.

UPDATE01 21:30

We have been advised this work will likely be suspended due to a pre-existing network issue within our wholesale provider's network. The Telehouse North link that we removed traffic from this afternoon has gone out of service, which we have been indirectly advised is down to the same pre-existing issue. Telehouse East remains online but should be considered “at risk”.

UPDATE02 22:46

We have been indirectly advised that the network issues within our wholesale provider's network are starting to settle down. Our primary Telehouse North link remains offline but is actively trying to reconnect and restore redundancy. Services remain “at risk”.

UPDATE03 23:29

Our primary Telehouse North link has restored and remained stable for over 25 minutes. We will leave Telehouse East (backup) as “preferred” until after the advised wholesale maintenance window and review on Friday morning. Until then, the “at risk” notice will remain.

UPDATE04 – 27/01/2023 – 10:25

We are continuing to monitor.

Telehouse – UPS Replacement *At Risk* 20/01/2023 – 21/01/2023

We have been advised by Telehouse that they are undertaking power works to replace both UPS systems feeding the colocation suite where one of our racks is located, as part of their hardware upgrade programme.

During these enabling works, Telehouse will be isolating one UPS extension switchboard at a time across two separate dates. This will ensure that there is one UPS system supporting the rack power load on each of these dates, to avoid a total loss of power.

All of our hardware at this location is diversely fed by dual redundant power supplies and we don't expect any interruption to power or services, but this should be classed as *At Risk*. Telehouse have provided a detailed scope of works that we have been asked not to share, but it is very comprehensive and indicates that power should not be disrupted for any great length of time.

Due to the unforeseen issues that arose last Friday, Structured engineers will be on site for the duration of the works.

We have also decided to replace the PSUs in our core devices prior to the works taking place, and to manually transfer power away from the feed being worked on, to better manage any unforeseen outages this time.

Structured works to replace the power supplies will start at 19:30.

Telehouse works will commence at 20:00.

UPDATE01 – 18:00

Structured engineers are on site and prepping / reviewing works.

UPDATE02 – 18:20

Engineers have identified the need to proactively move DSL subscribers away from LNS01 and LNS03. This is being done by gracefully dropping PPP connections.
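For illustration only: the exact tooling isn't covered here, but a common way to gracefully clear PPP sessions from an LNS is a RADIUS Disconnect-Request (RFC 5176, “Packet of Disconnect”), after which the subscriber's router re-dials and lands on another gateway. The sketch below is a minimal example assuming the pyrad Python library; the hostname, shared secret, username and session ID are hypothetical placeholders, not our production values.

```python
# Minimal sketch: clear one PPP session with a RADIUS Disconnect-Request
# (RFC 5176). All hostnames, secrets and session details are placeholders.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

# The LNS must accept dynamic-authorization requests (usually UDP/3799)
# from this host using the shared secret below.
client = Client(
    server="lns01.example.net",        # hypothetical LNS address
    coaport=3799,
    secret=b"example-shared-secret",   # hypothetical shared secret
    dict=Dictionary("dictionary"),     # standard RADIUS attribute dictionary file
)

# Identify the session to drop; with the LNS taken out of the steering
# pool, the CPE re-dials PPP and connects to a different gateway.
req = client.CreateCoAPacket(
    code=packet.DisconnectRequest,
    **{"User-Name": "subscriber@example.net",
       "Acct-Session-Id": "0123456789abcdef"},
)

reply = client.SendPacket(req)
if reply.code == packet.DisconnectACK:
    print("Session cleared; subscriber will re-authenticate on another LNS")
else:
    print("Disconnect refused (Disconnect-NAK)")
```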

Telehouse – UPS Replacement *At Risk* 13/01/2023 – 14/01/2023

We have been advised by Telehouse that they are undertaking power works to replace both UPS systems feeding the colocation suite where one of our racks is located, as part of their hardware upgrade programme.

During these enabling works, Telehouse will be isolating one UPS extension switchboard at a time across two separate dates. This will ensure that there is one UPS system supporting the rack power load on each of these dates, to avoid a total loss of power.

All of our hardware at this location is diversely fed by dual redundant power supplies and we don't expect any interruption to power or services, but this should be classed as *At Risk*. Telehouse have provided a detailed scope of works that we have been asked not to share, but it is very comprehensive and indicates that power should not be disrupted for any great length of time.

Telehouse staff will be on hand in the event of any issues, and we will be monitoring off-site, attending in person if required.

Broadband – 27/10/2022 – IPv6

We are aware of an IPv6 issue on the network following on from a firmware upgrade within our core last night.

We have been working with the hardware vendor to resolve the issue, but while this is ongoing we have been reverting to a previous firmware version and moving connections between gateways.

Some users will experience a graceful PPP drop of around 5 seconds while their connection re-authenticates.

We do apologise for any inconvenience.

UPDATE01

LNS02, LNS03, LNS04 have been reverted and IPv6 connectivity has been restored.

LNS01 has been isolated for testing. Anyone experiencing slow DNS lookups or applications not loading is advised to reboot their router, which will then reconnect via one of the other gateways with the fix applied.
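As a purely illustrative aside on why broken IPv6 shows up as slow DNS lookups or pages failing to load: dual-stack clients generally try IPv6 first and only fall back to IPv4 after a timeout. The quick check below is a minimal sketch using only the Python standard library; the target hostname is just an example of a dual-stacked site.

```python
# Quick dual-stack check: does a TCP connection over IPv6 actually complete,
# or does it fail/time out while IPv4 works? The hostname is only an example.
import socket

HOST, PORT, TIMEOUT = "www.google.com", 443, 5

def can_connect(family: int) -> bool:
    """Try a TCP connection to HOST:PORT over the given address family."""
    try:
        addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(TIMEOUT)
            s.connect(addr)
        return True
    except OSError:
        return False

print("IPv4:", "ok" if can_connect(socket.AF_INET) else "failed")
print("IPv6:", "ok" if can_connect(socket.AF_INET6) else "failed")
```

If IPv4 connects but IPv6 times out, the connection is likely still on an affected gateway.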

UPDATE02

Further IPv6 issues were identified within our core network (broadband facing). Work to resolve this has now completed and IPv6 should be fully operational again across all gateways.

EasyHTTP – 09/09/2022 – 12:30

We have made some changes to our hosted email platform in respect of DNSBL validation for spam and the blocklists used by the spam-filtering services running on the server.

This is in response to some office365 delivery issues that took place last month.

We don't expect any mail delivery issues as a result of this, and the changes have been tested against known problematic mailboxes, but any NDRs (non-delivery reports) should be reported to us immediately via support.
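For readers unfamiliar with the mechanism, a DNSBL check is simply a DNS lookup: the connecting server's IPv4 octets are reversed and queried against the blocklist zone, and an answer in 127.0.0.0/8 means “listed”. The sketch below is illustrative only; the zone (zen.spamhaus.org) and the addresses are examples, not necessarily what our platform queries, and results can vary with the resolver used.

```python
# Illustrative DNSBL lookup: reverse the IPv4 octets, prefix them to the
# blocklist zone and do a normal A-record query. NXDOMAIN means "not listed";
# an answer in 127.0.0.0/8 means "listed". Zone and IPs are examples only.
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    # e.g. 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        answer = socket.gethostbyname(query)
    except socket.gaierror:
        return False                # NXDOMAIN: not listed
    return answer.startswith("127.")

# Example usage with documentation-range addresses:
for sender_ip in ("203.0.113.7", "198.51.100.22"):
    print(sender_ip, "listed" if dnsbl_listed(sender_ip) else "clean")
```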

Ethernet EAD – 22/07/2022 – 11:30

We are aware that a small number of layer 2 Ethernet services are currently down. Internal investigations show this to be an upstream supplier issue and we are currently engaging with them to locate the fault.

UPDATE01: 11:40

We are seeing services down from BT Wholesale, Openreach EAD Direct and TTB, so we suspect this may be a common POP failure between layer 2 providers landing circuits in London. Customers with backup circuits will have automatically failed over and re-routed.

UPDATE02: 12:00

We are still awaiting an official update from our layer 2 provider as to the root cause of this issue, but they have advised they are seeing circuits down and have a large number of calls on hold to their service desks.

We have already escalated to our management contacts to push for information so we can provide detailed updates.

UPDATE03: 12:25

Our layer 2 provider has now declared a “Major Incident”.

They have advised this appears to be related to a “DNS issue”, but we have disputed that. We are continuing to chase for updates. We apologise to affected customers for the inconvenience this is causing.

UPDATE04: 12:41

Our layer 2 provider has now advised of a major internal core network problem affecting more than just layer 2 services. This is currently affecting less than 10% of our overall EAD circuits with this provider, and services delivered via other layer 2 partners are unaffected.

We have been advised that internal teams are working to identify the root cause and issue a fix. We again apologise to affected customers for the inconvenience this is causing.

UPDATE05: 12:55

Following further pressure on our account manager, we have been advised this is affecting layer 2 services delivered to us via a core device in their network at Interxion London, with multiple service tunnels flapping.

We have been advised there will be a further update by 13:30.

UPDATE06: 13:40

A further update has been provided to advise they are still investigating why network tunnels on this device are “flapping”. They have advised of a further update by 15:30, but we will keep pushing for information and an ETA on service resolution.

UPDATE07: 15:05

Our network monitoring shows our endpoints are reachable again on the affected circuits. We have not yet been provided with an official all-clear, and services should still be classed as at risk.

UPDATE08: 17:00

Our layer 2 provider has advised they have re-routed around the affected device and are currently working with the hardware vendor to establish why it failed in the way it did. We suspect there may be a short outage at a later date once services are re-routed back via this device, but we will advise at the time.

HOR-DC – *at risk*

Our network monitoring has alerted us to multiple circuit failures within our Horsham facility. Initial diagnostics seem to show fibre breaks and we suspect this may be the result of civil contractors. Traffic is flowing across redundant paths into the building with no loss of primary peering or transit, but should be considered “at risk” while operating on redundant links.

Ethernet services that terminate into our Horsham facility will have automatically failed over to backup where purchased.

Faults have been logged with Openreach and we will keep updating this page as we know more.

UPDATE 01 – 12:01

We have seen all of our “primary” fibre links recover and service has been restored; however, no official update has been provided. We are still awaiting recovery of the other affected fibre links.

UPDATE 02 – 12:10

Openreach engineering teams are en route to our facility.

UPDATE 03 – 14:50

Openreach are on site.

UPDATE 04 – 15:00 *FINAL*

All fibre links have been restored. Contractors working on the Openreach network had trapped one of our fibre tubes running along that route, causing bends in the affected fibres to the point that light was unable to pass.

Tubing and fibres have been re-run in the AG Node by Openreach and service has been restored.

Broadband – Maintenance (26/11/2021 – 22:00)

We will be upgrading firmware across some of our LNS gateways tonight to correct a RADIUS issue on the broadband platform.

The work will require a reboot of each affected LNS once the upgrade is complete, which will take around 30 seconds per LNS. Connected circuits will see a PPP restart while this takes place and may reconnect to another gateway.

UPDATE 01 – 22:05

This work is complete.