LON01 – EasyXDSL – 21/07/2016

We have been made aware of issues on the BT Wholesale network whereby all customers connecting through London may experience drops in connectivity but should then reconnect to our network via other gateways.

If you are experiencing issues reconnecting, please power down your equipment for a few minutes to clear any stuck sessions.

BT are investigating the issue and updates will be provided when we have them.

We apologise for any inconvenience this may cause.

UPDATE01 – 09:29

We have discovered that the data centre which experienced power issues for BT yesterday may also be suffering issues again this morning.

EasyIPT – Interconnect – 10/07/2016 – 10.25

Our network monitoring has alerted us to a loss of peering with one of our primary wholesale SIP carriers. Traffic has re-routed as expected and calls are flowing.

Our engineers are investigating.

UPDATE01 – 13:00
Our engineers have confirmed the fault is not within our network and have logged a priority 2 fault with our fibre provider as well as the affected wholesale SIP carrier.

Once we know more we will provide an update. At the moment this is not service affecting as traffic is automatically re-routing via alternative paths.

UPDATE02 – 14.05
Our network monitoring has alerted us to further failures within this wholesale SIP carrier. The failures now extend to their SIP gateways, resulting in us losing all connectivity with them.

Further redundancy on our side has automatically kicked in to re-route outbound calls around that carrier; however, numbers provisioned via this carrier are offline. We have escalated this to a priority 1 fault with them.

UPDATE03 – 14.20
We are seeing endpoints at our supplier bounce up and down. We can therefore only conclude that there is a serious issue with our supplier.

UPDATE04 – 14.33
We have been provided a notice via our supplier's internal website that they have suffered “multiple failures at one of our data centers (THN)”. Their engineers are on site trying to resolve the issues.

We will keep pressing for updates and update as we know more. We apologise for any inconvenience this may cause.

UPDATE05 – 15.18
Our network monitoring has advised that our endpoints have now come back into service; however, our private interconnect is still down and traffic is routing via the public internet. We are still awaiting official notification.

UPDATE06 – 15.24
We spoke too soon: endpoints have stopped responding once again.

UPDATE07 – 16:51
We have observed a period of stability on our endpoints, along with our interconnect also coming back into service. Our management team have also been involved due to the lack of support from our supplier and we will be engaging with them Monday morning. We have still not had an official confirmation and continue to chase.

LON01 – Level3 – 30/06/2016 – 15:30

We are aware of a problem with one of our primary transit interconnects with Level3. This is affecting services across the network that traverse that link.

We are currently looking into this as a priority.

UPDATE01 – 15:40
After reviewing our network we can confirm this is a problem with our Adapt transit link and not the Level3 side, and we are taking steps to shut down this session.

UPDATE02 – 15:45
We have received notice from Adapt that the network issue on their side is a result of a high-level DDoS attack on part of their network.

UPDATE03 – 15:49
Adapt have confirmed the traffic has been mitigated and we have re-enabled the link.

UPDATE04 – 15:50
CORE02 unexpectedly reloaded when this link was re-enabled because a full BGP table was received and not filtered correctly; this would have affected DSL services terminating on this core. DSL services re-routed to CORE01 as expected. We apologise for any inconvenience this may have caused.
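For context on the filtering point above: the safeguard that was missing is effectively a cap on how many prefixes a session is allowed to deliver before it is shut down rather than accepted. Below is a minimal sketch of that decision, in Python purely to illustrate the idea; the peer name and the limit are example values, not taken from our routers.

```python
# Illustrative only: the kind of prefix-limit check that protects a core
# router from holding an unexpected full BGP table from a transit peer.
# The peer name and threshold are example values, not our real configuration.

MAX_PREFIXES = 50_000  # ceiling expected for this peering session


def session_may_stay_up(peer: str, received_prefixes: int) -> bool:
    """Tear the session down rather than accept far more routes than expected."""
    if received_prefixes > MAX_PREFIXES:
        print(f"{peer}: prefix limit exceeded "
              f"({received_prefixes} > {MAX_PREFIXES}); shutting the session")
        return False
    return True


# A full Internet table (roughly 600k routes in 2016) arriving on a link
# expected to carry only a small subset trips the limit immediately.
print(session_may_stay_up("adapt-transit", received_prefixes=600_000))
```

On the routers themselves this is handled with prefix filters and a maximum-prefix limit on the session; the sketch only shows the decision being made.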

UPDATE05 – 15:55
CORE02 has recovered and is accepting connections again.

LON01 – EasyXDSL – 21/06/2016 – 15:32

We are aware lns01.dsl.structuredcommunications.co.uk crashed as a result of a kernel panic within the OS. Circuits connected to this LNS would have suffered a drop while they were re-routed to a backup LNS within our network.
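For customers wondering how circuits end up on a backup LNS: one common approach (not necessarily the exact mechanism on our platform) is simply to steer reconnecting sessions to the first healthy LNS in a small pool. A rough Python sketch of that idea follows; lns01 is the host named in this notice, while the backup hostname and the selection logic are assumptions for illustration only.

```python
# Illustrative sketch of steering reconnecting DSL sessions to a backup LNS.
# lns01 is the host named in this notice; the backup hostname is hypothetical
# and the selection logic is an assumption used only to show the idea.

LNS_POOL = [
    "lns01.dsl.structuredcommunications.co.uk",
    "lns02.dsl.structuredcommunications.co.uk",  # hypothetical backup
]

DOWN = {"lns01.dsl.structuredcommunications.co.uk"}  # marked down after the crash


def pick_lns(pool: list[str], down: set[str]) -> str:
    """Return the first LNS not marked down; reconnecting sessions land there."""
    for lns in pool:
        if lns not in down:
            return lns
    raise RuntimeError("no LNS available to terminate sessions")


print(pick_lns(LNS_POOL, DOWN))  # -> the backup LNS while lns01 is down
```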

We are currently engaged with our hardware vendor as to the root cause.

lns01.dsl.structuredcommunications.co.uk has recovered to an operational state and is accepting connections again.

We apologise for any inconvenience caused.

UPDATE 01 – 11:00 – 22/06/2016
We have followed up with our hardware vendor as to the progress of this crash report.

LON01 – EasyXDSL – 10/06/2016 – 16:38

We are aware one of our L2TP tunnels to our supplier has reset and dropped several hundred DSL sessions. Affected circuits have re-routed and are back online. Any customers still experiencing problems are advised to power down their hardware for 20 minutes.

Initial reports advise this is / was a BT Wholesale problem.

We apologise for any inconvenience caused.
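For those curious how an event like this shows up on our side: it is essentially a sharp fall in the active session count between two monitoring polls. Below is a rough sketch of that kind of check, illustrative only; the thresholds and the tunnel name are examples rather than our monitoring system.

```python
# Illustrative monitoring check: flag an L2TP tunnel whose active session
# count collapses between two polls. Thresholds and the tunnel name below
# are example values only.

DROP_RATIO = 0.5    # alert if more than half the sessions vanish in one poll
MIN_SESSIONS = 100  # ignore tunnels that were nearly empty to begin with


def check_tunnel(name: str, previous: int, current: int) -> None:
    """Compare consecutive session counts and raise an alert on a large drop."""
    if previous >= MIN_SESSIONS and current < previous * DROP_RATIO:
        print(f"ALERT: {name} fell from {previous} to {current} sessions")
    else:
        print(f"OK: {name} at {current} sessions")


# A tunnel resetting and dropping several hundred sessions at once:
check_tunnel("supplier-tunnel-1", previous=800, current=20)
```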

LON01 – EasyIPT – 06/05/2016 – 15:50 *Resolved*

We are aware some customers may have received an automated message advising they were unable to dial out due to their account being suspended. This message did not originate from our network, and we have re-routed calls while investigations are ongoing with that carrier.

We apologise for any inconvenience caused.

UPDATE01 – 16:10 – FINAL

We have received an update from our carrier advising that a billing script was run against wholesale interconnects (such as ours) which had an unexpected effect on call barring restrictions, preventing all but emergency calls from progressing. As the calls were routing correctly to our carrier with no normal failure message at a “network level”, our systems did not exclude that carrier on the first attempt. Callers who waited until the end of the message would have seen the call connect.
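To expand on why the first attempt did not skip the affected carrier: outbound routing only moves a call to the next carrier when it sees a failure at the signalling level (a failure response or a timeout). Here the carrier accepted the call and played the barring announcement in-band, so from a routing point of view the call looked normal. The sketch below is a simplified illustration of that decision; the carrier names, response codes and logic are examples rather than our actual switch configuration.

```python
# Simplified, illustrative carrier-selection logic for outbound calls.
# A call only falls through to the next carrier on a signalling failure
# (SIP failure response or timeout); a call answered with 200 OK that then
# plays an announcement in-band never triggers failover. Names are examples.

CARRIERS = ["carrier-a", "carrier-b"]  # priority order, hypothetical names
FAILOVER_CODES = {408, 480, 500, 502, 503, 504}


def route_call(responses: dict[str, int]) -> str:
    """Return the carrier that handled the call, given each carrier's SIP response."""
    for carrier in CARRIERS:
        code = responses.get(carrier)
        if code is not None and code not in FAILOVER_CODES:
            return carrier  # the call progressed here, so routing stops
    raise RuntimeError("all carriers failed or were unreachable")


# The incident case: carrier-a answers with 200 OK and plays the barring
# message, so the call never reaches carrier-b on the first attempt.
print(route_call({"carrier-a": 200, "carrier-b": 200}))
```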

Once again we apologise for any inconvenience caused and have been advised this script will not be run again.

LON01 – POP-C002.017 – 06/04/2016 – 10:00 – 12:00 – *Emergency Work* *COMPLETE*

Further to our NOC notice posted on 31/03/2016 in respect of power at one of our POPs, “C002.017 – Goswell Road”, our network monitoring has alerted us to another loss of power on the secondary feed at this cab.

Structured engineers are attending site tomorrow morning within the above maintenance window to install equipment that will allow us to isolate the faulty hardware without further risk to services provided via this cabinet going forward. The work will involve the replacement of various power distribution hardware. At this time the POP is operating on its redundant power feed and all services have been re-routed where possible. Ethernet services provided by this cab are considered “at risk” until power has been fully restored. Transit services will automatically re-route in the event of a failure.

Due to the nature of the works, engineers will be working within a live cabinet. No issues are expected and extreme care will be taken while the works are undertaken.

Further updates will be provided in the morning.

UPDATE01 – 10:10
Engineers have started work.

UPDATE02 – 10:55
The new power distribution hardware has been installed and engineers will begin to power up the affected hardware one device at a time. Level3 are on site with us in the event of another problem.

UPDATE03 – 11:10
All hardware has been powered up and the faulty device found (albeit with a bang and fire). Unfortunately the failed hardware is the redundant power supply on the network core within this rack. Redundant hardware of this size is not kept on site and we are currently in the process of sourcing another unit. Further updates to follow.

UPDATE04 – 12:02
Further tests have been carried out and have confirmed the PSU has failed. Engineers have removed a power supply from another unit in our 4th floor suite and installed it within the 2nd floor POP to confirm it is undamaged by the recent events. Further updates to follow.

UPDATE05 – 12:45
Engineers have ordered a same-day replacement from one of our suppliers and will remain on site to fit and commission the new hardware on arrival.

UPDATE06 – 15:25
Engineers remain on site awaiting hardware. The ETA was 15:30; however, this has been pushed back due to a crash on the A3.

UPDATE07 – 15:25
Engineers remain on site, having little fun. We have been advised the part is now in London and will be with us by 17:30.

UPDATE08 – 17.35
The replacement part has arrived on site.

UPDATE09 – 17.50
Despite our best efforts, our supplier has shipped the wrong part! Discussions with them have concluded with no further delivery options today. New hardware has been sourced and is being made available to site for a timed delivery. Engineers are attending again in the morning to swap out the (hopefully) correct new part. We do apologise for the delay in getting this resolved, but want to remind customers who route via this device that it is still operating as expected on its redundant supply.

UPDATE10 – 09:30 – 07/04/2016
Engineers have returned to site and are awaiting delivery of the new PSU.

UPDATE11 – 09:46 – 07/04/2016
Delivery update to advise the hardware will be on site before 11am.

UPDATE12 – 10:33 – 07/04/2016
Hardware has arrived on site and engineers have confirmed it is the correct unit this time.

UPDATE13 – 10:47 – 07/04/2016
Engineers have installed the new power supply and confirmed its operation within the core. A series of load tests have been conducted with normal operation observed.

UPDATE14 – 11:00 – 07/04/2016
We are happy the new power supply is operating as expected; however, we will continue to monitor its operation for the next few hours. The site is no longer classed as “at risk” and this issue will now be closed off.
We apologise once again for the delay this has taken to resolve and will be reviewing our internal procedures on hardware spares of this nature at Goswell Road.

LON01 – C002.017 – 31/03/2016 – 15:00 – *At Risk* *RESOLVED*

Our network monitoring has alerted us to a power issue within one of our Cross-Connect / POP cabinets on the 2nd floor at Goswell Road. Our 4th floor suite is unaffected; however, all services routing via C002.017 should be considered at risk until this is resolved. We are currently engaged with Level3 and will provide updates shortly.

UPDATE01 – 15:33
All devices within the affected cab confirm that they have lost one side of their power feed. We are still working with Level3.

UPDATE02 – 17:00
Level3 have advised this is being assigned to a local engineer for further investigation.

UPDATE03 – 18:57
Level3 have advised this has been tasked to a field engineer to investigate. No local alarms have been raised on the site so we suspect this is a local power issue at the cab.

UPDATE04 – 21:20
The field engineer has advised a local fuse had blown on our B-side power feed (external fuse). We have granted permission for this to be replaced and for Level3 to conduct a series of tests to see if the fault was caused by a hardware PSU failure. No issues are expected; however, the POP should be classed as “high risk” for the duration.

UPDATE05 – 21:40
Tests have been completed and no fault was found with any of our hardware. The field engineer suspects a weak fuse on that PSU. All redundant power has been restored and the site is no longer classed as “at risk”.

UPDATE06 – 22:30
Monitoring has remained stable for the duration and we are happy no further issues are expected; however, Level3 remain on standby. As this was not a service-affecting fault and was within SLA, no RFO will be provided.

LON01 – 26/02/2016 – 15:40 – CORE01.LON01 *INCIDENT* *RESOLVED*

We are aware CORE01.LON01 within our network has suffered a software reload affecting all services directly connected to it. Other parts of the network are unaffected.

UPDATE01 – 15:45
CORE01.LON01 has now recovered and we are reviewing the crash logs for the root cause. We apologise for any inconvenience caused.

UPDATE02 – 16:15
An incident report is now available HERE

Virgin Media Based Circuits – 16/02/2016 – Incident *Resolved*

We are currently aware of an issue affecting a number of on-net Virgin Media based circuits. We are currently working with our supplier to investigate the issue further.

Services with backup media such as DSL will have re-routed.

UPDATE01 – 13:28
We have been advised that Virgin have identified this as an NTU failure impacting approximately 200 circuits. Work is being undertaken to restore the service.

UPDATE02 – 13:37
Virgin Media are dispatching engineers to the point of presence at InterXion.

UPDATE03 – 15:40
Virgin engineers are continuing to investigate the reported fault to isolate the issue further.

UPDATE04 – 16:27
We have seen services restore; they will be considered at risk until Virgin has closed off the fault with their full findings.

UPDATE05 – 17:11
Virgin have confirmed service has been restored via a temporary workaround to bypass a faulty fibre patch at the Poplar Metnet. Virgin Media will schedule future planned maintenance to fix the original fault. Services should no longer be considered at risk; however, we will update further once the planned maintenance to complete the repair has been scheduled.