LON01 – EasyXDSL – 10/06/2016 – 16:38


We are aware one of our L2TP tunnels to our supplier has reset and dropped several hundred DSL sessions. Affected circuits have re-routed and are back online. Any customers still experiencing problems are advised to power down their hardware for 20 minutes.

Initial reports advise this is / was a BT Wholesale problem.

We apologise for any inconvenience caused.

LON01 – EasyIPT – 06/05/2016 – 15:50 *Resolved*


We are aware some customers may have received an automated message advising they were unable to dial out because their account had been suspended. This message did not originate from our network, and we have re-routed calls while investigations are ongoing with that carrier.

We apologise for any inconvenience caused.

UPDATE01 – 16:10 – FINAL

We have received an update from our carrier advising that a billing script was run against wholesale interconnects (such as ours) which had an unexpected effect on call barring restrictions, preventing all but emergency calls from progressing. As the calls were routing correctly to our carrier, with no normal failure message at a “network level”, our systems did not exclude that carrier on the first attempt. Callers who waited until the end of the message would have seen the call connect.
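To illustrate why the barring announcement did not trigger failover, here is a minimal sketch; it is not our production routing logic, and the response codes and function names are hypothetical. A carrier is only skipped when it returns an explicit network-level failure, so a barred call that still answers with an announcement looks like a successful call to the routing layer.

FAILURE_RESPONSES = {408, 500, 503}  # network-level failures that trigger re-routing (hypothetical values)

def route_outbound(call, carriers, send_invite):
    # Try each carrier in order; only move on to the next carrier when the
    # previous one returns an explicit failure response.
    for carrier in carriers:
        response = send_invite(call, carrier)  # hypothetical signalling helper
        if response in FAILURE_RESPONSES:
            continue                 # carrier excluded, try the next one
        return carrier, response     # call treated as answered, even if the far end
                                     # only plays a barring announcement
    raise RuntimeError("no carrier available for this call")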

Once again we apologise for any inconvenience caused. We have been advised this script won't be run again.

LON01 – POP-C002.017 – 06/04/2016 – 10:00 – 12:00 – *Emergency Work* *COMPLETE*


Further to our NOC notice posted on 31/03/2016 regarding power at one of our POPs, “C002.017 – Goswell Road”, our network monitoring has alerted us to another loss of power on the secondary feed at this cab.

Structured engineers are attending site tomorrow morning within the above maintenance window to install equipment that will allow us to isolate the faulty hardware without further risk to services provided via this cabinet going forward. The work will involve the replacement of various power distribution hardware. At this time the POP is operating on its redundant power feed and all services have been re-routed where possible. Ethernet services provided by this cab are considered “at risk” until power has been fully restored. Transit services will automatically re-route in the event of a failure.

Due to the nature of the works, engineers will be working within a live cabinet. No issues are expected and extreme care will be taken while the works are undertaken.

Further updates will be provided in the morning.

UPDATE01 – 10:10
Engineers have started work.

UPDATE02 – 10:55
The new power distribution hardware has been installed and engineers will begin to power up the affected hardware one device at a time. Level3 are on site with us in the event of another problem.

UPDATE03 – 11:10
All hardware has been powered up and the faulty device has been found (albeit with a bang and fire). Unfortunately the failed hardware is the redundant power supply on the network core within this rack. Redundant hardware of this size is not kept on site and we are currently in the process of sourcing another unit. Further updates to follow.

UPDATE04 – 12:02
Further tests have been done and confirm the PSU has failed. Engineers have removed a power supply from another unit in our 4th floor suite and installed it within the 2nd floor POP to confirm it is undamaged by the recent events. Further updates to follow.

UPDATE05 – 12:45
Engineers have ordered a same-day replacement from one of our suppliers and will remain on site to fit and commission the new hardware on arrival.

UPDATE06 – 15:25
Engineers remain on site awaiting hardware. ETA was 15:30, however this has been pushed back due to a crash on the A3.

UPDATE07 – 15:25
Engineers remain on site, having little fun. We have been advised the part is now in London and will be with us by 17:30.

UPDATE08 – 17:35
The replacement part has arrived on site.

UPDATE09 – 17:50
Despite our best efforts, our supplier has shipped the wrong part! Discussions with them have concluded with no further delivery options today. New hardware has been sourced and is being made available to site for a timed delivery. Engineers are attending again in the morning to swap out the (hopefully) correct new part. We do apologise for the delay in getting this resolved; however, we want to remind customers who route via this device that it is still operating as expected on its redundant supply.

UPDATE10 – 09:30 – 07/04/2016
Engineers have returned to site and are awaiting delivery of the new PSU.

UPDATE11 – 09:46 – 07/04/2016
Delivery update to advise the hardware will be on site before 11am.

UPDATE12 – 10:33 – 07/04/2016
Hardware has arrived on site and engineers have confirmed it is the correct unit this time.

UPDATE13 – 10:47 – 07/04/2016
Engineers have installed the new power supply and confirmed its operation within the core. A series of load tests have been conducted with normal operation observed.

UPDATE14 – 11:00 – 07/04/2016
We are happy the new power supply is operating as expected; however, we will continue to monitor its operation for the next few hours. The site is no longer classed as “at risk” and this issue will now be closed off.
We apologise once again for the time this has taken to resolve and will be reviewing our internal procedures on hardware spares of this nature at Goswell Road.

LON01 – C002.017 – 31/03/2016 – 15:00 – *At Risk* *RESOLVED*


Our network monitoring has alerted us to a power issue within one of our Cross-Connect / POP cabinets on the 2nd floor at Goswell Road. Our 4th floor suite is unaffected; however, all services routing via C002.017 should be considered at risk until this is resolved. We are currently engaged with Level3 and will provide updates shortly.

UPDATE01 – 15:33
All devices within the affected cab confirm that they have lost one side of their power feed. We are still working with Level3.

UPDATE02 – 17:00
Level3 have advised this is being assigned to a local engineer for further investigation.

UPDATE03 – 18:57
Level3 have advised this has been tasked to a field engineer to investigate. No local alarms have been raised on the site so we suspect this is a local power issue at the cab.

UPDATE04 – 21:20
The field engineer has advised a local fuse had blown on our B-side power feed (external fuse). We have granted permission for this to be replaced and for Level3 to conduct a series of tests to see if the fault was caused by a hardware PSU failure. No issues are expected; however, the POP should be classed as “high risk” for the duration.

UPDATE05 – 21:40 Tests have been completed and no fault was found with any of our hardware. The field engineer suspects a weak fuse on that PSU. All redundant power has been restored and the site is no longer classed as “at risk”.

UPDATE06 – 22:30 Monitoring has remained stable for the duration and we are happy no further issues are expected; however, Level3 remain on standby. As this was not a service-affecting fault and was within SLA, no RFO will be provided.

LON01 – 26/02/2016 – 15:40 – CORE01.LON01 *INCIDENT* *RESOLVED*


We are aware CORE01.LON01 within our network has suffered a software reload affecting all services directly connected to it. Other parts of the network are unaffected.

UPDATE01 – 15:45
CORE01.LON01 has now recovered and we are reviewing the crash logs for the root cause. We apologise for any inconvenience caused.

UPDATE02 – 16:15
An incident report is now available HERE

Virgin Media Based Circuits – 16/02/2016 – Incident *Resolved*


We are aware of an issue affecting a number of on-net Virgin Media based circuits and are currently working with our supplier to investigate further.

Services with backup media such as DSL will have re-routed.

UPDATE01 – 13:28
We have been advised that Virgin have identified this as an NTU failure impacting approximately 200 circuits. Work is being undertaken to restore the service.

UPDATE02 – 13:37
Virgin Media are dispatching engineers to the point of presence at InterXion.

UPDATE03 – 15:40
Virgin engineers are continuing to investigate the reported fault to isolate the issue further.

UPDATE04 – 16:27
We have seen services restore; they will be considered at risk until Virgin has closed off the fault with their full findings.

UPDATE05 – 17:11
Virgin has confirmed service has been restored via a temporary workaround to bypass a faulty fibre patch at the Poplar Metnet. Virgin Media will schedule future planned maintenance to fix the original fault. Services should no longer be considered at risk; however, we will update further once the planned maintenance to complete the repair has been scheduled.

BT Outage – DSL Authentication Issue – 02/02/2016 – 16:25


We are aware of the national BT DSL outage that is sweeping social media. In addition, we are now starting to see wholesale DSL services affected, whereby if a connection loses sync or PPP it won't recover. Further diagnostics show that BT are not delivering the session to us correctly; this is also affecting other wholesale ISPs.

LON01 – EasyIPT – 28/01/2016 – 15:55 – Voice Disruption *Resolved*


We are aware of an issue affecting inbound and outbound call routing on our network that started at 15:55 and are investigating as a matter of urgency. Initial diagnostics and reports show this to be an upstream carrier issue.

UPDATE01 – 16:10 Due to the nature of the outage, automatic outbound re-routing to an alternative carrier has been intermittent, as the carrier suffering the issue has been going on and offline. We have therefore manually re-routed outbound calls and can see them terminating OK. We are continuing to investigate.

UPDATE02 – 16:15 Further investigations show our BGP session to the affected upstream carrier as being intermittent. While reviewing the link we have received notice from one of our fibre providers that they have suffered a fibre issue in the Docklands area and this is causing their network to re-converge. Outbound calls continue to terminate correctly; however, inbound calls via this upstream provider may be intermittent.

UPDATE03 – 16:45 We have seen the link stabilise and inbound calls from the carrier terminating OK. We are continuing to route outbound calls over alternative carriers as a precaution.

LON01 – 04/01/2016 – 11:50 – EasyXDSL – PPP Sessions *Resolved*


We are aware that some DSL PPP sessions have dropped and are not re-establishing. We are working with our wholesale supplier to try and resolve the situation.

UPDATE01 – 12:20

We have tracked down the fault to circuits that are provided to us as “off-net” circuits from Openreach. We have raised an escalation with our supplier and are working to resolve this as a priority.

UPDATE02 – 12:25

We are starting to see off-net circuits re-establish. We have not yet officially received a clear from our wholesale provider; however, we are pushing for an update.

UPDATE03 – 12:38

We have spoken to our wholesale NOC engineers, who have advised that they added a new DSL gateway into their network pool following a failure of an existing device. Unfortunately this device was not announced to our network by our supplier within our DSL BGP session, therefore we were unable to route PPP responses back to this device. The device has been removed from our supplier's device pool and services are terminating back to us via known L2TP gateways. Service has now returned to normal. We apologise for any inconvenience caused.
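As a hedged illustration of the return-path problem (the addresses and prefixes below are invented, and this is not our supplier's or our own provisioning code), an L2TP gateway is only reachable from our side if its address falls inside a prefix learned over the DSL BGP session:

import ipaddress

# Prefixes learned over the DSL BGP session (hypothetical values).
announced_prefixes = [
    ipaddress.ip_network("192.0.2.0/26"),
    ipaddress.ip_network("198.51.100.0/27"),
]

def gateway_reachable(gateway_ip):
    # A gateway outside every announced prefix has no return route,
    # so PPP/L2TP responses to it cannot be delivered.
    addr = ipaddress.ip_address(gateway_ip)
    return any(addr in prefix for prefix in announced_prefixes)

print(gateway_reachable("192.0.2.10"))   # True  - known L2TP gateway
print(gateway_reachable("203.0.113.5"))  # False - a new, unannounced gateway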

LON01 – 17/11/2015 – 13:50 – Leased Lines – *Vodafone incident* *Resolved*


We are currently aware of an issue affecting a number of on-net and off-net Vodafone based circuits.

We will continue to work with our suppliers and publish updates as they become available.

UPDATE 01 – 12:10
We have seen sessions restore; however, we are still awaiting an update from Vodafone.

UPDATE 02 – 12:40
Telecity have confirmed that this incident was caused by an isolated power issue. Telecity engineers are working to ensure the underlying issue is fully resolved. Until we receive confirmation from Telecity that all work has completed, previously affected services should be considered as at risk.

Further updates will be provided as they are made available from our suppliers.

UPDATE 03 – 14:57
We have been advised via another supplier of the following:

We are aware of an issue at Telecity Sovereign House. The issue has been confirmed to be a wide-spread power outage affecting the ground floor up to, and including the 4th floor. The power appears to be currently intermittently available and engineers are on site investigating the source of the problem. At this time, the DC are unable to advise the cause or why resilient systems failed to take over.