Easy xDSL – Multiple Exchange Outages *RESOLVED*

We have been made aware of multiple exchange outages in Sussex affecting 20CN services across multiple providers. At this time no ETA has been given and we are awaiting further information from our suppliers.

We apologize for any inconvenience caused and will provide further updates as they become available.

UPDATE 01 – 08:34
We have been advised that hardware at a parent exchange has been replaced and services should start to restore. Any clients still having issues are advised to power cycle their equipment.

TalkTalk Circuits – 15/07/2015 – 09:45 *RESOLVED*

Our network monitoring has alerted us to increased latency and packet loss on some TalkTalk EFM circuits. We have raised a fault with our supplier and are currently awaiting their response.
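
For illustration, the kind of check behind an alert like this can be sketched as a simple latency and loss probe. The target address and thresholds below are hypothetical placeholders, not details of our monitoring platform; the sketch assumes a Linux host with the standard iputils ping tool.

#!/usr/bin/env python3
# Illustrative sketch only: a minimal latency / packet-loss probe of the kind a
# monitoring platform might run. Target address and thresholds are placeholders.
import re
import subprocess

TARGET = "192.0.2.1"        # RFC 5737 documentation address, placeholder only
COUNT = 20                  # probes per sample
LOSS_THRESHOLD = 5.0        # percent
LATENCY_THRESHOLD = 60.0    # average RTT in milliseconds

def probe(target: str, count: int = COUNT):
    """Run ping and return (loss_percent, avg_rtt_ms)."""
    out = subprocess.run(["ping", "-c", str(count), "-q", target],
                         capture_output=True, text=True, check=False).stdout
    loss_m = re.search(r"([\d.]+)% packet loss", out)
    loss = float(loss_m.group(1)) if loss_m else 100.0
    rtt_m = re.search(r"= [\d.]+/([\d.]+)/", out)          # "rtt min/avg/max/mdev = ..."
    avg = float(rtt_m.group(1)) if rtt_m else float("inf")  # no replies at all
    return loss, avg

if __name__ == "__main__":
    loss, avg = probe(TARGET)
    state = "ALERT" if loss > LOSS_THRESHOLD or avg > LATENCY_THRESHOLD else "OK"
    print(f"{state}: {TARGET} loss={loss}% avg_rtt={avg}ms")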

UPDATE01 – 10:55
Our network monitoring is showing a reduction in packet loss and latency; however, we have not yet had an official update.

UPDATE02 – 11:50
Our network monitoring is again showing latency increasing along with packet loss. We have chased for an update.

UPDATE03 – 13:13
Our supplier has advised this is being caused by an overloaded transit link due to Windows updates. We have escalated this response to their management, as this level of service should not be contended at the wholesale level. We are awaiting their response.

UPDATE04 – 15:00
Latency has returned to normal and packet loss has cleared. We are still awaiting an official clear and a response to our escalation.

LON01 – 12/06/2015 *UK Routing Issue* *RESOLVED*

We are aware of an issue affecting connectivity to multiple networks within the UK. Initial testing had shown this to be a problem with Level3; however, they are advising this is part of a wider problem within the UK.

We are currently monitoring and will look at re-routing traffic if this continues; however, this would involve shutting down several of our peers and would affect traffic currently flowing via unaffected links. This would be a last-resort option.

Once we know more we will advise.

UPDATE 01 – 11:11

Level3 have updated us to advise they are now tracking a global routing issue. This now seems to be isolated to Level3; however, it will be affecting any network that peers with Level3. Re-routing traffic is still an option, however it may have no impact for networks that rely on Level3.

UPDATE 02 – 11:36

Our network monitoring has alerted us to routing failures to common websites such as the BBC, Facebook & Google. We have taken the decision to shut down our Level3 peering with Adapt. While this has had some positive impact, 3rd party networks that use Level3 will still be experiencing issues.
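
For context, administratively shutting down a BGP peering session like this is a one-line change under the BGP process on the edge router. The sketch below is purely illustrative: the hostname, credentials, ASN and neighbour address are made-up placeholders, and it uses the third-party Netmiko library rather than describing how we actually pushed the change.

#!/usr/bin/env python3
# Illustrative sketch: administratively shutting down a BGP neighbour so that
# traffic re-routes via the remaining transit/peering links. Hostname,
# credentials, ASN and neighbour IP are hypothetical placeholders.
from netmiko import ConnectHandler   # third-party library: pip install netmiko

EDGE_ROUTER = {
    "device_type": "cisco_ios",
    "host": "edge01.example.net",     # placeholder hostname
    "username": "noc",                # placeholder credentials
    "password": "********",
}
LOCAL_ASN = 64512                     # private ASN, for illustration only
PEER_IP = "198.51.100.1"              # RFC 5737 documentation address

def shutdown_peer() -> None:
    conn = ConnectHandler(**EDGE_ROUTER)
    # "neighbor <ip> shutdown" under the BGP process tears the session down
    # cleanly; routes learned from the peer are withdrawn and BGP re-converges.
    output = conn.send_config_set([
        f"router bgp {LOCAL_ASN}",
        f"neighbor {PEER_IP} shutdown",
    ])
    print(output)
    conn.disconnect()

if __name__ == "__main__":
    shutdown_peer()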

UPDATE 03 – 11:55

Level3 have advised that services are starting to be restored. We have re-enabled our Level3 peer via Adapt and have noticed they also took the decision to shut down their link to Level3. Traffic is currently routing via Cogent through Adapt.

UPDATE 04 – 20:00

Outage report from a 3rd party monitoring network: http://www.bgpmon.net/massive-route-leak-cause-internet-slowdown/

Internal Systems *RESOLVED*

We are currently experiencing issues with some internal systems that are preventing various customer service / support operations. Engineers are en route to the data centre; however, local rail cancellations and disruptions are delaying them from attending.

Please also note the above is affecting email delivery, so we are unable to process support tickets. We advise calling for any urgent issues.

Apologies for any inconvenience this may cause. Once we know more we will update this notice.

UPDATE 01 – 12:00
Engineers have arrived on site.

UPDATE 02 – 12:30
Internal systems have been restored.

LON01 – 06-05-2015 – Adapt / Level3 Transit Outage *Resolved*

At 10:01 our network monitoring detected a loss of IP transit to our primary carrier Adapt / Level3. Engineers have shut down this link and traffic is routing via alternative carriers. We will continue to monitor and update as more information becomes available.

We apologize for any inconvenience caused.

UPDATE 01 – 11:01
We are continuing to try and contact Adapt for an update as there are no issues with the Level3 side of the link. We believe from external testing there is a routing failure within the Adapt network.

Traffic is continuing to route via other links; however, due to the volume of traffic being observed, latency may be higher than normal. Should this continue, we will look to re-route traffic via our 3rd interconnect on CORE01, which has a higher-capacity link.

UPDATE 02 – 11:18
We have been advised that the issue has been resolved, however we are seeking further confirmation before we bring the peer back in to service.

UPDATE 03 – 11:18
We have spoken to the senior network architect at Adapt who has confirmed service has been restored and we have re-enabled the session. We are observing normal traffic flow, however will continue to monitor the link. Once an RFO has been published by Adapt we will provide this.

LON01 – Entanet Transit – 03-03-2015 *COMPLETE*

Our network monitoring has alerted us to a failure of a link within our Entanet port channel that we use for transit to and from Entanet at Goswell Road (LON01). Existing links have taken over the traffic load and no service impact has been seen. Diagnostic work is underway to bring the link back into service; at the moment a fibre break is suspected.

UPDATE 01 – 15:30
Further work with Entanet shows there is no light on their side of the link. No work has taken place in the Entanet cabinet or in our suite. Other links in the same bulk are operating without the same failure, so this may be a faulty transceiver.

UPDATE 02 – 15:45
Further diagnostics show that, despite our transceiver reporting the link as “Not Connected”, the transceiver itself may be at fault, as it is not being reported within the hardware inventory for that line card. Real-time logs do not show the device as having been taken out of service by the IOS, and we believe a line card reset may resolve the issue. However, this transceiver is currently connected to the active supervisor for that core. The device has two supervisors for redundancy and the load will need to be transferred before a reload can take place. A new emergency maintenance window will now be opened, as this may result in several seconds of switching failure.

UPDATE 03 – 20:05
The module has been restarted; however, the problem remains. We believe the output given by the Cisco IOS is misleading, as it fails to list the device rather than reporting dB levels of 0. We have returned to the fibre break theory and have asked our vendor Level3 to confirm if this is the case.

UPDATE 04 – 04/03/2015 – 09:00
A structured cabling engineer is being sent to site to locate and test the fibre.

UPDATE 05 – 06/03/2015 – 10:00
Engineers discovered this cross connect had been incorrectly unplugged in a 3rd party cabinet. This won’t be an issue going forward as we have our own bulk fibre and all critical circuits will be moved to this.

EasyxDSL Cannot Connect Issues 12/11/2014 – 09:00 *RESOLVED*

We are currently experiencing an issue that is preventing some DSL users from connecting to our service and/or causing them to constantly reconnect. Our initial investigations indicate that this is an Entanet issue and we are working with them to resolve it as soon as possible. We will update with more information as it becomes available.

— Update 09:17 —
Entanet have identified a hardware failure and advise that engineers have been dispatched to replace the faulty hardware. We are still chasing for an ETA.

— Update – 09:21 —
Any bonded DSL circuits may see reduced capacity, but service should remain stable as we use two wholesale ISPs with our own interconnects.

— Update – 13:21 —
The issue is still ongoing; Entanet have advised engineers are still en route to replace the hardware. Due to the re-routing of traffic you may experience slow speeds and/or packet loss. Please accept our apologies for any inconvenience caused and we will continue to provide updates as they become available.

— Update – 14:01 —
Entanet have advised engineers have arrived on site to replace the faulty hardware.

LON01 – C4L Frontier Interconnect *Resolved*

Our network monitoring has just alerted us to a loss of our C4L layer2 circuit to Frontier. Some active VoIP calls over this carrier / link will have dropped.

C4L have just issued a Connectivity Issue alert on their NOC.

While the interconnect has been restored, given historic outages on this link we are manually re-routing all outbound traffic away from Frontier to avoid any repeat impact on outbound traffic flow caused by the C4L issue.

If the issue recurs we will drop the circuit completely, which will result in our network re-routing ALL VoIP traffic over alternative links. This will have an impact while BGP re-converges and is a last-resort option.

We apologize for any inconvenience this may have caused.

UPDATE01 16:01
We have been contacted by our account manager at C4L to advise this is once again an STP problem. A complaint has been raised with C4L and we continue to route outbound calls via an alternative carrier. Inbound remains via the affected link and will be transferred if the link suffers another drop; as advised, this would cause a loss of service while BGP re-converges.

UPDATE02 16:45
The link has remained stable since the initial alert and inbound traffic has remained active without any further loss. C4L has advised that primary POPs have been restored. We will re-enable this link for outbound traffic later tonight.

UPDATE03 18:00
Outbound traffic has now been re-enabled on this link and we are monitoring. Once an RFO is provided we will issue it via the NOC.

LON01 – Frontier Interconnect *Resolved*

Our network monitoring has alerted us to a loss of BGP peering with Frontier Systems. Initial calls to our Layer2 provider for this link (C4L) advise they are aware of a spanning tree issue within their network.

Traffic has automatically been re-routed around the link and no service disruption is being seen.

UPDATE01 – 10:20 – 05/10/2014
After a call to C4L following our link still being offline, we were advised our VLAN had not been rebuilt on the trunk port where STP had been disabled. The service has now been restored and we have escalated this to our account manager.
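
For context, the VLAN "not being rebuilt on the trunk" means the trunk port's allowed-VLAN list no longer carried our VLAN, so the layer-2 path stayed down even after the STP work had finished. A rough sketch of the kind of check and fix involved is below; the device details, interface name and VLAN ID are hypothetical placeholders and the Netmiko library is used purely for illustration.

#!/usr/bin/env python3
# Illustrative sketch: confirming a VLAN is carried on a trunk port and adding
# it back to the allowed list if it is missing. All device details, the
# interface name and the VLAN ID are hypothetical placeholders.
from netmiko import ConnectHandler   # third-party library: pip install netmiko

SWITCH = {
    "device_type": "cisco_ios",
    "host": "agg01.example.net",      # placeholder hostname
    "username": "noc",                # placeholder credentials
    "password": "********",
}
TRUNK_PORT = "GigabitEthernet0/1"     # placeholder interface
VLAN_ID = "200"                       # placeholder VLAN

def ensure_vlan_on_trunk() -> None:
    conn = ConnectHandler(**SWITCH)
    # Show which VLANs are allowed and forwarding on each trunk.
    print(conn.send_command("show interfaces trunk"))
    # Re-add the VLAN to the trunk's allowed list ("add" avoids overwriting
    # the rest of the existing list).
    conn.send_config_set([
        f"interface {TRUNK_PORT}",
        f"switchport trunk allowed vlan add {VLAN_ID}",
    ])
    conn.disconnect()

if __name__ == "__main__":
    ensure_vlan_on_trunk()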

EasyXDSL – Entanet WBC & IPSC Cannot Connect Issues – 30/09/2014 – 20:10 *RESOLVED*

We are aware of a DSL issue affecting our wholesale provider Entanet. This is not localised to any one part of the UK; as a result, affected users are not able to connect to the internet.

Customers on our EasyBOND platform with dual ISP reliance will be seeing a reduced service.

Once more information becomes available we will update this notice.

UPDATE01 – 20:18
We have received an update to advise Entanet are seeing an issue with telehouse-east.core.enta.net and are working to resolve the issue.

UPDATE02 – 20:25
We have seen all DSL tails return to service but are awaiting official confirmation. We have also requested an RFO from Entanet.

UPDATE03 – 03/10/2014
RFO can be found HERE