LON01 – EasyXDSL – 02/12/2016 – 16:20


Our network monitoring has alerted us to a large number of DSL circuits going off-line.

Initial checks show this to be a supplier outage and we are already engaged with them.

UPDATE01 – 17:00
We have chased our suppliers for an update; they advise they are still working to isolate the fault.

UPDATE02 – 17:37
We are continuing to chase for an update. We have been advised this has been made more complex due to another possible wholesale fault.

We have confirmed the fault is not within our network and this now appears to only be affecting connections in the South of England.

UPDATE03 – 19:00

Our supplier has advised there is a line card failure within their backhaul network. Replacement hardware is required and an ETA of 21:30 has been placed on service restoration for affected users.

UPDATE04 – 21:25
We have seen services restore and have requested an RFO from our supplier. We apologise for the issues caused.

TR069 – Routers


We are aware a very small proportion of new ZyXel AMG1202-T10B routers supplied by us are subject to the TR069 exploit that is currently in the media. As with the mainstream providers, our network engineers have been able to devise a resolution.

Clients with a ZyXel AMG1202-T10B who currently have no internet access are advised to contact us during working hours so we can log your case; these cases are dealt with manually as they require engineering resources.

LON01 – Leased Lines – BT Wholesale – 01/12/2016 – 09:37


Our network monitoring has alerted us to a failure of several BT Wholesale provided leased lines. Initial calls to them advise they have a failure of a core network device. Engineers have been dispatched to site.

Circuits with a backup connection will have re-routed.

UPDATE01 – 10:00
We have observed a number of connections restore, with some still offline. We are chasing our supplier for an update as they advise this looks to be power related at their rack.

UPDATE02 – 11:00
We have been advised this has been resolved and are awaiting an RFO.

LON01 – EasyIPT – 18/11/2016 – 13:10


We are aware of a problem with one of our media gateways affecting inbound and outbound calls. We are currently investigating as a matter of priority.

UPDATE01 – 13:13
Engineers have found the media gateway to be suffering from a sudden high CPU load. We have been able to stabilise the system remotely and are currently monitoring.

UPDATE02 – 13:20
We have seen a repeat of high CPU demand and are in the process of isolating the service.

UPDATE03 – 13:25
The affected service has been identified and the root fault isolated, with remedial works planned over the weekend. We apologise for the issues caused.

Additional media gateways had already been planned for deployment over December to provide additional redundancy; this will now be brought forward.

UPDATE04 – 15:00
We are still seeing some calls fail to route due to stuck sessions on customer trunks. This was caused by the SQL service being overloaded and log entries being missed. Work is being planned for tonight to reload the affected services.

UPDATE05 – 20:00
Work has been undertaken to stabilise the platform.

UPDATE06 – 21/11/2016 12:00
Engineers have completed the install of additional hardware and a migration plan will be designed in due course.

LON01 – EasyXDSL – 17/11/2016 – 10:58 *At Risk*


Our network monitoring has alerted us to CPU errors being generated on lns01.dsl.structuredcommunications.co.uk

We have raised a ticket with our hardware vendor for them to investigate and advise. The device remains in operation and routing data as expected; however, it should be classed as "at risk" during this time due to the possibility of an unexpected failure.

Other LNSs within the pool are unaffected. Should the above LNS fail, circuits will automatically re-terminate on another device.

UPDATE01 – 11:30
Our hardware vendor has reviewed the logs we have provided and concluded the device could be suffering from a physical hardware issue. We are looking at options for replacing this device.

UPDATE02 – 11:52
While options are being reviewed to replace this device, we have taken the decision to remove it from the active termination pool. This means it will not accept any new PPP connection attempts; however, it will continue to service existing established ones.
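For illustration only, the sketch below shows the general shape of this change: a hypothetical pool selector stops offering the drained LNS to new PPP connection attempts while established sessions stay where they are. The additional LNS hostnames and the selection logic are assumptions, not our actual provisioning code.

    # Hypothetical sketch of draining an LNS from the active termination pool.
    # New PPP sessions are steered to the remaining devices; existing sessions
    # are left alone and only move if their LNS actually fails.
    from itertools import cycle

    LNS_POOL = [
        "lns01.dsl.structuredcommunications.co.uk",  # drained during this incident
        "lns02.dsl.structuredcommunications.co.uk",  # assumed hostname
        "lns03.dsl.structuredcommunications.co.uk",  # assumed hostname
    ]
    DRAINED = {"lns01.dsl.structuredcommunications.co.uk"}

    _rotation = cycle(LNS_POOL)

    def pick_lns_for_new_session() -> str:
        """Return the next non-drained LNS for a new PPP connection attempt."""
        for _ in range(len(LNS_POOL)):
            candidate = next(_rotation)
            if candidate not in DRAINED:
                return candidate
        raise RuntimeError("no LNS available for new sessions")

Established circuits keep terminating on their current LNS; only if the drained device failed would they re-terminate elsewhere, as described above.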

UPDATE03 – 13:05
Replacement hardware has been agreed by the vendor. We will advise of a maintenance window in due course.

UPDATE04 – 23/11/2016
The LNS was replaced without issue and is now back in service.

LON01 – EasyIPT – 12/10/2016 – 10:45 *Inbound Calls*


We are aware one of our upstream providers is suffering a network issue affecting a small number of 020 prefixed inbound calls routed via them.

This issue has been raised with the respective team and they are dealing with it as a priority.

We apologise for any inconvenience caused and will continue to follow this up with them, posting updates as they become available.

UPDATE01 – 11:45

We are starting to see inbound calls connect again from the affected 020 numbers. We have not been officially advised this has been resolved but services from this carrier are starting to restore.

LON01 – EasyIPT – 13/09/2016 – 10:07 *Outbound Calls*


We are aware of a problem with outbound calls. This has been isolated to an upstream carrier, which has been removed from our carrier pool. Traffic is re-routing as expected.

We apologise for any inconvenience caused.

UPDATE01 – 10:30
Discussions with this upstream carrier have uncovered that our account was suspended incorrectly; it was re-enabled promptly when their mistake was discovered. Further discussions are being had with their management as no notice was given to us at the time.

We have also discovered the upstream carrier was sending back a 404 error to our softswitch. This error code prevented our system from trying another route, as a 404 means "not found", i.e. the number is incorrect.
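As an illustration of why that mattered, the sketch below shows a simplified carrier-failover loop of the kind a softswitch might run. The carrier list, the send_invite() helper and the exact code classification are assumptions; the point is that a 404 is treated as a final answer about the dialled number, so no further route is attempted, whereas a 5xx-style carrier failure would fall through to the next carrier.

    # Simplified illustration of carrier failover driven by SIP response codes.
    # Names and the retry policy are assumptions, not our production routing.
    RETRYABLE = {408, 480, 500, 502, 503, 504}   # treated as carrier trouble: try next
    FINAL = {404, 486, 603}                      # treated as answers about the call itself

    def route_call(number: str, carriers: list[str], send_invite) -> str:
        for carrier in carriers:
            code = send_invite(carrier, number)  # hypothetical helper returning a SIP status
            if code == 200:
                return f"connected via {carrier}"
            if code in FINAL:
                # the incorrectly returned 404 in this incident stopped the search here
                return f"rejected ({code}) by {carrier}"
            # anything retryable (or unknown) falls through to the next carrier
        return "failed on all carriers"

Had the carrier returned a 503 instead, the loop above would simply have moved on to the next route in the pool.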

UPDATE02 – 11:44
We have brought this carrier back into service and, as above, are conducting a review with their management team into their actions.

UPDATE03 – 10:03
We are aware this issue recurred this morning between the hours of 09:30 and 10:00. The same upstream carrier was at fault and we have further escalations in place for a response.

LON01 – CORE03 – 21/08/2016 – 08:33 *At Risk*


Our network monitoring has alerted us to a fault on CORE03 within our Goswell Road network. This fault is a recurrence of an issue identified yesterday that was resolved without impact. Additional logging was added at the time to further assist should it be required.

The issue has been tracked down to the "Ethernet Out of Band Channel" (EOBC) control channel on the device's backplane.

Due to the number of line cards automatically taken out of service by the device, we are currently investigating whether this stems from a common hardware fault, such as the currently active supervisor module.

We diversely route our internal backhaul fibre uplinks across each core to ensure that a single line card failure does not result in an outage. This is currently in operation; however, we have lost a number of links due to the fault, and the device is classed as at risk along with any directly connected equipment.
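As a rough sketch of that design principle (hypothetical data and tooling, not our actual configuration), a check like the one below confirms that each backhaul uplink bundle spans more than one line card, so no single card failure can take the whole bundle down.

    # Illustrative diversity check: each uplink bundle should have members on
    # at least two different line card slots. Bundle names and slot/port
    # numbers are made-up examples.
    uplink_bundles = {
        "core03-backhaul-a": [(1, 1), (3, 1)],   # spans slots 1 and 3: diverse
        "core03-backhaul-b": [(1, 2), (1, 3)],   # both members on slot 1: not diverse
    }

    for bundle, members in uplink_bundles.items():
        slots = {slot for slot, _port in members}
        verdict = "diverse" if len(slots) >= 2 else "NOT diverse (single line card)"
        print(f"{bundle}: line card slots {sorted(slots)} -> {verdict}")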

We are currently reviewing the logs and will update with further information and an action plan as soon as possible.

UPDATE01 – 09:49
After reviewing the logs we have concluded the next step is to swap between the active and standby supervisors within that device. This will cause a brief outage to all services connected to that device. We will monitor the device closely after the change to see if the same issue occurs. This swap has been scheduled for 10:00 today.

UPDATE02 – 10:08
The swap completed as expected; however, despite this supervisor showing OK and passing diagnostics, it failed to fully take the system load and was reverted. We suspect this is now a possible backplane issue on this device. Further updates to follow.

UPDATE03 – 14:08
Further observations have been made and the log files reviewed in depth. At this stage we can advise the backup supervisor within CORE03 has been reporting errors; however, the Cisco IOS listed these as "Non-fatal" and as such they were not flagged up within our monitoring platform.

We suspect a fault had occurred on the standby supervisor which had not been picked up by the device's internal diagnostics until we brought the card fully into operation. We suspect this fault was having an impact on EOBC reporting and thus causing line cards to be disabled. As the previous fault took 24 hours to resurface we are continuing to monitor. An emergency maintenance window is also going to be scheduled for CORE03 to replace the suspected failed card, along with an IOS update.
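Because the gap in this case was that messages marked "Non-fatal" never reached our monitoring, one simple follow-up is to alert on supervisor and EOBC related log lines regardless of severity. The sketch below is a hypothetical log-scrape rule; the keywords and example lines are assumptions, not real device output or our monitoring configuration.

    # Hypothetical rule: flag supervisor/EOBC related syslog lines even when
    # the device classes them as non-fatal. Keywords and samples are made up.
    KEYWORDS = ("EOBC", "SUPERVISOR", "STANDBY")

    def should_alert(syslog_line: str) -> bool:
        upper = syslog_line.upper()
        return any(word in upper for word in KEYWORDS)

    # made-up example lines, not real device output
    assert should_alert("module 5 reported a non-fatal EOBC error")
    assert not should_alert("interface Gi1/1 changed state to up")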

UPDATE04 – 14:30 – 22/08/2016
Despite seeing the device operate for over 24 hours without further errors, we have just observed the fault conditions triggering line cards to be disabled. We therefore now suspect this is a problem with the chassis itself and the backplane. We will now be replacing the entire device as a matter of course to prevent this escalating to an outage. Further works will be scheduled and notified via the NOC as we don't have a pre-built device on site.

EasyIPT – 0207 Numbers – 08/08/2016 – 16:56


We are aware of some intermittent inbound call issues on 0207 numbers from a single carrier.

We have raised a support query with this carrier and are awaiting an update from them. At the moment we are not aware of this affecting outbound calls, nor are we aware of any other affected inbound area codes. If you are having issues we would advise you to contact us so your examples can be logged to further assist with a quicker resolution.

BT Wholesale incident – 21/07/2016


We reported this morning that we were seeing issues with our DSL circuits; this has now escalated to issues within BT Wholesale themselves, affecting multiple services to providers across the UK.

We believe the same London datacentre that experienced issues yesterday is once again experiencing issues today; however, this is now affecting other providers.

Services affected include DSL circuits dropping and not re-establishing, along with poor quality calls where calls can be made or received at all.

Once again this is affecting providers across the UK and is not just limited to ourselves; we are working with BT and other providers to assist BT in restoring services.

This is also affecting our phones and we advise clients to email support.

Once we know more we will provide an update. We apologise for any inconvenience caused.

UPDATE01 – 09:55
We are seeing call quality return to normal; however, this has not been officially confirmed as resolved. We are still seeing a number of DSL circuits down.

UPDATE02 – 10:10
As advised in the previous post, we can confirm this is due to another power issue in London that has further affected BT Wholesale.

UPDATE03 – 10:15
Voice issues have returned and we are pushing BT Wholesale for an update.

UPDATE04 – 10:21
We have been advised the data centre affected is Telehouse North. We do not have any equipment of our own at Telehouse North; however, key service providers do, including BT, which is why this issue is affecting so many providers and services. Telehouse North is a critical part of the UK's infrastructure.

UPDATE05 – 10:51
We have started to see all services restore; however, there is no official confirmation yet.

UPDATE06 – 11:51
Services remain stable and no further reports have been seen; however, the incident has still not been closed off officially.

UPDATE07 – 12:45
This is the latest information we have received from BT about their issue: Due to a power incident at Telehouse North (London) IP Exchange Direct Access has been severely impacted since 07.45 this morning (21/07/2016). Service has been restored for all but a small subset of customers who are connected through a hardware card that requires replacing after the restoration of power. Local spares are being sought. There are still wider power restoration activities ongoing on site, which is still expected to take until approx. 1pm BST. We will update with more information on the next update.

UPDATE08 – 13:07
No further issues have been seen and we are closing this incident for now. We will continue to monitor for the next 24 hours.