We are aware of a problem with one of our media gateways affecting inbound and outbound calls. We are currently investigating as a matter of priority.
UPDATE01 – 13:13
Engineers have found the media gateway to be suffering from a sudden high CPU load. We have been able to stabilise the system remotely and are currently monitoring.
UPDATE02 – 13:20
We have seen a repeat of high CPU demand and are in the process of isolating the service.
UPDATE03 – 13:25
The affected service has been identified and the root fault isolated, with remedial works planned over the weekend. We apologise for the issues caused.
Additional media gateways had already been planned for deployment over December to provide additional redundancy; this deployment will now be brought forward.
UPDATE04 – 15:00
We are still seeing some calls fail to route due to stuck sessions on customer trunks. This was caused by the SQL service being overloaded and log entries being missed. Work is planned for tonight to reload the affected services.
UPDATE05 – 20:00
Work has been undertaken to stabilise the platform.
UPDATE06 – 21/11/2016 12:00
Engineers have completed the installation of the additional hardware, and a migration plan will be designed in due course.
Our network monitoring has alerted us to CPU errors being generated on lns01.dsl.structuredcommunications.co.uk
We have raised a ticket with our hardware vendor for them to investigate and advise. The device remains in operation and routing data as expected; however, it should be classed as “at risk” during this time due to the possibility of an unexpected failure.
Other LNSs within the pool are unaffected. Should the above LNS fail then circuits will automatically re-terminate on another device.
UPDATE01 – 11:30
Our hardware vendor has reviewed the logs we have provided and concluded the device could be suffering from a physical hardware issue. We are looking at options for replacing this device.
UPDATE02 – 11:52
While options are being reviewed to replace this device, we have taken the decision to remove it from the active termination pool. This means it will not accept any new PPP connection attempts, but will continue to service existing established sessions.
UPDATE03 – 13:05
Replacement hardware has been agreed by the vendor. We will advise of a maintenance window in due course.
The LNS was replaced without issue and is now back in service.
We have been advised by one of our wholesale providers that they will be replacing hardware within their core network tonight, which will physically disconnect one of the host-link cables we use to terminate DSL traffic with BT. To avoid any unnecessary PPP resets to DSL sessions, we will be manually re-routing traffic via alternative paths before the work starts. No outages or session drops are expected; however, DSL services should be classed as “at risk” until these works are fully completed.
UPDATE01 – 09:00
This work was completed as expected and we have restored traffic over this host-link.
We are currently aware of an issue affecting call quality across our VoIP network. Engineers are currently working on this and will provide updates as soon as possible.
UPDATE 13:55 10/11/2016
Call quality has returned to normal standards. Engineers are still working to confirm the issue is fully clear and will provide updates as soon as possible.
UPDATE 14:35 10/11/2016
We have now been made aware that a specific Tier One carrier is currently reporting a major service outage affecting customers nationwide. We are in communication with the supplier in question and will report any additional information to you as and when it is received.
We apologise for any inconvenience this may be causing.
We have been advised by one of our wholesale providers that they will be undertaking upgrade works on part of their core in Goswell Road that delivers some of our Leased Line and Transit services. This upgrade work involves replacement of their hardware and will directly affect any services we provide via core01.lon01.structuredcommunications.co.uk. Customers who take backup DSL from us will see their connection automatically re-route via this as required. Customers who take redundant fibre connections to our border only, or customers without backup, will see up to 30 minutes of downtime.
Peering and transit services will also automatically re-route via alternative paths on our network.