We are aware of issues with VoIP calls for customers routed via sip03.easyipt.co.uk. We are looking into this as a matter of urgency.
UPDATE 01 – 13:39
We have discovered an issue with the database running on the media gateway and will be performing a reboot. Any active calls will drop.
UPDATE 02 – 13:45
The reboot has completed. However, we have lost a number of critical services and are working to restore them.
UPDATE 03 – 13:58
We have been able to recover the services; however, we are concerned about stability and why these services did not start automatically as expected. We are undertaking further reviews, and the platform should still be considered “at risk” until further notice.
UPDATE 04 – 14:47
We have been able to automatically recover a number of services; however, we are still seeing some services fail to load on boot. This is something we need to look into and believe to be caused by a race condition on the server. The media gateway has remained stable and is processing calls as expected.
A full review of the media gateway will take place next week to ensure all startup services recover as expected.
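For background on the race condition described above: the notices do not name the platform's init system, but on a typical systemd-based server this class of boot failure is avoided by declaring explicit ordering between units, so a service cannot start before its dependencies are up and is restarted rather than left down if it fails. A minimal sketch, assuming systemd; the unit and service names are illustrative, not taken from the platform:

```ini
# Hypothetical unit file: /etc/systemd/system/media-gateway.service
# (names are assumptions for illustration only)
[Unit]
Description=Media gateway (illustrative example)
# Do not start until networking and the database unit are up
After=network-online.target postgresql.service
Requires=postgresql.service
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/media-gateway
# If the service still loses a race and fails at boot, retry
# rather than staying down until a manual restart
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```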
We are currently seeing packet loss on many broadband connections. This is being investigated and updates will be provided shortly.
UPDATE 01 – 10:45
We have located a backhaul link within our network between our core in Horsham (HOR) and London (THN) that is operating with low-level packet loss, which is affecting services provided by our Horsham facility. This link has been removed from service and traffic is flowing via alternative routes.
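Low-level loss of the kind described here is usually confirmed by running repeated pings across the suspect link and reading the loss figure from the summary line. A minimal sketch of parsing that summary (the sample output is made up for illustration; the format matches Linux iputils ping):

```python
import re

def packet_loss_percent(ping_summary: str) -> float:
    """Extract the loss percentage from a ping summary line, e.g.
    '100 packets transmitted, 97 received, 3% packet loss, time 9912ms'."""
    match = re.search(r"([\d.]+)% packet loss", ping_summary)
    if match is None:
        raise ValueError("no packet-loss figure found in ping output")
    return float(match.group(1))

# Illustrative summary line, not real output from the affected link
summary = "100 packets transmitted, 97 received, 3% packet loss, time 9912ms"
print(packet_loss_percent(summary))  # low but non-zero loss
```

Even a few percent of sustained loss on a backhaul link is enough to degrade real-time traffic, which is why the link was pulled from service rather than left in the path.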
UPDATE 02 – 11:00
The link removal has restored full service to platforms that operated in part via this link. Investigations are taking place with our fibre backhaul provider to ascertain where the fault is; however, we suspect this is a common fault further upstream.
UPDATE 03 – 11:29
We are aware some Horsham-based broadband services are still seeing issues. Investigations show these to involve the same fibre link and provider as our backhaul link, and investigations have been escalated.
UPDATE 04 – 12:55
We have chased our supplier for an update; they have advised they are seeing issues within their core network affecting other exchanges and services. We have pushed for an escalation due to the severity of the issue.
We apologise for the continued disruption.
UPDATE 05 – 14:10
We have been advised the issue is still ongoing and we expect an update by 15:00.
UPDATE 06 – 16:01
We have placed further escalations with senior managers at our supplier due to the length of time this has been ongoing.
UPDATE 07 – 17:00
Our investigations, after speaking with other wholesale providers, have concluded that this is the result of a Vodafone wholesale issue: a problem within their core network that they are trying to isolate.
UPDATE 08 – 18:10
We have observed packet loss returning to 0% on affected services. This has been confirmed by our supplier; however, we are continuing to monitor.
UPDATE 09 – 18:50
We have continued to see a good service. Other ISPs using Vodafone backhaul have advised the same; however, we are continuing to monitor.
-Update 08:50- We are starting to see some connections come back into service. We are continuing to monitor the situation and will provide more updates shortly.
We are currently aware of an issue affecting leased-line and broadband customers. Some services are currently down; we are working with suppliers to get this resolved as soon as possible and will provide updates as we get them.
Apologies for any inconvenience caused.
We will be taking LNS01 out of service on 03/01/2019 at 22:00 in order to complete physical maintenance work on 04/01/2019. Connections on LNS01 at that time will experience a drop in PPP and reconnect to LNS02.
This work is due to start. Sessions will drop and begin to move over to LNS02. If your connection does not come back up within 5 minutes, please power cycle your router. If this does not work, please turn your router off for 20 minutes to allow the session to fully close down.
LNS01 has been taken out of service and we have seen 95% of connections move across as expected. Work can now start on this device tomorrow as planned without further impact on services. Updates to follow.
Work has been completed and LNS01 is back in service accepting connections.
Following on from an unexpected reload of LNS02, and after reviewing the logs, we will shortly be upgrading the firmware on this device under an emergency maintenance window.
UPDATE01 – 22:10
The emergency works are now complete on LNS02. This device has been reloaded and is accepting connections once again. We have provided the logs to the hardware vendor.
We will need to do the same with LNS01; however, it currently has a large number of active connections and work is needed to move these across the network. This will be done at a later date, as we do not believe this device is at risk of the same failure.
We do apologise for the inconvenience caused.
We are currently aware of an issue affecting broadband connections in the following area codes:
We believe this may be an exchange outage and are working with Openreach to resolve the issue as soon as possible. More updates will be provided shortly.
As part of our THN migration we are due to migrate services from C3 to C2 tonight. There will be no impact on major services such as broadband.
Works are only expected to take 10 minutes per service and affect a small number of bonded customers and hosted voice services.
This work is complete. However, during the works a configuration error was made on a 10Gb port that resulted in traffic being re-routed. This was spotted and resolved within minutes.
We are currently aware of an issue affecting DSL connections. We are looking into this and will update shortly.
We are starting to see connections return to normal speeds. We are continuing to monitor and will update you shortly.
Engineers are still working on the issue. We will update with more information as it becomes available.
We are currently looking into an issue with bonded connectivity and will update with more information shortly.
Connections are starting to reconnect. We are continuing to monitor and will update shortly.
Engineers found a “stale” routing issue on C2 that was not cleared until a BGP filter was rebuilt and reset. Traffic routing from C1 was unaffected.
We are currently aware of an issue affecting call quality across our VoIP network. Engineers are currently working on this and will provide updates as soon as possible.
UPDATE 13:55 10/11/2016
Call quality has returned to normal. Engineers are still working to confirm the issue is fully clear and will provide updates as soon as possible.
UPDATE 14:35 10/11/2016
We have now been made aware that a specific Tier One carrier is currently reporting a major service outage affecting customers nationwide. We are in communication with the supplier in question and will report any additional information to you as and when it is received.
We apologise for any inconvenience this may be causing.