We are aware that a small percentage of outbound calls made on our VoIP network are taking longer than normal to connect. We are investigating this issue.
UPDATE01 – 09:40
Calls have been re-routed over another carrier while we work to understand the issue affecting calls on BTIPX.
UPDATE02 – 10:06
Calls are being rejected due to changes made at BT relating to number formatting. We are in the process of changing how we present numbers.
UPDATE03 – 13:00
New number scripts have been designed and put into operation. Outbound calls are now routing correctly once again and this incident has been marked as closed.
No changes are required on existing customer systems; however, we will be changing how we configure new systems.
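For illustration, the kind of number-presentation change described above can be sketched as a normalisation of dialled numbers to E.164 format. This is a hypothetical example only (the function name, default country code, and rules are assumptions, not our actual dial-plan script):

```python
def to_e164(number: str, country_code: str = "44") -> str:
    """Normalise a dialled number to E.164 (+CC...) format.

    Hypothetical sketch; real dial-plan rules are carrier-specific.
    """
    # Strip spaces and punctuation, keeping digits and a leading "+".
    digits = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    if digits.startswith("+"):
        return digits                           # already E.164
    if digits.startswith("00"):
        return "+" + digits[2:]                 # international dialling prefix
    if digits.startswith("0"):
        return "+" + country_code + digits[1:]  # national format
    return "+" + country_code + digits          # bare subscriber number

print(to_e164("01403 123456"))  # -> +441403123456
```

The same normalisation would typically be applied to both the dialled digits and the presented caller ID before the call is handed off to the carrier.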
On 23rd February 2019, starting from 08:00, our UPS vendor Vertiv will be undertaking a planned upgrade to the system, adding additional battery capacity at our Horsham facility.
Normally with this type of work the UPS would be placed on “bypass”; however, the nature of the works requires a complete decommission of the system, which will require complete isolation and removal of power from feed “B”.
Each cabinet is supplied with a redundant “A” supply. All of our critical services and client-facing platforms are dual-fed, so no outages are expected; however, they should be considered at risk until the completion of the works.
We will be contacting clients with co-location who have devices that are not dual-fed.
UPDATE 01 – 08:00
Engineers are now on site. Work will start shortly.
UPDATE 02 – 08:30
Work has started and power has been isolated.
UPDATE 03 – 13:00
Work is now complete and redundancy restored.
We are aware of an unexpected reload of LNS02, which has occurred again. We are working with our vendor to determine the cause and resolution.
UPDATE 01 – 22:44
Our vendor has responded with some technical information from the resulting crash, and we have put additional logging capability in place. We are unable to advise any further at this stage due to ongoing investigations.
We will be taking LNS01 out of service on 03/01/2019 at 22:00 in order to complete physical maintenance work on 04/01/2019. Connections on LNS01 at this time will experience a PPP drop and will reconnect to LNS02.
This work is due to start. Sessions will drop and start to move over to LNS02. If your connection does not come back up within 5 minutes please power cycle your router. If this does not work then please turn off your router for 20 minutes to allow the session to fully close down.
LNS01 has been taken out of service and we have seen 95% of connections move across as expected. Work can now start on this device tomorrow as planned without further impact on services. Updates to follow.
Work has been completed and LNS01 is back in service accepting connections.
Following on from an unexpected reload of LNS02, and after reviewing the logs, we will shortly be upgrading the firmware on this device under an emergency maintenance window.
UPDATE01 – 22:10
The emergency works on LNS02 are now complete. The device has been reloaded and is accepting connections once again. We have provided the logs to the hardware vendor.
We will need to do the same with LNS01; however, this device currently has a large number of active connections and work is needed to move these across the network first. This will be done at a later date, as we do not believe this device is at risk of the same failure.
We do apologise for the inconvenience caused.
Our network monitoring has flagged high CPU usage on LNS01 that is starting to affect its operation. We are undertaking an emergency reboot to prevent it from crashing completely. This will drop active sessions and force them onto other gateways.
UPDATE01 – 21:35
The reboot is complete. Services transferred to the redundant gateways as expected, and LNS01 is now back in operation. If you do not have service, please power down your router for 20 minutes.
We apologise for any inconvenience caused.
Our network monitoring has alerted us to a small number of metro-based layer 2 Vodafone circuits that have no service. This has also had an impact on broadband, as well as services into our Horsham-based facility.
This was raised with the relevant teams last night, and fibre engineers are already on site working on suspected damaged fibre serving ourselves and a number of other providers.
We have seen some services restore. Affected services with backup will be operating on that backup and won’t experience an outage.
We will post further updates as they become available.
UPDATE01 – 11:02
Engineers have located the break and this will now progress to splicing.
UPDATE02 – 12:45
Engineers are continuing to work on splicing the damaged cable with spares.
UPDATE03 – 13:00
We are starting to see some services restore.
UPDATE04 – 14:25
As part of the ongoing investigation into this issue, Vodafone are sending additional engineering resources to their data centre.
Vodafone believe the break to be located 43 metres into the fibre handoff, and they are working to restore full service.
UPDATE05 – 15:30
Engineers at the data centre are continuing to work with Vodafone fibre teams to resolve the second identified fibre break.
UPDATE06 – 16:45
Vodafone fibre engineers continue to examine the length of the fibre for further issues impacting service.
UPDATE07 – 17:55
Vodafone engineers remain on site tracing the underlying infrastructure so that a permanent fix can be put in place for all affected services. We continue to work with the carrier to expedite the resolution, with regular communication.
Currently the agreed next stage is for engineers to determine the exact location of the fibre fault.
UPDATE08 – 18:45
All services have been restored and we are awaiting an RFO. This will be provided on request.
We are currently aware of an issue affecting broadband connections in the following area codes:
We believe this may be an exchange outage and are working with Openreach to resolve the issue as soon as possible. More updates will be provided shortly.
We are aware of an issue preventing a large number of subscribers from connecting to the internet.
Initial reports and investigations seem to suggest this may be a problem within the wider BT Wholesale network as a number of other ISPs are reporting the same issues.
We’re continuing to investigate with our supplier and will provide further information when we have it.
We can see that service was restored on the majority of circuits at approximately 1pm. We are still in correspondence with our suppliers in order to obtain information on the cause.
We have been advised by one of our fibre providers that they will be conducting maintenance on their hardware at Telehouse on 8th June 2018 from 00:01. This will affect a number of leased lines we provide via this location.
Please note this will affect clients with redundant fibre services that terminate into the same core device on our network (carrier-only redundancy). Clients with DSL backup and/or redundant fibre to our diverse cores at Telehouse (full redundancy) will not be affected.
Downtime is expected to be between 15 and 60 minutes all being well.