We will be taking LNS01 out of service at 22:00 on 03/01/2019 in order to complete physical maintenance work scheduled for 04/01/2019. Connections on LNS01 at this time will experience a drop in PPP and reconnect to LNS02.
This work is due to start. Sessions will drop and begin to move over to LNS02. If your connection does not come back up within 5 minutes, please power cycle your router. If this does not work, please turn your router off for 20 minutes to allow the session to fully close down.
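For clients who would like to watch for their connection returning rather than checking manually, a minimal sketch along these lines will report when the line is back; the 8.8.8.8 target, port and timings are examples only:

```python
import socket
import time

def connection_is_up(host: str = "8.8.8.8", port: int = 53) -> bool:
    """Try a TCP connection to a well-known public resolver.

    The host, port and timeout are illustrative choices only.
    """
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

# Check once a minute and report when service returns.
while not connection_is_up():
    print("No connectivity yet; checking again in 60 seconds...")
    time.sleep(60)
print("Connectivity restored.")
```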
LNS01 has been taken out of service and we have seen 95% of connections move across as expected. Work can now start on this device tomorrow as planned without further impact on services. Updates to follow.
Work has been completed and LNS01 is back in service accepting connections.
Following an unexpected reload of LNS02, and after reviewing the logs, we will shortly be upgrading the firmware on this device under an emergency maintenance window.
UPDATE01 – 22:10
The emergency works are now complete on LNS02. This device has been reloaded and is accepting connections once again. We have provided the logs to the hardware vendor.
We will need to do the same with LNS01; however, this device currently has a large number of active connections and work is needed to move these across the network first. This will be done at a later date, as we do not believe this device is at risk of the same failure.
We do apologise for the inconvenience caused.
Our network monitoring has flagged high CPU usage on LNS01 that is starting to affect its operation. We are undertaking an emergency reboot to prevent it from crashing completely. This will drop active sessions and force them onto other gateways.
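For context, the kind of check that raised this alert can be sketched roughly as below; the hostname, OID, community string and threshold are illustrative and do not reflect our actual monitoring configuration:

```python
import subprocess

# Illustrative values only: a generic vendor CPU OID, a placeholder
# community string and an arbitrary alert threshold.
HOST = "lns01.example.net"                 # hypothetical hostname
OID = "1.3.6.1.4.1.9.9.109.1.1.1.1.7.1"    # example 1-minute CPU OID
THRESHOLD = 90                             # percent

def cpu_percent(host: str, community: str = "public") -> int:
    """Read the device's CPU usage via net-snmp's snmpget."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, OID],
        text=True,
    )
    return int(out.strip())

if cpu_percent(HOST) >= THRESHOLD:
    print(f"ALERT: {HOST} CPU above {THRESHOLD}%")
```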
UPDATE01 – 21:35
The reboot is complete. Services transferred to redundant gateways as expected. LNS01 is now back in operation. If you do not have service, please power down your router for 20 minutes.
We apologise for any inconvenience caused.
Our network monitoring has alerted us to a small number of metro-based layer 2 Vodafone circuits that have no service. This has also had an impact on broadband as well as services into our Horsham-based facility.
This was raised with the relevant teams last night and fibre engineers are already on site working on suspected damaged fibre serving ourselves and a number of other providers.
We have seen some services restore. Affected services with a backup in place will be operating on that backup and will not see a loss of service.
We will post further updates as they become available.
UPDATE01 – 11:02
Engineers have located the break and this will now progress to splicing.
UPDATE02 – 12:45
Engineers are continuing to work on splicing the damaged cable with spares.
UPDATE03 – 13:00
We are starting to see some services restore.
UPDATE04 – 14:25
As part of the ongoing investigation into this issue, Vodafone are sending additional engineering resources to their data centre.
Vodafone believe the break is located 43 metres into the fibre handoff and are working on this to resume full service.
UPDATE05 – 15:30
Engineers at the data centre are continuing to work with Vodafone fibre teams to resolve the second identified fibre break.
UPDATE06 – 16:45
Vodafone fibre engineers continue to examine the length of the fibre for further issues impacting service.
UPDATE07 – 17:55
Vodafone engineers remain on site tracing the underlying infrastructure so that a permanent fix can be put in place for all affected services. We continue to work with the carrier to expedite the resolution with regular communication.
The agreed next stage is for engineers to determine the exact location of the fibre fault.
UPDATE08 – 18:45
All services have been restored and we are awaiting an RFO (reason for outage). This will be provided on request.
We are currently aware of an issue affecting broadband connections in the following area codes:
We believe this may be an exchange outage and are working with Openreach to resolve the issue as soon as possible. More updates will be provided shortly.
We are aware of an issue leaving a large number of subscribers unable to connect to the internet.
Initial reports and investigations seem to suggest this may be a problem within the wider BT Wholesale network as a number of other ISPs are reporting the same issues.
We’re continuing to investigate with our supplier and will provide further information when we have it.
We can see that service was restored on the majority of circuits at approximately 1pm. We are still in correspondence with our suppliers in order to obtain information on the cause.
We have been advised by one of our fibre providers that they will be conducting maintenance on their hardware at Telehouse on 8th June 2018 from 00:01. This will affect a number of leased lines we provide via this location.
Please note this will affect clients with redundant fibre services that terminate into the same core device on our network (carrier-only redundancy). Clients with DSL backup and/or redundant fibre to our diverse cores at Telehouse (full redundancy) will not be affected.
Downtime is expected to be between 15 and 60 minutes all being well.
As per various news reports, there is a DNS vulnerability affecting DrayTek routers. Over the past few days we have been auditing clients who have vulnerable models and a valid maintenance contract with ourselves. Work is already underway to patch these devices; however, we would advise all clients with DrayTek hardware to check whether their device is affected and update the firmware as required.
Please see the third-party link below for full details.
DNS Vulnerability Strikes Popular DrayTek Broadband ISP Routers
As a precaution we will be blocking the DNS server used in the exploit on our network. However, this could result in a loss of service should your device already be compromised. This is unfortunately a better option than leaving traffic flowing to the compromised DNS server detailed in the news report above.
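As a rough illustration of a check clients can run themselves, the sketch below compares the resolvers a machine is actually using against an expected list; the addresses are documentation-range placeholders, not the real exploit server:

```python
import dns.resolver  # dnspython: pip install dnspython

# Placeholder addresses: replace with the resolvers you expect your
# router to hand out. These are documentation-range IPs, NOT the
# actual exploit server, which we are deliberately not reproducing.
EXPECTED = {"192.0.2.53", "192.0.2.54"}

resolver = dns.resolver.Resolver()  # reads the system's DNS settings
unexpected = set(resolver.nameservers) - EXPECTED

if unexpected:
    print(f"WARNING: unexpected DNS servers in use: {sorted(unexpected)}")
    print("Check your router's DNS settings and firmware version.")
else:
    print("DNS servers match the expected list.")
```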
If you have any questions please do not hesitate to contact us.
We have been advised that our suppliers will be performing essential works on our interconnects, which will cause a large number of circuits to lose their connection.
Work is anticipated to start at 02:00 and impact is expected to be 1 hour.
Our suppliers have advised this could be extended to 4 hours downtime.
Not all circuits will lose connectivity. However, if a session disconnects during the work, it may not be able to reconnect until the work is complete.
We apologise for any inconvenience this may cause.
As part of our planned network migration to Telehouse North and our own facility, we are now ready to migrate existing Ethernet fibre and EFM services from Goswell Road to Telehouse.
This work is planned to take place from 20:00 tomorrow night and is expected to take around 1 hour to complete, with each service seeing around 5 minutes of downtime while VLANs are moved to their new home on the network.
Customers with backup ADSL or FTTC are unlikely to notice the drop as services will automatically re-route for the duration. Customers with dual leased lines are also unlikely to notice a service drop as the move is being done in groups, so the two lines of a pair are never in the same migration window (a sketch of this grouping is shown below).
Updates to be posted on the night. Existing test services have been transferred so we do not expect any problems.
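For illustration, the grouping mentioned above can be sketched as a simple split that keeps each dual-line pair in different windows; the circuit names are made up for the example:

```python
# Hypothetical circuit names: each tuple is one customer's pair of
# leased lines, which must never share a migration window.
pairs = [
    ("cust-a-primary", "cust-a-secondary"),
    ("cust-b-primary", "cust-b-secondary"),
    ("cust-c-primary", "cust-c-secondary"),
]

windows = {1: [], 2: []}
for primary, secondary in pairs:
    # Split each pair across the two windows so a dual-line
    # customer always keeps one live circuit.
    windows[1].append(primary)
    windows[2].append(secondary)

for window, circuits in windows.items():
    print(f"Migration window {window}: {circuits}")
```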
UPDATE01 – 19:45
Engineers are getting ready to start.
UPDATE02 – 20:00
Work has started.
UPDATE03 – 20:10
A problem has been found in our supplier's scripted code. They are attempting to fix this. Unfortunately, this does mean additional downtime for the affected circuits.
UPDATE04 – 20:22
The code has been fixed; however, we are now seeing MTU issues. These are being resolved for both this batch of migrations and the second set.
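For clients wanting to verify path MTU on a migrated circuit once it returns, a quick test along the lines below can confirm that full-size frames pass; the host and payload sizes are examples only:

```python
import subprocess

HOST = "gw.example.net"  # hypothetical far-end address

def payload_fits(size: int) -> bool:
    """Ping once with the don't-fragment bit set (Linux: ping -M do)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-M", "do", "-s", str(size), HOST],
        capture_output=True,
    )
    return result.returncode == 0

# 1472 bytes of ICMP payload + 28 bytes of headers = a full
# 1500-byte frame; failures here point at a path MTU problem.
for size in (1472, 1464, 1452):
    print(f"{size}-byte payload: {'ok' if payload_fits(size) else 'blocked'}")
```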
UPDATE05 – 20:30
Stage 1 has been migrated and BGP sessions are back online. Stage 2 will now start.
UPDATE06 – 20:45
Stage 2 is complete; however, there is a problem with two circuits.
UPDATE07 – 21:10
This work is now complete and all circuits are showing as online.