Our network monitoring has alerted us to a number of BTW-based circuits going offline and to prefix withdrawals from suppliers. We are currently investigating.
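While we investigate, customers who want to check the visibility of a prefix for themselves can query the public RIPEstat routing-status API, as in the minimal Python sketch below. The prefix shown is a documentation placeholder rather than one of our ranges, and the field names reflect the public endpoint at the time of writing.

# Minimal sketch: check how widely a prefix is seen in the global BGP
# table via the public RIPEstat "routing-status" endpoint. The prefix
# below is a documentation placeholder, not one of our real ranges.
import json
import urllib.request

PREFIX = "192.0.2.0/24"  # TEST-NET-1, placeholder only

url = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)["data"]

v4 = data.get("visibility", {}).get("v4", {})
seeing = v4.get("ris_peers_seeing", 0)
total = v4.get("total_ris_peers", 0)

# A sharp drop in the number of RIS peers seeing the prefix is a
# reasonable proxy for an upstream withdrawal.
print(f"{PREFIX}: seen by {seeing}/{total} RIS peers")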
UPDATE 01 – 14:49
We are seeing reports from other providers that they have experienced similar issues. Initial investigations point to a problem within the “Williams House” Equinix data centre in Manchester.
UPDATE 02 – 15:51
Connections are starting to restore. Services affected appear to have been routed via Manchester.
We are aware of an issue affecting inbound calls with one of our upstream voice carriers. We have re-routed outbound calls around the affected network and calls should be connecting as expected.
We have raised a priority case with the carrier, who have confirmed there is an issue and that it is being dealt with urgently.
We apologize for the disruption and will update this NOC post once further details become available.
UPDATE 01 – 17:10
We have started to see inbound calls on the affected carrier restore and traffic flowing again. We have not had official closure yet, so services should still be considered at risk.
UPDATE 02 – 17:33
The affected upstream carrier has confirmed services have been restored and that this was the result of a data centre issue. We have asked for an RFO, which will be provided on request.
Re-routing has been removed and all services are operating normally.
Once again, we apologize for the disruption.
FINAL – 04/03/2020 – 14:45
We have been advised the root cause of this incident was a failed network interface on a primary database server within the carrier's network. The database itself is redundant, but the incident has highlighted the need for additional redundancy, which is already being deployed.
At 08:15 GMT this morning, we were alerted to a number of DSL broadband sessions disconnecting. Initial diagnostics showed there was no fault within our network and this was escalated to our wholesale supplier.
Our wholesale supplier responded to advise that a DSL gateway, “cr2.th-lon”, had dropped a number of sessions at 08:15 GMT but had started to recover at 08:23 GMT. At this time the root cause of the outage is unknown, but investigations are continuing. Services should be considered at risk until we ascertain the cause.
UPDATE 01 – 10:50
We have seen a further drop in sessions, with affected sessions having to re-authenticate. We have requested an update from our supplier to enquire whether this is related to the issues seen this morning.
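For reference, the kind of check we run to spot a mass re-authentication looks something like the Python sketch below. It assumes a FreeRADIUS-style log; the path and line format are illustrative assumptions, not our actual platform.

# Minimal sketch: bucket successful RADIUS authentications per minute
# to spot a mass re-authentication after a session drop. The log path
# and line format are assumptions, not our actual platform.
import re
from collections import Counter

LOG = "/var/log/radius/radius.log"  # hypothetical path
# Assumed format: "Mon Mar  2 10:43:01 2020 : Auth: Login OK: [user] ..."
STAMP = re.compile(r"^\w{3} (\w{3}\s+\d+ \d{2}:\d{2}):\d{2}")

per_minute = Counter()
with open(LOG) as fh:
    for line in fh:
        if "Login OK" not in line:
            continue
        match = STAMP.match(line)
        if match:
            per_minute[match.group(1)] += 1

# Minutes with an abnormally high count point at a mass reconnect.
for minute, count in per_minute.most_common(10):
    print(f"{minute}  {count} logins")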
We have observed a small number of broadband services suffering from intermittent connection problems between 19:46 and 20:46 this evening.
This issue has been tracked down to one of our wholesale suppliers, who suffered a network outage that has since recovered. During the outage the affected connections were unable to reach our RADIUS servers for authentication and were simply being terminated on the local BTW RAS with a non-routable IP address.
Users who do not have a connection are advised to reboot their router, or power it off for 20 minutes, to recover any stuck sessions.
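A stuck session can usually be identified from the WAN address the router has been given. The Python sketch below illustrates the check, on the assumption that the fallback pool uses RFC 1918 or CGNAT space; the address shown is an example only.

# Minimal sketch: check whether a router's WAN address is non-routable
# (RFC 1918 or CGNAT), which in this failure mode indicates the session
# was terminated on the local RAS instead of authenticating against our
# RADIUS. The address below is an example; use the WAN IP reported by
# the router's status page.
import ipaddress

wan_ip = ipaddress.ip_address("172.16.12.34")  # example value only

if wan_ip.is_private or wan_ip in ipaddress.ip_network("100.64.0.0/10"):
    print(f"{wan_ip} is non-routable: session looks stuck; power-cycle the router")
else:
    print(f"{wan_ip} looks like a normally assigned public address")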
We apologize for the inconvenience and are awaiting an RFO.
We are aware that a small percentage of outbound calls being made on our VoIP network are taking longer than normal to connect. We are investigating.
UPDATE 01 – 09:40
Calls have been re-routed over another carrier while we work to understand what is happening with calls on BTIPX.
UPDATE 02 – 10:06
Calls are being rejected due to changes made at BT in respect of number formatting. We are in the process of changing how we present numbers.
UPDATE 03 – 13:00
New number scripts have been designed and put into operation. Outbound calls are now routing correctly once again and this incident has been marked as closed.
No changes are required on existing customer systems; however, we will be changing how we configure new systems.
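For the curious, the change amounts to normalising numbers to E.164 (+44...) before they are handed to the upstream. The Python sketch below is an illustration of that normalisation, not our production dial-plan.

# Minimal sketch of the normalisation involved: convert UK national
# numbers to E.164 (+44...) before handing calls to the upstream.
# This is an illustration, not our production dial-plan.
def to_e164(number: str, country_code: str = "44") -> str:
    digits = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    if digits.startswith("+"):
        return digits                           # already E.164
    if digits.startswith("00"):
        return "+" + digits[2:]                 # international prefix dialled
    if digits.startswith("0"):
        return f"+{country_code}{digits[1:]}"   # national format, drop trunk 0
    return f"+{country_code}{digits}"           # assume national without trunk 0

print(to_e164("0161 496 0000"))  # -> +441614960000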
We are aware of an unexpected reload of LNS02, which has now occurred again. We are working with our vendor to establish the cause and a resolution.
UPDATE 01 – 22:44
Our vendor has responded with some technical information from the resulting crash, and we have put additional logging capability in place. We are unable to advise further at this stage due to ongoing investigations.
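The additional logging mentioned above is conceptually simple: watch the LNS syslog for reload or crash signatures. A minimal Python sketch is below; the log path and keywords are assumptions for illustration.

# Minimal sketch of the extra logging described above: follow the LNS
# syslog and flag lines that look like a reload or crash. The path and
# keywords are assumptions for illustration.
import time

LOG = "/var/log/network/lns02.log"  # hypothetical path
KEYWORDS = ("reload", "crash", "traceback")

with open(LOG) as fh:
    fh.seek(0, 2)  # start from the end of the file
    while True:
        line = fh.readline()
        if not line:
            time.sleep(1)  # wait for new log lines
            continue
        if any(k in line.lower() for k in KEYWORDS):
            print("ALERT:", line.rstrip())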
Our network monitoring has alerted us to a small number of metro-based layer 2 Vodafone circuits that have no service. This has also had an impact on broadband, as well as services into our Horsham-based facility.
This was raised with the relevant teams last night, and fibre engineers are already on site working on suspected fibre damage affecting ourselves and a number of other providers.
We have seen some services restore. Affected services that have a backup will be operating on that backup and should not see disruption.
We will post further updates as they become available.
UPDATE 01 – 11:02
Engineers have located the break and this will now progress to splicing.
UPDATE 02 – 12:45
Engineers are continuing to work on splicing the damaged cable with spares.
UPDATE 03 – 13:00
We are starting to see some services restore.
UPDATE 04 – 14:25
As part of the ongoing investigation into this issue, Vodafone are sending additional engineering resources to their data centre.
Vodafone believe the break is located 43 metres into the fibre handoff and are working on it to restore full service.
UPDATE 05 – 15:30
Engineers at the data centre are continuing to work with Vodafone fibre teams to resolve the second identified fibre break.
UPDATE 06 – 16:45
Vodafone fibre engineers continue to examine the length of the fibre for further issues impacting service.
UPDATE 07 – 17:55
Vodafone engineers remain on site tracing the underlying infrastructure so that a permanent fix can be put in place for all affected services. We continue to work with the carrier, with regular communication, to expedite resolution.
The agreed next stage is for engineers to determine the exact location of the fibre fault.
UPDATE 08 – 18:45
All services have restored and we are awaiting an RFO. This will be provided on request.
We are aware of an issue preventing a large number of subscribers from connecting to the internet.
Initial reports and investigations seem to suggest this may be a problem within the wider BT Wholesale network as a number of other ISPs are reporting the same issues.
We’re continuing to investigate with our supplier and will provide further information when we have it.
We can see that service was restored on the majority of circuits at approximately 13:00. We are still in correspondence with our suppliers to obtain information on the cause.
We are aware of issues affecting SMTP email delivery on smtp02.structuredcommunications.co.uk.
We always advise anyone using smtp02.structuredcommunications.co.uk to also set up smtp01.structuredcommunications.co.uk as a fallback (this server is currently unaffected).
Due to the blocks in place on smtp02.structuredcommunications.co.uk, it will take several days for the server's reputation to recover.
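For customers scripting their own delivery, the recommended fallback can be as simple as the Python sketch below: try smtp02 first and fall back to smtp01 if it is unavailable. The port, STARTTLS usage and the message details are placeholders to adapt to your own setup.

# Minimal sketch of the recommended fallback: try smtp02 first, then
# fall back to smtp01 if it is unavailable. The port, STARTTLS usage
# and the message below are placeholders to adapt to your own setup.
import smtplib
from email.message import EmailMessage

RELAYS = [
    "smtp02.structuredcommunications.co.uk",
    "smtp01.structuredcommunications.co.uk",
]

msg = EmailMessage()
msg["From"] = "user@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Delivery test"
msg.set_content("Test message.")

last_error = None
for relay in RELAYS:
    try:
        with smtplib.SMTP(relay, 587, timeout=10) as smtp:
            smtp.starttls()
            # smtp.login("user", "password")  # if your account needs auth
            smtp.send_message(msg)
            break  # delivered; stop trying relays
    except (OSError, smtplib.SMTPException) as exc:
        last_error = exc
else:
    raise RuntimeError(f"All relays failed: {last_error}")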
We have been made aware by a number of external mail partners that they are seeing odd patterns of mail flow from this platform, which is affecting SMTP delivery to various destinations.
Initial investigations have shown this to be down to a rogue EXE on the server, likely due to a compromised site. Scans are currently underway to isolate the EXE, along with integrity checks on the platform to correct any misconfiguration on client folders.
Temporary measures have been added to prevent it from executing and causing further issues.
As a result of the above scans and audit, the platform will be under high load until this is resolved. There is currently no ETA; however, we will provide updates once we know more.
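For context, the integrity sweep is conceptually similar to the Python sketch below: walk the client folders, flag any EXE files and record hashes so new or changed files stand out on the next pass. The path is an assumption for illustration.

# Minimal sketch of the sweep described above: walk the client folders,
# flag any EXE files and record SHA-256 hashes so new or changed files
# stand out on the next pass. The path is an assumption for illustration.
import hashlib
from pathlib import Path

WEB_ROOT = Path("/var/www")  # hypothetical location of client folders

for path in WEB_ROOT.rglob("*.exe"):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    print(f"{digest}  {path}")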
UPDATE 01 – 20:00
Detailed scans are still underway. Due to the volume of files, this is taking longer than planned. Server usage remains high as a result.
UPDATE 02 – 23:55
Scan progress is at 61% and scanning will continue into the night and morning. Backups will still run; however, this will only add to the already excessive load.
UPDATE 03 – 04/04/2018 – 09:30
Both scans have completed and the results are being reviewed.
UPDATE 04 – 04/04/2018 – 10:30
Removal work is complete and we are planning to reboot the server from 10:30 to restore a number of security policies.