We are aware of an unexpected reload of LNS02, which has occurred again. We are working with our vendor to determine the cause and resolution.
UPDATE 01 – 22:44
Our vendor has responded with some technical information from the resulting crash, and we now have additional logging capability in place. We are unable to advise further at this stage due to ongoing investigations.
Our network monitoring has alerted us to a small number of metro-based Layer 2 Vodafone circuits with no service. This has also had an impact on broadband, as well as services into our Horsham-based facility.
This was raised with the relevant teams last night, and fibre engineers are already on site working on suspected damaged fibre affecting ourselves and a number of other providers.
We have seen some services restore. Affected services with a backup will be operating on that backup and won't see an outage.
We will post further updates as they become available.
UPDATE01 – 11:02
Engineers have located the break and this will now progress to splicing.
UPDATE02 – 12:45
Engineers are continuing to work on splicing the damaged cable with spares.
UPDATE03 – 13:00
We are starting to see some services restore.
UPDATE04 – 14:25
As part of the ongoing investigation into this issue, Vodafone are sending additional engineering resources to their data centre.
Vodafone believe the break to be located 43 metres into the fibre handoff, and they are working on this to resume full service.
UPDATE05 – 15:30
Engineers at the data centre are continuing to work with Vodafone fibre teams to resolve the second identified fibre break.
UPDATE06 – 16:45
Vodafone fibre engineers continue to examine the length of the fibre for further issues impacting service.
UPDATE07 – 17:55
Vodafone engineers remain on site tracing the underlying infrastructure so that a permanent fix can be put in place for all affected services. We continue to work with the carrier, with regular communication, to expedite the resolution.
The agreed next step is for engineers to determine the exact location of the fibre fault.
UPDATE08 – 18:45
All services have restored and we are awaiting an RFO. This will be provided on request.
We are aware of an issue preventing a large number of subscribers from connecting to the internet.
Initial reports and investigations suggest this may be a problem within the wider BT Wholesale network, as a number of other ISPs are reporting the same issues.
We’re continuing to investigate with our supplier and will provide further information when we have it.
We can see that service was restored on the majority of circuits at approximately 1pm. We are still in correspondence with our suppliers in order to obtain information on the cause.
We are aware of issues affecting SMTP email delivery on smtp02.structuredcommunications.co.uk
We always advise anyone using smtp02.structuredcommunications.co.uk to also set up smtp01.structuredcommunications.co.uk (this server is currently unaffected).
Due to the blocks in place on smtp02.structuredcommunications.co.uk, it will take several days for the server's reputation to recover.
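For customers who relay outbound mail through these servers from their own Postfix instance, one way to follow the advice above is to point at smtp01 as the smart host while smtp02's reputation recovers. This is an illustrative sketch only, not an official configuration; the submission port and any authentication settings are assumptions and should match your own account details.

```
# /etc/postfix/main.cf -- sketch only; adjust for your environment.
# Route outbound mail via smtp01 while smtp02 recovers.
# The bracketed form suppresses MX lookups; port 587 is assumed.
relayhost = [smtp01.structuredcommunications.co.uk]:587

# Optionally keep smtp02 as a fallback once it is healthy again:
smtp_fallback_relay = [smtp02.structuredcommunications.co.uk]:587
```

After changing main.cf, run `postfix reload` to apply the new relay settings.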
We have been made aware by a number of external mail partners that they are seeing odd patterns of mail flow from this platform, which is affecting SMTP delivery to various destinations.
Initial investigations have shown this to be down to a rogue EXE on the server, likely due to a compromised site. Scans are currently underway to isolate the EXE, along with integrity checks on the platform to correct any misconfiguration on client folders.
Temporary measures have been put in place to prevent it from executing and causing further issues.
As a result of the above scans and audit, the platform will be under high load until this is resolved. Currently there is no ETA; however, we will provide updates once we know more.
UPDATE01 – 20:00
Detailed scans are still underway. Due to the volume of files this is taking longer than planned, and server usage remains high as a result.
UPDATE02 – 23:55
Scan progress is at 61% and will continue into the night and morning. Backups will still run; however, this will only add to the already excessive load.
UPDATE03 – 04/04/2018 – 09:30
Both scans completed and results are being reviewed.
UPDATE04 – 04/04/2018 – 10:30
Removal works are complete and we are planning to reboot the server from 10:30 to restore a number of security policies.
We are aware of outbound SMTP delivery issues on server01.r02.easyhttp.co.uk. These have been caused by a compromised user account on this server that was used to send out a large volume of spam. The account has been terminated and is being dealt with; however, this has resulted in temporary delivery restrictions from some remote hosts such as Yahoo, Hotmail and AOL.
We are monitoring the situation. Depending on the remote host, any failed emails will be queued for re-delivery. If delivery restrictions remain for over 24 hours, we will look to divert outbound email flow via another internal SMTP array in the short term.
We apologise for any inconvenience caused.
UPDATE01 – 16:00
SMTP traffic from server01.r02.easyhttp.co.uk is now being relayed via another smart host, as the delivery restrictions were also affecting internal support sites.
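On the server side, diverting outbound flow via a smart host as described above might look like the following for a Postfix-based MTA. This is a sketch under assumed names, not the actual configuration used: the internal relay hostname is hypothetical, and the real platform may use a different MTA entirely.

```
# /etc/postfix/main.cf on server01.r02.easyhttp.co.uk -- sketch only.
# Relay all outbound mail via an internal smart host until the
# delivery restrictions against this server's IP are lifted.
# "smarthost.internal.example" is a hypothetical name.
relayhost = [smarthost.internal.example]:25
```

After reloading Postfix (`postfix reload`), the deferred queue can be pushed through the new relay with `postqueue -f`.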
We are aware that one of our media gateways is not releasing channels once a call has cleared down. This is causing busy tones or limit-exceeded messages.
We are currently working to resolve this ASAP.
UPDATE 01 – 19:00
Emergency Works have started. Any active calls on SIP02 have been dropped. We are sorry for any inconvenience caused.
UPDATE 02 – 19:03
SIP02 has reloaded and all services have restored. We will now look at SIP01.
UPDATE 03 – 19:05
Emergency Works have started. Any active calls on SIP01 have been dropped. We are sorry for any inconvenience caused.
UPDATE 04 – 19:03
SIP01 has reloaded and all services have restored.
Our network monitoring has alerted us to a power issue within one of our Cross-Connect / POP cabinets on the 2nd floor at Goswell Road. Our 4th floor suite is unaffected; however, all services routing via C002.017 should be considered at risk until this is resolved. We are currently engaged with Level3 and will provide updates shortly.
UPDATE01 – 11:57
Level3 have responded to advise that Tier 2 technicians are to investigate the incident and provide feedback on their findings within one hour.
UPDATE02 – 12:11
Level3 have responded to advise they have raised a subcase with their technicians at Goswell Road to continue the investigation.
UPDATE03 – 13:20
Level3 have issued the following statement:
A non-service affecting partial power rack failure at the Goswell Gateway in London, England is ongoing. The European NOC has advised that an electrical contractor has been engaged and is onsite working to resolve the power issue. There is no estimate as to when the power issue will be resolved; however, the European NOC has confirmed that this issue is not impacting services. Please be advised that updates for this event will be relayed at a minimum of hourly unless otherwise noted.
We do have a number of servers and core network equipment operating in redundancy mode due to the power loss; these remain at risk.
UPDATE04 – 14:25
Level3 have issued the following update:
The European NOC has advised that the electrical contractor has replaced the faulty power cabling onsite, clearing all of the alarms related to the partial power failure. At this time, efforts to contact the impacted clients to confirm service restoral are underway.
We are still seeing network alarms in relation to power and have asked Level3's technicians to step in, as we suspect one of our circuit breakers may have tripped. We will advise once further updates are given.
UPDATE05 – 18:30
We have chased Level3 for an update as no response has been given to our request. A level 1 escalation has also been raised.
UPDATE06 – 20:30
We have chased Level3 for an update as no response has been given to our request. A level 2 escalation has also been raised.
UPDATE07 – 21:58
Power has been restored and services are no longer “at risk”. Initial investigations suggest it was more than a circuit breaker at fault. A complaint has been raised.
We are aware of outbound SMTP delivery issues on server01.r02.easyhttp.co.uk. These have been caused by a compromised user account on this server that was used to send out a large volume of spam this morning. The account has been terminated and is being dealt with; however, this has resulted in temporary delivery restrictions from some remote hosts such as Yahoo.
We are monitoring the situation. Depending on the remote host, any failed emails will be queued for re-delivery. If delivery restrictions remain, we will look to divert outbound email flow via another internal SMTP array in the short term.
We apologise for any inconvenience caused.
UPDATE01 – 14:43
We are no longer seeing outbound emails that failed delivery due to the server's reputation queuing on the server. Because of this, we have agreed not to redirect outbound SMTP. If users are still seeing bounce-back emails, please contact support and we will review on a case-by-case basis.
We are aware of a network affecting issue. Engineers are currently working to identify the root cause and affected services.
UPDATE 01 – 14:15
Following an audit of our network monitoring, we can confirm this only affected our DSL services. Initial diagnostics show this to be a Zen Wholesale issue further upstream with one of our providers, as we observed a number of L2TP tunnels shut down and BGP prefix withdrawals from THN.
A fault has been logged with their NOC, who confirmed there was an issue affecting Wholesale L2TP subscribers, along with their channel and direct retail customers.
Services have restored and we have provided our diagnostic information to their NOC. At this time we are unaware of the root cause, and services should be considered at risk for now.
UPDATE 02 – 21:30
We are awaiting an RFO and have been advised no further repeats are expected.