LON01 – SMTP Decommission – 95.87.104.26 – 04/06/2016

On 04/06/2016 we will be decommissioning SMTP relay 95.87.104.26, as it has been relocated within the network to provide greater redundancy in line with our network migration & upgrade plans taking place on the 1st, 2nd and 3rd of June 2016. (Further updates to follow on that.)

We have commissioned 95.87.111.24 in place of the above, and it is actively taking requests as of today. SMTP users who use one of our “round robin” DNS records will automatically see the updated address within the next 24-48 hours.

Customers relaying directly to the various mail server IPs we provide, including the above, are advised to update their configuration to one of the round robin addresses below or to the new IP address (a quick way to verify the change is sketched after the list).

smtp.dsl.structuredcommunications.co.uk
smtp.easybond.co.uk
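
For customers who want to confirm the change has propagated, a check along the following lines can be used. This is only an illustrative sketch using the system resolver; the hostname is one of the round robin records above and the expected address is the newly commissioned relay.

```python
# Illustrative check only: resolve one of the round robin SMTP records and
# confirm the newly commissioned relay (95.87.111.24) appears in the answer.
import socket

host = "smtp.dsl.structuredcommunications.co.uk"
addresses = {info[4][0] for info in socket.getaddrinfo(host, 25, socket.AF_INET)}

print(sorted(addresses))
if "95.87.111.24" in addresses:
    print("New relay is visible via the round robin record.")
else:
    print("New relay not seen yet - caches may take 24-48 hours to expire.")
```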

Failure to make this update may result in you losing SMTP service and/or redundancy.

If you have any doubt please contact us.

UPDATE 01 – 11/06/2016 – 11:30 – FINAL

This SMTP relay has now been decommissioned. Users are advised to update their secondary SMTP to 95.87.111.25 to avoid any loss of service. We have updated our round robin DNS so users making use of:

smtp.dsl.structuredcommunications.co.uk
smtp.easybond.co.uk

should be unaffected.

LON01 – DNS Decommission – 95.87.104.25 – 04/06/2016

On 04/06/2016 we will be decommissioning DNS resolver 95.87.104.25, as it has been relocated within the network to provide greater redundancy in line with our network migration & upgrade plans taking place on the 1st, 2nd and 3rd of June 2016. (Further updates to follow on that.)

We have commissioned 95.87.111.25 in place of the above, and it is actively taking requests as of today. DSL users who rely on PPP to auto-negotiate these servers will automatically see the updated address within the next 24 hours. Customers with a Structured Communications managed CPE, firewall and/or system will also automatically see these changes prior to the 4th.

Customers with their own hardware using a static configuration (not related to static IPs) may need to make this update manually.
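
As a rough guide for customers making the change by hand, the snippet below sends a test query directly at the new resolver before you commit it to your static configuration. This is an illustrative sketch only: it assumes the third-party dnspython package (version 2.x), and example.com is just a placeholder query target.

```python
# Illustrative check only: point a test query directly at the newly
# commissioned resolver (95.87.111.25) before updating a static configuration.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore the local resolver settings
resolver.nameservers = ["95.87.111.25"]            # the new resolver
resolver.lifetime = 5                              # fail quickly if it is unreachable

answer = resolver.resolve("example.com", "A")
for record in answer:
    print(record.address)
```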

Failure to make this update may result in you losing DNS service and/or redundancy.

If you have any doubt please contact us.

UPDATE 01 – 11/06/2016 – 11:30 – FINAL

This DNS resolver has now been decommissioned. Users are advised to update their secondary DNS to 95.87.111.25 to avoid any loss of service. Managed CPEs where we still have access have been updated.

LON01 – EasyIPT – 06/05/2016 – 15:50 *Resolved*

We are aware some customers may have received an automated message advising they were unable to dial out due to their account being suspended. This is not a message originating from our network and we have re-routed calls while investigations are ongoing with that carrier.

We apologise for any inconvenience caused.

UPDATE01 – 16:10 – FINAL

We have received an update from our carrier advising that a billing script was run against wholesale interconnects (such as ours) which had an unexpected effect on call barring restrictions, preventing all but emergency calls from progressing. As the calls were routing correctly to our carrier with no normal failure message at a “network” level, our systems did not exclude that carrier on the first attempt. Callers who waited until the end of the message would have seen the call connect.
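
To illustrate the point, a simplified sketch of the kind of carrier failover logic involved is shown below. This is not our production routing code, and the helper name send_invite and the response-code set are purely illustrative: because the upstream leg answered normally rather than returning a SIP failure code, there was no signal on which to exclude the carrier and try an alternative.

```python
# Simplified, illustrative sketch of carrier failover. "send_invite" is a
# hypothetical helper returning the final SIP response code for the outbound leg.
FAILOVER_CODES = {403, 404, 480, 486, 500, 502, 503, 603}  # illustrative set only

def route_call(destination, carriers, send_invite):
    for carrier in carriers:
        code = send_invite(carrier, destination)
        if code == 200:
            # The call was answered. Even if the far end only plays a barring
            # announcement, no failure is signalled, so no failover happens.
            return carrier
        if code in FAILOVER_CODES:
            continue  # this carrier rejected the call; try the next one
        return None   # unexpected response; stop rather than retry blindly
    return None       # no carrier could complete the call
```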

Once again we apologise for any inconvenience caused. We have been advised this script won't be run again.

LON01 – POP-C002.017 – 06/04/2016 – 10:00 – 12:00 – *Emergency Work* *COMPLETE*

Further to our NOC notice posted on 31/03/2016 in respect of power at one of our POPs, “C002.017 – Goswell Road”, our network monitoring has alerted us to another loss of power on the secondary feed at this cab.

Structured engineers are attending site tomorrow morning within the above maintenance window to install equipment that will allow us to isolate the faulty hardware without further risk to services provided via this cabinet going forward. The work will involve the replacement of various power distribution hardware. At this time the POP is operating on its redundant power feed and all services have been re-routed where possible. Ethernet services provided by this cab are considered “at risk” until power has been fully restored. Transit services will automatically re-route in the event of a failure.

Due to the nature of the works, engineers will be working within a live cabinet. No issues are expected and extreme care will be taken while the works are undertaken.

Further updates will be provided in the morning.

UPDATE01 – 10:10
Engineers have started work.

UPDATE02 – 10:55
The new power distribution hardware has been installed and engineers will begin to power up the affected hardware one device at a time. Level3 are on site with us in the event of another problem.

UPDATE03 – 11:10
All hardware has been powered up and the faulty device found (albeit with a bang and a fire). Unfortunately, the failed hardware is the redundant power supply on the network core within this rack. Redundant hardware of this size is not kept on site and we are currently in the process of sourcing another unit. Further updates to follow.

UPDATE04 – 12:02
Further tests have been carried out and confirmed the PSU has failed. Engineers have removed a power supply from another unit in our 4th floor suite and installed it within the 2nd floor POP to confirm it is undamaged by the recent events. Further updates to follow.

UPDATE05 – 12:45
Engineers have ordered a same-day replacement from one of our suppliers and will remain on site to fit and commission the new hardware on arrival.

UPDATE06 – 15:25
Engineers remain on site awaiting hardware. ETA was 15:30, however this has been pushed back due to a crash on the A3.

UPDATE07 – 15:25
Engineers remain on site, having little fun. We have been advised the part is now in London and will be with us by 17:30.

UPDATE08 – 17:35
The replacement part has arrived on site.

UPDATE09 – 17:50
Despite our best efforts, our supplier has shipped the wrong part! Discussions with them have concluded with no further delivery options today. New hardware has been sourced and is being made available to site on a timed delivery. Engineers are attending again in the morning to swap out the (hopefully) correct new part. We do apologise for the delay in getting this resolved, however we want to remind customers who route via this device that it is still operating as expected on its redundant supply.

UPDATE10 – 09:30 – 07/04/2016
Engineers have returned to site and are awaiting delivery of the new PSU.

UPDATE11 – 09:46 – 07/04/2016
Delivery update to advise the hardware will be on site before 11am.

UPDATE12 – 10:33 – 07/04/2016
Hardware has arrived on site and engineers have confirmed it is the correct unit this time.

UPDATE13 – 10:47 – 07/04/2016
Engineers have installed the new power supply and confirmed its operation within the core. A series of load tests have been conducted with normal operation observed.

UPDATE14 – 11:00 – 07/04/2016
We are happy the new power supply is operating as expected, however will continue to monitor its operation for the next few hours. The site is no longer classed as “at risk” and this issue will now be closed off.
We apologise once again for the time this has taken to resolve and will be reviewing our internal procedures for hardware spares of this nature at Goswell Road.

LON01 – C002.017 – 31/03/2016 – 15:00 – *At Risk* *RESOLVED*

Our network monitoring has alerted us to a power issue within one of our Cross-Connect / POP cabinets on the 2nd floor at Goswell Road. Our 4th floor suite is unaffected, however all services routing via C002.017 should be considered at risk until this is resolved. We are currently engaged with Level3 and will provide updates shortly.

UPDATE01 – 15:33
All devices within the affected cab confirm that they have lost one side of their power feed. We are still working with Level3.

UPDATE02 – 17:00
Level3 have advised this is being assigned to a local engineer for further investigation.

UPDATE03 – 18:57
Level3 have advised this has been tasked to a field engineer to investigate. No local alarms have been raised on the site so we suspect this is a local power issue at the cab.

UPDATE04 – 21:20
The field engineer has advised a local fuse had blown on our B side power feed (external fuse). We have granted permission for this to be replaced and for Level3 to conduct a series of tests to see if the fault was caused by a hardware PSU failure. No issues are expected, however the POP should be classed as “high risk” for the duration.

UPDATE05 – 21:40
Tests have been completed and no fault was found with any of our hardware. The field engineer suspects a weak fuse on that PSU. All redundant power has been restored and the site is no longer classed as “at risk”.

UPDATE06 – 22:30
Monitoring has remained stable for the duration and we are happy no further issues are expected, however Level3 remain on standby. As this was not a service-affecting fault and was resolved within SLA, no RFO will be provided.

LON01 – EasyXDSL – 24/03/2016 – 14:30 – *Emergency Maintenance*

We have just been made aware that our carrier, in conjunction with BT, will be conducting some emergency maintenance on several FTTC gateways within the South East area. This will cause connections to drop, however they should reconnect almost instantly.

We apologize for the short notice given for these works and will advise once complete.

LON01 – EasyHTTP – 02/03/2016 – 17:30 – *Maintenance* *Complete*

Our network monitoring has alerted us to a memory problem with “Server01” within our EasyHTTP platform. The server is on-line and processing requests, however it is becoming increasingly unresponsive to our monitoring systems as time progresses.

To avoid a complete failure of the system, the decision has been made to power cycle the server at 17:30.

SMTP and WEB services on this server will be unavailable for the duration. We will review the system for stability once the reboot is complete and take further action where required.

We apologise for any inconvenience this may cause.

LON01 – EasyIPT – 29/02/2016 – 21:30 till 22:00 *Emergency Maintenance* *Complete*

Following on from reports today of SIP channel limits being reached for both inbound and outbound calls affecting some of our managed PBX systems, work has been undertaken on our core softswitch to try and identify the root cause of this issue.

Changes have been made to the platform, however an emergency reboot is required. Due to the size of the platform this reboot will take up to 15 minutes to complete. During this time inbound and outbound calls will be limited.

We will advise once service is restored and calls are routing again.

Apologies for any inconvenience this may cause.

UPDATE01 – 21:40 The softswitch reboot is complete; we are monitoring traffic flow.

UPDATE02 – 22:55 A random check of managed PBX systems shows them as registered, however this is not reflected on our softswitch. Traffic is, however, flowing normally.

UPDATE03 – 22:10 Systems continue to show inaccurate data about the status of some registrations. We suspect stuck SIP sessions on some managed PBXs as this is affecting a wide range of PBX software versions.

UPDATE04 – 21:40 All affected managed PBXs are showing as on-line correctly.

LON01 – 26/02/2016 – 15:40 – CORE01.LON01 *INCIDENT* *RESOLVED*

We are aware CORE01.LON01 within our network has suffered a software reload affecting all services directly connected to it. Other parts of the network are unaffected. UPDATE01 – 15:45 CORE01.LON01 has now recovered and we are reviewing the crash logs for the root cause. We apologize for any inconvenience caused. UPDATE02 – 16:15 A incident … Continue reading “LON01 – 26/02/2016 – 15:40 – CORE01.LON01 *INCIDENT* *RESOLVED*”

We are aware CORE01.LON01 within our network has suffered a software reload affecting all services directly connected to it. Other parts of the network are unaffected.

UPDATE01 – 15:45
CORE01.LON01 has now recovered and we are reviewing the crash logs for the root cause. We apologize for any inconvenience caused.

UPDATE02 – 16:15
An incident report is now available HERE.

LON01 – 22/02/2016 – IPv6 Changes

We have been running IPv6 on our core network for over 2 years now, and during this time offering test IPv6 subnets to Leased Line customers as well as enabling some of our back end platforms. Over the next 4 weeks we will be re-configuring IPv6 on our core network to enable this connectivity on … Continue reading “LON01 – 22/02/2016 – IPv6 Changes”

We have been running IPv6 on our core network for over two years now, and during this time have been offering test IPv6 subnets to Leased Line customers as well as enabling it on some of our back-end platforms.

Over the next four weeks we will be re-configuring IPv6 on our core network to enable this connectivity across our various DSL infrastructure for testing purposes.

No service interruption is expected for IPv4-based connectivity, however IPv6 users may see some degraded service while various parts of the core are reconfigured. Any service-affecting works are expected to take place out of hours.

UPDATE01 21:00 – 23/02/2016
We have reconfigured our core to the new permanent (we hope) IPv6 addressing scheme. BGP sessions have been reconfigured and are back on-line. The next phase is to reconfigure our Ethernet services.

UPDATE02 22:30 – 05/03/2016
Both LNS01 and LNS02 now have IPv6 built and enabled on their respective interfaces. We plan to build and enable the IPv6 BGP sessions back to our core shortly. This will allow us to provision test accounts with an IPv6 prefix.

UPDATE03 23:30 – 05/03/2016
IPv6 BGP is now enabled between LNS01 & LNS02 and our core. IPv6 prefixes are now available for testing on our DSL network.

UPDATE04 01/04/2016
Standard Ethernet services are now also fully enabled for IPv6 and we welcome test clients.
UPDATE05 22/04/2016 – 21:19
All Ethernet services (i.e. services including a backup media) are now also fully enabled for IPv6. Failover has been tested without issue. We have also added some IPv4 config tweaks to these circuits. These again have been tested without fault, however clients may have seen a “blip” or two while we checked these config changes worked as expected. We apologise for any inconvenience caused.