LON01 – 26/06/2015 – 13:00 – CORE01.LON01 *EMERGENCY WORKS* *COMPLETE*

Our network monitoring has alerted us to an intermittent fan failure on PSU 1 within CORE01.LON01. This device has redundant PSUs, so no impact is expected.

Engineers will attend this afternoon to swap out the failing power supply. Again, no impact is expected; however, the device will be marked as “at risk” while engineers are working on it.

Updates will be provided once available.

UPDATE 01 – 14:00

This work has been completed without any issue.

LON01 – 26-06-2015 – 20:00 – 22:00 – Network Maintenance (CORE03.LON01) *COMPLETE*

In the above window we will be re-configuring BGP to provide a full IPv6 routing table to CORE03.LON01. No outage is expected; however, as routing tables update within our network, users may see routes change around our network.

LON01 – 24-06-2015 – 20:00 – 22:00 – Network Maintenance (IPv4) *COMPLETE*

In the above window we will be making changes to ACCESS01.LON01, converting it to our third core device (CORE03.LON01). We will then be expanding BGP to the device to provide a full IPv4 routing table. No outage is expected; however, as routing tables update within our network, users may see routes change around our network.

This work was partly postponed due to engineer availability and will take place on Friday along with the IPv6 works.

LON01 – 12/06/2015 *UK Routing Issue* *RESOLVED*

We are aware of an issue affecting connectivity to multiple networks within the UK. Initial testing showed this to be a problem with Level3; however, they are advising it is part of a wider problem within the UK.

We are currently monitoring and will look at re-routing traffic if this continues; however, this would involve shutting down several of our peers and forcing traffic via unaffected links. This would be a last-resort option.

Once we know more we will advise.

UPDATE 01 – 11:11

Level3 have updated us to advise they are now tracking a global routing issue. The problem now seems to be isolated to Level3; however, it will affect any network that peers with Level3. Re-routing traffic is still an option, but it may have no impact for networks that rely on Level3.

UPDATE 02 – 11:36

Our network monitoring has alerted us to routing failures to common websites such as the BBC, Facebook & Google. We have taken the decision to shut down our Level3 peering with Adapt. While this has had some positive impact, third-party networks that use Level3 will still be having issues.
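For context, administratively shutting down a BGP peering session is a small, quickly reversible change. A sketch in Cisco IOS-style syntax follows; the AS number and neighbour address are illustrative examples, not our actual configuration:

```
router bgp 64512
 ! Administratively disable the Level3 session reached via Adapt
 ! (neighbour address is a documentation example, not the real peer)
 neighbor 192.0.2.1 shutdown
```

Re-enabling the session later is the matching `no neighbor 192.0.2.1 shutdown`, after which routes re-converge automatically.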

UPDATE 03 – 11:55

Level3 have advised that services are starting to be restored. We have re-enabled our Level3 peer via Adapt and have noticed they also took the decision to shut down their link to Level3. Traffic is currently routing via Adapt to Cogent.

UPDATE 04 – 20:00

Outage report from a 3rd party monitoring network: http://www.bgpmon.net/massive-route-leak-cause-internet-slowdown/

LON01 – 07-06-2015 – 19:00 – 22:00 – Network Maintenance (IPv6) *COMPLETE*

In the above maintenance window we will be adding IPv6 support back to ACCESS01.LON01 after its removal during our previous network upgrades. We will also be enabling IPv6 on CORE02.LON01 to support IPv6 on our upcoming DSL network. IPv6 is already available on CORE01.LON01 for directly connected services and leased line circuits.

No disruption is expected to IPv4 traffic or the network in general.

UPDATE 01 – 19:00
Engineers have started the works.

UPDATE 02 – 19:50
IPv6 BGP sessions to our primary peer “Adapt” have been established, along with our tertiary peer “Zen”. Prefixes are being received on CORE01.LON01. A BGP session will now be created between CORE01.LON01 & CORE02.LON01.
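For those interested, an internal (iBGP) session between the two cores would look roughly like the sketch below, in Cisco IOS-style syntax. The AS number and link addresses are illustrative (documentation ranges), not our actual configuration:

```
! On CORE01.LON01 – hypothetical AS 64512, illustrative link address
router bgp 64512
 neighbor 2001:db8:ffff::2 remote-as 64512
 neighbor 2001:db8:ffff::2 description iBGP to CORE02.LON01
 address-family ipv6
  neighbor 2001:db8:ffff::2 activate
  neighbor 2001:db8:ffff::2 next-hop-self
```

A mirror-image configuration would sit on CORE02.LON01 pointing back at CORE01.LON01.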

UPDATE 03 – 20:30
A BGP session between CORE01.LON01 & CORE02.LON01 has been created and is now in testing.

UPDATE 04 – 20:45
Testing has completed OK; however, there is a problem with routing traffic to both Adapt & Zen. We are unable to pass IPv6 traffic to Adapt, and we are not receiving a default route from Zen. Tickets have been raised with each carrier. At this moment in time we are not passing IPv6 traffic due to this fault. If a quick resolution is not provided, we will drop the affected peers.

UPDATE 05 – 21:20
While waiting for responses from upstream peers, we continued to investigate and discovered that the ranges we were testing with were not announced within our internal IPv6 BGP session, due to the use of a GRE tunnel. Network statements have been added and traffic is flowing via Adapt without issue from our test locations. No traffic loss would have been seen other than from our test locations, and we retract the previous statement about not passing traffic. We are still awaiting a response from Zen.
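The fix amounts to adding the missing prefixes under the IPv6 address family so they are advertised across the internal BGP session. A Cisco IOS-style sketch, using a documentation prefix rather than one of our actual allocations:

```
router bgp 64512
 address-family ipv6
  ! Advertise the test range over the internal BGP session;
  ! without this network statement the prefix was never announced
  ! across the GRE-tunnelled iBGP session.
  network 2001:db8:100::/48
```

The `network` statement only advertises a prefix that already exists in the local routing table, which is why the gap went unnoticed until traffic was tested end to end.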

UPDATE 06 – 21:45
Due to time lost, we have suspended the integration of ACCESS01.LON01. While technically there is less work to complete on this, and it could be completed within the window, we feel it should be rescheduled to avoid human error given the late hour and the concentration required when it comes to BGP!

Internal Systems *RESOLVED*

We are currently experiencing issues with some internal systems that are preventing various customer service / support operations. Engineers are en route to the data centre; however, local rail cancellations and disruptions are delaying their arrival.

Please also note the above is affecting email delivery, so we are unable to process support tickets. We advise calling for any urgent issues.

Apologies for any inconvenience this may cause. Once we know more we will update this notice.

UPDATE 01 – 12:00
Engineers have arrived on site.

UPDATE 02 – 12:30
Internal systems have been restored.