We will be shutting down our Frontier interconnect in preparation for moving the circuit from our current legacy access device to CORE01 as part of our ongoing network upgrade works. The circuit will be physically moved on 27/03/2015 while an engineer is on site, and brought back into service that evening outside of hours.
Call traffic will be routed away from this interconnect several hours prior to the works; however, any calls still active on the link will drop once it is disconnected. Routing to Frontier will then be via one of our indirect peers after the disconnection. No further impact is expected after this.
UPDATE 01 – 20:30
The subject maintenance this evening has been postponed. New dates will be announced in due course.
Apologies for any inconvenience caused.
UPDATE 02 – 05/04/2015
This work has been re-booked for Tuesday 7th April 2015 at the same time. A new maintenance window will be created.
To allow for extra IPv4 capacity in Rack03 to accommodate our new ESXi platform for VoIP, we will be merging Vlan30 and Vlan40 into a single /25 subnet.
Customers who have unmanaged co-location with us in the 22.214.171.124/26 range will need to update the following on their devices on the day:
New gateway: 126.96.36.199
New subnet: 255.255.255.128
Once these changes are made, customers will lose access to their devices until we finalize the amended VLAN. We will be making our changes at 20:05, so we would suggest customers make their changes as close to 20:00 as possible.
Please remember to commit these changes to memory once access has been regained, as we will be moving power feeds/racks to reflect this at around 21:00.
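As a sanity check on the subnet arithmetic above: a /25 mask (255.255.255.128) spans 128 addresses, exactly two adjacent /26 ranges. A minimal sketch using Python's ipaddress module, with an RFC 5737 documentation range standing in for the live addressing:

```python
import ipaddress

# Illustrative range only; substitute the live addressing from the advisory.
merged = ipaddress.ip_network("192.0.2.0/255.255.255.128")
print(merged.prefixlen, merged.num_addresses)   # 25 128

# The merged /25 splits exactly into the two former /26 VLAN subnets.
halves = list(merged.subnets(prefixlen_diff=1))
print([str(h) for h in halves])                 # ['192.0.2.0/26', '192.0.2.64/26']
```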
We will be relocating both of our root name servers within the above window to another rack within our suite. This is to enable us to remove an old 10/100 line card and replace it with a fiber GBIC line card in our current Cisco core, in preparation for other upgrade works.
Each server will be migrated in turn to avoid any serious disruption; however, there may be a few moments where both forward and reverse queries are refused while the servers switch between primary and backup modes as they are taken offline and back online.
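For clients sensitive to those brief refusals, a simple retry with backoff will ride out the switchover. A hypothetical sketch only: the `resolve` callable stands in for whatever resolver library a client uses, and is not a real API.

```python
import time

def resolve_with_retry(resolve, name, attempts=3, delay=0.5):
    """Call a resolver, retrying on failure to ride out the brief
    window where queries are refused during a server switchover."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return resolve(name)
        except Exception as exc:                # e.g. REFUSED or timeout
            last_exc = exc
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    raise last_exc
```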
As part of our upgrade plans, which include our own DSL and leased line network, we will be adding two new Cisco 6509 devices to our core network. This work involves bringing up new BGP sessions and moving fiber interconnects between core devices.
No impact is expected while the new BGP sessions are brought up between devices; however, routing changes and brief network instability will be seen while we relocate interconnects and our routing tables re-converge to reflect the new interconnect locations.
This work forms only part of the full scope of works and other maintenance windows will be booked to relocate other interconnects and re-purpose existing core devices to access nodes.
UPDATE 01 – 23:55
Primary interconnects have been relocated; however, we have not re-enabled the sessions due to a potential problem. Works will continue tomorrow in a limited fashion, and the sessions will be re-enabled outside of hours.
UPDATE 02 – 22/03/2015 – 20:51
Work to establish the new internal BGP sessions is complete. Work to re-enable our external BGP sessions via these new cores is also complete; however, they will require reconfiguration at a later date once CORE-2 is online.
We will be upgrading our shared EasyIPT platform (Hosted VoIP) over the course of Friday 5th March. This upgrade involves replacing the current server with a much larger device, along with an upgrade of the OS and PBX software. The new server will be built and brought online alongside the current one. Migration works will start after 20:00 hours on a client-by-client basis.
As part of the works, each shared session will now be given its own IPv4 address and trunk. This will eliminate the billing issues with call forwarding that have been present for some time.
The works also improve redundancy via a dual network topology to the main server and enable IPv6 going forward. This work is in line with our upgrade program and allows us to provide a truly unique setup and experience.
Once a client is migrated, the site firewall rules and handsets will be updated via our management network to reflect the new session. No action is needed where this is in place. Any site operating outside of this arrangement, or using a 3rd-party firewall with no VPN, will need to contact us.
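The per-session addressing described above amounts to handing each migrated session its own address and trunk from a pool. A hypothetical illustration only: the pool range (an RFC 5737 documentation block), the `allocate_trunk` name, and the trunk naming scheme are all invented for the example, not our actual provisioning code.

```python
import ipaddress

# Hypothetical address pool (RFC 5737 documentation range).
pool = iter(ipaddress.ip_network("198.51.100.0/25").hosts())

def allocate_trunk(client_id):
    """Give one client session its own trunk name and IPv4 address."""
    return f"trunk-{client_id}", str(next(pool))

print(allocate_trunk("client-a"))   # each call yields a unique address
```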
UPDATE 01 – 22:00 – 08/03/2015
Many of the systems have migrated without issue as expected; however, due to the volume of migrations we have been unable to complete them all within the time allotted. Migrations will continue out of hours over the course of Monday next week; based on the weekend's work, no service disruption should be seen. The old server will then be disconnected at 22:00 on Tuesday 10th March.
We will be making some changes to our core SIP soft-switch tonight to resolve the issues we have been seeing with some BTIPX calls. This will involve a reload of the session controller and will drop any active calls. This should only need to happen once. We apologize in advance for any inconvenience this may cause.
Following on from the previous NOC advisory, it has become necessary to migrate the existing “Active” supervisor to “Hot Standby” so a reload can take place on that slot. This may affect backplane switching on CORE-01, and packet loss may be seen for several seconds while the Standby supervisor becomes the Active unit.
Our network monitor has alerted us to a failure of a link within the Entanet port channel that we use for transit to and from Entanet at Goswell Road (LON01). The remaining links have taken over the traffic load and no service impact has been seen. Diagnostic work is underway to bring the link back into service; at present the cause is suspected to be a fiber break.
UPDATE 01 – 15:30
Further work with Entanet shows there is no light on their side of the link. No work has taken place in the Entanet cabinet or in our suite. Other links in the same bulk fiber are operating without the same failure, so this may be a faulty transceiver.
UPDATE 02 – 15:45
Further diagnostics show that, despite our transceiver reporting the link as “Not Connected”, in-depth diagnostics suggest a possible fault with the transceiver itself, as it is not being reported in the hardware inventory for that line card. Real-time logs do not show the device being taken out of service by the IOS, and we believe a line card reset may resolve the issue. However, this transceiver is currently connected to the Active supervisor for that core. The device has two supervisors for redundancy, and the load will need to be transferred before a reload can take place. A new emergency maintenance window will now be opened, as this may result in several seconds of switching failure.
UPDATE 03 – 20:05
The module has been restarted; however, the problem remains. We believe the output given by the Cisco IOS is misleading, as it fails to list the device at all rather than reporting light levels of 0 dB. We have returned to the fiber-break theory and have asked our vendor Level3 to confirm whether this is the case.
UPDATE 04 – 04/03/2015 – 09:00
A structured cabling engineer is being sent to site to locate and test the fiber.
UPDATE 05 – 06/03/2015 – 10:00
Engineers discovered that this cross-connect had been incorrectly unplugged in a 3rd-party cabinet. This won't be an issue going forward, as we have our own bulk fiber and all critical circuits will be moved to it.