The VMs were deleted on September 26th and, while service resumed a few days later, Cisco was left with considerable work to restore full functionality.
As of 4:05PM Tuesday, Sydney time, Cisco sounded the all-clear. A new status update declared “This incident is now resolved.”
“Metrics and monitors continue to be positive, and services are operating as expected.”
But if this is resolved, we’d hate to see what an incident in progress looks like, because the new update also detailed “two known issues that remain after the repairs were completed”.
One of the issues means WebEx will “show the list of users in the space as the space name, rather than the correct name of the space.”
The other issue means “Some spaces may still be missing some historical content. While the data is available in the Teams data set in the cloud, the client is not always picking up the updated content and refreshing its cache.”
Happily, fixes are easy: the first problem can be addressed by manually renaming a space; the second will be resolved by a client software update.
The incident is surely embarrassing to Cisco because, as the company admitted, it caused the outage by deleting the cloud-hosted virtual machines that ran WebEx, a staggering operational failure.
Cisco increasingly pushes cloud services as the way to collaborate and manage network services. The company probably counts itself lucky that the latter class of services wasn’t impacted by this incident: unavailable collaboration tools are an inconvenience, because workers can still use email or phones.
But networking hardware bricked by a self-deleting cloud would have called into question Cisco’s strategy.