Amazon offers explanation to recent outages


Elastic load balancer bug caused flood of requests.

Failed generators and a lengthy server reboot process exacerbated outages at Amazon Web Services over the weekend.

A detailed explanation of the outage released by the cloud provider this week attempted to minimise the event, saying a mere seven percent of virtual machine instances in the Virginia region were affected by the incident. However, the company conceded the recovery time for those affected was extended by a bottleneck in its server reboot process.

The outage on June 29, caused primarily by a huge thunderstorm passing through Virginia, saw several large internet sites including Netflix, Instagram and Reddit fail, and led to widespread criticism of Amazon’s hosting service.

While backup equipment at most of the ten data centres in Amazon’s US East-1 region kicked in as designed — allowing the facility to cope with mains grid power fluctuations caused by the storm — the generators at one site failed to provide stable voltage.

As the generators failed, battery-backed uninterruptible power supplies (UPSs) took over at the data centre, keeping servers running. However, a second power outage roughly half an hour later meant that the already depleted UPSs started to drop off, and servers began shutting down.

Power was restored 25 minutes after the second outage, but Elastic Compute Cloud (EC2) instances and Elastic Block Store (EBS) volumes in the affected data centre remained unavailable to customers.

For an hour, customers were unable to launch new EC2 instances or create EBS volumes as the control planes for these two resources keeled over during the power failure.

The Relational Database Service (RDS) was also hit by a bug that meant some multi-availability zone instances did not complete the failover process.

EBS volumes that were in use during the failure came back online in an impaired state, with all activity on them paused so customers could verify data consistency.

It took several hours to recover some EBS volumes completely, a process Amazon said it was working to improve.

At the same time, Amazon’s Elastic Load Balancers (ELBs) encountered a previously unknown bug that caused a flood of requests and created a backlog in other AWS zones not affected by the storm.

Amazon said it would break load balancer processing into multiple queues in future to avoid a similar incident, as well as to allow faster processing of time-sensitive actions.
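The idea of separating a backlog into multiple queues can be sketched roughly as follows. This is an illustrative toy in Python, not actual AWS internals; all names and the two-queue split are assumptions drawn only from Amazon's stated intent to process time-sensitive actions ahead of a backlog.

```python
import queue

# Hypothetical sketch: instead of one shared backlog, time-sensitive
# actions get their own queue so a flood of routine requests cannot
# delay them. Names are illustrative, not AWS code.
urgent = queue.Queue()   # time-sensitive actions (e.g. shifting traffic)
routine = queue.Queue()  # ordinary configuration changes

def submit(action, time_sensitive=False):
    (urgent if time_sensitive else routine).put(action)

def drain(processed):
    """Process all queued work, always preferring the urgent queue."""
    while True:
        try:
            action = urgent.get_nowait()
        except queue.Empty:
            try:
                action = routine.get_nowait()
            except queue.Empty:
                break
        processed.append(action)

# Simulate a backlog of routine requests with one urgent action behind it.
for i in range(5):
    submit(f"routine-{i}")
submit("shift-traffic-off-zone", time_sensitive=True)

log = []
drain(log)
print(log[0])  # the urgent action jumps the backlog
```

With a single queue, the urgent action would sit behind the five routine requests; the split lets it be processed first.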

It also intends to develop a DNS reweighting backup system that will quickly shift all load balancer traffic away from an availability zone in trouble.
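DNS reweighting of this kind can be pictured as each zone's endpoint carrying a weight that governs how often DNS answers point at it; dropping a failed zone's weight to zero diverts new traffic elsewhere. The sketch below is a minimal illustration under that assumption, with made-up zone names and no real DNS API.

```python
# Illustrative sketch (not AWS code) of shifting DNS weight away
# from an impaired availability zone.

def reweight(zone_weights, failed_zone):
    """Return new weights with all traffic shifted off the failed zone."""
    weights = dict(zone_weights)
    weights[failed_zone] = 0.0
    total = sum(weights.values())
    if total == 0:
        raise RuntimeError("no healthy zones left to serve traffic")
    # Normalise so the remaining zones absorb the failed zone's share.
    return {zone: w / total for zone, w in weights.items()}

weights = {"us-east-1a": 0.34, "us-east-1b": 0.33, "us-east-1d": 0.33}
new = reweight(weights, "us-east-1b")
print(new["us-east-1b"])  # 0.0 -- the troubled zone gets no new traffic
```

In practice such a change would be pushed out as updated DNS records, so clients resolving the load balancer's name simply stop receiving addresses in the troubled zone.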

Copyright © iTnews.com.au. All rights reserved.