The Australian Olympic Committee (AOC) has redesigned its web infrastructure to tap a "pre-warmed" pool of cloud resources in case of unexpectedly heavy traffic.
The team decided to redevelop its caching software and applications prior to the Summer Olympics in London this year after suffering a day-long outage during the Vancouver Winter Olympics two years ago.
Lorenzo Modesto from the AOC’s web hosting provider Bulletproof Networks explained that AOC had not expected its Winter Olympics site to attract as much traffic as it did.
As a result, the site became “resource bound” during the 2010 opening ceremony, triggering an “irregular” software bug that took a full day to resolve.
For London, the AOC decided to rebuild its caching system to take some of the load off its application servers.
Jason Barnes from the AOC’s web consultancy Daemon said that caching was a particular challenge because of user demand for real-time results – or as near to it as possible.
So far, the team has received, stored and processed more than 357,000 XML data files containing results to be published online.
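The article does not describe the feed's schema, so the element names, attributes and structure in the following sketch are invented; it is only meant to illustrate the kind of results-file processing involved.

```python
import xml.etree.ElementTree as ET

# Hypothetical results file: the real AOC feed's schema is not
# described in the article, so this structure is invented.
sample = """<results event="Men's 100m Final">
  <athlete name="A. Runner" country="AUS" time="10.02"/>
  <athlete name="B. Sprinter" country="GBR" time="10.11"/>
</results>"""

root = ET.fromstring(sample)

# Flatten each <athlete> element into a tuple ready for publishing.
records = [
    (a.get("name"), a.get("country"), float(a.get("time")))
    for a in root.findall("athlete")
]
```

At hundreds of thousands of files, this kind of parse-and-publish step is exactly what a cache in front of the application servers would shield from repeated page requests.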
Barnes said it was a challenge “keeping cache ‘expiries’ as low as possible [to provide near-real-time data] without putting too much load on the servers”.
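The trade-off Barnes describes can be sketched as a simple time-based cache in front of a results backend: a short expiry keeps the published data near real time, while a longer one cuts the number of requests that reach the origin servers. This is a minimal illustration, not the AOC's actual implementation; the class and function names are invented.

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key, fetch):
        """Return the cached value, or call fetch() and cache the result."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]        # cache hit: no load on the backend
        value = fetch()            # cache miss: hits the origin server
        self._store[key] = (now + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=5)  # low expiry for near-real-time results
calls = {"fetches": 0}

def fetch_results():
    # Stand-in for an expensive query against the results backend.
    calls["fetches"] += 1
    return {"event": "100m final", "status": "live"}

cache.get("results", fetch_results)
cache.get("results", fetch_results)  # served from cache within the TTL
```

Tuning `ttl_seconds` is the balancing act: lower values mean fresher results but more backend fetches, which is the load problem the AOC was trying to contain.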
According to Modesto, the new caching system had decreased the load on servers by 90 percent, while making the website more responsive.
The AOC committed to spending 30 percent more to host the site on Bulletproof’s ‘managed cloud’ over the London Olympics period, after which it hoped to scale back its spend.
Modesto said the AOC had a “significant number of ‘pre-warmed’ public managed cloud server instances” in case of traffic spikes but did not reveal details.
Bulletproof has tapped Amazon Web Services servers for previous international customers, including the Movember websites.
Daemon’s Barnes said the team conducted significant load testing prior to the current Olympics season, which has attracted more than a million page views a day to the AOC site.
“It's been a smooth run so far from a technology and architecture point of view. We are actually using less servers than we did for Vancouver while handling almost sixteen times the traffic,” he said.