Amazon Web Services senior VP Andy Jassy is obsessed with waste.
For the past eight years, Jassy has zeroed in on every opportunity to remove costs from the AWS business, with a business philosophy straight from the rulebook of its e-commerce parent – not that of a typical IT supplier.
Amazon Web Services has cut its prices more than 40 times in a row, at times leading the charge, at others countering similar moves by Microsoft and Google. Most recently, AWS revealed cuts as high as 40 percent.
Those cost reductions have had a dramatic impact on Australia’s domestic web hosting market and threaten to do the same more broadly across the sector. The race to the bottom has reached the point where low-cost-focused vendor Dell recently ruled itself out of entering this "crowded space".
If the ultimate winner of the price war comes down to who squeezes the last dollar out of their cost base, AWS would have to be considered a strong contender.
Below are four of the reasons AWS can afford to cut its prices:
1. Custom hardware
Like Facebook, Google and other web-scale entities, AWS tends to shun the usual brand names that manufacture general purpose hardware, and instead designs its own hardware specifically for the hosting of its services.
Beyond an acknowledgement that it uses Intel chips, AWS has not publicly revealed any of its vendor partners or systems integrators.
That said, Jassy was happy to share some guiding principles around the company’s hardware designs while in Australia last week for the AWS Summit.
AWS sources its own components – everything from processors to disk drives to memory and network cards, he said, and uses contract manufacturing to package those components into custom machines.
“Designing some of your own hardware helps [lower costs],” he explained, because AWS strips out any components that aren’t required to offer a service.
If it instead bought hardware from OEM vendors – hardware designed to be broadly suitable for any application – AWS would “end up paying for components customers aren’t using”, he said.
The same philosophy extends to storage. AWS has configured its own storage arrays in the name of density.
James Hamilton, senior VP and engineering lead at AWS, has previously boasted that the best disk density on offer from OEMs would see around 600 high-capacity disk drives fitted into a standard 19-inch rack, which equates to three quarters of a tonne of disk per rack. He has claimed AWS’s design offers well above a tonne of disk per rack.
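As a rough sanity check on Hamilton's density figures, the arithmetic works out if a high-capacity 3.5-inch drive plus its carrier weighs around 1.25 kg – an assumption for illustration, not a number AWS has supplied:

```python
# Rough sanity check on Hamilton's rack-density claim.
# The per-drive weight is an assumption (~1.25 kg for a high-capacity
# 3.5" drive with carrier), not a figure supplied by AWS.
DRIVE_KG = 1.25
OEM_DRIVES_PER_RACK = 600  # best OEM density cited by Hamilton

oem_rack_kg = OEM_DRIVES_PER_RACK * DRIVE_KG
print(f"OEM rack: {oem_rack_kg / 1000:.2f} tonnes of disk")  # → 0.75 tonnes
```

Under that assumption, 600 drives is exactly the "three quarters of a tonne" Hamilton describes, and "well above a tonne" would imply somewhere north of 800 drives per rack.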
“We also built our own networking hardware, and have our own protocol stack, and the price point has changed phenomenally,” Hamilton said at AWS’s re:Invent 2014 conference.
“The ASICs that form the heart of one of these routers don’t change, but when we started playing with them, it was 24 ports, then it was 36, then it was 48, and 96 is just around the corner.”
Hamilton said network performance reports are forwarded all the way to Jassy.
“The cost of deploying a service is all infrastructure. The difference between the infrastructure being an anchor around the neck of AWS shareholders, versus a group that is capable of helping reduce prices 38 times all comes down to the costs of the infrastructure. All of our engineers are focused on that problem.”
2. Supply chain efficiencies
Analysts note that even using custom hardware, Amazon’s storage prices have dropped at a pace that well and truly outstrips Moore’s Law.
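To put "outstrips Moore’s Law" in rough numbers: read as a cost rule of thumb (transistor cost halving every 18 to 24 months), Moore’s Law implies an annual price decline of roughly 29 to 37 percent. The sketch below compares that rate with a hypothetical storage price path; the dollar figures are illustrative assumptions, not Amazon’s actual prices:

```python
# Annualised decline rate implied by Moore's Law read as a cost rule of
# thumb: cost halves every ~18 months.
halving_months = 18
moore_annual_decline = 1 - 0.5 ** (12 / halving_months)
print(f"Moore's Law (~18-month halving): {moore_annual_decline:.0%}/yr")

# Hypothetical storage price path: $0.10/GB falling to $0.02/GB over
# three years (illustrative figures only, not AWS's published prices).
start_price, end_price, years = 0.10, 0.02, 3
storage_annual_decline = 1 - (end_price / start_price) ** (1 / years)
print(f"Illustrative storage decline: {storage_annual_decline:.0%}/yr")
```

Any sustained decline faster than that ~37 percent a year is what the analysts mean by outstripping Moore’s Law: the savings can't come from silicon alone.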
Two theories have been put forward to explain the phenomenon – neither of which has been directly confirmed by the company.
First, spinning disk has a high failure rate, so to account for returns, manufacturers usually build the cost of processing, shipping and replacing product into the price of each unit.
iTnews understands that AWS negotiated a discounted price with at least one of its disk suppliers [Seagate] to purchase hard drives in bulk on the understanding that none will ever be returned. When a drive fails at AWS, it is swapped out at speed.
Amazon has confirmed that no disk drive – or any component used to supply a customer a service – leaves an AWS facility without being "demagnetised and shredded". A spokesperson said this process is driven by the data privacy needs of customers. It nonetheless drives a better price for AWS.
The other key theory on Amazon’s storage prices concerns whether it is feasible the ludicrously low-cost Glacier archive storage service could be based on disk. In Sydney, Glacier is priced at just $0.012 per GB.
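Whatever the underlying media, the quoted Sydney rate makes archive costs easy to sketch. A minimal Python example (the 10 TB archive size is an illustrative assumption, not anything AWS has disclosed):

```python
# Monthly Glacier storage cost at the quoted Sydney rate of $0.012/GB.
# The 10 TB archive size is an illustrative example, not an AWS figure.
PRICE_PER_GB = 0.012     # USD per GB-month, Sydney region (as quoted)
archive_gb = 10 * 1024   # 10 TB expressed in GB

monthly_cost = archive_gb * PRICE_PER_GB
print(f"10 TB in Glacier (Sydney): ${monthly_cost:.2f}/month")  # → $122.88
```

At that rate, keeping ten terabytes in cold storage costs less per month than a single mid-range hard drive – which is precisely why observers question whether disk alone can be behind it.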
iTnews has speculated in the past that the cost and performance of the service suggests it can only be based, at least partly, on tape.
Jassy wouldn’t be drawn on whether the theory is correct, saying only that automation was the key difference between traditional tape libraries and “what we do with Glacier”.
“It’s pretty different [to a standard tape library], I’ll leave it at that,” he said.
Negotiating discounts on streamlined hardware is only possible for those organisations that have achieved significant scale.
AWS began its cloud journey with considerable bargaining power as an online retailer, and eight years later, its scale is staggering.
Analyst body Canalys last year predicted AWS's 2013 revenues would top $3.8 billion – more than the next five major IaaS players combined. Gartner has also stated that AWS has five times the compute capacity of the next 14 providers combined. Amazon does not break out AWS results.
Hamilton noted that each day, AWS adds as much server capacity to its compute cloud as the whole of Amazon’s retail arm used in 2003, when Amazon was a US$7 billion business.
That said, he insisted scale doesn’t mean AWS is powering thousands of servers that sit under-utilised at any point in time. One of the primary reasons the company uses contract manufacturing and sources its own components is to ensure a just-in-time build of infrastructure and “not have resources lying fallow in the data centre that are yet to deliver value.”