SuperChoice is reworking its applications to run across multiple public clouds, enabling the company to take advantage of “spot pricing” and significantly lower its infrastructure costs.
Chief information officer Ian Gibson told iTnews that the company, which provides hosted superannuation management and clearing house software, conducted its own research and pilots and correlated the results with third-party research before embarking on the project.
Gibson will present the results of his internal research – and of the company’s shift to the public cloud – at Cloud & DC Edge 2017 in March, which is co-organised by iTnews.
Spot pricing enables users to set a maximum price they are willing to pay for cloud capacity.
When a cloud provider has excess capacity, they can put a price on it, sometimes orders of magnitude below the regular on-demand price. Customers whose set maximum meets or exceeds the asking price may be allocated the spare capacity.
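The allocation rule can be sketched in a few lines. This is an illustrative model only, with hypothetical prices and workload names, not any provider's real API: capacity goes to bidders whose maximum bid meets or exceeds the current spot price.

```python
# Hypothetical prices for illustration -- not real provider rates.
ON_DEMAND_PRICE = 0.50   # $/hour, regular on-demand rate
spot_price = 0.08        # $/hour offered when spare capacity exists

# Each customer sets the maximum it is willing to pay per hour.
bids = {"batch-job": 0.10, "staging-env": 0.05, "ci-runner": 0.09}

# Capacity goes to bids at or above the current spot price.
allocated = {name: max_bid for name, max_bid in bids.items()
             if max_bid >= spot_price}

for name in sorted(allocated):
    saving = 1 - spot_price / ON_DEMAND_PRICE
    print(f"{name}: running at ${spot_price}/hr ({saving:.0%} below on-demand)")
```

With these made-up numbers, "batch-job" and "ci-runner" run at the spot rate while "staging-env", which bid below the asking price, gets nothing.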
Gibson said SuperChoice’s own tests of spot pricing across 10 cloud instance configurations showed it could save “up to 84 percent” compared to the regular cost of hosting its sample workloads. He said external studies had achieved similar results.
“If you mix and match between cloud service providers and take advantage of their various pricing arrangements, you get a far more cost-effective outcome than if you just go with one service provider,” Gibson said.
“Each of them has strengths and weaknesses. But particularly if you start tapping in with pre-production workloads - where there’s the greatest opportunity to use spot pricing - the savings can be quite substantial.
“You need to understand the characteristics of your load, and if you do we think there’s a significant opportunity in taking advantage of essentially what becomes marginally priced compute capacity.”
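Gibson's mix-and-match argument can be made concrete with a back-of-the-envelope comparison. All rates and usage hours below are hypothetical: the sketch simply shows how keeping steady production on the cheapest on-demand tier while pushing interruption-tolerant pre-production work to the cheapest spot tier undercuts a single-provider, on-demand-only bill.

```python
# Hypothetical hourly rates for the same workload on two providers.
rates = {
    "provider_a": {"on_demand": 0.50, "spot": 0.08},
    "provider_b": {"on_demand": 0.45, "spot": 0.12},
}

# Assumed monthly usage: always-on production, bursty pre-production.
hours = {"production": 720, "pre_production": 300}

# Option 1: everything on one provider's on-demand tier.
single = sum(hours.values()) * rates["provider_a"]["on_demand"]

# Option 2: production on the cheapest on-demand tier, pre-production
# (which tolerates interruption) on the cheapest spot tier.
mixed = (hours["production"] * min(r["on_demand"] for r in rates.values())
         + hours["pre_production"] * min(r["spot"] for r in rates.values()))

print(f"single-provider on-demand: ${single:.2f}")
print(f"mixed spot/on-demand:      ${mixed:.2f}")
```

The point of the sketch is the structure of the decision, not the numbers: the saving depends entirely on how much of the workload can tolerate spot interruption, which is why Gibson stresses understanding the characteristics of the load first.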
In a sign of how seriously Gibson is taking spot pricing, SuperChoice has recently “created a new role [in IT that looks] specifically … at these aspects of pricing”.
Presently, SuperChoice runs staging work in a VMware-based private cloud and on AWS.
It is starting to migrate some workloads into Azure, and Gibson said the company may also use the Google cloud, should an Australian zone be created.
“Our strategy is very much about cloud service provider independence,” Gibson said.
To enable this, Gibson and his team have spent significant time repackaging code to run on any public cloud, and have embraced a microservices architecture. The company also uses RightScale to manage its multi-cloud environment.
Readying its applications and processes to run entirely in the cloud had been a useful exercise in identifying and fixing any parts that were broken, Gibson said.
“Migrating to the cloud is a fantastic process because we can press a button and in an hour we can create some servers in the public cloud, download our application, boot it all up and have it working,” he said.
“But the trade-off is the [underlying] process becomes very brittle. If every one of those steps isn’t exactly right, it all fails.”
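The fail-fast behaviour Gibson describes can be sketched as a pipeline that aborts the moment any step exits abnormally. The step names and commands here are hypothetical stand-ins, not SuperChoice's actual tooling.

```python
import subprocess

# Hypothetical deployment steps; in practice each would be a real
# provisioning, download, or boot command.
STEPS = [
    ["echo", "provision servers"],
    ["echo", "download application"],
    ["echo", "boot services"],
]

def deploy(steps):
    """Run each step in order; any non-zero exit aborts the whole run."""
    completed = []
    for cmd in steps:
        # check=True raises CalledProcessError on failure -- the
        # "brittleness" that surfaces hidden manual workarounds.
        subprocess.run(cmd, check=True)
        completed.append(cmd)
    return completed
```

This strictness is what exposed the two-year-old code that never compiled properly: a human operator can quietly work around a failing step, but an automated pipeline cannot.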
Case in point: Gibson found a piece of code written two years ago that never compiled properly.
“Unbeknownst to me, every time we had to deploy that code, our ops guys were doing manual workarounds to get it to work,” Gibson said.
“When you automate the process completely, which is what we’ve done as part of migrating to the cloud, your deployment process becomes very brittle.
“We went back and got that code modified in order to get it to build properly in order to migrate it.”
Ian Gibson will be a speaker at Cloud & DC Edge 2017, which is being held at Royal Pines, Gold Coast, on 14-16 March 2017. The event – which is co-organised by iTnews and Adapt – will feature insights from leading cloud and data centre users and experts. You can request the agenda and purchase tickets here.