Monash Uni builds southern hemisphere's largest Ceph storage cluster

VicNode gets 5 petabytes of software-defined storage.

Monash University’s research cloud operating centre (R@CMon) has implemented a 5 petabyte software-defined storage cluster as part of the VicNode research cloud partnership.

Ceph is a Linux-based distributed software-defined storage platform that provides interfaces for object, block and file-level storage.
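
Ceph exposes that object interface through librados, the block interface through RBD, and the file interface through CephFS. As a rough illustration only, the short sketch below uses the librados Python bindings to write and read an object; the config path, pool name and object name are assumptions for the example, not details of the Monash deployment.

    # Minimal librados sketch of Ceph's object interface.
    # The conffile path, pool and object names are illustrative assumptions.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    ioctx = cluster.open_ioctx('example-pool')      # assumed pool name
    ioctx.write_full('example-object', b'hello ceph')
    print(ioctx.read('example-object'))

    ioctx.close()
    cluster.shutdown()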

According to the university, the Ceph storage cloud is the largest deployment of its kind in the southern hemisphere.

It supports a range of higher-level storage systems including Windows shares, discoverable data citations using Macmillan’s Figshare, the university’s instrument data storage platform MyTardis, and the ownCloud open source cloud storage platform.

Monash eResearch Centre deputy director Steve Quenette told iTnews Ceph was needed to support the university’s research workload.

“We have a strong focus on imaging and data science around imaging, but it doesn’t stop there. We’re also very strong at engineering, public health, genomics, proteomics and various other things,” Quenette said.

“So our solution is about dealing with long-tail research as well as supporting peak research that might be high in petabytes or high in IOPS to exist in the one infrastructure.

“This is why Ceph is really great. We can just scale the infrastructure underneath without having to worry about buying or procuring lots of different products for lots of different styles of storage."

Monash started playing with Ceph in 2012 through a partnership with Inktank - now a Red Hat subsidiary - with a system that provided block storage devices for its research cloud.

The most recent Ceph build went into production during the first quarter of this year.

Quenette said the project took longer than usual as a result of organisational and process change with Monash's vendors, but that it generally takes around six months to take a new system from design to a service researchers can use.

“Scaling and upgrading these systems is quick. If we make an order for parts, they arrive in the order of a month, and within another month they’ll be ingested into the system,” Quenette said.

“When we scale, as we anticipate we’ll do later this year, we anticipate it will take in the order of two months."

To ensure its reliability, R@CMon tests its new hardware as if it were a component for one of its supercomputers, such as the Massive-3 system it implemented last year.

“We test every piece of our infrastructure as if it was going to be a production HPC system. These are very rigorous tests and very quickly expose flaws in consumer grade equipment,” Quenette said.

“That leaves us in a position where our failure rates after we commission this equipment are lower."

Quenette pointed out that 5 petabytes is the amount of raw storage in the cluster, and that the varying levels of resiliency applied to different workloads mean the amount of presentable storage ultimately available to researchers will be lower.
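
The article doesn't state which data-protection profiles Monash applies, but the arithmetic below illustrates why presentable capacity sits well below the raw figure; the 3x replication and 4+2 erasure-coding profiles are common Ceph configurations assumed purely for the example.

    # Illustrative only: the replication factor and erasure-coding profile
    # are assumed, not figures from the Monash cluster.
    RAW_PB = 5.0

    # 3-way replication stores every byte three times.
    replicated_usable = RAW_PB / 3          # ~1.67 PB presentable

    # Erasure coding with k data chunks and m coding chunks keeps k/(k+m).
    k, m = 4, 2
    ec_usable = RAW_PB * k / (k + m)        # ~3.33 PB presentable

    print(f"3x replication: {replicated_usable:.2f} PB usable")
    print(f"EC {k}+{m}:          {ec_usable:.2f} PB usable")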

Storage cloud and Massive

The university already has a large amount of storage in its OpenStack platform, integrated as part of Monash’s Massive HPC supercomputers.

“We’ve basically got two technologies that define our infrastructure-as-a-service layer of things. That’s Ceph, and OpenStack,” Quenette said.

“The Ceph storage underpins all the working space data of our research cloud environment. And M3, which is the latest iteration of Massive, is completely built around our OpenStack environment, making it HPC on the cloud."
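
The article doesn't spell out how the two layers connect, but in a typical OpenStack-on-Ceph deployment the cloud's block storage service allocates researcher volumes from a Ceph RBD pool. The sketch below, using the openstacksdk Python client, shows what provisioning such a volume might look like; the cloud profile name, volume name and size are hypothetical.

    # Hypothetical sketch: 'monash-research-cloud' is an assumed clouds.yaml
    # profile, and the volume parameters are illustrative.
    import openstack

    conn = openstack.connect(cloud='monash-research-cloud')

    # In an OpenStack-on-Ceph setup this volume would typically be carved
    # out of a Ceph RBD pool sitting behind the block storage service.
    volume = conn.block_storage.create_volume(size=100, name='scratch-space')
    print(volume.id, volume.status)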

Quenette estimated that Monash has around 10 petabytes of total presentable disk and tape-based storage.

Options for researchers

The new Ceph storage cluster is part of the VicNode collaboration, which provides storage cloud services for Victoria’s universities. The cloud is hosted out of two operating centres, one at the University of Melbourne and one at Monash Uni.

Researchers choose which of the two nodes to host their research project out of, based on their specific requirements.

“Storage is not necessarily useful by itself – researchers do things with data. So [allocation of workloads to one of the operating centres] tends to come down to what the data of interest needs to connect to in terms of compute, other data and how the data will be sustained," Quenette said.

“Melbourne and Monash have around 90 percent of the category one research activity in Victoria, and this is split roughly 50/50 between the two. The net effect of the VicNode model is most researchers have an operating centre on their campus, close to the owner, instruments, compute and data experts.

“While there’s no hard-and-fast rules, Monash tends to do more with RMIT and, for example, the University of Melbourne tends to work with La Trobe."
