Monash University has boosted the performance of its infrastructure-as-a-service computing facility for researchers and debuted what it says is the world's first end-to-end 100 gigabit per second cloud.

Its Research Cloud at Monash (R@CMon) uses software-defined networking to provide unified, self-service resources that range from commodity hardware to high-performance configurations of networking, computing and storage.
On top of R@CMon, the high-speed interconnects have helped the university establish several virtual laboratories for data-intensive characterisation and analysis, using virtual desktops and Docker containers linked to data sources and computing resources.
These are becoming the standard operating environment for Monash University researchers, alongside access to general-purpose high-performance computing, Hadoop, interactive visualisation and other resources.
Steve Quenette, deputy director of the Monash eResearch Centre, said the new Open Ethernet-based solution enables more efficient use of data for fast analytics and intelligence, and gives the university room to grow to support current and future workloads.
"Our researchers are building the 21st century equivalence of microscopes to inspect and make sense of large amounts of data to derive insights, and our cloud-based e-research platform must be able to support their initiatives in every possible way," Quenette said in a statement.
Monash University picked network hardware from Mellanox for the project, including Open Ethernet switches, high-speed interface cards and 100Gbps cables.
R@CMon is funded by the Australian government's Super Science initiative, and was established through the National eResearch Collaboration Tools and Resources (NeCTAR) research cloud project.
Last month, the university announced two new Nvidia GPU-based supercomputers, aimed at advanced 3D modelling and data visualisation.
The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) supercomputers deliver multiple teraflops of performance and drive a wall of 80 3D monitors.