Video: Building a supercomputer for Happy Feet 2, Mad Max 4


Builds 6,000-core grid in E3's new Sydney data centre.

Digital production house Dr D Studios is in the early stages of building a supercomputer grid cluster to render the animated feature film Happy Feet 2 and the visual effects in Fury Road, the long-anticipated fourth film in the Mad Max series.

The supercompute grid cluster, based on HP blade servers housed within an APC HACS pod, is already running in excess of 1000 cores and is expected to exceed 6000 cores during peak rendering by mid-2011.

The technical team at Dr D Studios (which comprises members from the first Happy Feet production) spent a 20-hour shift bringing the APC HACS pod into production and the first 1000 cores online last week.

The cluster for the first production was hosted in one of Sydney's prominent "traditional" data centres.

This cluster boasted 4096 cores, taking it into the top 100 on the list of Top 500 supercomputers in the world in 2007 (it now sits at 447).

According to Doctor D infrastructure engineering manager James Bourne, "High density compute clusters provide an interesting engineering exercise for all parties involved. Over the last few years the drive to virtualise is causing data centres to move down a medium density path."

The sequel to Happy Feet will require more compute power, but will be housed in a smaller configuration of machines thanks to massive advances in compute density over the past five years.

Each cabinet in the new supercompute build has sufficient power and cooling to hold four double-density blade chassis (128 nodes). While the original Happy Feet supercomputer required approximately 150 chassis, Dr D's new machine will be housed within just 24.
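The consolidation arithmetic works out as follows (a quick back-of-envelope sketch; the 32-nodes-per-chassis and 8-cores-per-node figures are assumptions consistent with double-density blades of the era, not stated outright in the article):

```python
# Back-of-envelope figures for the new build.
NODES_PER_CHASSIS = 32    # assumed: 16 bays x 2 nodes per double-density chassis
CHASSIS_PER_CABINET = 4   # per the article
CORES_PER_NODE = 8        # assumed: dual quad-core sockets

nodes_per_cabinet = CHASSIS_PER_CABINET * NODES_PER_CHASSIS   # matches the article's 128
cores_per_cabinet = nodes_per_cabinet * CORES_PER_NODE

old_chassis = 150   # original Happy Feet cluster
new_chassis = 24    # new build
consolidation = old_chassis / new_chassis

print(nodes_per_cabinet, cores_per_cabinet, round(consolidation, 2))
# 128 nodes and roughly a thousand cores per cabinet; ~6x fewer chassis
```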

According to Bourne, the demand for higher density computing has increased: the commoditisation of cluster compute nodes, the storage access layer and network bandwidth, combined with the physical consolidation afforded by blade servers and virtualisation, has spawned a new market that begs for higher power density hosting.

"We can fit 1000 cores in a rack now," Bourne said. "Back [in 2005], we were working with single-core [Intel] Xeons; as hyperthreading was of little benefit. Each blade had two cores then - now they have eight - and 16 per node should hyper-threading be turned on. Memory density has increased as well now from 1.5GB per node to 24GB."

Bourne said that the water cooled pod housing the servers - which offers 30kW of power per rack - was "absolutely essential to get this level of density.

"Some form of targeted liquid cooling is essential for this level of density. The APC pod gives that and a lot of flexibility."

"We will be deploying 6,000 cores in a space of no more than 10 x 5 metres," he said.

Dr D talked to seven data centre providers before hearing of the new build at E3 in Alexandria.

"None of us were too happy about building another cluster in a traditional data centre given their inherent limitations," Bourne said.

"We even looked at CDC in Canberra, which does high density; but the communications costs were too high. At the eleventh hour we heard about the E3 build."

The E3 data centre is located within reasonable proximity of Global Switch, Equinix and a Dr D. facility at Carriageworks, Redfern - all of which will be connected under a deal with custom ISP Cinenet.

Will it list?

Bourne isn't confident the new machine will make the world's Top 500 supercomputer list, at least in the short term.

Today, just getting on the list requires a machine with 8,500 cores, so the current configuration of 6,000 will fall short.

However, Dr D still has the option, in terms of the floor space, power and cooling available within its suite at E3, to scale the cluster to 12,000 cores if necessary.

Bourne won't rule such an expansion out, as he suspects the visual effects job booked in for the new Mad Max film might require more processing power.

"Generally, visual effects are more challenging than animation," Bourne said. "Rendered real human environments can be more complex than animated ones - take Avatar as an example."

Bourne said that the compute needs for filmmaking are always expanding as studios try to produce bigger and better effects. An emerging area is research into the applicability of massively parallel computing to generating usable simulations - traditionally on MPI clusters and now on GPU clusters.

"I expect we will need more than 6,000 cores, but it's hard to know at such an early stage in the production", he said.
