According to analyst firm IDC, revenue from high-performance computers (HPCs), commonly termed ‘supercomputers’, will grow by $6.4 billion over the next five years.
Industries expected to see the most growth in HPC applications include software engineering, mechanical design, weather forecasting, financial services and digital animation.
Universities, which in 2007 spent more than $2.1 billion on HPCs, will continue to lead HPC spending with a forecasted spending of $3.2 billion in 2012.
Although optical and quantum technologies have received much attention as potential supercomputing technologies, most HPCs today consist of high-performance configurations of ordinary technologies.
“Most high performance computers today are based on commodity technologies that are readily available in the open market,” explained Steve Conway, IDC’s Research Vice President of Technical Computing and a steering committee member of the HPC User Forum.
Speaking with iTnews in the lead-up to IDC’s HPC roadshow in Sydney this month, Conway said that common HPC configurations include standard x86 microprocessors from AMD or Intel running standard Linux or Microsoft Windows operating systems.
“The trick is in how these technologies are connected together in supercomputers that may have 100,000 or more processors each in some cases,” he said. “Increasingly, fibre optical connections are used to link the components together.”
Outside the academic sphere, current supercomputers have found uses in aircraft design for Boeing, assembly line modelling for Procter & Gamble, and manufacturing simulation for Whirlpool.
IBM and Hewlett-Packard lead the fray, each occupying 32.9 percent of the HPC market. Dell currently owns 17.8 percent market share, while Cray owns 1.1 percent.
Conway expects supercomputers to be 1,000 times faster than today’s best by 2017.
These ‘exaflop’ computers will perform 10^18 calculations per second and will be able to process the informational equivalent of all 20 million volumes in the New York Public Library system in less than one second, Conway said.
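The library comparison can be sanity-checked with rough arithmetic. The per-volume size below is an illustrative assumption (roughly 1 MB of text per digitised book), not a figure from IDC:

```python
# Back-of-envelope check of the exaflop claim.
EXAFLOP = 10**18            # operations per second
VOLUMES = 20_000_000        # volumes in the New York Public Library system
BYTES_PER_VOLUME = 10**6    # ~1 MB of text per volume (assumed, illustrative)

total_bytes = VOLUMES * BYTES_PER_VOLUME   # 2e13 bytes, i.e. ~20 TB
# Even at a conservative one operation per byte, the time needed is tiny:
seconds = total_bytes / EXAFLOP
print(f"{total_bytes:.1e} bytes processed in {seconds:.1e} s")
```

On those assumptions the whole collection amounts to about 20 terabytes, which an exaflop machine touching one byte per operation would sweep through in tens of microseconds, comfortably under the one second quoted.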
But as energy concerns come to the fore, analysts expect the cost and availability of electricity to be the biggest challenge for the HPC market.
While petascale supercomputers slated for delivery in the 2010 timeframe are expected to require as much as 20MW of power, similar projects could demand 60MW in the 2015-2017 timeframe.
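Taken together with the speed forecast above, these power figures imply a large jump in energy efficiency. The sketch below pairs the 20MW figure with a nominal petaflop machine and the 60MW figure with a nominal exaflop machine; that pairing is an assumption for illustration, not stated by IDC:

```python
# Implied energy efficiency of the forecast systems (pairings assumed).
petaflop, exaflop = 10**15, 10**18   # operations per second
petascale_power = 20e6               # watts: ~20 MW, ~2010 petascale systems
exascale_power = 60e6                # watts: ~60 MW, 2015-2017 projection

peta_eff = petaflop / petascale_power   # operations per second per watt
exa_eff = exaflop / exascale_power
print(f"petascale: {peta_eff:.1e} FLOPS/W, exascale: {exa_eff:.1e} FLOPS/W")
print(f"implied efficiency gain: ~{exa_eff / peta_eff:.0f}x")
```

In other words, a 1,000-fold speed-up on only three times the power budget would require machines roughly 300 times more energy-efficient per operation.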
“Today's biggest supercomputers consume enough electricity to power a small city,” Conway said. “In some cases where the infrastructure is lacking, adequate power is simply not available to an HPC site at any price.”
“The biggest challenge will likely be the cost and availability of adequate electricity to operate computers this large and power-hungry,” he said.
“Given the sharply rising costs of oil and gas, it is no surprise that power and cooling have become one of the top issues for HPC users.”
Supercomputing revenue to grow $6.4b by 2012
By Liz Tay on Sep 12, 2008 1:14PM