US researchers have proposed a new benchmark to test supercomputers' data-handling capabilities rather than raw speed.

Designed to complement the popular Top500 list, Graph500 ranked computers on their ability to perform the complex, data-intensive analytics found today in fields such as medical research and social networking.
It ranked computers across six input categories - huge (1.1PB), large (140TB), medium (17TB), small (1TB), mini (140GB) and toy (17GB) - and measured performance in 'edges per second': the rate at which a machine could follow connections through the data.
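To make the metric concrete, here is a minimal Python sketch that counts the edges traversed during a breadth-first search, the style of graph traversal Graph500 exercises, and divides by the elapsed time. It is a loose illustration of the idea only; the graph here is invented, and the benchmark's actual graph generator and counting rules are far more involved.

```python
import time
from collections import deque

def bfs_teps(adjacency, source):
    """Run a breadth-first search and return a rough
    traversed-edges-per-second figure for the run."""
    visited = {source}
    queue = deque([source])
    edges = 0
    start = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            edges += 1                    # count every edge the search examines
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    elapsed = time.perf_counter() - start
    return edges / elapsed

# A deliberately tiny, invented graph; Graph500's synthetic
# graphs run to billions of edges.
graph = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2, 4],
    4: [3],
}
print(f"{bfs_teps(graph, 0):,.0f} edges per second")
```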
When researchers tested nine supercomputers against the new benchmark, none were able to handle problems in the huge or large categories.
Argonne National Laboratory's BlueGene/P, dubbed Intrepid, topped the Graph500 list last week.
Intrepid was ranked 13th on the Top500, which used the Linpack benchmark to measure supercomputers' speed at solving a dense system of linear equations.
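That figure of merit can be sketched in a few lines: time a dense linear solve, then divide an approximate operation count, roughly 2n³/3 floating-point operations for an n-by-n system, by the elapsed time. The NumPy snippet below is a toy illustration of the idea, not the actual HPL code, and the matrix size is arbitrary.

```python
import time
import numpy as np

n = 2000                              # illustrative size; HPL uses far larger systems
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)             # LU factorisation plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3                # approximate operation count for the solve
print(f"{flops / elapsed / 1e9:.2f} GFLOPS")
```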
"Top500 has really succeeded in getting computer manufacturers to care about FLOPS [floating operations per second]," said lead researcher Richard Murphy of Sandia National Laboratory.
"We'd love to influence the same group, many of whom are on our steering committee, to care more about data movement."
Murphy explained that while traditional supercomputing tested models against data - simulating how chemical compounds interact, for example - data-intensive supercomputing would discover relationships within a data set.
Instead of testing a hypothesis, data-intensive supercomputing was about "asking the computer to find hypotheses for us", he told iTnews.
He expected data-intensive computing to come to the fore in cybersecurity, medical informatics, data enrichment, social networks and symbolic networks in the coming years.
"Think of trying to keep track of every piece of cargo moving around the planet, examining huge medical research databases to find patterns, or simulating the human cerebral cortex," he said.
"These are very different problems from looking at how chemical compounds interact or crash-testing a car.
"Many of us on the Graph500 steering committee believe that data intensive problems will dominate the application set over the next decade."
No Australian supercomputers were tested against the Graph500 benchmark, which favoured machines with strong interconnection networks and memory systems.
The rankings were:
1. Intrepid, Argonne National Laboratory
2. Franklin, National Energy Research Scientific Computing Center
3. cougarxmt, Pacific Northwest National Laboratory
4. graphstorm, Sandia National Laboratories
5. Endeavor, Intel Corporation (256 nodes, 512 cores; Westmere X5670, 2.93GHz)
6. Erdos, Oak Ridge National Laboratory
7. Red Sky, Sandia National Laboratories
8. Jaguar, Oak Ridge National Laboratory
9. Endeavor, Intel Corporation (128 nodes, 256 cores; Westmere X5670, 2.93GHz)