Nvidia touts parallel computing
By Clement James on May 1, 2008 11:06PM

The new lab will develop techniques, tools and training materials allowing software engineers to harness the parallelism of multiple processors, which are already available in almost all new computers.
Nvidia said that the investment complements its ongoing strategy of using its GPUs to solve some of the world's most computationally intensive problems.
The company has enjoyed significant success to date with its Tesla line of GPU computing hardware platforms and its Cuda programming environment.
Cuda gives developers access to the massively parallel architecture of the GPU through the industry-standard C language.
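The article includes no code, but the idea can be sketched with a minimal Cuda C program. This is an illustrative example, not from the article: the `vecAdd` kernel and the array sizes are assumptions, chosen to show how a C-style function is executed by thousands of GPU threads at once, each handling one array element.

```cuda
#include <stdio.h>
#include <stdlib.h>

// Kernel: runs on the GPU; each thread computes one element of the result.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) arrays.
    float *ha = malloc(bytes), *hb = malloc(bytes), *hc = malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) arrays, with explicit copies to and from GPU memory.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements in parallel.
    int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The loop a CPU would run a million times becomes a single kernel launch; the GPU's scheduler maps the million logical threads onto its physical cores, which is the "massively parallel architecture" the article refers to.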
"Parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fuelled the computing industry, and several related industries, for the past 40 years," said Bill Dally, chairman of the Computer Science Department at Stanford.
Until recently, computer installations delivering massive parallelism could be deployed only in large-scale computer centres with hundreds to thousands of separate computer systems.
The recent introduction of many-core GPUs and multi-core CPUs means that most new computer systems ship with multiple processors, and exploiting that parallelism requires new software techniques.
Without new software techniques, computer scientists are concerned that rapid increases in the speed of computing could stall.