Nvidia said that the investment complements its ongoing strategy of tackling some of the world's most computationally intensive problems with its GPUs.
The company has enjoyed significant success to date with its Tesla line of GPU computing hardware platforms and its Cuda technology environment.
Cuda gives developers access to the massively parallel architecture of the GPU through the industry-standard C language.
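In practice, that means a developer writes ordinary C with a few Cuda extensions: a "kernel" function runs once per GPU thread, and thousands of threads execute it in parallel. The following vector-addition sketch illustrates the idea (the kernel and variable names are illustrative, not from Nvidia's materials, and it uses the unified-memory API of later Cuda releases for brevity):

```cuda
#include <cstdio>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit
    // cudaMalloc/cudaMemcpy transfers work equally well.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is the main departure from standard C: it tells the GPU how many parallel threads to spawn, which is what lets the same source scale across GPUs with different core counts.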
"Parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fuelled the computing industry, and several related industries, for the past 40 years," said Bill Dally, chairman of the Computer Science Department at Stanford.
Until recently, computer installations delivering massive parallelism could be deployed only in large-scale computer centres with hundreds to thousands of separate computer systems.
The recent introduction of many-core processors, such as the GPU and the multi-core CPU, means that most new computer systems come with multiple processors that require new software techniques to exploit parallelism.
Without new software techniques, computer scientists are concerned that rapid increases in the speed of computing could stall.