According to the New York Times, the partnership between the University and the six rival computer and chip makers will be formally announced this Friday, and the project will be dubbed the “Pervasive Parallelism Lab”.
In mid-March of this year, Intel and Microsoft announced that they would be funnelling a combined US$20 million into building specialised labs for parallel computing research at the University of California, Berkeley and the University of Illinois at Urbana-Champaign, effectively tackling the same problem.
The massive funding and effort being channelled by the big players into such research programmes show just how worried the software industry actually is about future microprocessors with 8, 16 or more cores on a single chip. The concern is that software will not be able to keep pace with the new hardware: without properly parallelised code, applications don't profit from added cores, and in some cases can even run slower. Customers could then simply decide it isn't worth their while to upgrade their systems.
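The worry that extra cores go to waste is usually formalised as Amdahl's law, which caps the speedup of a program by its sequential portion. A minimal sketch (the function name and numbers here are illustrative, not from the article):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup of a program in which only `parallel_fraction`
    of the work can be spread across `cores` processors; the rest
    stays sequential."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A program that is only 50% parallel gains little beyond a few cores:
for n in (1, 2, 8, 16):
    print(n, round(amdahl_speedup(0.5, n), 2))
# 1 cores -> 1.0x, 2 -> 1.33x, 8 -> 1.78x, 16 -> 1.88x
```

Even with sixteen cores, such a program never reaches twice its single-core speed, which is exactly why an upgrade can look pointless to a customer.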
Clockspeed is no longer as important as performance per watt in computing, and raising performance is now the domain of multi-core chips, with most corporate server microprocessors and gaming machines already packing around eight cores.
To help software take better advantage of the increased number of cores, the competing teams of boffins are going to have to experiment with new programming languages and tweaks to the hardware, as well as going back to the drawing board on things like operating systems and compilers (which translate programming gibberish into commands a computer can actually understand).
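The core of the programming problem the labs are chasing can be seen in even a trivial rewrite: a sequential loop has to be re-expressed so the runtime can split the work across cores. A minimal sketch using Python's standard process pool (the function names are my own):

```python
from multiprocessing import Pool

def square(x):
    return x * x

def parallel_squares(values, workers=4):
    # The sequential version is just: [square(v) for v in values].
    # Expressed as a map, the runtime can farm chunks of `values`
    # out to `workers` separate processes, one per core.
    with Pool(workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The hard part, of course, is that most real code isn't an independent map like this; untangling the dependencies is precisely what new languages and compilers are meant to help with.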
With all the effort going into it, it certainly seems that sequential programming will soon be a thing of the past, even if the day of parallel programming has yet to fully dawn.
Industry suddenly realises multi-cored chips are useless unless used
By Sylvie Barak on May 1, 2008 7:53AM