US boffins claim graphics processing breakthrough
By Robert Jaques on Aug 14, 2007 1:50PM

"This is a huge computational saving. It lets you render explosions, smoke, and the architectural lighting design in the presence of these kinds of visual effects much faster," said Wojciech Jarosz, the UC San Diego computer science PhD candidate who led the study.

Rendering realistic computer-generated images containing smoke, fog, clouds or other 'participating media' (so called because the material absorbs or reflects some of the light passing through it) currently demands heavy computation, long processor times, or both.

"Being able to accurately and efficiently simulate these kinds of scenes is very useful," said Jarosz.

With the boffins' new approach, when the smoke, clouds, fog or other participating media vary smoothly across a scene, the lighting is computed accurately at only a small set of locations, and that information is then used to interpolate the lighting at nearby points. The approach, an extension of a technique called 'irradiance caching', cuts the number of computations that must be performed along the line of sight to render an image.
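The sparse-sampling-plus-interpolation idea described above can be sketched in a few lines. The toy example below is a hedged illustration, not the researchers' actual algorithm: it works on a 1D ray, and `expensive_radiance` is a hypothetical stand-in for a costly volumetric lighting computation. The point is only that most sample points reuse nearby cached results instead of recomputing.

```python
import math

def expensive_radiance(x):
    # Hypothetical stand-in for a costly lighting computation at point x
    # (e.g. scattering through a smoothly varying smoke volume).
    return math.exp(-0.5 * x) * (1.0 + 0.2 * math.sin(x))

def render_ray(length, n_points, cache_spacing):
    """Evaluate lighting at n_points along a ray, calling the expensive
    computation only at sparse cache points and interpolating elsewhere."""
    cache = []    # cached (position, radiance) records
    results = []
    for i in range(n_points):
        x = length * i / (n_points - 1)
        # Reuse any cached records within cache_spacing of x,
        # weighted by inverse distance.
        nearby = [(px, r) for (px, r) in cache if abs(px - x) < cache_spacing]
        if nearby:
            wsum = sum(1.0 / (1e-6 + abs(px - x)) for px, _ in nearby)
            val = sum(r / (1e-6 + abs(px - x)) for px, r in nearby) / wsum
        else:
            # No usable cache entry: pay for the full computation once,
            # then cache it for later points on the ray.
            val = expensive_radiance(x)
            cache.append((x, val))
        results.append(val)
    return results, len(cache)

vals, n_expensive = render_ray(length=10.0, n_points=200, cache_spacing=0.5)
print(n_expensive, "expensive evaluations for", len(vals), "ray samples")
```

With these parameters the expensive computation runs only at roughly one in ten sample points; the rest are filled in by interpolation, which is where the claimed savings come from when the medium varies smoothly.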