Facebook will contribute several deep learning tools to Torch, the open-source artificial intelligence project, in an effort to help developers and companies build AI products and services.
The social media giant has a history of offering its tools to the public, and hopes its deep learning modules will propel work in numerics, machine learning and computer vision.
Deep learning loosely mimics how the brain works, using layered neural networks to extract meaningful information from raw data.
The social media giant will contribute a number of tools to the Torch project:
- GPU-optimised modules for convolutional networks and tools for natural language processing and speech recognition
- Containers to use multiple GPUs at the same time to train networks in parallel
- An optimised lookup table
- A module to speed up training over large data classes
- A 'cross-map pooling' technique to create visual and text modules
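To make the first item concrete: the core operation in a convolutional network is sliding a small filter (kernel) over an image and taking a dot product at each position. The sketch below is illustrative only, written in plain Python with NumPy rather than Torch's Lua API, and is the naive version of the operation that Facebook's GPU-optimised modules accelerate.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: slide the kernel over the image
    and take a dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image whose right half is bright:
# the output peaks where the dark-to-bright boundary sits.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d(image, kernel))  # peaks in the middle column, at the edge
```

An optimised GPU implementation computes exactly this result, but batches many images and filters at once instead of looping position by position.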
Facebook said the modules were "significantly faster" than the default Torch modules and had allowed the company to train larger neural networks in less time.
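The multi-GPU containers mentioned above typically cut training time through data parallelism: each GPU computes a gradient on its own shard of the data, the gradients are averaged, and a single shared weight update is applied. The following is a minimal simulation of that idea in plain NumPy (the "GPUs" are just data shards); it is a sketch of the general technique, not Facebook's or Torch's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem.
X = rng.normal(size=(400, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.01, size=400)

# Split the data across four simulated "GPUs".
n_gpus = 4
shards = list(zip(np.array_split(X, n_gpus), np.array_split(y, n_gpus)))

w = np.zeros(3)
lr = 0.1
for step in range(200):
    # Each worker computes the mean-squared-error gradient on its shard...
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys) for Xs, ys in shards]
    # ...then the gradients are averaged ("all-reduce") and applied once.
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 2))  # recovers weights close to true_w
```

Each step processes four shards' worth of data for the cost of one synchronised update, which is why parallel training lets larger networks finish in less time.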
The most significant module within the contribution, Facebook said, is the GPU layer code, which it claims speeds up the training of ConvNets, or convolutional networks.
"Since improving training time of these models translates to faster research and development, we've spent considerable engineering effort to improve the GPU convolution layers," the company wrote in a blog post.
"The work has produced notable results, achieving speed increases of up to 23.5x compared to the fastest publicly available code. As far as we can tell, our code is faster than any other publicly available code when used to train popular architectures such as a typical deep ConvNets for object recognition on the ImageNet data set."
Torch is currently used by the likes of Google, Twitter, Intel, AMD and Nvidia, as well as various academic organisations.