Demand for compute to train artificial intelligence models has shot up enormously over the past six years and is showing no signs of slowing down.

Not-for-profit research firm OpenAI, which is sponsored by Peter Thiel, Elon Musk, Microsoft and Amazon Web Services, among others, published an analysis showing that the amount of compute used for the largest AI training runs has doubled roughly every three-and-a-half months since 2012.
This means compute amounts have grown by more than 300,000 times over the past six years, OpenAI said.
In comparison, the well-known Moore's Law, which observed that the number of transistors in an integrated circuit doubles roughly every year-and-a-half, would have yielded only about a sixteen-fold increase over the same period.
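To put the two growth rates side by side, here is a rough back-of-the-envelope calculation, ours rather than OpenAI's, using only the doubling times quoted above:

```python
import math

# Back-of-the-envelope check (ours, not OpenAI's) of how the quoted doubling
# times compound. The 3.5-month and 18-month figures are the ones cited above.

AI_DOUBLING_MONTHS = 3.5
MOORE_DOUBLING_MONTHS = 18

def years_to_reach(factor: float, doubling_months: float) -> float:
    """Years needed to grow by `factor` when doubling every `doubling_months`."""
    return math.log2(factor) * doubling_months / 12

def growth_over_years(years: float, doubling_months: float) -> float:
    """Total growth factor accumulated over `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

# A 300,000-fold increase takes about 18 doublings, which a 3.5-month doubling
# time delivers in a little over five years, well inside the six-year window.
print(round(years_to_reach(300_000, AI_DOUBLING_MONTHS), 1))   # ~5.3 years

# Six years of 18-month doublings, by contrast, compounds to only about 16x.
print(round(growth_over_years(6, MOORE_DOUBLING_MONTHS)))      # 16
```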
Part of the reason AI models have been able to consume this much compute is the use of massively parallel graphics processing units (GPUs), which can pack thousands of cores into a single chip.
Furthermore, over the past two years, techniques such as huge batch sizes, architecture search and expert iteration, paired with improved and specialised hardware such as Tensor Processing Units (TPUs) and fast data interconnects, have pushed past earlier limits on algorithmic parallelism.
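As a concrete illustration of one of those techniques, the toy Python sketch below shows the idea behind data parallelism with huge batch sizes: a large batch is split across several workers, each computes a gradient on its shard, and the shards' gradients are averaged. The model, sizes and worker count are invented for illustration and do not come from OpenAI's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices = 8                      # stand-ins for GPUs or TPU cores
global_batch, features = 4096, 32    # one huge batch, sharded across devices

X = rng.normal(size=(global_batch, features))
y = rng.normal(size=global_batch)
w = np.zeros(features)               # toy linear model

def local_gradient(X_shard, y_shard, w):
    """Mean-squared-error gradient computed on one device's shard of the batch."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

# Each device works on its own slice of the global batch...
shards = zip(np.array_split(X, num_devices), np.array_split(y, num_devices))
grads = [local_gradient(X_s, y_s, w) for X_s, y_s in shards]

# ...and an all-reduce (here simply a mean) combines them into one update,
# mathematically the same as a single step on the full 4096-example batch.
w -= 0.1 * np.mean(grads, axis=0)
```

Because averaging the shard gradients reproduces the full-batch update, a bigger batch translates directly into more hardware that can be kept busy in parallel.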
OpenAI noted that between 2012 and 2014 it was still uncommon to train models across many GPUs, so training runs used only 0.001 to 0.1 petaflop/s-days (PFLOP/s-days) of compute.
By contrast, the largest recent run, the AlphaGo Zero model, consumed around 1,800 PFLOP/s-days.
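For scale, a petaflop/s-day is a quantity of computation rather than a speed: roughly what a machine sustaining 10^15 floating point operations per second performs in a full day. The short conversion below, included for reference, turns the figures above into raw operation counts.

```python
PFLOPS = 1e15                  # floating point operations per second at 1 PFLOP/s
SECONDS_PER_DAY = 24 * 60 * 60

def pfs_days_to_ops(pfs_days: float) -> float:
    """Convert petaflop/s-days into a total count of floating point operations."""
    return pfs_days * PFLOPS * SECONDS_PER_DAY

print(f"{pfs_days_to_ops(0.1):.2e}")     # early GPU-era runs: about 8.6e18 ops
print(f"{pfs_days_to_ops(1800):.2e}")    # AlphaGo Zero: about 1.6e23 ops
```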
The trend of using increasing amounts of compute is set to continue in the short term, OpenAI believes.
"Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities," the researchers said.