AI is growing fast – and isn’t set to slow down
Fri 10 Dec 2021
There’s been a great deal of discussion in recent years around the so-called end of Moore’s Law. Moore’s Law states that the number of transistors on a microchip doubles roughly every two years, and several experts have claimed that the law no longer holds up.
From Stanford University’s 2019 AI Index report, which found that AI compute is outpacing Moore’s Law, to the newest edition of the MLPerf training benchmark results, which show gains in AI training performance dramatically outstripping Moore’s Law, evidence has rapidly built that AI training performance is accelerating.
MLCommons, an open engineering consortium that works with the technology industry to track the performance of machine learning, released the latest MLPerf training benchmark results, which showed record-breaking gains in AI training. Dell, Inspur, Microsoft Azure and Supermicro all set records for the speed at which they trained AI models, with each of these technology firms using NVIDIA AI systems.
“Looking back, the numbers show performance gains on our A100 GPUs of over 5x in just the last 18 months. That’s thanks to continuous innovations in software, the lion’s share of our work these days,” NVIDIA explains in a blog post.
“NVIDIA’s performance has increased more than 20x since the MLPerf tests debuted three years ago. That massive speedup is a result of the advances we make across our full-stack offering of GPUs, networks, systems and software.”
Stanford’s 2019 AI Index report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21Labs, found that “prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months.”
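To put those two doubling periods side by side, a quick back-of-the-envelope calculation (the 18-month window is an illustrative assumption, not a figure from the report) shows how much faster the post-2012 curve compounds:

```python
# Compare compute growth over an illustrative 18-month window under
# Moore's Law (doubling every 24 months) versus the post-2012 rate
# cited by the Stanford AI Index (doubling every 3.4 months).
months = 18

moore_growth = 2 ** (months / 24)       # ≈ 1.7x
post_2012_growth = 2 ** (months / 3.4)  # ≈ 39x

print(f"Moore's Law pace: {moore_growth:.1f}x")
print(f"Post-2012 AI compute pace: {post_2012_growth:.0f}x")
```

In other words, the same year and a half that yields under a 2x gain on the classic Moore's Law curve yields nearly a 40x gain at the post-2012 rate.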
Another part of the report found that the time needed to train a large image classification system on cloud infrastructure fell from around three hours in October 2017 to roughly 88 seconds in July 2019. The cost of training fell sharply over the same period.
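The implied speedup from those two figures is easy to check (a simple ratio of the numbers above, nothing more):

```python
# Implied training-time speedup: ~3 hours (October 2017)
# down to ~88 seconds (July 2019).
old_seconds = 3 * 60 * 60  # 10,800 seconds
new_seconds = 88

speedup = old_seconds / new_seconds  # ≈ 123x
print(f"Speedup: {speedup:.0f}x")
```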
Progress has not been even across tasks, however. While advances have been made on a number of natural-language processing classification tasks, as captured in the SuperGLUE and SQuAD2.0 benchmarks, performance remains lower on natural language processing tasks that require reasoning or human-level concept learning.
Much like humans, AI systems perform tasks by learning from experience and then adapting to new data. This process requires huge amounts of data to be fed into AI solutions during training. As the quality of datasets is vital to ensuring good AI training, there is strong demand for high-quality training datasets.
According to a study from ResearchAndMarkets.com, the global AI training dataset market is forecast to reach $3.1 billion by 2027, growing at more than 17% a year over the period, an indication of the sector's strength.
One of the central reasons for the strong growth in AI performance has been increasing access to high-quality data, which improves model performance, cuts data preparation time and enhances prediction quality. As major enterprises continue to process massive amounts of data daily, the need for AI tools will only increase.
There’s no question that AI is already having a major impact on businesses across the world. As time goes on, and AI performance increases, so too will the value of these technologies to enterprises.