Recent report reveals why AI progress shows no signs of decelerating

Priya Walia

Artificial Intelligence


Time, the global news outlet, explains how advances in AI have been driven largely by three key factors: compute, data, and algorithms.

In a recent report, the publication explained how the exponential strides AI has made over the last seven decades stem from a relentless push to harness more computational power, feed AI systems more data, and develop algorithmic innovations that reduce the compute and data needed to achieve equivalent results.

Compute: Bringing Greater Intelligence to AI

Time cites the example of the world's first artificial neural network, the Perceptron Mark I, developed in 1957. It was a modest machine with 1,000 artificial neurons, trained using around 700,000 operations. Fast forward to today: OpenAI's GPT-4 language model required a staggering 21 septillion operations for training, underlining the incredible progress in computational capability.
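For a sense of scale, a quick back-of-the-envelope calculation with the two figures quoted above (reading "21 septillion" as 2.1 × 10^25) puts the jump in training compute at roughly twenty orders of magnitude. The snippet below is purely illustrative and uses only the article's numbers.

```python
# Back-of-the-envelope comparison of the training-compute figures quoted above.
# These are the figures cited in the article, not independent measurements.

perceptron_ops = 7e5    # ~700,000 operations (Perceptron Mark I, 1957)
gpt4_ops = 2.1e25       # ~21 septillion operations (GPT-4 training)

growth_factor = gpt4_ops / perceptron_ops
print(f"GPT-4's training used roughly {growth_factor:.0e} times more compute")  # ~3e+19
```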

In line with Moore's law, the falling cost of computation has made it increasingly affordable to train larger AI models. AI developers now spend enormous sums on training: OpenAI CEO Sam Altman has said that training GPT-4 cost over $100 million. Such investment fetches massive returns, producing models trained with ever greater amounts of compute. AI giants like OpenAI and Anthropic, backed by significant investments from Microsoft and Google, respectively, are spearheading this push.
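To see why falling compute costs compound so quickly, here is a minimal sketch assuming a Moore's-law-style doubling of operations per dollar roughly every two years; the doubling period and baseline are illustrative assumptions, not figures from the report.

```python
# Illustrative only: how far a fixed training budget stretches if operations
# per dollar double roughly every two years (an assumed Moore's-law-style rate).

def ops_per_dollar(years_from_now: float, baseline_ops: float = 1.0,
                   doubling_period_years: float = 2.0) -> float:
    """Operations purchasable per dollar after `years_from_now` years."""
    return baseline_ops * 2 ** (years_from_now / doubling_period_years)

budget_dollars = 100e6  # the ~$100 million training figure Altman cited for GPT-4
for years in (0, 2, 4, 10):
    total_ops = budget_dollars * ops_per_dollar(years)
    print(f"After {years:2d} years: {total_ops:.1e} ops for the same budget")
```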

Data: The Lifeblood of AI

The report further explains that AI systems are essentially learning machines that model relationships within their training data: the more data points, the better the system performs. Take Llama, a large language model developed by Meta. Llama was trained on approximately a billion data points, drastically improving the quality of its output compared with the 1957 Perceptron Mark I, which was trained on only six data points.
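As a toy illustration of the "more data points, better performance" idea, the sketch below fits the same noisy relationship with six samples and then with thousands; it is a generic curve-fitting example, not a description of how Llama or any other model was actually trained.

```python
# Toy illustration: estimating a simple relationship from 6 noisy samples versus
# 10,000. With more data, the fitted parameters land much closer to the truth.
import numpy as np

rng = np.random.default_rng(0)
true_slope, true_intercept = 3.0, -1.0

def fit_line(n_samples: int) -> tuple[float, float]:
    """Fit y = a*x + b to n noisy samples and return the estimated (a, b)."""
    x = rng.uniform(-1, 1, n_samples)
    y = true_slope * x + true_intercept + rng.normal(0, 1.0, n_samples)
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

for n in (6, 100, 10_000):
    a, b = fit_line(n)
    print(f"n={n:6d}: slope={a:+.3f}, intercept={b:+.3f} (true: +3.000, -1.000)")
```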

How much data an AI system can efficiently process is ultimately limited by the computing power available to it. Interestingly, recent trends suggest that the amount of data used to train AI systems is outpacing the creation of new data on the internet. AI developers are nonetheless optimistic about overcoming this challenge, for example by finding ways to make use of lower-quality language data.

Algorithms: Getting More Out of Less

Algorithms dictate how AI systems use the available compute to model relationships in their data. Much of the algorithmic progress so far has gone toward making better use of compute, which has boosted the overall performance of AI systems. Future algorithmic advances may likewise need to focus on compensating for limits in data availability.
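As a generic illustration of "equivalent results with less compute" (and not any specific advance discussed in the report), the sketch below computes the same quantity two ways, with the smarter algorithm needing orders of magnitude fewer operations.

```python
# Generic illustration of algorithmic efficiency: both functions return the same
# Fibonacci number, but the memoized version needs vastly fewer function calls.
from functools import lru_cache

calls_naive = 0

def fib_naive(n: int) -> int:
    global calls_naive
    calls_naive += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

calls_memo = 0

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    global calls_memo
    calls_memo += 1
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(25) == fib_memo(25)  # identical result...
print(f"naive calls: {calls_naive:,}, memoized calls: {calls_memo}")  # ...far less work
```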

Experts broadly agree that the pace of AI progress, driven by the interplay of compute, data, and algorithms, is likely to continue unabated for the next few years. However, this rapid growth has also raised concerns about the potential for misuse.