Five A.I. Predictions for 2024

iSolutions
8 min read · Feb 2, 2024

Our 5 best predictions of what to expect in AI in 2024

TL;DR:

- NVIDIA’s increased production of AI GPUs will significantly reduce wait times, enabling broader and faster AI development.
- AI will integrate deeper, logical reasoning, enhancing its ability to handle complex tasks and improve decision-making.
- Large Language Models will evolve to form the core of new computing paradigms, expanding their roles and capabilities.
- AI systems will learn and improve on their own through techniques like reinforcement learning, surpassing human expertise in specific areas.
- AI models will continuously update in real-time, maintaining relevance and effectiveness in rapidly changing environments.

Prediction 1: No More Compute Bottlenecks

In 2023, AI model trainers faced delivery wait times ranging from 36 to 52 weeks for NVIDIA’s AI-focused A100 and H100 processors. However, it’s important to highlight that NVIDIA is actively ramping up production of its AI GPUs. As a result, customers like Meta Platforms are anticipating significantly larger quantities of these chips this year.

According to Meta Platforms’ CEO, Mark Zuckerberg, the social media giant is poised to acquire 350,000 units of NVIDIA’s flagship H100 graphics cards by the end of 2024. In 2023, Meta received an estimated 150,000 H100s, as reported by market research firm Omdia. If this estimate holds true, Zuckerberg’s statement suggests that Meta expects an additional 200,000 units of this processor in 2024.

NVIDIA has indicated it will build another 1.5 MILLION H100s in 2024. Spread across the world's population, that is roughly a quarter of a trillion operations per second per person: enough, in principle, to generate on the order of one word per second from a hundred-billion-parameter model for everyone on Earth.
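That claim can be sanity-checked with back-of-envelope arithmetic. The figures below are rough assumptions for illustration (about 1 petaFLOP/s of low-precision throughput per H100, roughly 2 operations per parameter per generated token), not official numbers:

```python
# Back-of-envelope: aggregate H100 throughput shared across Earth's population.
# All figures are rough assumptions for illustration only.
h100_units = 1_500_000            # NVIDIA's reported 2024 build-out
flops_per_h100 = 1e15             # ~1 petaFLOP/s low precision (assumed)
population = 8e9                  # people on Earth
model_params = 100e9              # a hundred-billion-parameter model
ops_per_token = 2 * model_params  # ~2 ops per parameter per generated token

per_person = h100_units * flops_per_h100 / population  # ops/sec per person
tokens_per_sec = per_person / ops_per_token

print(f"{per_person:.2e} ops/s per person, ~{tokens_per_sec:.1f} words/s each")
```

Under these assumptions the result lands right around a quarter of a trillion operations per second per person and about one word per second each.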

This leap in computing capability by NVIDIA can be likened to the transformation brought about by the expansion of broadband internet. Just as the widespread availability of broadband enabled a revolution in internet usage and accessibility, NVIDIA’s advancements in GPU technology are poised to remove compute bottlenecks in AI adoption.

In the past, just like the internet’s growth was throttled by the slow rollout of physical infrastructure like cable networks, AI’s potential had been bottlenecked by the availability and capability of computing resources.

However, with NVIDIA’s announcement and the expected rollout of these powerful GPUs, compute power will no longer be a limiting factor in AI adoption but rather a force multiplier, enabling more complex and powerful AI models to be trained and deployed more efficiently.

The landscape seems poised for a significant change, where compute resources will be abundant and more accessible, thus driving the AI field forward at an unprecedented pace.

Prediction 2: Embracing “System 2” in AI

In the rapidly evolving landscape of artificial intelligence, a significant trend is emerging in 2024: the integration of “System 2” thinking into AI applications, moving beyond the quick, intuitive responses characterized by System 1.

This shift mirrors the dual-process theory in psychology, popularized by Daniel Kahneman in his book “Thinking, Fast and Slow,” which delineates two modes of thought: the fast, instinctive, and emotional System 1, and the slower, more deliberate, and logical System 2.

“I need an answer, but I want you to take your time to use a tree of thought and reflection and convert response time to an accuracy factor.”

Presently, AI, especially in the form of Large Language Models (LLMs) like ChatGPT, predominantly operates in the System 1 mode. These models quickly generate responses based on identifying patterns in vast training datasets. This rapid, pattern-based processing is efficient but sometimes prone to errors and inaccuracies.

As we venture into 2024, there’s a growing emphasis on incorporating System 2 methodologies into AI. This involves supplementing the quick response generation of LLMs with more structured, logical reasoning capabilities.
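One lightweight way to graft System 2 deliberation onto a System 1 model is self-consistency: sample several fast answers and keep the majority, trading response time for accuracy. The sketch below is illustrative only; `noisy_model` is a hypothetical stub standing in for a real LLM call:

```python
from collections import Counter

def deliberate(question, sample_fn, n=5):
    """System-2-style wrapper: sample several fast System-1 answers
    and return the majority vote plus its share of the votes."""
    candidates = [sample_fn(question, seed=i) for i in range(n)]
    answer, votes = Counter(candidates).most_common(1)[0]
    return answer, votes / n

def noisy_model(question, seed):
    """Hypothetical stub for an LLM: fast, but wrong one time in three."""
    return "4" if seed % 3 else "5"

answer, confidence = deliberate("What is 2 + 2?", noisy_model)
print(answer, confidence)
```

A single call to the stub is wrong a third of the time; the majority vote over five samples recovers the right answer, at the cost of five model calls instead of one.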

The integration of System 2 thinking aligns with the development of neuro-symbolic AI models. These models aim to unify the compositional and causal reasoning strengths of symbolic models (akin to System 2) with the pattern-recognition capabilities of deep learning (akin to System 1).

The resultant AI systems are expected to handle tasks involving complex correlations and causal structures more effectively, addressing the limitations of purely deep learning-based approaches.

Incorporating System 2 thinking into AI can significantly enhance the technology’s application across various domains. It promises not only fast, intuitive responses but also more sophisticated reasoning and deeper understanding, particularly in complex scenarios where quick pattern-based responses fall short. This could lead to AI systems that are better at inference, problem-solving, and even more aligned decision-making.

In 2024, iSolutions will be releasing our “System 2” execution environment for business use, called iNtuition. For more information on iNtuition, converse with our dedicated GPT here:

Prediction 3: LLMs as the Modern Operating System

As we move into 2024, a revolutionary trend is emerging in the field of artificial intelligence — the transformation of Large Language Models (LLMs) from mere chatbot functionalities to the foundational core of a new kind of Operating System (OS).

This shift gives a glimpse into a new era in computing, analogous to the transition from viewing early computers as simple calculators to recognizing their potential as comprehensive digital platforms.

One of my favorite tweets of 2023 was from Andrej Karpathy, where he described an LLM not as a chatbot but as the kernel process of a new Operating System:

The capabilities of LLMs are expanding far beyond basic chatbots. They are increasingly taking on roles akin to various elements of traditional operating systems:

- DISK = The Internet and Embeddings as Data Repositories: LLMs are utilizing the internet and embeddings much like a disk in an OS, serving as vast storage spaces for information and internal memory.
- RAM = Context Window as Active Memory: The context window in LLMs mirrors the function of RAM in a computer, handling the immediate processing and temporary storage of data.
- SOFTWARE = Tools and Applications: Various tools developed on the LLM framework are becoming analogous to software in an OS, showcasing the versatility of LLMs in performing a range of tasks from content creation to complex decision-making processes.
- PERIPHERALS = Multimodal Inputs and Outputs: LLMs’ ability to process and respond to diverse inputs, including text, audio, and vision, reflects the functionality of an OS in managing different peripherals.
- I/O = Collaboration Amongst LLMs: The interaction and integration of different LLMs resemble the network operations in an OS, emphasizing the collaborative and interconnected nature of these AI systems.
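The RAM analogy maps cleanly onto everyday LLM engineering: the context window is a fixed budget, and the oldest conversation turns get evicted to make room, much like cold pages in memory. A minimal sketch, where the whitespace token counter is a crude stand-in for a real tokenizer:

```python
def fit_context(messages, budget, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit in the context window,
    evicting the oldest first -- like paging cold data out of RAM."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                           # window full: drop the rest
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = ["a b", "c d e", "f"]
print(fit_context(history, budget=4))
```

With a budget of 4 "tokens", the oldest message is evicted and the two most recent survive.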

The current “single-threaded” execution of LLMs, reminiscent of the early days of computing, hints at the untapped possibilities for more sophisticated, multi-threaded operations in the future.

2024 should prove to be a transformative phase in AI and computing. The progression of LLMs from basic chatbot functions to the core of a new operating system paradigm marks a significant leap in the way we interact with and perceive AI technologies.

This shift is not just a step forward; it’s the beginning of a whole new era in computing, where the possibilities and potential applications of LLMs are vast and still largely unexplored.

Prediction 4: Self-Improvement in AI: Learning Like AlphaGo Zero

The concept of “Self-Improvement” using reinforcement learning in AI is best illustrated by the remarkable journey of AlphaGo Zero.

The AlphaGo Zero Breakthrough
This AI system, developed by DeepMind, represents a significant leap in the field, showcasing an AI’s ability to teach itself and excel beyond human expertise.

AlphaGo Zero’s learning method was a radical departure from traditional AI training approaches. Unlike its predecessors, AlphaGo Zero didn’t learn from human games or human interaction. Instead, it learned the game of Go from scratch through a process of self-play, using a single neural network combined with a powerful search algorithm.

The unique aspect of AlphaGo Zero’s learning was its method of reinforcement learning, where it essentially became its own teacher. Starting with no knowledge of the game, it played against itself, progressively tuning and updating its neural network to predict moves and determine the eventual winner. This process of iterative self-improvement led to rapid advancements in its capabilities.
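That become-your-own-teacher loop can be shown in miniature with tabular Q-learning on the toy game of Nim (take 1 or 2 stones per turn; whoever takes the last stone wins). This is not DeepMind's method, just a minimal sketch of self-play reinforcement learning starting from zero knowledge:

```python
import random

def train_nim(episodes=5000, stones=5, alpha=0.5, eps=0.2):
    """Self-play Q-learning on Nim. Q[s][a] is the expected outcome
    (+1 win / -1 loss) for the player to move at state s (stones left)
    when removing a stones."""
    Q = {s: {a: 0.0 for a in (1, 2) if a <= s} for s in range(1, stones + 1)}
    for _ in range(episodes):
        s = stones
        while s > 0:
            acts = list(Q[s])
            # epsilon-greedy: mostly exploit, sometimes explore
            a = random.choice(acts) if random.random() < eps else max(acts, key=Q[s].get)
            s2 = s - a
            if s2 == 0:
                target = 1.0                    # taking the last stone wins
            else:
                target = -max(Q[s2].values())   # opponent moves next, so negate
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2                              # sides swap: same table, new mover
    return Q

random.seed(0)
Q = train_nim()
print("best opening move from 5 stones:", max(Q[5], key=Q[5].get))
```

With no human games to imitate, the self-play loop discovers the optimal opening (take 2, leaving the opponent a losing position of 3 stones), mirroring in miniature how AlphaGo Zero became its own teacher.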

One of the most astonishing aspects of AlphaGo Zero’s development was the speed at which it surpassed human-level play. In just three days, it defeated the previous version of AlphaGo, which had itself defeated a world Go champion.

After 40 days of self-training, AlphaGo Zero reached an even higher level of play, surpassing the “Master” version of AlphaGo, which was considered the world’s best player.

This is the same approach researchers at the University of California, Berkeley applied when they built a human-sized robot that uses artificial intelligence (AI) techniques to teach itself how to walk in the physical world. UC Berkeley researchers Ilija Radosavovic and Bike Zhang wondered if “reinforcement learning,” a concept made popular by large language models (LLMs) last year, could also teach the robot how to adapt to changing needs. To test their theory, the duo started with one of the most basic functions humans can perform: walking.

AlphaGo Zero’s approach to learning and self-improvement through reinforcement learning carries profound implications for the future of AI, LLMs included.

It demonstrates that AI can develop an understanding of complex systems and strategies without external data or human expertise, relying solely on self-generated data and learning algorithms.

Prediction 5: Continuous Training

The concept of continuous training in AI can be explored through the lens of models having dynamic, real-time access to training datasets. This approach is set to revolutionize the way AI systems adapt and personalize content and experiences.

The evolving field of AI now enables models to continuously learn and adapt in real-time, enhancing their performance and relevance. This is achieved through continuous training, where AI models are retrained to adapt to changes in data before being redeployed. The trigger for a rebuild can be changes in data, model adjustments, or code modifications.

Continuous training is crucial because machine learning models can become stale over time due to data drift or concept drift, where the statistical properties of target variables or the statistical distribution of production data change.

In practice, this means AI systems can dynamically adjust to new information, maintaining their effectiveness in rapidly changing environments. For example, an AI model used for fraud detection might need frequent retraining to adapt to evolving fraudulent techniques.
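A drift-triggered retraining loop can be sketched in a few lines. The check below is a simple mean-shift test against a reference sample, an illustrative stand-in for production drift detectors (e.g., population stability index or KS tests), and `maybe_retrain` is a hypothetical wrapper around whatever training routine the system uses:

```python
import statistics

def drift_detected(reference, window, z_thresh=3.0):
    """Flag drift when the live window's mean sits more than z_thresh
    standard errors away from the reference (training-time) mean."""
    mu = statistics.mean(reference)
    se = statistics.stdev(reference) / len(window) ** 0.5
    return abs(statistics.mean(window) - mu) > z_thresh * se

def maybe_retrain(model_fit, reference, window):
    """Rebuild the model (via the supplied model_fit callable) only
    when drift fires; otherwise keep serving the current model."""
    if drift_detected(reference, window):
        return model_fit(list(reference) + list(window)), True
    return None, False
```

Usage: feed the detector a rolling window of recent production data; stable data leaves the served model alone, while a shifted distribution triggers a rebuild.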

Adapting traditional machine learning workflows to support real-time inference involves overcoming several challenges. It requires a robust infrastructure capable of handling fast-moving data streams and deploying real-time models effectively. This includes ingesting and processing user events, computing and fetching online features with minimal latency, and synchronizing the served model with online feature stores without downtime.

Wrapping Up

Let’s take a step back and marvel at the AI journey we’re embarking on as we head into 2024. It’s not just about the tech getting smarter; it’s about how these advancements are poised to redefine our everyday interactions.

We’re talking about a seismic shift here — from the sheer computing power becoming more accessible to everyone, to AI thinking more deeply and methodically like us humans. And then, there’s this whole new angle of seeing LLMs as the backbone of future operating systems. It’s like we’re giving AI a whole new playground to innovate and grow.

But what really gets me excited is the self-learning exemplified by AlphaGo Zero — an AI teaching itself to outsmart human intelligence. And let’s not forget about the customizations and personalizations — it’s like AI is getting a real-time update on what we need, even before we know we need it.


iSolutions

Multiple award-winning experts in custom applications, machine learning models and artificial intelligence for business.