Navigating the AI Landscape: Market Trends and Opportunities For Startups

The most important trends and opportunities you should care about if you are considering starting an AI startup.


Hi Hitchhikers!

To the 170+ new subscribers since my last post, welcome to The Hitchhiker’s Guide to AI! I’m so grateful for your support.

This week instead of the usual highlights post, I’m going to deep dive into a topic I’ve been thinking about a lot lately:

What are the most compelling opportunities for startups entering the AI space?

In this post, I will take a stab at answering that question. I’ll start with an overview of the current AI landscape, then cover some of the trends in the space, and finally the opportunities those trends present for startups.


Is AI a disruptive technology?

Let’s start by referring to Clayton Christensen’s The Innovator’s Dilemma, which distinguishes between two types of technologies: sustaining technologies and disruptive technologies:

Source: MIT Press

Sustaining technologies improve the performance of existing products or services, while disruptive technologies create new markets or value networks and eventually displace established ones. Disruptive technologies often start as inferior or niche products that appeal to a small segment of customers, but gradually improve and become mainstream over time. The dilemma is that incumbent firms often focus on satisfying their most profitable customers with sustaining innovations, and ignore or underestimate the potential of disruptive innovations until it is too late.

What’s interesting about AI is that although it seems like a disruptive technology, it is being introduced to the market much more like a sustaining one:

  1. AI is not an inferior or niche product; in fact, one of the strengths of large-scale language models like GPT is that they are generally applicable to many different use cases.
  2. AI does not appeal to only a small segment of customers. For example, ChatGPT is rumored to already have over 100M monthly active users, making it the fastest-growing consumer product ever!
  3. AI is not being ignored by incumbent firms, in fact it is being embraced by incumbents at a surprisingly fast pace.

At the same time, given the sheer pace of innovation happening in the AI space right now, it’s hard to believe it will not be a force of disruption in the technology sector. It’s possible, though, that the disruption is just happening much faster than in Christensen’s framework. A better way to think of AI as a disruptive force might be to change the above diagram to be more like this:

This means the assumption that AI, as a disruptive technology, clearly favors new entrants over incumbents may not hold. To understand where startups might have an advantage, we instead need a deeper understanding of how the AI space is evolving. This will help us identify where disruptive opportunities do and don’t exist for startups. So, let’s dive in!

Today’s AI Landscape

To work out how the business of AI will evolve and where the most value will be created, it's worth taking a step back first and getting a lay of the land. Let's start with an overview of the current AI landscape using this handy diagram from the folks at Madrona Ventures1:

Madrona’s diagram breaks the AI market today into six layers, each building on top of the one below it:

Silicon layer

The Silicon layer comprises the companies making specialized computer chips, usually GPUs2 or, in Google’s case, TPUs3, for training AI models. This is the most capital-intensive layer given the R&D needed to develop state-of-the-art GPUs and the expense of manufacturing hardware. Unsurprisingly, Nvidia is the industry leader here with three decades of experience building GPUs. There have, however, been new entrants to this layer recently, like Cerebras, a startup founded in 2016 that develops massive computing chips for artificial intelligence.

It's worth noting that most startups building AI products don't buy their own GPUs, instead opting to use cloud service providers that buy them in bulk and rent them out. A big reason for this is that compute resources are expensive, especially with the increased demand for AI applications and the explosion of AI startups in the last few years. For example, Nvidia's A100 GPUs4 are purpose-built for data centers and designed for large-scale machine learning use cases, and a DGX A100 server built around eight of them costs around $200,000!

Cloud layer

The Cloud layer consists of the services that provide computing resources, such as GPUs, in their data centers for developers to train and serve their models or AI applications on. Competition is mounting between cloud providers, with both Microsoft's and Google's cloud divisions making strategic investments in AI research companies. OpenAI exclusively uses Microsoft Azure for compute resources because of their business partnership5 and Microsoft's $10B investment. Not to be outdone by Microsoft, Google recently invested $300M in Anthropic, an AI research company founded by former OpenAI employees.

Foundational Model (FM) Operations

To train AI models like OpenAI’s GPT, you need massive datasets, on the order of the whole public internet, which is why they are called large-scale language models. As Madrona Ventures points out:

Foundation models have immense compute requirements for training and inference, requiring large volumes of specialized hardware. That is a significant contributor to the high costs and operational constraints (throughput and concurrency) that application developers face. The largest players can find the cash to accommodate — consider the “top 5” supercomputer infrastructure Microsoft assembled in 2020 as part of its OpenAI partnership. But even the mighty hyperscalers face supply chain and economic constraints.

This need for efficiency has created an opportunity for tools and infrastructure products that lower the cost of training, deployment and inference of models. For example, Scale AI is a data platform for AI that enables developers to collect, manage, and label data quickly and easily. Another example is Banana, a machine learning model deployment platform that enables companies to deploy models to production faster and more cheaply without needing to manage their own GPU servers. MosaicML is another startup focused on helping companies train large-scale models with their own data.

Foundational Models

The Foundational Models layer is where the magic happens. Foundational models are the generative AI models being developed across text, image, video and audio. Besides OpenAI, a few other companies are developing foundational models, including Google and startups like Anthropic and AI21 Labs.

Aside from the proprietary foundational models, there is also a growing ecosystem of open-source models, the most well known of which is Stable Diffusion, an open-source text-to-image model comparable to OpenAI's proprietary Dall-E6. Hugging Face, meanwhile, is a place for these open-source models to be shared and iterated on by the community.

While proprietary models have the advantage of performance and scale thanks to the well-funded companies backing them, open-source models offer more flexibility and lower costs to developers who want to customize them to meet their applications’ needs.

Tooling

Foundational models can’t be used in applications on their own because (1) they don’t have a real-time understanding of the world, e.g. the latest version of GPT-3 was trained with data only up to the end of 2021; (2) the functionality of their APIs is limited; (3) they contain bias from their training data, which may lead to unpredictable results; and (4) they hallucinate. The Tooling layer is where foundational models are augmented with external data and integrations to make them more useful and powerful.

For example, LangChain is a framework that makes it easy to integrate models like GPT-3 with external data to create applications. GPTIndex is another example of a tool that makes it easier to provide GPT with additional context from external sources beyond the data it was trained on. You could use GPTIndex to embed all the articles in a newsletter and create a chatbot based on that content, just like Lenny Rachitsky did recently.
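To make that concrete, here is a minimal sketch of the retrieval pattern these tools implement: embed your documents, find the most relevant one for a user’s question, and pass it to the model as context. It assumes the openai Python package (0.x API) with an API key already configured, and the newsletter posts are hypothetical placeholders; a real product would use chunking and a vector database rather than a list in memory.

```python
# Minimal sketch of retrieval-augmented Q&A, the pattern tools like LangChain
# and GPTIndex implement. Assumes the openai Python package (0.x API); the
# articles below are hypothetical stand-ins for real newsletter posts.
import numpy as np
import openai

articles = [
    "Post 1: Why the cost of AI models keeps falling ...",
    "Post 2: How incumbents are adding AI to their products ...",
]

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

article_vectors = [embed(a) for a in articles]

def answer(question: str) -> str:
    q = embed(question)
    # Cosine similarity between the question and each article embedding
    scores = [
        float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for v in article_vectors
    ]
    context = articles[int(np.argmax(scores))]
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat["choices"][0]["message"]["content"]

print(answer("Why are AI costs falling?"))
```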

Application Layer

The application layer is where we are currently seeing a Cambrian explosion of new products and services that make use of AI across many different use cases, from productivity to creativity, and across consumer and B2B:

The multitude of AI apps that have launched in the last few years broadly fall into three categories:

  • Creativity: This category includes all the AI products helping people create content across text, video and audio, and is a mix of B2B and B2C.
  • Workflows: These are the products that help complete a specific workflow more efficiently using AI. Examples include GitHub’s Copilot, which auto-completes code, or Descript, which transcribes podcasts for editing.
  • Knowledge: These products help you query and retrieve knowledge from a large dataset. Chatbots like OpenAI’s ChatGPT, Quora’s Poe and Bing’s AI Chat are all examples of this. There will be many more specialized AI chatbots across different verticals, especially where proprietary data sets are required in training.

It’s worth noting that all three of these categories of apps rely on foundational models, often via proprietary APIs like OpenAI’s or open-source alternatives. This makes it harder to differentiate with core technology, so startups instead have to focus on product experience, go-to-market, and extracting the most value out of models via prompt engineering and fine-tuning.
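As an illustration of that last point, here is a minimal sketch of what prompt engineering as a differentiator can look like in practice, assuming the openai Python package: the model is the same one any competitor can call, while the system prompt, few-shot examples and output constraints are the product’s own layer. The marketing-copy use case and the prompts are hypothetical.

```python
# Minimal sketch of prompt engineering as an application-layer differentiator,
# assuming the openai Python package (0.x API). The model is a commodity; the
# system prompt, few-shot examples and output constraints are product-specific.
import openai

SYSTEM_PROMPT = (
    "You are a copywriting assistant for e-commerce product pages. "
    "Write in a friendly tone, keep the copy under 50 words, and always "
    "end with a call to action."
)

FEW_SHOT = [
    {"role": "user", "content": "Product: 1L stainless steel water bottle"},
    {"role": "assistant", "content": "Stay hydrated all day with our 1L steel "
     "bottle. Durable, leak-proof and endlessly refillable. Grab yours today!"},
]

def generate_copy(product_description: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Product: {product_description}"}],
    )
    return response["choices"][0]["message"]["content"]
```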


Are you enjoying this update? Then don’t forget to subscribe for free to receive new posts and support my work.


Now that we have a good lay of the land, here are some of the major trends I’ve observed in the AI market that will impact value creation and opportunity, especially for startups entering the space.

1.  Competition between model providers → AI costs go down, and performance goes up

Since OpenAI launched ChatGPT and announced its expanded partnership with Microsoft earlier this year, things have heated up between the Big Tech companies. Google is also working on a large-scale language model and chatbot. Last week, Amazon announced a partnership with Hugging Face7, and Meta released its large-scale language model8, which they claim is more performant than GPT-3.

Right now, OpenAI+Microsoft are the primary providers of proprietary models, which means they have a lot of pricing power. Elad Gil, a founder and angel investor currently exploring AI, makes a convincing case for an oligopoly market with a few major players in his recent blog post on AI Platforms, Markets & Open Source9:

“The reason to argue for a near term oligopoly market, versus likely fragmentation, is due to the capital/compute/data scale costs currently needed for each subsequently better performing LLM model. If GPT-3 at the time cost a few million to ten million or so to train, and GPT-4 from scratch may be estimated at tens of millions to maybe a hundred million, maybe GPT-5 is a few hundred million and GPT-N is a billion. This of course assumes that costs will scale faster than technical breakthroughs or drops in GPU (or specialized hardware cost declines) and these may be false assumptions.”

A Google, Microsoft, Meta, or Amazon oligopoly seems like the likely outcome, and as competition heats up between them, it should drive down costs for the developers building on top of their models too. Performance per dollar will also increase with competition as advancements are made in the efficiency of training and serving models. What might be expensive with today’s models may quickly become cheaper in a few months. An excellent example is OpenAI releasing ChatGPT’s API at 10% of the price of their previous state-of-the-art GPT-3 API10 ($0.002 per 1,000 tokens for gpt-3.5-turbo versus $0.02 per 1,000 tokens for text-davinci-003).

2. Open-source models vs. proprietary models will diverge

Today, it is possible for open-source foundational models with similar capabilities to fast-follow proprietary models. This is possible for several reasons: (1) all models are being trained on the same public data, (2) AI researchers are publishing their advancements regularly11, (3) open-source projects are being sponsored by companies like Stability AI and (4) communities of developers have formed around specific models. A prime example is Stable Diffusion, which launched only a few months after OpenAI's Dall-E 2 and is considered on par in performance. Meanwhile, Meta released an open-sourced large-scale language model to compete with OpenAI just a few weeks ago12.

Over time though, open-source will probably lag behind proprietary models for three primary reasons:

  1. Cost to train: As models get bigger and bigger, they will become more expensive to train. Open-source models will therefore need more well-funded sponsors to develop them. Today Stability AI might be able to afford to sponsor a GPT-3 equivalent model that costs $10M to train, but next year they might not be able to afford to support a GPT-5 equivalent that costs $100M to train.
  2. Proprietary datasets: Well-funded companies like OpenAI license additional private data sets to improve their models. This will make it harder for open-source models to compete on performance, and they may lag one or two generations behind proprietary models. For many customers, though, the price and versatility of open-source models will be the deciding factor. Using a less performant open-source model will be a path many developers take to get started cheaply, before developing their own models or switching to proprietary ones once they have enough scale.
  3. Closed research: Many of the contributions that have enabled open-source AI models have come from private companies, like Google’s Transformer architecture. That might change as the need for Big Tech to compete in AI takes priority over publishing research.

This means developers will need to choose between a proprietary state-of-the-art model with higher performance and more “bells and whistles” likely at a higher cost or choose an open-source model that is 1-2 generations behind but is cheaper and more customizable.

3. Bigger models →  More emergent behavior and faster path to AGI

As models are trained on larger datasets with more parameters, we will continue to see more emergent behavior appear13. This is important because if we discover new capabilities of these models after they are trained, there will also be new applications for them that we can’t know or predict today. This is both a blessing and a curse. On the one hand, it means more opportunities to create innovative use cases for AI, for example unlocking the ability to make AI more actionable. On the other hand, if you built a business on a foundational model based on previously known capabilities, and your differentiator is the software you wrote to augment that model for your specific use case, that differentiator may become obsolete in the next version of the model!

Furthermore, the unpredictability of emergent behavior also means that the path to artificial general intelligence (AGI) will continue to be unpredictable too and may accelerate faster than we expect. Sam Altman talks about how OpenAI will approach this unpredictability in a recent post:

As advancements in AI gather speed, and we quickly approach AGI, whole swathes of existing AI products may be completely replaceable by the next version of ChatGPT!

4. Low barrier to entry → Intense competition in the application layer

Every week we see dozens of new startups launch in the application layer, often with two or three startups tackling the same problem. An excellent recent example is generative design: a few weeks ago, GalileoAI announced their text-to-UX product, where you describe a user interface or flow and AI generates high-fidelity mockups:

Since then, two other products, Genius14 and UIzard15, have been launched to solve the same problem. There are already dozens of products focused on copywriting, marketing, sales outreach, and coding. Apparently, over 30 startups in Y Combinator’s current batch are working on AI products:


It’s pretty clear then that the barrier to entry for building AI-powered products right now is pretty low, and we should expect hundreds more startups to be formed in the space. Another tailwind for AI startups is that funding has slowed down for Series B to late-stage startups, and venture capitalists are desperate to deploy capital from the funds they raised in 2020/2021.

All of this points to one trend we are going to see over the coming years: intense competition between AI startups in the same sector going after the same set of customers. For customers, this is great as it will likely reduce costs to access AI and increase innovation and choice. For startups though, it will be more challenging because the cost of acquiring customers will continue to grow and margins will be eroded. Today Jasper.ai, a product that helps you write marketing copy, charges $60/mo, which seems unsustainable as more products launch to solve the same problem. As Jeff Bezos famously said, “Your margin is my opportunity.”

5. Incumbents quickly adding AI → harder to disrupt

I talked about the speed of incumbents entering AI in my last weekly highlights post16:

“We’re seeing incumbent technology companies like Microsoft, Snap, Spotify, and Shopify move quickly to adopt [AI Chatbot technology] which again speaks to the fact that the barriers to entry in AI are much lower than we might have expected. The corollary of this is that if you’re a startup thinking about using AI to disrupt an existing market with strong incumbents, it’s not a given that you will be able to move faster than them!”

This is one of the most surprising trends because incumbents aren’t ordinarily able or willing to move this quickly to adopt new technology into their products. For example, if you are building an enterprise product for Sales, Customer Service, Marketing or Supply Chain and AI was your secret sauce, it’s also Microsoft’s. This week they announced “Microsoft Dynamics 365 Copilot”:

“With Dynamics 365 Copilot, organizations empower their workers with AI tools built for sales, service, marketing, operations and supply chain roles. These AI capabilities allow everyone to spend more time on the best parts of their jobs and less time on mundane tasks.”

Of course, Microsoft has massive distribution advantages, which is why they were able to quickly displace Slack in many organizations with Microsoft Teams.

Here are a few more examples of incumbents quickly adopting AI, just from the last week:

Brex adding an AI assistant to their financial stack:

Slack adding AI capabilities directly into their app:

Hubspot adding AI to their CRM tools:

It’s worth noting that in many of these cases, OpenAI is working closely with the company, giving them access to the latest models that are not yet available on their public developer platform. Why is OpenAI doing this? I believe it is to aggressively grab market share and close enterprise accounts that will contribute to the bulk of their revenue in the long term.

I think this is the single most important trend to watch if you’re a founder considering building something in AI: If there is already a dominant software incumbent solving the problem you are approaching, you will need to have both a more compelling product and a more robust go-to-market strategy to compete with their distribution.

Opportunities for startups

Given the current trends in the AI market (cost reduction, open-source advancements, competition, and fast-moving incumbents), I believe there are five opportunities for startups to take advantage of:

1. Drive down the cost of training/serving models

As competition heats up in the foundational model layer, new companies will likely compete with the Microsoft+OpenAI/Google/Amazon oligopoly by offering more customized models or catering to specific vertical use cases. Cost will be a prohibitive factor for these companies, who will want to save money on data acquisition, training and inference. Similarly, for companies that want to train and serve their own models or open-source models, cost will also be a huge factor given that they won’t have the same economies of scale that the big tech companies do.

Here are a few examples of startups that could reduce the cost of foundational models:

  1. A startup that provides a platform for distributed and parallel training of large-scale language models using ColossalAI, a framework that enables efficient model parallelism, pipeline parallelism, tensor parallelism, and data parallelism. ColossalAI claims to offer up to 7.73 times faster training and 1.42 times faster inference for ChatGPT-style models17.

  2. A startup that provides a platform for streaming data processing and model compression for large-scale generative models using tools like MosaicML’s Composer, which enable efficient data loading, preprocessing, augmentation, sampling, and quantization. MosaicML claims this approach can reduce the cost of training models like Stable Diffusion by up to 75%18.

  3. A startup that offers a service for compressing and pruning foundational models using techniques like quantization, distillation and sparsification, which can reduce model size, latency and energy consumption (see the sketch after this list).
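To make the third idea more concrete, here is a minimal sketch of one such technique, post-training dynamic quantization, using PyTorch. The toy model is a hypothetical stand-in for a much larger foundational model; a real service would combine quantization with distillation and pruning and verify that accuracy holds up.

```python
# Minimal sketch of post-training dynamic quantization with PyTorch. The toy
# MLP stands in for a much larger model; Linear weights are converted to int8,
# which shrinks the checkpoint and can speed up CPU inference.
import os
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
)

# Convert Linear layers to int8 weights; activations are quantized at runtime
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def checkpoint_mb(m: nn.Module) -> float:
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"fp32: {checkpoint_mb(model):.1f} MB, int8: {checkpoint_mb(quantized):.1f} MB")
```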

2. Specialized models for highly regulated verticals

Incumbents can move quickly to incorporate existing models into their products in unregulated verticals, but for highly regulated verticals like healthcare, financial services, and government, this might not be the case. The use of off-the-shelf foundational models, which are trained on public data at scale, may face limitations in highly regulated sectors, such as:

  • The need for explainability and transparency of the model's decisions and behaviors.

  • The risk of bias, error, or misuse of the model's outputs or data inputs.

  • Compliance with ethical, legal, and social standards and regulations.

  • The risk of hallucinations19 causing serious harm to the end user in life-threatening, business-critical or security-critical circumstances.

Here are some more examples of startups that could be formed in this space:

  • A startup that develops a model that can generate legal documents such as contracts, agreements, or reports based on environmental or energy regulations and standards.
  • A startup that provides an AI platform that can generate and verify regulatory reports for financial institutions based on their data and rules.
  • A startup that provides AI models that can enhance and improve the communication between healthcare services and patients, for example by better-summarizing doctors’ notes and treatment recommendations.

3. Help incumbents move faster to adopt AI

While more tech-forward incumbents will move quickly to incorporate AI in their products, not every company will have the in-house resources and expertise to do this, despite competitive pressure. This creates an exciting opportunity for startups that become good at integrating AI into existing companies, likely specializing in a vertical where it isn’t straightforward to do so.

Here are a few examples:

  • A startup specializing in creating chatbots for e-commerce sites that can integrate easily with existing platforms like Shopify to allow customers to ask questions about products, get recommendations and inquire about the status of their orders (see the sketch after this list).
  • A startup that helps fintech companies integrate chatbots into their products by specializing in processing financial data, similar to the Brex example earlier in this post.
  • A startup that allows developer-focused products to automatically generate integration code for their customers based on their APIs / SDKs.
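As an illustration of the first idea, here is a minimal sketch of an order-status chatbot, assuming the openai Python package. The order-lookup function and data are hypothetical stand-ins for a real integration with a platform like Shopify.

```python
# Minimal sketch of an e-commerce support chatbot grounded in order data,
# assuming the openai Python package (0.x API). The order store is a
# hypothetical stand-in for a real platform integration.
import openai

FAKE_ORDERS = {"1001": "Shipped on March 3rd, arriving March 7th."}

def order_status(order_id: str) -> str:
    # In a real product this would call the store's API instead of a dict
    return FAKE_ORDERS.get(order_id, "No order found with that number.")

def answer_customer(question: str, order_id: str) -> str:
    status = order_status(order_id)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "You are a support assistant for an online store. "
                           "Answer using only the order information provided.",
            },
            {
                "role": "user",
                "content": f"Order info: {status}\n\nCustomer question: {question}",
            },
        ],
    )
    return response["choices"][0]["message"]["content"]
```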

This area will quickly gain momentum as founders with specialized knowledge in a particular vertical see the opportunity to integrate data from that vertical with natural language responses from large-scale language models.

4. AI-powered workflow tools in fragmented industries

There are likely industries that are both fragmented and staffed with knowledge workers who carry out repetitive software-based workflows. These industries are probably ripe for disruption because (1) there's no dominant player with a structural advantage, and (2) repetitive workflows can be quickly augmented with AI.

  • Legal - A startup that uses AI to assist paralegals in the discovery process by analyzing and categorizing documents, reducing the time and cost associated with manual review.
  • Education  - A startup that uses AI to personalize student learning by analyzing their performance data and tailoring their coursework to their strengths and weaknesses, improving engagement and outcomes.
  • Content moderation - A startup that uses AI to assist content moderators in identifying and removing inappropriate content, reducing the time and effort required for manual moderation.
  • Customer service - A startup that uses AI to assist customer service representatives in answering common questions and resolving issues, improving efficiency and reducing wait times.
  • Recruiting - A startup that uses AI to assist recruiters in identifying and matching candidates to job postings, reducing the time and effort required for manual resume screening.
  • Accounting - A startup that uses AI to assist accountants in categorizing and reconciling financial transactions, reducing the time and effort required for manual bookkeeping.

5. Novel AI-first experiences

This is the broadest and highest risk/reward opportunity for a startup: creating a completely new product experience or interface paradigm using AI. This is what OpenAI did with ChatGPT and what Instagram did during the mobile era. These opportunities aren’t obvious and require lots of experimentation and exploration; for every successful startup, hundreds will fail.

Here are some areas where startups could explore opportunities based on the latest AI advances:

  • Personalized synthetic content: Using AI to create realistic, high-quality images, videos, audio, text, etc. that are personalized to the end user based on their preferences and interests.
  • AI-powered creativity: Using AI to augment human creativity and enable new forms of expression and collaboration. For example, a startup could use AI to generate poems, stories, code, essays, songs, celebrity parodies and more based on the user’s input or inspiration.
  • Smarter AI Assistants: Using AI to understand users’ intent and then integrate with existing products in a user’s life to carry out tasks on their behalf. Think Jarvis from Iron Man.
  • Synthetic companions: AI-powered companions that users can talk to just like a real person, to feel a sense of connection and friendship.

This opportunity is probably the hardest of the five I’ve outlined because we simply don’t know how AI will evolve and which novel products will or won’t work.

Competing directly with incumbents

Given the above, an area to be cautious of as a founder is competing directly with an incumbent, where your only differentiator is AI. In the last few months, many startups have been created that add AI to spreadsheets, docs, presentations, email and communication products. These are all use cases with strong incumbents that I believe will quickly integrate AI if they haven’t already and have the distribution advantage. Competing directly here seems like a risky path forward because although startups might get early adoption from niche markets or because of novelty value, it will be much more challenging to move upmarket. If a startup goes down this path, they shouldn’t assume that AI will be their advantage for long.

Conclusion

AI is clearly a disruptive technology, but it is being quickly adopted by incumbents. This means startups entering the space need to be cautious about where they can create differentiation and provide long-term value. Reducing the cost of AI, building specialized models for regulated verticals, helping incumbents adopt AI, introducing AI to fragmented legacy industries and dreaming up completely new product experiences are all exciting avenues. However, it is essential to remember that AI is not a silver bullet, and startups must focus on real problems that need to be solved rather than simply incorporating AI for the sake of it.

It’s also worth noting that everything discussed in this post may quickly change as the AI landscape evolves. So if you’re a founder considering building something in AI, subscribe to this newsletter to stay up-to-date on the latest advancements.

Thanks for reading The Hitchhiker's Guide to AI! Subscribe for free to receive new posts and support my work.

Thanks to Hemal Shah for reviewing drafts of this post and providing feedback.


  2. A GPU, or Graphics Processing Unit, is a type of computer chip that is specifically designed to process large amounts of data quickly and efficiently. They were originally created for use in video game graphics, but scientists and researchers soon realized that they could be used to accelerate many other types of computations, including those used in deep learning models.

    You can learn more about how GPUs are used in AI in part 3 of my series on the origins of deep learning.

  3. A Tensor Processing Unit is a specialized computer chip developed by Google for machine learning computations. TPUs are designed to perform matrix operations at high speed, which are the key operations used in deep learning models. They are optimized for low-latency, high-throughput processing and are integrated with Google's cloud infrastructure to provide fast and scalable AI services. The TPU provides a significant performance boost compared to traditional CPUs or GPUs, making it a popular choice for training and running complex machine learning models.

  4. The NVIDIA A100 is a data-center-grade GPU designed for large-scale machine learning infrastructure. You can learn more about it here: https://www.engadget.com/nvidia-ampere-a100-gpu-specs-analysis-upscaled-130049114.html

  5. Microsoft and OpenAI have partnered since 2018 on AI development with Microsoft recently investing $10B in OpenAI. You can learn more about their partnership in this great interview by Ben Thompson with Sam Altman (CEO of OpenAI) and Kevin Scott (CTO at Microsoft)

  6. Stable Diffusion is a generative image diffusion model created by researchers and engineers from Stability AI, CompVis, and LAION. It was released as an open-source alternative that was better than Dall-E and available for anyone to fine-tune and change!

    You can learn more about Stable Diffusion here: Stable Diffusion: Best Open Source Version of DALL·E 2

  9. AI Platforms, Markets, & Open Source
    (I originally wrote this post a few months ago and sat on it. Since then Google has announced entering the market and MSFT announced Bing and other AI integrations. So updating and publishing now and will undoubtedly be wrong again in a few months).
  13. Emergent behavior is a phenomenon that occurs when a complex system, such as an AI system, exhibits behaviors that are not explicitly programmed or expected by its designers. These behaviors arise from the interactions between the system’s components and its environment, and may give the impression of intelligence or creativity. For example, a large-scale language model like GPT wasn’t trained to understand human behaviour, but recent research has shown that it has the capacity to understand the state of systems outside its training set.

    Emergent behavior can pose challenges to ethical AI principles, especially for defense applications, as it may create unpredictability and uncertainty about how an AI system will behave in real-world scenarios.

  16. Weekly Highlights: Chatbots for everything, the commoditization of AI and why chat probably isn’t the end game
  17. ColossalAI: Making large AI models cheaper, faster https://github.com/hpcaitech/ColossalAI

  18. BioMedLM: a Domain-Specific Large Language Model for Biomedical Text. https://www.mosaicml.com/blog/introducing-pubmed-gpt

  19. AI hallucinations are confident responses by an AI that do not seem to be justified by its training data. For example, an AI that generates text based on natural language inputs may produce content that is nonsensical or unfaithful to the provided source content. This can happen due to errors in encoding and decoding between text and representations, or due to the AI being trained to produce diverse responses. AI hallucinations can undermine the accuracy, reliability, and trustworthiness of AI applications. AI hallucination gained prominence around 2022 alongside the rollout of large language models (LLMs) such as ChatGPT, which often seemed to embed plausible-sounding falsehoods within their generated content.