How AI tools like ChatGPT work - in plain language

2023 will be remembered as the year artificial intelligence exploded into the public consciousness. People are using ChatGPT for everything from conjuring last-minute dinner recipes based on what’s left in the fridge to writing university essays and even arguing legal cases (which doesn’t tend to work out too well!).

It’s easy to use AI tools like ChatGPT without much understanding of how the technology that powers them works. Just type your prompt into ChatGPT and watch (possibly in awe) as it spits out an answer in seconds. Sometimes, it feels like magic.

 

Image created with Bing Image Creator.

 

But it’s not magic. ChatGPT is a sophisticated computer program designed to understand and generate human-like text. To do that, it relies on what’s known as a Large Language Model (LLM). An LLM is “a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other forms of content based on knowledge gained from massive datasets,” as NVIDIA explains.
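At its core, "predicting text based on knowledge gained from massive datasets" means guessing the most likely next word given the words so far. Here's a toy sketch of that idea, using simple word-pair counts on a made-up three-sentence corpus (real LLMs use neural networks trained on billions of documents, not counting tricks like this):

```python
from collections import Counter, defaultdict

# A toy illustration of the core idea behind an LLM: predict the next
# word from patterns learned in training text. This sketch just counts
# which word follows which (bigrams) in a tiny invented corpus.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# "Training": tally what follows each word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Scale that counting idea up to a neural network with billions of parameters trained on a large slice of the internet, and you have the rough shape of an LLM.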

I’ll admit that when I start reading about things like LLMs, deep learning algorithms, and the like, I quickly get lost in the technical jargon. Googling ‘How does an LLM work?’ mostly surfaces results that quickly turn highly technical.

To its credit, ChatGPT gave me a simple (and somewhat rosy) explanation of an LLM: “Imagine an LLM as a super-smart computer program designed to understand and generate human-like text. It's kind of like having a really knowledgeable friend you can ask questions or get advice from, but this friend is made of lines of code and data.”

Financial Times makes it easier to understand

This week, I found an excellent visual storytelling piece by the Financial Times (FT for short) explaining how LLMs work in plain language.

As a science communication specialist, I appreciate the work that went into using simple language to explain key concepts. The animated visuals work well to help readers understand what is happening when tools like ChatGPT produce text.

Screenshot from Financial Times (2023) article, ‘Generative AI exists because of the transformer.’

The article also explains the workings of the transformer, the architecture that serves as the brains behind an LLM.

Screenshot from Financial Times (2023) article, ‘Generative AI exists because of the transformer.’
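The transformer's key trick, which the FT piece animates beautifully, is "attention": scoring every word in a sentence against every other word to work out which ones matter most for interpreting each word. As a rough flavour of the arithmetic involved, here's a stripped-down sketch (the words and numbers are invented for illustration; real transformers learn these vectors from data):

```python
import math

# Each word is represented as a vector of numbers (here, made-up
# 2-number vectors). To interpret "bank", the model scores it against
# every word in the sentence and converts the scores into weights.
words = ["the", "river", "bank"]
vectors = {"the": [0.1, 0.2], "river": [0.9, 0.1], "bank": [0.8, 0.3]}

def dot(a, b):
    # Similarity score between two word vectors.
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    # Turn raw scores into weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# How much should "bank" attend to each word in the sentence?
scores = [dot(vectors["bank"], vectors[w]) for w in words]
weights = softmax(scores)
for word, weight in zip(words, weights):
    print(f"{word}: {weight:.2f}")
```

In this made-up example, "river" ends up with the highest weight, nudging "bank" toward its riverbank meaning rather than its financial one. That context-sensitivity, applied at massive scale, is what the FT article unpacks.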

The FT’s work on this is a stellar example of creative and effective science communication about a technology that is fast shaping our world.

Even with the plain language and snappy visuals, I still spent a fair amount of time navigating through the article. It’s long. But the length is warranted, given the complexity of artificial intelligence.

I left with a better understanding of how ChatGPT and its kin function, and a healthy dose of respect for the people who put this together: Madhumita Murgia (the FT’s artificial intelligence editor) and the FT’s visual storytelling team.

If you want to see what I’m talking about, read the FT article here!


 

Brendon Bosworth is a science communication trainer and communications specialist with a growing interest in AI. He is the principal consultant at Human Element Communications.
