The ChatGPT bot is causing panic now – but it will soon be as mundane a tool as Excel |  John Naughton

So the ChatGPT language-processing model burst upon an astonished world, and the air was rent by howls of joy, shouts of outrage and cries of lamentation. The joyful ones were those delighted to discover that a machine could apparently perform a written assignment competently. The outrage was sparked by fears of redundancy among people whose employment requires the ability to craft prose. And the lamentations came from serious people (many of them teachers at various levels) whose daily jobs involve grading essays hitherto written by students.

So far, so predictable. If we know anything from history, it’s that we generally overestimate the short-term impact of new communication technologies while underestimating their long-term implications. This was the case with print, film, broadcast radio and television and the internet. And I suspect we have just jumped on the same cognitive merry-go-round.

Before pressing the panic button, however, it is worth examining the nature of the animal. It’s what the machine learning crowd calls a large language model (LLM) that has been augmented with a conversational interface. The underlying model has been trained on hundreds of terabytes of text, most of it probably scraped from the web, so you could say it has “read” (or at least ingested) almost everything that’s ever been published online. As a result, ChatGPT is quite adept at mimicking human language, a facility that has encouraged many of its users to anthropomorphize, i.e. view the system as more human-like than machine-like. Hence the aforementioned squeals of joy – and also the odd misled user who apparently thinks the machine is somehow “sentient”.

The best-known antidote to this tendency to anthropomorphize systems like ChatGPT is Talking About Large Language Models, a recent paper by the AI researcher Murray Shanahan, available on arXiv. In it, he explains that LLMs are mathematical models of the statistical distribution of “tokens” (words, parts of words, or individual characters, including punctuation marks) in a large corpus of human-generated text. So if you give the model a prompt such as “The first person to walk on the moon was…” and it responds with “Neil Armstrong”, that is not because the model knows anything about the moon or the Apollo mission, but because we are really asking it the following question: “Given the statistical distribution of words in the large public corpus of [English] text, which words are most likely to follow the sequence ‘The first person to walk on the moon was’?” A good answer to this question is “Neil Armstrong”.
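Shanahan’s point can be illustrated with a toy sketch. The snippet below builds a bigram model – it counts, in a tiny made-up corpus, which token most often follows each token, and “predicts” accordingly. This is a drastic simplification (real LLMs condition on long contexts with billions of learned parameters, not single-word frequency counts), but the principle of answering from statistical distribution rather than knowledge is the same:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "large corpus of human-generated text".
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "neil armstrong was the first person to walk on the moon ."
).split()

# Count which token follows each token (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the token that most frequently followed `token` in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("neil"))  # "armstrong" -- chosen purely by frequency
```

The model has no idea who Neil Armstrong is; it simply reports which token most often came next in the text it ingested.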

What the model does, then, is “next-token prediction”, which happens to be what many of the tasks we associate with human intelligence also involve. This may explain why so many people are so impressed with ChatGPT’s performance. It turns out to be useful in many applications: summarizing long articles, for example, or creating a first draft of a presentation that can then be polished. One of its more unexpected uses is as a tool to help write computer code. Dan Shipper, a seasoned software guy, reports that he spent Christmas experimenting with it as a programming assistant, and concluded: “It’s incredibly good for helping you get started with a new project. It takes all of the research and thinking and looking things up and eliminates it… In five minutes you can have a part of something working that previously would have taken hours to get going.” His caveat, however, was that you have to know about programming first.

That seems to me to be the beginning of wisdom about ChatGPT: at best it is an assistant, a tool that augments human capabilities. And it is here to stay. In that sense, it oddly reminds me of spreadsheet software, which hit the business world like a bolt of lightning in 1979 when Dan Bricklin and Bob Frankston wrote VisiCalc, the first spreadsheet program, for the Apple II computer, which was then sold mainly in hobby shops. One day, Steve Jobs and Steve Wozniak woke up to find that many of the people buying their computer didn’t have beards and ponytails but wore suits. And they learned that software sells hardware, not the other way around.

The news was not lost on IBM, prompting the company to create the PC, or on Mitch Kapor, who wrote the Lotus 1-2-3 spreadsheet program for it. Eventually, Microsoft wrote its own version, called Excel, which now runs on every machine in every office in the developed world. It went from being an exciting extension of human capabilities to an everyday accessory – not to mention the reason Kat Norton (aka “Miss Excel”) reportedly pulls in six-figure sums a day teaching Excel tricks on TikTok. Odds are, someone somewhere is already planning to do the same with ChatGPT. And using the bot to write the scripts.

What I have read

Triple threat
The Third Magic is a meditation by Noah Smith on history, science and AI.

Don’t look back
Nostalgia for Decline in Deconvergent Britain is Adam Tooze’s long blog post about the longer history of British economic decline.

The consequences of inequality
Who Broke American Democracy? is an insightful essay on the Project Syndicate website by the Nobel laureate Angus Deaton.
