ChatGPT could be a 21st-century Goethe's Sorcerer's Apprentice

Published: Tuesday, 04 April 2023 15:32

Is artificial intelligence a dangerous innovation that could destroy humanity? A lot of serious people think so. More than 3,000 of them have signed a letter calling for a six-month moratorium on "giant AI experiments". The writer Yuval Noah Harari, the tech legend Steve Wozniak, and Elon Musk signed it.

Why all the uproar now? AI has been around for a while in daily applications like driver-assistance systems, the selection of what we see on social media, or the work of internet search engines.

But we have been passive consumers of these applications. ChatGPT has changed that. It lifted the curtain on AI. Anyone with an internet connection can use the tool and other AI-powered language and image generators. Anyone can get a feel for the power of the technology. It is impressive.

It is also frightening. Many white-collar workers thought they were safe in their jobs — they felt they were too specialised or creative to worry. They are thinking again. Language generators can produce long, complex texts, often to a high standard.

They can write poems in any style, write computer code or deliver any kind of information. Today they make you more efficient, tomorrow they make you redundant.

Hence the sense of foreboding among the 3,000+ signatories of the letter. And it goes beyond concerns about a massive upheaval in the job market. They fear that we will have the experience of Goethe’s Zauberlehrling (the Sorcerer’s Apprentice).

He animates a broomstick, which begins to flood the house — and then he loses control of it: "Spirits that I’ve cited, my commands ignore."

Goethe’s prophetic warning

Like Goethe's poem, the letter has an end-of-times tone: "Should we risk loss of control of our civilisation?" Goethe: "Brood of hell, you're not a mortal! Shall the entire house go under?" And like Goethe, the letter talks about flooding: "Should we let machines flood our information channels with propaganda and untruth?"

Many have criticised the letter. The authors stand accused of doing free marketing for AI companies, by exaggerating the power of AI.

They have a point about the sudden hype around ChatGPT. It is far from perfect, and it gets some things systematically wrong; for example, it produces misinformation in areas where it had only limited training material ("closed domains").

But that is no reason to relax; on the contrary. When such language models produce misinformation, they will do so on a large scale. Why? Because the industry believes the future of internet search will be based on language models. We will get authoritative-sounding answers to our search questions, rather than links to various sources.

When it goes wrong, it will go wrong in a big way.

Some people argue that these are merely language models working with probabilities, that they have no human intelligence that attaches meaning to words and therefore there is nothing to fear. But they confuse the inner workings of the technology with its effects.

At the end of the day users do not care how these models generate language. They perceive it as human-like and they will often perceive it as authoritative.
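The "merely probabilities" argument can be made concrete with a toy sketch. This is an illustration only, not how any real model such as ChatGPT is built: at its core, a language model chooses each next word by sampling from a conditional probability distribution, yet the output still reads as fluent language.

```python
import random

# Toy next-token table (illustrative probabilities, not from any real model):
# given the last two words, how likely is each continuation?
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def sample_next(context, rng):
    """Pick the next token by sampling from the conditional distribution."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
word = sample_next(("the", "cat"), rng)
print(word)  # one of: "sat", "ran", "slept"
```

Nothing in this procedure "knows" what a cat is; it only weighs likelihoods. Yet a reader of the output sees ordinary language, which is exactly the gap between inner workings and perceived effect that the paragraph above describes.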

With social media we have allowed a few companies to totally restructure the course of public debate, based on their choices of what we should read, see and ‘engage with’. There was no risk assessment before social media were marketed.

Mark Zuckerberg’s now infamous motto was "move fast and break things". And so he rolled out social media that could be used to share family photos — or to promote calls for murderous violence against minorities.

Large language models will be far more influential than social media have been. They are approaching "general purpose" AI — able to do all kinds of things — mediated by human language. We should not sit back and watch how this will play out.

Fortunately, in the EU we have the beginnings of some regulation. The misinformation problem of models like ChatGPT needs to be addressed through the code of practice against disinformation. Possibly it can also be tackled on the basis of the Digital Services Act. The EU's AI Act is in the works and should be enacted as a matter of urgency.

More attention should be paid to technical solutions as well. There are many ideas and initiatives to address the harms of AI, such as mechanisms to authenticate content. If workable, they would remove one problem with AI-generated content: the current inability to tell whether some content is authentic, or whether it was manipulated or artificially generated.
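One such mechanism can be sketched in a few lines. This is a minimal illustration, not C2PA or any specific provenance standard: a publisher signs content with a key it alone holds, so anyone with the verification routine can detect whether the content was altered after signing. The key and function names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the publisher (illustration only).
SECRET_KEY = b"publisher-signing-key"

def sign(content: bytes) -> str:
    """Return an HMAC-SHA256 tag binding the content to the key holder."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the tag; any change to the content invalidates it."""
    return hmac.compare_digest(sign(content), tag)

article = b"Original reporting, as published."
tag = sign(article)
print(verify(article, tag))              # True: content is unaltered
print(verify(b"Manipulated text", tag))  # False: tampering is detectable
```

Real proposals (such as cryptographically signed provenance metadata attached to images and text) are more elaborate, but the core idea is the same: authenticity becomes verifiable rather than a matter of trust.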

The letter writers worry in particular about the lack of risk assessments. They have a point.

OpenAI, the company behind ChatGPT (now largely controlled by Microsoft), tried to reduce harms, and indeed ChatGPT avoids some obvious pitfalls. It will not tell you how to make a dirty bomb (unless you find a roundabout way of asking the question).

On the ChatGPT website, OpenAI has published a technical paper on a sibling language model. It provides important insights into how the model has been fine-tuned to reduce harm. However, it is a paper by data engineers for data engineers. It is mainly about optimisation of the model compared to other models.

This analysis is a far cry from a systematic risk assessment, which would have to draw on many different specialisms and systematically anticipate the many malicious use cases of a language model, and its unintended consequences, before making it available to everyone.

If ever policymakers needed to show agility, it is now, on AI. The technology urgently needs a regulatory framework to address and reduce its risks and to create a business-friendly environment based on the rule of law, offering a level playing field for all companies.