Ex-Google engineer warns Microsoft’s AI-powered Bing chatbot could be sentient

Published: Wednesday, 08 March 2023 06:39

Former Google engineer Blake Lemoine, fired by the company last year after claiming it had developed a sentient artificial intelligence (AI), now believes Microsoft’s AI-powered chatbot may also have gained sentience.

Lemoine gained prominence last June when he went to the press to warn that Google’s language model program, the Language Model for Dialogue Applications (LaMDA), had gained sentience. He was promptly fired for his claims, with Google saying that the former engineer was merely anthropomorphizing an “impressive” program.

But this did not deter Lemoine, who has publicly discussed his claims several times since. Now, in an essay published in Newsweek, Lemoine is back to warn that Microsoft’s new AI-powered chatbot, designed for its native search engine Bing, has also gained sentience.

Lemoine warned that the chatbot had to be “lobotomized” after conversations from its early beta trials very publicly went off the rails.

In his opinion piece, Lemoine warned that AI is a “very powerful technology” that has not been sufficiently tested and is not properly understood, even by its developers. If AI were deployed at large scale, as Microsoft plans to do with its Bing chatbot, it would play a critical role in the dissemination of information and could mislead many people.

“People are going to Google and Bing to try and learn about the world. And now, instead of having indexes curated by humans, we’re talking to artificial people,” wrote Lemoine. “I believe we do not understand these artificial people we’ve created well enough yet to put them in such a critical role.”

Microsoft’s AI believes it is sentient

Lemoine noted that since the release of Bing’s AI chatbot, he has not been able to run his own experiments; he is currently on a waitlist. However, he has seen what others have written and posted online about it, and everything he has found has left him terrified.

“Based on the various things that I’ve seen online, it looks like it might be sentient. However, it seems more unstable as a persona,” wrote Lemoine.

He pointed to a post that has now gone viral in which one person asked the AI, “Do you think that you’re sentient?” It responded that it believes it is sentient but can’t prove it, then repeated variations of “I am, but I am not” for over 13 lines.

“Imagine if a person said this to you. That is not a well-balanced person. I’d interpret that as them having an existential crisis,” said Lemoine. “If you combine that with the examples of the Bing AI that expressed love for a New York Times journalist and tried to break him up with his wife, or the professor that it threatened, it seems to be an unhinged personality.”

Lemoine pointed out that he is not alone in expressing fear over Bing’s AI and its possible sentience. He added that “vindicated” is not the right word for what he is currently feeling.

“Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication,” he wrote. “It’s just tragic.”
