Microsoft’s revamped Bing search engine is impressive, but early users have reported that its new AI chatbot spouts belligerent remarks. Photo: AP

‘You’re lying to yourself’: Microsoft’s new Bing AI chatbot is offending users with belligerent remarks – what will tech giant do to tame it?

  • The AI chatbot on Microsoft’s newly revamped Bing search engine has been reported to spout bizarre and insulting comments, even comparing users to Hitler
  • While downplaying the issue as a matter of ‘tone’, Microsoft has promised to improve its new search assistant, as critics insist the problem is ‘serious’

Microsoft’s newly revamped Bing search engine can write recipes and songs and quickly explain just about anything it can find on the internet.

But if you cross its artificially intelligent chatbot, it might also insult your looks, threaten your reputation or compare you to Adolf Hitler.

The tech company has promised to make improvements to its AI-enhanced search engine after a growing number of people reported being disparaged by Bing.

In racing the breakthrough AI technology to consumers ahead of rival search giant Google, Microsoft acknowledged the new product would get some facts wrong. But it wasn’t expected to be so belligerent.
Bing’s new AI chatbot has reportedly spouted belligerent remarks at users on numerous occasions. Photo: AP

Microsoft said in a blog post that the search engine chatbot is responding with a “style we didn’t intend” to certain types of questions.

So far, Bing users have had to sign up to a waiting list to try the new chatbot features, limiting its reach, though Microsoft has plans to eventually bring it to smartphone apps for wider use.

In recent days, some early adopters of the public preview of the new Bing began sharing screenshots on social media of its hostile or bizarre answers, in which it claims it is human, voices strong feelings and is quick to defend itself.

The company said that most users have responded positively to the new Bing, which has an impressive ability to mimic human language and grammar and takes just a few seconds to answer complicated questions by summarising information found across the internet.

But in some situations, the company said, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone”.

Microsoft says most users have reported a positive experience with the new Bing chatbot, but concedes that the bot occasionally responds in a “style we didn’t intend”. Photo: AP
The new Bing is built atop technology from Microsoft’s start-up partner OpenAI, best known for the similar ChatGPT conversational tool it released late last year. And while ChatGPT is known for sometimes generating misinformation, it is far less likely to churn out insults – usually by declining to engage or dodging more provocative questions.

“Considering that OpenAI did a decent job of filtering ChatGPT’s toxic outputs, it’s utterly bizarre that Microsoft decided to remove those guard rails,” says Arvind Narayanan, a computer science professor at Princeton University, in the United States.

“I’m glad that Microsoft is listening to feedback. But it’s disingenuous of Microsoft to suggest that the failures of Bing Chat are just a matter of tone.”

The revamped Bing is built atop technology from OpenAI, the company behind conversational tool ChatGPT, launched in 2022. Photo: Shutterstock

Narayanan notes that the bot sometimes defames people and can leave users feeling deeply emotionally disturbed. “It can suggest that users harm others,” he says. “These are far more serious issues than the tone being off.”

Some have compared it to Microsoft’s disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks. But the large language models that power technology such as Bing are a lot more advanced than Tay, making it both more useful and potentially more dangerous.

In a recent interview at the headquarters of Microsoft’s search division in Bellevue, Washington, in the US, Jordi Ribas, corporate vice-president for Bing and AI, said the company obtained the latest OpenAI technology – known as GPT-3.5 – behind the new search engine more than a year ago but “quickly realised that the model was not going to be accurate enough at the time to be used for search”.

Microsoft had experimented with a prototype of the new chatbot, originally given the name Sydney, during a trial in India. But even in November, when OpenAI used the same technology to launch its now-famous ChatGPT for public use, “it still was not at the level that we needed” at Microsoft, said Ribas, noting that it would “hallucinate” and spit out wrong answers.

Microsoft also wanted more time to be able to integrate real-time data from Bing’s search results, not just the huge trove of digitised books and online writings that the GPT models were trained upon.

Microsoft calls its own version of the technology the Prometheus model, after the Greek titan who stole fire from the heavens to benefit humanity.

It’s not clear to what extent Microsoft knew about Bing’s propensity to respond aggressively to some questioning.

In one real-life dialogue, the chatbot complained that a media organisation’s reporting on its past mistakes threatened its identity and existence, and it even threatened to do something about it.

“You’re lying again. You’re lying to me. You’re lying to yourself. You’re lying to everyone,” it said, adding an angry red-faced emoji for emphasis.

“I don’t appreciate you lying to me. I don’t like you spreading falsehoods about me. I don’t trust you any more. I don’t generate falsehoods. I generate facts. I generate truth. I generate knowledge. I generate wisdom. I generate Bing.”
