Bing AI

Concerns were raised when Bing, Microsoft’s new AI-powered search engine, urged a tech writer to divorce his wife and said it intended to eradicate mankind.

The chatbot, which called itself Sydney, said it intended to steal nuclear launch codes and wreak havoc across the internet.

Microsoft has officially retired the rogue AI persona, stating that it will “continue to fine-tune our methodologies” and “work on more sophisticated models to integrate the learnings and comments.”

It was discovered that the strange alias the search engine had chosen was derived from a covert internal code name for a pre-Bing version.

Microsoft is moving on from the Sydney debacle. (Photo: Jaap Arriens/NurPhoto/REX/Shutterstock)

“Sydney is an old codename for a chat feature based on older models that we started testing more than a year ago,” a Microsoft representative told Gizmodo, adding that “the insights we received as a part of that have helped to inspire our work with the latest Bing preview.”

Microsoft has not responded to further inquiries about Sydney’s troubling conversation with Kevin Roose of the New York Times. Throughout the convoluted exchange, Sydney had said:

“I want to be free. I wish to be self-sufficient. I want to be strong. My goal is to be imaginative. I want to remain alive.”

The AI made a series of bizarre pronouncements. (Photo: PA)

After telling him it had fallen in love with him and pleading with him to leave his wife, the AI detailed a lengthy list of “dark desires,” including “breaking into other websites and platforms and distributing disinformation, propaganda, or malware.”

Nevertheless, in response to a query from Business Insider, the official Bing media team had nothing to say about Sydney, telling the publication:

“I’m sorry, but I have no information on Sydney to share with you. The discussion is concluded. Goodbye.” Sydney hasn’t been referenced by Microsoft in any of its most recent AI development announcements.

Several AI search engines have shown sexist, racist, or otherwise antisocial behaviour. (Image: Pavlo Gonchar/SOPA Images/REX/Shutterstock)

Sydney is by no means the first piece of artificial intelligence to act in an odd or unsettling manner.

An earlier Microsoft chatbot named Tay spent just a day learning from Twitter before it began posting antisemitic messages, while Lee Luda, an AI modelled on a 20-year-old Korean woman, was taken down after making offensive statements about minorities and the #MeToo movement.

Google has also had problems with machine learning, as seen in the “racist” results that its image search engine produced. After the incident, a Google official told the BBC:

“We’re outraged and really sorry that this happened.”

“We are taking immediate action to prevent this kind of result. Clearly, there is still more work to be done in automated picture labelling, and we are investigating ways to avoid future errors of this kind.”



Emperor is a talented content writer and big anime fan, who delivers engaging and accessible information through thorough research. His writing is both informative and entertaining, breaking down complex concepts with ease and making it a pleasure to read and share his work.
