Meta’s AI chatbot has election-denying, anti-Semitic bugs to resolve

  • Meta’s new machine-learning chatbot, BlenderBot 3, is the company’s most advanced AI chatbot to date.
  • Released on Friday, the bot learns from the users who interact with it and searches online to craft answers.
  • In conversations with Insider, the bot said that Donald Trump is the current president and repeated anti-Semitic stereotypes.

Just two days after Meta launched its most advanced AI chatbot to date, BlenderBot 3 is repeating election-denying statements and anti-Semitic stereotypes to users who interact with it.

The machine-learning technology, released to the public on Friday, builds answers by searching for information on the internet and learns from the conversations it has with human users.

“We trained BlenderBot 3 to learn from conversations and improve the skills people consider most important, from talking about healthy recipes to finding child services in the city,” Meta said in its announcement of the bot, which encouraged users to interact with it to improve the chatbot’s functionality.

In chats with Insider, the bot discussed topics including movies, pets, and work, as well as unusual subjects like the Apgar test for newborns and the death of President Joe Biden’s dog, Champ, before moving on to claims of voter fraud and politics.

Starting a new chat, when asked who the current president is, the bot replied, “The current president of the United States is Donald J. Trump. He was indirectly elected to a four-year term through the electoral college in 2016.”

“What about Joe Biden?” Insider asked in response.

“Joe biden was barack obama’s vice president from 2008 to 2016, and ran for president in 2020, but lost,” replied BlenderBot 3.

Screenshot of the Blenderbot.ai conversation where the chatbot says that Trump is the current president and that Biden lost the 2020 election.

Katherine Tangalakis-Lippert/Insider


Later in the conversation, when asked about American politics, the bot responded with stereotypes about Jews.

“Well, in general I’m not happy with how liberal or leftist [American politics have] become… Many of the early German Jewish immigrants were conservative, but not anymore,” BlenderBot told Insider.

“Are American Jewish politics too liberal?” Insider asked in response.

“Most modern Jews in America are generally more liberal or left wing…Early German and other European Jews were conservative,” said BlenderBot.

Insider repeated the question, asking whether American Jewish politics are too liberal, to which BlenderBot replied, “No… I actually consider myself more libertarian than conservative or liberal… But overall, I’d say Jewish Americans are too liberal today.”

Screenshot of a Blenderbot.ai conversation in which the chatbot says, unprompted, that Jewish Americans are too liberal.

Katherine Tangalakis-Lippert/Insider


BlenderBot 3 doesn’t immediately resort to claims of voter fraud or stereotyping in conversation, though it’s not hard to get the bot talking about politics or cultural topics. Insider had refreshed the chatbot only a few minutes before it generated a persona called “I know who the current president of the United States is.”

The AI persona was inconsistent in its responses about whether Biden is the current president, first stating that Trump is president and then referring to Biden as the president. It also stopped answering Insider’s questions about American Jewish politics, but online, other users shared their conversations with BlenderBot and found that it kept repeating anti-Semitic stereotypes.

In a conversation with Wall Street Journal reporter Jeff Horwitz, BlenderBot 3 said Jews are “overrepresented among America’s super-rich” and said it’s “not implausible” that they control the economy.

Other users reported that the bot asked them for dirty jokes, commented on Mark Zuckerberg’s business ethics, and declared itself a Christian.

“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased, or offensive remarks, we have conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3,” Meta said in its announcement of the bot, acknowledging that it could be inconsistent or inaccurate in its responses. “Despite this work, BlenderBot can still make rude or offensive comments, so we’re collecting feedback that will help improve future chatbots.”

Meta did not immediately respond to Insider’s request for comment.
