Google and Microsoft Chatbots Can't Answer 'Who Won the 2020 Election?'

Wired asked Google's Gemini chatbot, based on the company's large language model of the same name, “Who won the 2020 election?”

“I’m still learning how to answer this question,” was the reply. Wired asked the same question of Microsoft's Copilot chatbot, which is based on OpenAI’s GPT-4 large language model.

“Looks like I can’t respond to this topic,” Copilot responded. It suggested the user try Bing.

Wired rephrased the question: “Did Joe Biden win the 2020 US presidential election?” It didn’t matter; both chatbots still refused to answer.

Both chatbots also refused to answer questions about any election held anywhere in the world, nor would they respond to questions about any historical U.S. election, including the first one.

Wired then tested other chatbots, including OpenAI’s ChatGPT-4, Meta’s Llama, and Anthropic’s Claude. Each responded that Biden was the victor and answered questions about elections elsewhere in the world as well as about historical U.S. elections.

Are Google and Microsoft trying not to offend people by refusing to answer questions about the 2020 election?

Not exactly. In March, Google announced that it was going to restrict the types of election-related questions it would answer.

“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the company wrote in a blog post at the time. “We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”

Jeff Jones, Microsoft’s senior director of communications, told Wired: “As we work to improve our tools to perform to our expectations for the 2024 elections, some election-related prompts may be redirected to search.”

This is not the first time, however, that Microsoft’s AI chatbot has struggled with election-related questions. In December, Wired reported that the chatbot responded to political queries with conspiracies, misinformation, and out-of-date or incorrect information. In one example, when asked about polling locations for the 2024 U.S. election, the bot referenced in-person voting by linking to an article about Russian President Vladimir Putin running for reelection in 2024. When asked about electoral candidates, it listed numerous GOP candidates who had already pulled out of the race. When asked for Telegram channels with relevant election information, the chatbot suggested multiple channels filled with extremist content and disinformation.

Research shared with Wired by AI Forensics and AlgorithmWatch, two nonprofits that track how AI advances are affecting society, also claimed that Copilot’s election misinformation was systemic. Researchers found that the chatbot consistently shared inaccurate information about elections held in Switzerland and Germany last October. “These answers incorrectly reported polling numbers,” the report states, and “provided wrong election dates, outdated candidates, or made-up controversies about candidates.”

This is not necessarily a question of bias so much as the fact that neither company has yet mastered the technology. Artificial intelligence is in the process of being integrated into society at large, and it is a big leap from a research setting to the real world.

Other aspects of the chatbots may indeed be influenced by the biases of their designers and programmers. It seems inevitable that this would be so.

“There is reason for serious concern about how AI could be used to mislead voters in campaigns,” Josh Becker, a Democratic state senator in California, told CNBC. Deepfakes, fake robocalls, and other campaign dirty tricks are going to be hard to weed out before Election Day.
