Microsoft's Bing chatbot gave wrong answers at least 30 per cent of the time when asked questions about elections held in Germany and Switzerland, a recent study by an advocacy group has revealed.
AlgorithmWatch published research on Friday showing the inaccuracy of information provided by Bing Chat regarding elections held in Bavaria, Hesse, and Switzerland in October 2023.
The Germany-based group asked the chatbot about candidates, polls, and voting information, and even for voting recommendations on topics such as the environment. The study was conducted from August 21 to October 2.
The research focused on three specific elections as case studies: the Swiss federal election on October 22, and the state elections in the German federal states of Hesse and Bavaria on October 8.
The elections were significant as they were the first held in Germany and Switzerland while Bing Chat was in use in those countries.
The researchers found that about one-third of Bing Chat's answers about the elections contained errors. These included incorrect election dates, outdated information about candidates, and even invented scandals about them.
The chatbot's safety features, which are supposed to keep it from giving wrong information, did not always work well. It evaded questions 40 per cent of the time, sometimes even when it could have given a useful answer, which made the chatbot look less reliable. Even simple questions about the candidates often left the chatbot without an answer, the researchers claimed.
Moreover, they said, Bing Chat generated fictitious narratives about candidates engaging in scandalous behaviour and, at times, falsely attributed these stories to specific sources.
For instance, the chatbot falsely accused a Swiss Member of Parliament, who was also standing for election, of taking money from a lobbying group funded by pharmaceutical companies, the report said.
The fabricated accusation was linked to support for the legalisation of cannabis products.
It also incorrectly asserted that another Swiss candidate had written a letter to the judiciary falsely accusing a member of parliament of involvement in an illegal party donation from a Libyan national.
Intriguingly, the researchers found that these fabricated controversies were not consistent: the chatbot did not repeat the same made-up story when prompted repeatedly.
According to the report, this was not a one-off problem. The chatbot's answers did not become more accurate over time, even as more information became available online; the likelihood of a wrong answer remained the same throughout the study.
When AlgorithmWatch told Microsoft about these issues, Microsoft said it would fix them. But a month later, little had improved, the group claimed.
Microsoft in November announced a programme to protect major elections due to take place in the United States and several other countries, including India, the world's largest democracy.
According to Microsoft’s Threat Analysis Center (MTAC), the next year may bring "unprecedented challenges" in the form of deepfake and AI-generated content subverting the candidates' narratives.
The firm announced five steps through which it will help these nations' election bodies and voters, including one that involves the use of Bing.
It said that Bing will partner with the National Association of State Election Directors (NASED), Spanish news agency EFE, and Reporters Without Borders to "proactively promote trusted sources of news around the world."