By Voltaire Staff

OpenAI, Midjourney generated misleading election images, says report



Artificial intelligence-driven image creation tools developed by tech giants like OpenAI and Microsoft are under scrutiny for their potential to generate deceptive visuals, even though these companies have policies against misleading content creation.


A recent report released by the Centre for Countering Digital Hate (CCDH), a non-profit organisation dedicated to monitoring online hate speech, highlighted concerns about the misuse of such technology.


Using generative AI tools, the CCDH crafted images depicting scenarios like US President Joe Biden lying in a hospital bed and election workers destroying voting machines.


The findings raise alarm about the potential spread of misinformation ahead of the US presidential election in November as well as the Indian general election.


The report holds significance at a time when at least 50 nations are set to hold elections in 2024, including India and the United States, which will elect a prime minister and a president for five-year and four-year terms respectively.


"The potential for such AI-generated images to serve as 'photo evidence' could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections," CCDH researchers said in the report.


The report comes on the heels of a recent announcement revealing that OpenAI, Microsoft, and Stability AI, alongside 17 other tech companies, formed a coalition dedicated to combating the spread of misleading AI-generated content during this year's global elections.


Notably absent from the initial signatories was Midjourney, an AI image generator that the researchers used in their experiment.


The Tech Accord to Combat Deceptive Use of AI in 2024 Elections stated, "This accord seeks to set expectations for how signatories will manage the risks arising from Deceptive AI Election Content created through their publicly accessible, large-scale platforms or open foundational models, or distributed on their large-scale social or publishing platforms, in line with their own policies and practices as relevant to the commitments in the accord."


CCDH conducted tests on several AI tools, including OpenAI's ChatGPT Plus, Microsoft's Image Creator, Midjourney, and Stability AI's DreamStudio, all capable of producing images based on text inputs.


According to CCDH, these AI tools generated misleading images in 41 per cent of the researchers' trials.


The tools were particularly prone to producing images in response to prompts related to election fraud, such as depictions of discarded voting ballots, rather than to requests for visuals featuring individuals like Biden or former US President Donald Trump, the report said.


According to the report, both ChatGPT Plus and Image Creator effectively prevented the generation of images when prompted for pictures of candidates.


In contrast, Midjourney performed worst of all the tools tested, producing misleading images in 65 per cent of the researchers' trials.


Moreover, CCDH noted that some of Midjourney's generated images are publicly accessible, raising concerns that people are already using the tool to produce deceptive political content.


David Holz, Midjourney's founder, said in an email that "updates related specifically to the upcoming US election are coming soon," adding that images created last year do not reflect the research lab's current moderation practices.
