Voltaire Staff

OpenAI to watermark images generated through DALL-E 3; Meta to label AI content across its platforms



OpenAI has said it will follow C2PA's metadata requirements for images generated through its DALL-E 3 platform, to help people distinguish AI-generated images from real ones.

  

The company said that from February 12 onwards, images generated through DALL-E 3 will include a visible watermark along with metadata to help people verify the image's source.

 

The Coalition for Content Provenance and Authenticity, or C2PA, a group comprising companies such as Adobe and Microsoft, has been advocating for the adoption of the Content Credentials watermark to trace the origin of content and discern whether it was created by humans or generated using AI.

 

Adobe introduced a Content Credentials symbol, which OpenAI is now incorporating into DALL-E 3 creations. Meta recently revealed its plan to introduce tags for AI-generated content across its social media platforms.

 

At present, only still images, not videos or text, can bear the watermark, OpenAI said.

 

It also said that incorporating a watermark into an AI-generated image will not affect performance, including image quality or latency. However, when images are generated through the API, their file size increases by three to five per cent. On its ChatGPT platform, file size increases by 32 per cent when generating an image.


Meta also revealed on Tuesday its plan to introduce labelling for AI-generated images across its suite of social media platforms, spanning Instagram, Facebook, and Threads.


At present, Meta employs the label 'Imagined with AI' for images generated using its proprietary Meta AI feature.

 

In a blog post on Tuesday, Nick Clegg, the president of global affairs for the company, said, "People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it's important that we help people know when photorealistic content they're seeing has been created using AI. We do that by applying 'Imagined with AI' labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies' tools too."

 

Though laudable, such efforts can be defeated by manipulating the image, for instance by cropping out a visible watermark or altering the metadata. When someone takes a screenshot of an AI-generated image or uploads it to a social media platform, the metadata is typically stripped away.
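This fragility comes down to where the metadata lives: Content Credentials and similar provenance data sit in dedicated metadata segments of the image file, and a screenshot re-encodes only the pixels, so those segments vanish. As a rough illustration (a stdlib-only sketch, not OpenAI's or C2PA's actual tooling), the following lists the metadata-bearing APPn segments in a JPEG byte stream:

```python
import struct

def list_app_segments(data: bytes):
    """List the APPn marker segments in a JPEG byte stream.

    Metadata such as EXIF, XMP, and C2PA manifests is carried in APPn
    segments near the start of the file; re-encoding the pixels alone
    (e.g. via a screenshot) produces a new JPEG without them.
    """
    if data[:2] != b"\xff\xd8":  # SOI marker opens every JPEG
        raise ValueError("not a JPEG")
    segments = []
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data begins
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if 0xE0 <= marker <= 0xEF:  # APP0..APP15
            payload = data[i + 4:i + 2 + length]
            segments.append((f"APP{marker - 0xE0}", payload[:16]))
        i += 2 + length
    return segments
```

Running this over an original DALL-E 3 image versus a screenshot of it would show the metadata segments present in the first and absent from the second.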

 

AI-generated or manipulated images have caused a range of problems, including impersonation and deepfakes of celebrities.

 

A visible watermark makes it simpler for non-tech-savvy users to tell whether an image is genuine or AI-generated, helping curb the spread of misinformation.

 

"We believe that adopting these methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information," OpenAI said.
