  • Voltaire Staff

Meta to start labelling AI-generated content ahead of US elections

Ahead of the US elections, Meta, the parent company of Facebook, has unveiled significant revisions to its policies concerning digitally-altered media.

Meta's plan involves introducing "Made with AI" labels in May, to be affixed to AI-generated videos, images, and audio shared across its platforms. The move marks an expansion of the company's previous policy, which primarily targeted a limited range of altered videos.

Monika Bickert, Meta's Vice President of Content Policy, outlined these changes in a recent blog post.

Bickert said, "In February, we announced that we’ve been working with industry partners on common technical standards for identifying AI content, including video and audio. Our 'Made with AI' labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content."

Meta had earlier announced a plan to identify images created with other companies' generative AI tools using hidden markers in the files, without specifying a start date.

The company is also changing its strategy for manipulated content. Instead of removing certain posts outright, Meta will now keep them up while informing viewers about how they were created.

According to a company spokesperson, the updated labelling strategy will be implemented across Meta's Facebook, Instagram, and Threads platforms. However, different rules will apply to its other services, such as WhatsApp and the Quest virtual reality headsets, Reuters reported.

Meta will begin applying the more prominent "high-risk" labels immediately, the spokesperson added.

The feature could play a crucial role in preventing users from being duped, especially as several nations, including India, go to the polls in the coming months.

In February, Meta's oversight board criticised the company's current rules on manipulated media, deeming them "incoherent." This criticism came after the board reviewed a video posted on Facebook last year featuring altered footage of US President Joe Biden, falsely implying inappropriate behaviour.

Despite objections, the video remained online, as Meta's existing "manipulated media" policy prohibits misleadingly altered videos only if they are generated by artificial intelligence or depict individuals saying words they never uttered.

The oversight board argued that the policy should extend to non-AI content, which can be equally misleading, as well as to audio-only content and videos showing individuals engaging in actions they never actually performed.
