By Voltaire Staff

Sony, Nikon to fight off fake images with new camera tech

Nikon, Sony Group, and Canon are jointly developing camera technology incorporating digital signatures in images to distinguish them from advanced AI-generated fakes.

According to a report by Nikkei, Nikon is set to introduce mirrorless cameras with authentication technology, featuring tamper-resistant digital signatures containing date, time, location, and photographer details.

In response to the proliferation of realistic fakes, an alliance of global news organisations, tech firms, and camera manufacturers recently launched Verify, a web-based tool for image authentication.

The initiative employs a global standard shared by Nikon, Sony, and Canon, which collectively dominate 90 per cent of the global camera market, according to Statista.

As reported by Indian Express in November, Sony is set to confront AI-generated images with its 'in-camera authenticity' technology, developed in collaboration with the Associated Press (AP) and Camera Bits. A recently completed second round of testing introduced a digital signature that acts as a "birth certificate" for images, validating their origin.

While specifics of the technology remain undisclosed, it likely involves substantial metadata, including camera details, capture time, and edit history.
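The general idea behind such a scheme can be sketched in a few lines: hash the image pixels together with the metadata at capture time, sign the result, and later re-verify so that any edit to the pixels or the metadata invalidates the signature. The sketch below is purely illustrative, not Sony's, Nikon's, or Canon's actual implementation; real in-camera systems use asymmetric (public-key) signatures, whereas this example substitutes an HMAC with a hypothetical device key to stay within Python's standard library.

```python
import hashlib
import hmac
import json

def sign_capture(image_bytes: bytes, metadata: dict, device_key: bytes) -> str:
    """Bind metadata to the image content with a tamper-evident signature.

    Illustrative stand-in: a real camera would use a public-key signature
    so anyone can verify without holding the secret key.
    """
    # Hash the raw pixels, then append canonically serialized metadata,
    # so a change to either one changes the signed payload.
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(
        metadata, sort_keys=True
    )
    return hmac.new(device_key, payload.encode(), hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, metadata: dict,
                   signature: str, device_key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_capture(image_bytes, metadata, device_key)
    return hmac.compare_digest(expected, signature)

# Hypothetical capture: key, metadata, and image bytes are made up.
key = b"example-device-secret"
meta = {"camera": "Example A1", "time": "2024-01-05T10:00:00Z", "gps": "35.6,139.7"}
img = b"\x89PNG...raw image data..."

sig = sign_capture(img, meta, key)
print(verify_capture(img, meta, sig, key))              # True: untouched image
print(verify_capture(img + b"edit", meta, sig, key))    # False: pixels altered
```

Binding the metadata into the same signed payload as the pixel hash is what makes the date, time, and location claims tamper-resistant rather than just informational.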

The digital signature feature is expected in 2024 via firmware updates for Sony's Alpha 9 III, Alpha 1, and Alpha 7S III models. Leica's M11-P rangefinder, which carries a similar tamper-detecting Content Credentials label, was unveiled in October at $9,000, a higher price than Sony's Alpha models.

Canon is expected to launch a camera with similar capabilities in 2024 and is concurrently developing technology to apply digital signatures to videos. The company, in collaboration with Thomson Reuters and the Starling Lab for Data Integrity, established a project team in 2019 to address image authenticity concerns. Additionally, Canon is introducing an image management app to ascertain if images were captured by humans.

Fake AI images are proliferating rapidly: a latent consistency model proposed by researchers at China's Tsinghua University in October leverages generative AI technology to produce approximately 700,000 images daily.

Various technology companies are actively combating the proliferation of fake content. Google introduced a tool that embeds invisible digital watermarks into AI-generated pictures, while Intel developed technology to authenticate images by analysing skin colour changes indicating blood flow.

Hitachi is working on fake-proofing technology for online identity authentication.

Deepfakes have emerged as a new global problem amid the artificial intelligence revolution.

India's IT ministry also recently called a meeting to frame new rules after a deepfake video featuring a prominent actress circulated on social media.

Following discussions with industry leaders and NASSCOM, IT Minister Ashwini Vaishnaw engaged with social media companies to address the deepfake menace.

Prime Minister Narendra Modi has emphasised the need to understand deepfake creation to prevent intentional spread of misinformation.

The government's response came in the wake of deepfake videos involving celebrities such as Shah Rukh Khan, Virat Kohli, and Akshay Kumar going viral, with some exploited for gaming and betting ads.

