Meta on Thursday said it is testing a feature that blurs images containing nudity in Instagram direct messages, an effort to protect teenagers from potential scammers and harmful content.
The Instagram and Facebook parent company faces mounting scrutiny in the US and Europe over allegations that its apps are addictive and contribute to mental health problems among young users.
Meta said the feature will apply to Instagram's direct messages and will use on-device machine learning to determine whether an image contains nudity. It will be enabled by default for users under 18, and adults will be able to opt in.
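Meta has not disclosed how the detection works beyond saying it runs on the device. Purely as an illustration, a check-and-blur flow of this kind might look like the sketch below, where nudity_score is a hypothetical stand-in for the unspecified classifier, and the threshold and blur radius are made-up values:

```python
from PIL import Image, ImageFilter  # pip install pillow

NUDITY_THRESHOLD = 0.8  # hypothetical confidence cutoff
BLUR_RADIUS = 40        # hypothetical blur strength

def nudity_score(image: Image.Image) -> float:
    """Hypothetical stand-in for an on-device model.
    A real deployment would run a small image classifier here
    and return the probability that the image contains nudity."""
    return 0.0  # placeholder: treats every image as safe

def prepare_for_display(path: str) -> Image.Image:
    """Blur an incoming image before it is shown if the
    on-device check flags it; otherwise return it unchanged.
    The recipient could then choose to reveal the original."""
    image = Image.open(path)
    if nudity_score(image) >= NUDITY_THRESHOLD:
        return image.filter(ImageFilter.GaussianBlur(BLUR_RADIUS))
    return image
```

Running the check on the device itself, as Meta describes, means the image never has to be sent to a server for analysis.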
"Financial sextortion is a horrific crime. We’ve spent years working closely with experts, including those experienced in fighting these crimes, to understand the tactics scammers use to find and extort victims online, so we can develop effective ways to help stop them," Meta said in a release.
It added, "We’re also testing new measures to support young people in recognizing and protecting themselves from sextortion scams. These updates build on our longstanding work to help protect young people from unwanted or potentially harmful contact."
Instagram's direct messages are not end-to-end encrypted, unlike Meta's Messenger and WhatsApp apps, though Meta has said it intends to bring encryption to the service. Encryption scrambles a message into an unreadable code so that only authorised parties can access its contents.
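To illustrate the general concept only (Meta's messaging apps use a far more elaborate end-to-end scheme, not this), here is a minimal example of symmetric encryption using the widely used Python cryptography package:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a secret key; only holders of the key can read messages.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt: the plaintext becomes an opaque token, unreadable in transit.
token = cipher.encrypt(b"see you at 7pm")
print(token)                  # ciphertext, e.g. b'gAAAAAB...'

# Decrypt: only a holder of the key recovers the original message.
print(cipher.decrypt(token))  # b'see you at 7pm'
```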
Meta is also developing technology to identify accounts involved in sextortion scams and is testing pop-up warnings for users who may have interacted with such accounts.
Meta wrote, "Nudity protection will be turned on by default for teens under 18 globally, and we’ll show a notification to adults encouraging them to turn it on. When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind."
In October, attorneys general from 33 US states filed a lawsuit against the company, alleging it misled the public about the dangers of its platforms. Meanwhile, the European Commission is seeking information on Meta's measures to protect children from harmful content.
In January, Meta announced plans to hide more content from teens on Facebook and Instagram, aiming to reduce their exposure to sensitive topics like suicide, self-harm, and eating disorders.