Meta announces latest update on labelling AI-generated content

Meta has updated its approach to handling manipulated media and AI-generated content, following feedback from the Oversight Board and extensive policy reviews. The company acknowledges the need for a broader scope in addressing manipulated media, particularly as AI technology advances.

Previously, Meta's approach primarily targeted videos altered by AI to make individuals appear to say something they didn't. With AI now capable of producing realistic audio and photos as well, a more comprehensive strategy is required.

The Oversight Board highlighted the importance of not unnecessarily restricting freedom of expression and recommended a “less restrictive” approach, advocating for labels with context instead of outright removal.

In response, Facebook and Instagram announced plans to introduce “Made with AI” labels on AI-generated video, audio, and images, based on industry-shared signals or self-disclosure by users. These labels aim to provide improved transparency and additional context to users, allowing them to better assess the content. Moreover, content that poses a high risk of materially deceiving the public may receive more prominent labels.


Meta emphasises its commitment to keeping content on its platforms unless it violates community standards, such as those against voter interference or harassment. The company also relies on a network of fact-checkers to review false or misleading content, with measures in place to reduce the visibility of debunked content.

The decision to label AI-generated content follows a thorough policy review process informed by global experts and public opinion surveys. Consultations with stakeholders worldwide indicated broad support for labelling AI-generated content, particularly in high-risk scenarios. Moreover, the majority of respondents in public opinion research favoured warning labels for AI-generated content depicting individuals saying things they didn't say.

Why is this important?

Facebook and Instagram plan to start applying these labels from May 2024, and to stop removing content solely on the basis of the earlier manipulated video policy in July. Meta remains committed to collaborating with industry peers, governments, and civil society to continuously review and adapt its approach as technology progresses.

Overall, Meta aims to strike a balance between facilitating creative expression and ensuring user safety by providing transparency and context around AI-generated content and manipulated media.
