Meta Plans A Less Punitive AI-Generated Content Policy


Meta announced an update to its AI labeling policy, expanding its definition of “manipulated media” beyond AI-generated videos to include deceptive audio and images on Facebook, Instagram, and Threads.

An important feature of the new policy is its sensitivity to being perceived as restricting freedom of expression. Rather than removing problematic content, Meta is instead labeling it. The company introduced two labels, “Made with AI” and “Imagined with AI,” to make clear what content was created or altered with AI.

New Warning Labels

The labeling of AI-generated content will rely on detecting signals of AI authorship and on self-reporting:

“Our ‘Made with AI’ labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content”

Content that is significantly misleading may receive more prominent labels so that users have more context.

Harmful content that violates the Community Standards, such as content that incites violence or involves election interference, bullying, or harassment, will still qualify for removal, regardless of whether it is human- or AI-generated.

Reason For Meta’s Updated Policy

The original AI labeling policy was created in 2020 and, given the state of the technology at the time, was narrowly confined to addressing deceptive videos (the kind that depicted public figures saying things they never did). Meta’s Oversight Board recognized that the technology has progressed to the point that a new policy was needed. The new policy accordingly expands to cover AI-generated audio and images, in addition to videos.

Based On User Feedback

Meta’s process for updating its rules appears to have anticipated pushback from all sides. The new policy is based on extensive feedback from a wide range of stakeholders and input from the general public, and it has the flexibility to adapt if needed.

Meta explains:

“In Spring 2023, we began reevaluating our policies to see if we needed a new approach to keep pace with rapid advances… We completed consultations with over 120 stakeholders in 34 countries in every major region of the world. Overall, we heard broad support for labeling AI-generated content and strong support for a more prominent label in high-risk scenarios. Many stakeholders were receptive to the concept of people self-disclosing content as AI-generated.

…We also conducted public opinion research with more than 23,000 respondents in 13 countries and asked people how social media companies, such as Meta, should approach AI-generated content on their platforms. A large majority (82%) favor warning labels for AI-generated content that depicts people saying things they did not say.

…And the Oversight Board noted their recommendations were informed by consultations with civil-society organizations, academics, inter-governmental organizations and other experts.”

Collaboration And Consensus

Meta’s announcement explains that the company plans to keep the policy in step with the pace of technology by revisiting it with organizations such as the Partnership on AI, governments, and non-governmental organizations.

Meta’s revised policy emphasizes transparency and context for AI-generated content: removal will be reserved for violations of the Community Standards, and the preferred response to potentially problematic content will be labeling it.

Read Meta’s announcement:

Our Approach to Labeling AI-Generated Content and Manipulated Media

Featured Image by Shutterstock/Boumen Japet



