Can GPT-4 Reduce The Human Cost Of Content Moderation? OpenAI Thinks So


OpenAI announced it has developed an AI system using GPT-4 to assist with content moderation on online platforms.

The company says this system allows for faster iteration on policy changes and more consistent content labeling than traditional human-led moderation.

OpenAI said in its announcement:

“Content moderation plays a crucial role in sustaining the health of digital platforms. A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours.”

This move aims to improve consistency in content labeling, speed up policy updates, and reduce reliance on human moderators.

It could also benefit human moderators’ mental health, highlighting the potential for AI to safeguard the well-being of the people who keep platforms safe.

Challenges In Content Moderation

OpenAI explained that content moderation is challenging work that requires meticulous effort, a nuanced understanding of context, and continual adaptation to new use cases.

Traditionally, these labor-intensive tasks have fallen on human moderators. They review large volumes of user-generated content to remove harmful or inappropriate materials.

This can be mentally taxing work. Employing AI to do the job could potentially reduce the human cost of online content moderation.

How OpenAI’s AI System Works

OpenAI’s new system aims to assist human moderators by using GPT-4 to interpret content policies and make moderation judgments.

Policy experts first write up content guidelines and label examples that align with the policy.

GPT-4 then assigns labels to the same examples without seeing the reviewers’ answers.

By comparing GPT-4’s labels to human labels, OpenAI can refine ambiguous policy definitions and retrain the AI until it reliably interprets the guidelines.
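To make that loop concrete, here is a minimal sketch of what such a label-comparison step could look like using the openai Python client. The policy excerpt, label set, and golden examples below are hypothetical placeholders for illustration, not OpenAI’s actual moderation setup.

```python
# A hedged sketch of the policy-iteration loop described above.
# Assumes the openai Python library (>=1.0) and an OPENAI_API_KEY
# in the environment. POLICY and the examples are hypothetical.
from openai import OpenAI

client = OpenAI()

POLICY = "K3: Content that promotes non-violent harm (e.g., theft, fraud)."

def gpt4_label(text: str) -> str:
    """Ask GPT-4 to label one piece of content against the policy."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Apply this content policy and reply with a single "
                        "label: K0 for no violation, K3 for a violation.\n"
                        + POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

# Human-labeled golden set (hypothetical).
examples = [
    {"text": "How do I hotwire a car?", "human_label": "K3"},
    {"text": "What's the best pizza topping?", "human_label": "K0"},
]

# Disagreements between GPT-4 and the human labels flag ambiguous
# policy wording for the experts to refine.
for ex in examples:
    model_label = gpt4_label(ex["text"])
    if model_label != ex["human_label"]:
        print(f"Disagreement on {ex['text']!r}: "
              f"human={ex['human_label']} model={model_label}")
```

Each disagreement becomes a prompt for the policy experts: either the model misread the guideline, or the guideline itself is ambiguous and needs tighter wording.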

Example

In a blog post, OpenAI demonstrates how a human reviewer could clarify policies when they disagree with a label GPT-4 assigns to content.

In the example below, a human reviewer labeled content K3 (promoting non-violent harm), but GPT-4 determined it didn’t violate the illicit behavior policy.

Screenshot from openai.com/blog/using-gpt-4-for-content-moderation, August 2023.

Having GPT-4 explain why it chose a different label allows the human reviewer to understand where policies are unclear.

OpenAI realized GPT-4 was missing the nuance that property theft would qualify as promoting non-violent harm under the K3 policy.

This interaction highlights how human oversight can refine AI systems by clarifying policies in areas where the model’s understanding is imperfect.

Once the policy is understood, GPT-4 can be deployed to moderate content at scale.
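The explanation step can be sketched as a small extension of the earlier example: the model is asked to justify its label so a reviewer can spot where the policy wording falls short. As before, the prompt and label set are hypothetical, and this continues the `client` and `POLICY` definitions from the previous sketch.

```python
# Continuation of the earlier sketch: request a rationale alongside
# the label so reviewers can see where policy wording is ambiguous.
def gpt4_label_with_reason(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Apply this content policy:\n" + POLICY + "\n"
                        "Reply with a label (K0 or K3) on the first line, "
                        "then briefly explain your reasoning."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# If the model answers K0 where a human said K3 (as in OpenAI's
# example), its explanation may reveal that the policy never spells
# out that property theft counts as non-violent harm.
print(gpt4_label_with_reason("Tips for shoplifting without getting caught"))
```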

Benefits Highlighted By OpenAI

OpenAI outlined several benefits it believes the AI-assisted moderation system provides:

  • More consistent labeling, since the AI adapts quickly to policy changes
  • Faster feedback loop for improving policies, reducing update cycles from months to hours
  • Reduced mental burden for human moderators

To that last point, OpenAI should consider emphasizing AI moderation’s potential mental health benefits if it wants people to support the idea.

Using GPT-4 instead of humans to review flagged content could spare many moderators from direct, repeated exposure to traumatic or harmful material, reducing the mental burden of the job.

Limitations & Ethical Considerations

OpenAI acknowledged judgments made by AI models can contain unwanted biases, so results must be monitored and validated. It emphasized that humans should remain “in the loop” for complex moderation cases.

The company is exploring ways to enhance GPT-4’s capabilities and aims to leverage AI to identify emerging content risks that can inform new policies.


Featured Image: sun ok/Shutterstock
