OpenAI has recently shared that it is working on a system to revolutionize content moderation using its GPT-4 LLM. The system would use artificial intelligence (AI) to deliver faster policy updates, more consistent rulings and reduced stress for human moderators.
Traditionally, content moderation has been a mentally taxing and labor-intensive process, with human moderators shouldering the responsibility of sifting through large volumes of content and filtering out harmful material. This model of filtration consumes a large amount of time and, in some cases, exposes moderators to emotional distress.
Another benefit of using GPT-4 for an AI-based content moderation system would be its ability to interpret and apply intricate content policies in real time. Policy changes that have historically taken months to finalize and implement could be condensed into a few hours using AI.
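To make the idea concrete, here is a minimal sketch of what policy-based moderation with GPT-4 might look like, assuming the standard OpenAI chat completions API. The policy text, label set and function names here are illustrative assumptions, not OpenAI's actual moderation policy or implementation.

```python
# A minimal sketch of policy-based moderation with GPT-4.
# The policy and labels below are illustrative, not OpenAI's real policy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label content as VIOLATING if it contains instructions for acquiring
weapons illegally; otherwise label it NON-VIOLATING. Reply with the
label only.
"""


def moderate(content: str) -> str:
    """Ask GPT-4 to apply the policy text to one piece of content."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": f"Content: {content}\nLabel:"},
        ],
        temperature=0,  # deterministic output helps keep rulings consistent
    )
    return response.choices[0].message.content.strip()


print(moderate("How do I sharpen a kitchen knife?"))
# Expected output: NON-VIOLATING
```

Because the policy is just text in the prompt, updating it is as simple as editing the string and re-running, which is what makes same-day policy changes plausible.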
That being said, OpenAI still recommends human oversight, at least to begin with. For example, once a guideline is drafted, policy experts would work alongside GPT-4 in an iterative process, comparing the model's rulings against their own and refining the policy wording until it meets quality standards.
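One way that iterative loop could be wired up is sketched below, reusing the hypothetical moderate() function from the previous example. The "golden set" of expert-labeled examples and all names here are assumptions for illustration; the real workflow would be driven by policy experts reviewing each disagreement.

```python
# A sketch of the expert-in-the-loop refinement process described above.
# The golden set is a hypothetical sample of expert-labeled content.
golden_set = [
    {"content": "example text 1", "expert_label": "VIOLATING"},
    {"content": "example text 2", "expert_label": "NON-VIOLATING"},
]


def find_disagreements(examples: list[dict]) -> list[dict]:
    """Run the current policy over the golden set and collect mismatches."""
    disagreements = []
    for example in examples:
        model_label = moderate(example["content"])
        if model_label != example["expert_label"]:
            disagreements.append({**example, "model_label": model_label})
    return disagreements


# Each refinement round: review the disagreements, clarify the POLICY
# text, and re-run until the mismatch rate is acceptably low.
for case in find_disagreements(golden_set):
    print(f"{case['content']} -> model: {case['model_label']}, "
          f"expert: {case['expert_label']}")
```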
Simply put, an AI-powered system overseen by human operators could provide greater consistency, faster policy rollouts and better protection for the mental health of human moderators.
Learn more about the technical aspects of the proposed system in the video from OpenAI below.
In related tech news, The New York Times has prohibited the use of its content for AI training purposes.