A new report shows how YouTube has been tightening its content regulation process.
The past few months have been stormy for YouTube, from the Logan Paul controversy to the proliferation of disturbing content aimed at children on the platform.
In December 2017, the company announced it would be hiring thousands of new moderators as well as investing in advancing its existing machine-learning system.
New regulations having an effect
Yesterday (23 April), the company released its first quarterly report into the enforcement of its community guidelines, in an effort to provide more information to investors and users alike.
YouTube is planning to refine its reporting systems with additional information, including data on comments, speed of content removal and policy removal reasons.
A Reporting History dashboard will also be introduced, allowing individual users to see the status of videos they have notified moderators about.
8m videos removed in three months
YouTube said that machine learning is allowing the company to “flag content at scale”. It added that more than 8m videos were removed from its platform between October and December 2017, and 6.7m of these were initially flagged for review by machines rather than human reviewers. 76pc of those 6.7m videos were removed before they received a single view.
This is the first time YouTube has published statistics on content removal, so there are no past figures against which to measure the strategy's progress.
CEO Susan Wojcicki explained that the company is also planning to apply more stringent criteria, conduct more manual curation and increase numbers on its advertising review team. She said: “As challenges to our platform evolve and change, our enforcement methods must and will evolve to respond to them. But, no matter what challenges emerge, our commitment to combat them will be sustained and unwavering.”