YouTube removed 8 million abusive videos using artificial intelligence in the last three months of 2017, according to the company. Most of the deleted videos were spam or pornography and accounted for “a fraction of a percent” of YouTube’s total views during that period.
Of those 8 million deleted videos, 6.7 million were flagged by machine-learning software, the company said. And more than three-quarters of those flagged videos were removed by human reviewers before receiving even one public view, it said.
YouTube says users upload 400 hours of new video every minute. That volume, and the largely automated platform’s failure to stop creators from posting extremist and abusive content, have been a growing source of criticism for the Google-owned unit.
The same videos also cause headaches for advertisers, which have suspended advertising after discovering their ads playing before videos that promote terrorism, hate speech, pedophilia or other exploitative subject matter.
In turn, the company has promised that more human reviewers, working with advanced computer systems, will weed out the worst videos before they spread widely.
YouTube also changed some of its rules last year in an attempt to crack down on extremist videos, stripping their creators of the ability to run ads and share revenue with YouTube.
YouTube also plans to introduce a new feature, a “Reporting History” dashboard, that will show individual users the status of any videos they’ve flagged for review.