“We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts and we are committed to advancing our work and sharing our progress,” wrote Facebook. “We are committed to being transparent about our efforts to combat hate … To date, the data we’ve provided about our efforts to combat terrorism has addressed our efforts against Al Qaeda, ISIS, and their affiliates.”

Facebook CEO Mark Zuckerberg often asserts that AI, such as the company's recently open-sourced image and video matching algorithms, will substantially cut down on the abuse perpetrated by millions of ill-meaning Facebook users. One concrete example already in production is a "nearest neighbor" algorithm that spots illicit photos 8.5 times faster than the previous version. It complements a system that learns a deep graph embedding of every node in Facebook's Graph (the collection of data, stories, ads, and photos on the network) to surface abusive accounts and pages that might be related to one another.
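Conceptually, that kind of nearest-neighbor matching boils down to representing each photo (or account) as an embedding vector and searching for its closest matches among content already labeled as violating. The sketch below is illustrative only, built on Facebook's open-source FAISS similarity-search library; the embeddings, counts, and distance threshold are invented for the example and are not Facebook's production pipeline.

```python
import numpy as np
import faiss  # Facebook's open-source similarity search library

# Hypothetical 128-dimensional embeddings of photos already labeled illicit.
dim = 128
rng = np.random.default_rng(0)
known_bad = rng.standard_normal((10_000, dim)).astype("float32")

# Build an exact L2 nearest-neighbor index over the known-bad embeddings.
index = faiss.IndexFlatL2(dim)
index.add(known_bad)

# Embeddings of newly uploaded photos to screen (also hypothetical).
uploads = rng.standard_normal((5, dim)).astype("float32")

# For each upload, find the single closest known-bad embedding.
distances, neighbors = index.search(uploads, 1)

# Flag uploads whose nearest known-bad neighbor falls within an assumed threshold.
THRESHOLD = 50.0  # illustrative value, not a real tuning choice
for i, (dist, idx) in enumerate(zip(distances[:, 0], neighbors[:, 0])):
    if dist < THRESHOLD:
        print(f"upload {i}: flag for review (closest match {idx}, distance {dist:.1f})")
```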

In Facebook’s Community Standards Enforcement Report published in May, the company reported that AI and machine learning helped cut down on abusive posts in six of the nine content categories. Concretely, Facebook said it proactively detected 96.8% of the content it took action on before a human spotted it, compared with 96.2% in Q4 2018. For hate speech, it said its systems now identify 65% of the more than four million hate speech posts removed from Facebook each quarter, up from 24% just over a year ago and 59% in Q4 2018.
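That proactive rate is simply the share of actioned content that Facebook's systems flagged before any user reported it. A minimal illustration of the arithmetic, using made-up counts rather than figures from the report:

```python
# Illustrative only: counts are invented, not figures from Facebook's report.
actioned_total = 1_000_000        # pieces of content acted on in the period
flagged_before_report = 968_000   # found by automated systems before any user report

proactive_rate = flagged_before_report / actioned_total
print(f"proactive detection rate: {proactive_rate:.1%}")  # -> 96.8%
```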

Those and other algorithmic improvements contributed to a decrease in the overall amount of illicit content viewed on Facebook, according to the company. It estimated in the report that for every 10,000 times people viewed content on its network, only 11 to 14 views contained adult nudity and sexual activity, while 25 contained violence. With respect to terrorism, child nudity, and sexual exploitation, those numbers were far lower: Facebook said that in Q1 2019, for every 10,000 content views on the social network, fewer than three contained material that violated each of those policies.
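The "views per 10,000" figure is a prevalence estimate: out of every 10,000 content views in a sample, how many included violating material. A short sketch of that calculation with hypothetical sampled view counts, not numbers from the report:

```python
# Hypothetical numbers for illustration; not from Facebook's report.
total_views_sampled = 5_000_000   # content views in the measurement sample
violating_views = 6_250           # sampled views that contained violating content

prevalence_per_10k = violating_views / total_views_sampled * 10_000
print(f"estimated prevalence: {prevalence_per_10k:.1f} views per 10,000")  # -> 12.5
```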