Facebook disabled 583 million fake accounts in first three months of 2018

Gwen Vasquez
May 16, 2018

Facebook said it took down 21 million pieces of content depicting adult nudity and sexual activity, 96 percent of which were first flagged by the company's own tools.

Facebook is struggling to catch much of the hateful content posted on its platform because the computer algorithms it uses to track it down still require human assistance to judge context, the company said Tuesday.

Facebook took down approximately 3.5 million pieces of violent content in Q1 2018, 86 percent of which was automatically detected by the social network's technology. A Bloomberg report last week showed that while Facebook says it has become effective at taking down terrorist content from al-Qaeda and the Islamic State, recruitment posts for other US-designated terrorist groups are easily found on the site.

However, the company said that most of the 583 million fake accounts were disabled "within minutes of registration" and that it prevents "millions of fake accounts" from registering on a daily basis. It attributed the rise to improvements in detection.

In Facebook's first quarterly Community Standards Enforcement Report, the company said most of its moderation activity was directed against fake accounts and spam posts, with 837 million spam posts and 583 million fake accounts acted upon. During Q1, Facebook found and flagged 85.6% of the content it took action on before users reported it, up from 71.6% in Q4. For every 10,000 content views, an estimated 22 to 27 contained graphic violence and 7 to 9 contained nudity and sexual activity that violated the rules, the company said. Facebook said the rate at which it can do this is high for some violations, meaning it finds and flags most such content before users do.

Artificial intelligence technology now does much of that work, though Facebook said the figures tend to fluctuate from quarter to quarter. The company's post said it found nearly all of that content before anyone had reported it, and that removing fake accounts is key to combating it.


But it declined to say which countries see more of the offending content or which category of users.

Facebook's vice president of product management, Guy Rosen, said that the company's systems are still in development for some of the content checks.

The response to extreme content on Facebook is particularly important given that it has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda.

"It may take a human to understand and accurately interpret nuances like... self-referential comments or sarcasm", the report said, noting that Facebook aims to "protect and respect both expression and personal safety". It says it found and flagged almost 100% of spam content in both Q1 and Q4. Facebook has more than 2 billion monthly active users, suggesting there are still millions of fake accounts on its service at any given time.

Rosen also said that Facebook blocks millions of fake account attempts every day from even attempting to register, but did not specify how many.
