LinkedIn, the social network best known for job-hunters and recruiters, is grappling with fake accounts, violent content and even child exploitation.

It’s pulling back the curtain for the first time on how it removes content that breaks its rules. Its transparency report, provided first to the Washington Post, makes clear that the popular professional site is dealing with many of the same problems plaguing other social media companies.

LinkedIn took down more than 21 million fake accounts in the first half of the year, and it removed more than 60 million pieces of spam, including fake job postings. It also took down more than 16,000 instances of harassment, 11,000 posts containing obscene or pornographic content, nearly 2,000 posts showing violence or terrorism and 22 occurrences of child exploitation.

“Unfortunately, some people will use technology in ways that it was never intended,” LinkedIn general counsel Blake Lawit said. “So for us, we need to be vigilant and police it and take care of it, which is what we do.”

LinkedIn’s announcement shows how technology companies are heeding Washington’s calls for increased transparency about their decisions on content moderation in the wake of foreign interference in the 2016 election and terror attacks that originated online.

Facebook began publicly reporting similar metrics last year, and this fall, it began reporting them for Instagram as well.

LinkedIn reports only a fraction of the takedowns seen at larger social networks such as Facebook, but the fact that such content appears on the service at all highlights how pervasive harmful material is online.

“Any is too much,” Lawit said. “Part of being responsible, being accountable is providing transparency.”

It can be difficult to compare how companies stack up against one another in their efforts to combat violence, harassment and other harmful content, since the reports’ methodology is inconsistent from company to company.

For instance, Facebook includes some categories that LinkedIn doesn't, such as removals of drug or firearm sales and instances of self-harm.

Twitter and Google do not report the same granular data as Facebook about the content they decide to pull down. They report some broader categories instead: Twitter discloses instances of election interference and government requests for content removal, while Google reports government requests to delete content and instances when it removes information under European privacy law.

Facebook has criticized its tech peers for not being as transparent about these efforts, without directly naming rivals Twitter and Google.

LinkedIn has not been under the same public pressure as Facebook on its content moderation efforts, but given the broader debate over the industry, it began seriously considering going public with its numbers last year, Lawit said.

“We’re at a point now where we recognize that we have a responsibility,” he said. “Part of that is to provide more transparency, and I think that’s what led to the discussion and the action.”
