Drake's nude photos expose big moderation problems on X, but ...

8 Feb 2024

Just one week ago, X CEO Linda Yaccarino was on Capitol Hill talking up the company’s plans to bolster children’s safety and to “accelerate our impact” in thwarting harmful content on the platform.

On Wednesday, downloads of X’s mobile app surged to the top of the charts. But the surge coincided with something seemingly at odds with Yaccarino’s pledge: an influx of videos on the site depicting what appeared to be the rapper Drake masturbating (Drake has not verified the authenticity of the footage).

The Drake video emerged not long after X was inundated with posts featuring deepfake pornography of the singer Taylor Swift. While X’s policies allow users to share nudity behind a sensitive-content warning, the company draws the line at nonconsensual nudity, as in the Drake and Swift cases.

The flood of celebrity smut highlights the social media company’s struggle to police its platform even as it embarks on an ambitious restructuring of its content moderation efforts, one that includes a new trust and safety center in Austin, Texas.

As Fortune exclusively reported this week, X has hired roughly a dozen trust and safety “agents” in Austin as part of a plan that initially envisioned building a team of 500 in-house content moderators in San Francisco but appears to have been rolled back to a staff of 100 in the more affordable Lone Star State.

X’s move to in-house moderators is a big change from the industry’s typical practice of hiring low-paid contractors for the job. But many are questioning whether the X team possesses the genuine commitment and capability to moderate the millions of users running amok on the platform. Will X’s 100-person team of in-house moderators prove effective, or is it merely a superficial gesture aimed at enticing skittish advertisers back to the platform?

“100 people in Austin would be one tiny node in what needs to be a global content moderation network,” former Twitter trust and safety council member Anne Collier told Fortune. “100 people in Austin, I wish them luck.”

According to a source familiar with trust and safety at X, “the number of humans at computers matters less, in some ways, than having clear policies rooted in proven harm reduction strategies, and the tools and systems necessary to implement those policies at scale—both of which have been dismantled since late 2022.”

What X desperately needs, another source told Fortune, is bigger strides in artificial intelligence. An AI system “can tell you in about roughly three seconds for each of those tweets, whether they’re in policy or out of policy, and by the way, they’re at the accuracy levels about 98%, whereas with human moderators, no company has better accuracy level than like 65%,” the source said.

If that were the case at X, an AI model specifically designed to detect the nonconsensual pornography rampant on the platform could effectively identify perpetrators, enforce bans on their accounts, and halt the continued dissemination of such policy-breaking material.
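To make that workflow concrete: what the sources describe amounts to a classify-enforce-dedupe loop. Below is a minimal Python sketch of that loop under stated assumptions, not anything X has confirmed running. The `classify_post` function is a stub standing in for an ML model, the two-strike ban threshold is invented for illustration, and exact SHA-256 matching stands in for the perceptual hashing a production system would need to catch re-encoded copies of the same video.

```python
import hashlib
from dataclasses import dataclass, field

# Hashes of media already judged to violate policy; sharing this set
# lets the pipeline block re-uploads of the same file instantly.
KNOWN_VIOLATION_HASHES: set[str] = set()

@dataclass
class Post:
    post_id: str
    author: str
    media_bytes: bytes

@dataclass
class Decision:
    in_policy: bool
    confidence: float
    reason: str = ""

def classify_post(post: Post) -> Decision:
    """Stand-in for the ML policy classifier the source describes.

    A real system would run an image/video model here; this stub only
    flags media whose hash matches a known violation, and passes
    everything else with a placeholder confidence score.
    """
    digest = hashlib.sha256(post.media_bytes).hexdigest()
    if digest in KNOWN_VIOLATION_HASHES:
        return Decision(False, 0.99, "matches known nonconsensual media")
    return Decision(True, 0.90)

@dataclass
class Enforcer:
    strikes: dict[str, int] = field(default_factory=dict)
    banned: set[str] = field(default_factory=set)

    def handle(self, post: Post) -> str:
        if post.author in self.banned:
            return "rejected: author banned"
        decision = classify_post(post)
        if decision.in_policy:
            return "allowed"
        # Remember the media so identical re-uploads are caught at once.
        KNOWN_VIOLATION_HASHES.add(hashlib.sha256(post.media_bytes).hexdigest())
        self.strikes[post.author] = self.strikes.get(post.author, 0) + 1
        if self.strikes[post.author] >= 2:  # illustrative strike threshold
            self.banned.add(post.author)
            return "removed; author banned"
        return "removed"

if __name__ == "__main__":
    enforcer = Enforcer()
    KNOWN_VIOLATION_HASHES.add(hashlib.sha256(b"fake-video-bytes").hexdigest())
    print(enforcer.handle(Post("1", "uploader_a", b"fake-video-bytes")))  # removed
    print(enforcer.handle(Post("2", "uploader_a", b"fake-video-bytes")))  # removed; author banned
    print(enforcer.handle(Post("3", "uploader_a", b"anything-else")))     # rejected: author banned
```

Even this toy version shows why the approach scales differently than human review: once a clip is flagged, every identical re-upload is blocked by a hash lookup rather than another moderation decision, and enforcement against repeat offenders happens automatically.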

Whether Musk is doing any of that is unclear. For now, the billionaire owner of X seems most content to brag about the crowd of users coming to his platform amid a flood of supposedly prohibited content.

“X is now the #1 most downloaded app of any kind!” Musk posted on Wednesday.

Read Fortune’s full report on X’s plan to reinvent content moderation here.

Do you have insight to share? Got a tip? Contact Kylie Robison at [email protected], through secure messaging app Signal at 415-735-6829, or via X DM.

Subscribe to Data Sheet, our daily newsletter about the business of tech. Sign up for free.
