Decades ago, the early internet looked a bit like the old Wild West at the start of the gold rush. Users could post anything they liked, and those logging on via dial-up phone lines accepted the risk of exposure to violence, pornography, hate speech, misinformation, and other harmful content.
Now, Texas and Florida could return the internet to that era with so-called “must-publish” laws requiring websites to publish all legal user-generated content.
So, what happens if we return to those online Wild West days?
“Must-publish” laws would prohibit websites from removing or deprioritizing harmful content, like hate speech posted by their users. For example, Facebook would be unable to remove or hide a post advocating for genocide that was placed in a group memorializing victims of genocide.
The Texas and Florida laws were temporarily blocked by lower courts pending review by the Supreme Court, which recently agreed to hear both cases. Arguments for “must-publish” rules are fueled by claims that websites benefit from all content and engagement — even, and in some cases especially, harmful content.
New research shows that these claims are not only false — negative content like hate speech actively harms websites and diminishes the value of online advertising.
In a first-of-its-kind experiment, researchers used simulated social media feeds to test how hate speech impacted users’ opinions of social media services and the brands advertising on them.
They analyzed the impacts of hate speech on user attitudes toward the social media service hosting the content, the companies advertising alongside the content, and the advertised products themselves.
The findings were clear: Hate speech harms websites and advertisers alike. An average of 40% of respondents reported liking the social media service less after viewing simulated hate speech, while 20% reported that the content made them like the advertiser less. Respondents also indicated that they were less likely to click on ads appearing adjacent to hate speech and more likely to report those ads.
Given that advertiser dollars provide up to 90% of revenue for popular apps and websites, the results suggest that websites have a rational incentive to moderate content on their platforms.
These results are not surprising. Not all engagement is equal. Offensive content tends to push users and consumers out of a purchasing mindset and creates engagement patterns that advertisers actively seek to avoid. Social media services dedicate enormous resources to protecting users and advertisers, and advertisers likewise invest significant resources in brand safety teams. Neither social media services nor their advertisers would expend such resources unless there were a compelling business reason to do so.
Until now, lawmakers seeking to end content moderation in order to combat perceived “censorship” have claimed, without rebuttal, that social media services benefit from all engagement — including negative content. The findings of this new research are evidence that these arguments are unfounded and that “must-publish” laws should be rejected.
Trevor Wagener is CCIA’s Research Center Director and Chief Economist.