As the Supreme Court considers whether Big Tech should be liable for harmful content, two separate cases heard this week are also shedding light on brand safety, the pace of innovation and how the rulings could shape the future of digital advertising.
This week, the nation’s highest court heard separate oral arguments about Google and Twitter, and whether social networks should be held responsible for terrorist content that families of victims claim led to the deaths of their relatives.
The cases relate to whether tech companies should be held liable for dangerous content, and both hinge on aspects of Section 230 of the Communications Decency Act, a 1996 law that provides protections for online platforms and the third-party content on them. And although the focus of each case is quite narrow, experts say the stakes are much higher, with a potentially far broader impact on the future of free speech, content moderation and how platforms sell advertising.
“It could transform both the ways that ads are hosted and recommended on the algorithms and also the way non-advertising content is recommended,” said Jeffrey Rosen, CEO of the National Constitution Center, a nonpartisan nonprofit focused on constitutional education.
Numerous tech companies — including Twitter, Reddit, Craigslist, Yelp, Meta, Microsoft and Match Group — along with various trade groups and advocacy organizations have filed amicus briefs with the Supreme Court. The briefs cover a range of topics, including how social platforms moderate their content, the role of advertising and the potential implications for users and companies. Trade groups such as the Interactive Advertising Bureau suggest that weakening Section 230 protections could also hurt small businesses and expose them to a wave of new lawsuits.
“The Google case is simply the tip of the iceberg as it relates to legal jurisdiction and legislation surrounding Section 230,” said Marc Beckman, CEO of ad agency DMA United. “The floodgates are opening.”
Some tech companies’ briefs address their ongoing efforts to mitigate harmful content. For example, Meta’s brief points to its current efforts to remove terrorist-related accounts and posts from Facebook and Instagram — policies and actions the company says are important for retaining both users and advertisers.
The counter-argument holds that platforms like Google’s YouTube shouldn’t be protected like traditional publishers. A brief filed by Common Sense Media and Facebook whistleblower Frances Haugen claims that Google’s features are “particularly insidious” and could more easily allow dangerous groups to interact with people through accounts and content.
“Google knowingly provides ISIS with use of its algorithms, and other unique computer architecture, computer servers, storage, and communication equipment, to facilitate ISIS’s ability to reach and engage audiences it otherwise could not reach as effectively,” according to the brief filed by Common Sense Media and Haugen. “Advertisers pay Google to place targeted ads on videos, and Google has approved ISIS videos for ‘monetization’ through the tech firm’s placement of ads in those specific videos.”
Google did not respond to Digiday’s requests for comment about the claims.
There’s also a danger in combining the two cases, said Erik Stallman, a former Google lawyer who is now a law professor at the University of California, Berkeley. One of the things he’s most concerned about: a potentially “muddy ruling” in the Google case that chips away at Section 230 while making it unclear what a platform can or can’t do.
“The thing that’s under-appreciated is how much the recommendation algorithms are connected to also keeping certain types of harmful content either off platforms or less likely to be disseminated on those platforms,” Stallman said.
The Supreme Court’s decision could also potentially impact rules related to artificial intelligence — and in particular generative AI. On Tuesday, Justice Neil Gorsuch pointed out that search engines might be protected when it comes to content. However, whether those protections extend further is still unclear.
“I mean, artificial intelligence generates poetry, it generates polemics today,” Gorsuch said during oral arguments in the Google case. “That would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”
The full impact won’t be known until the Supreme Court issues a ruling, which is expected before July. But some observers say the effect on the ad industry might be more muted. Brian Wieser, a longtime advertising analyst, believes the decision might only marginally change where dollars flow. However, Wieser said it could make platforms that improve their brand safety measures more attractive.
“If you lop off a quarter of YouTube’s inventory, I don’t think the math behind the inventory changes,” said Wieser, now running his own consultancy, Madison & Wall.
While it’s still unclear what the Supreme Court might decide, some legal experts pointed out that the justices on both sides of the political aisle seem to understand the weight of their decision.
No matter what the Supreme Court decides, others also see a need for more transparency. And rather than trying to regulate everything at once, some say it would be better to focus on more specific issues first. Sahar Massachi, cofounder and executive director of the tech think tank The Integrity Institute, likened attempts to regulate social media to preventing car crashes. Massachi, who spent several years as an engineer on Facebook’s civic integrity teams, said it makes more sense to first understand where the problems exist.
“You think you’re regulating cars, but what you’re actually doing is regulating roads and bridges and a transportation network,” Massachi said. “Can you talk about the differential drive shafts first? Understand what those are, and work your way upwards from there. If the problem in cars is that Ford Pintos explode, talk about designing safety first before you sink your teeth in this whole transportation network, because you’ve got to work up to it.”