A slew of tech giants, including Meta, Microsoft, and Twitter, have joined Google in filing legal briefs with the U.S. Supreme Court, arguing that narrowing a statute that shields social media companies from liability for content posted on their platforms could strip the companies of protections from lawsuits.
Section 230 of the Communications Decency Act generally shields online platforms like Google and Meta from liability for content published by third parties, or their users. The law was passed by Congress in 1996.
In February, the Supreme Court is scheduled to hear a high-profile case, Gonzalez v. Google, which centers on whether online platforms should be held liable for content recommended on their sites.
The case was brought by relatives of Nohemi Gonzalez, a 23-year-old American citizen who was killed when ISIS terrorists opened fire on a Paris bistro in November 2015. Lawyers for the family have argued that Google-owned YouTube was partly to blame for the attack, contending that the platform helped the terrorist group grow by allowing it to share content with users.
Lawyers for Gonzalez also alleged that YouTube’s recommendation algorithms made it easier for ISIS videos to reach potential recruits.
The Supreme Court is now weighing whether to strike down some of the protections provided by Section 230.
Amending Section 230 Could ‘Upend the Internet’
However, tech giants such as Google have argued (pdf) that recommending content is an essential part of how platforms deliver content to users, and that narrowing Section 230 could “upend the internet” and “encourage both wide-ranging suppression of speech and the proliferation of more offensive speech.”
Microsoft, in its filing (pdf) this week, stated that a court decision amending the statute could “strip these digital publishing decisions of long-standing, critical protection from suit—and it would do so in illogical ways that are inconsistent with how algorithms actually work.”
A ruling by the justices “would thereby expose interactive computer services to liability for publishing content to users whenever a plaintiff could craft a theory that sharing the content is somehow harmful. Simply put, the stakes could not be higher,” Microsoft wrote.
Meta, meanwhile, argued in its filing (pdf) that the company has “long had strict policies prohibiting terrorists and terrorist groups, as well as posts that praise or support such individuals and groups, on its services.”
“Indeed, there are no allegations that the terrorists who carried out those attacks even viewed social media—much less that they viewed ISIS videos on YouTube because Google ‘recommended’ them. The absence of any such allegations makes this a singularly inappropriate vehicle to draw such a distinction,” Meta wrote.
The Mark Zuckerberg-led company then went on to argue that “petitioners’ purported recommendation/removal distinction for liability purposes is illusory, as it has no grounding either in the statutory text—which broadly describes the third-party ‘information’ for which an interactive computer service may not be held liable, and nowhere mentions any exception for ‘recommendations’—or in how websites actually function.”
Elsewhere, Elon Musk-owned Twitter argued in its brief (pdf) that while plaintiffs and the United States “try to distinguish YouTube’s use of a sidebar to display video thumbnails as targeted recommendations, there is no principled or administrable line between such displays and the selection of content for display in a user’s information feed based on, for example, what accounts the user follows and where the user lives.”
“Section 230 ensures that websites like Twitter and YouTube can function notwithstanding the unfathomably large amounts of information they make available and the potential liability that could result from doing so,” Twitter wrote in its filing.
Yelp, Reddit, and Craigslist also filed briefs to the court in support of the other tech companies.
Google, in a court filing (pdf) last week, argued that the Supreme Court should “decline to adopt novel and untested theories that risk transforming today’s internet into a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content.”
A string of tech trade groups and advocacy organizations, including the Center for Democracy and Technology, the Computer and Communications Industry Association, and the Chamber of Progress, also filed briefs this week defending the importance of Section 230.
Lower courts previously ruled in Google’s favor in Gonzalez v. Google, but the Supreme Court is scheduled to hear oral arguments in the case on Feb. 21, along with a second case, Twitter Inc. v. Taamneh.
Biden Says Big Tech Needs to Take Responsibility
President Joe Biden has long argued that Section 230 protections allow platforms to spread hate speech, most recently penning an opinion piece in The Wall Street Journal in which he said that tech companies must “take responsibility for the content they spread and the algorithms they use.”
“That’s why I’ve long said we must fundamentally reform Section 230 of the Communications Decency Act, which protects tech companies from legal responsibility for content posted on their sites,” Biden wrote.
Meanwhile, Republican lawmakers have argued that the statute grants immunity to tech giants like Google and Facebook, allowing them to censor conservative voices.
From The Epoch Times