Twitch Platform Used to Spread Child Sexual Content: Report

Short videos depicting sexual content involving children are being spread on Amazon-owned streaming platform Twitch, with some receiving thousands of views.

Twitch has a feature called “Clips” that allows users to capture short videos from livestreams that can then be edited and shared. A recent Bloomberg analysis of 1,100 clips revealed that at least 83 of them involved sexual material related to children. The material was reviewed by the Canadian Centre for Child Protection. Out of the 83 clips, 34 showed children, mostly boys between the ages of 5 and 12, displaying their genitalia.

The exhibitionism was often prompted by encouragement from livestream viewers, and those clips had been watched 2,700 times. Some of the remaining 49 clips involved children being subjected to sexual grooming, with those videos racking up 7,300 views.

When a livestream viewer captures such material, it “becomes an almost permanent record of that sexual abuse,” said Stephen Sauer, director of the center. “There’s a broader victimization that occurs once the initial livestream and grooming incident has happened because of the possibility of further distribution of this material.”

Mr. Sauer argued that social media firms cannot be relied upon to police child abuse content themselves and called for government intervention.

“We’ve been on the sidelines watching the industry do voluntary regulation for 25 years now … We know it’s just not working. We see far too many kids being exploited on these platforms. And we want to see government step in and say, ‘These are the safeguards you have to put in place.’”

In a statement to the outlet, Twitch CEO Dan Clancy said “youth harm, anywhere online, is deeply disturbing.” When alerted by Bloomberg, the company deleted the child sexual content. “Even one instance is too many, and we take this issue extremely seriously,” Mr. Clancy said.

According to Twitch’s “Transparency Report” for the first half of 2023, the company made “significant updates” to how it detects and removes child sexual exploitation material. It is also “addressing evolving forms of harm, such as AI-enabled Child Sexual Abuse Material (CSAM).”

During this period, Twitch issued 13,801 enforcements for violations of the firm’s Youth Safety Policy.

However, Twitch submitted fewer tips to the U.S. National Center for Missing and Exploited Children (NCMEC). Between the second half of 2022 and the first half of 2023, the number of tips fell from 7,600 to 3,300.

Twitch insisted that the decrease “reflects a change in our categorization to ensure we are accurately reporting illegal content. It does not represent a change in our enforcement for content that may endanger youth.”

An October study by the American Academy of Pediatrics (AAP) warned that minors using Twitch are at risk of being manipulated and groomed by sexual predators.

“Twitch represents a clandestine, threatening digital environment where minors are interacting with adult strangers without parental supervision,” the study stated.

“Young users clearly feel a false sense of safety on the platform; a significant proportion were willing to reveal personal information despite having no knowledge of who might be listening.”

The Epoch Times reached out to Twitch for comment.

Internet Child Exploitation Material

The issue of child sexual content proliferation is not limited to Twitch. Other tech firms, including Twitter, TikTok, Google, and Facebook, face similar accusations.

In February last year, Julie Inman Grant, Australia’s eSafety Commissioner, issued legal notices to Google, Twitter, and TikTok, asking them to explain what they are doing to tackle the issue.

“The creation, dissemination, and viewing of online child sexual abuse inflicts incalculable trauma and ruins lives. It is also illegal,” she said at the time. “It is vital that tech companies take all the steps they reasonably can to remove this material from their platforms and services.”

In November, a U.S. federal judge ruled that Big Tech social media firms must face a lawsuit accusing the companies of fueling a “youth mental health crisis” and facilitating the spread of child sexual content.

The companies argued that the First Amendment protected them from liability for the content they published. However, District Judge Yvonne Gonzalez Rogers pointed out that many violations alleged in the lawsuit do “not constitute speech or expression, or publication of same.”

For instance, plaintiffs accused the social media firms of failing to provide effective parental controls, failing to offer users options to limit their time on a platform, not using robust age verification, and not implementing reporting protocols that would allow users to flag CSAM and other such material.

“Addressing these defects would not require that defendants change how or what speech they disseminate,” the judge wrote.

Lawmakers are taking action to counter the problem. In August, Rep. Ann Wagner (R-Mo.) introduced the Child Online Safety Modernization Act, which would require social media firms to collaborate with law enforcement to identify children in images classified as CSAM.

“This bill will make it clear that images and videos of children being raped is not ‘pornography,’ it is sexual abuse of a child. America cannot, and should not, accept a reality where innocent children are sexually exploited for financial gain,” she said.

Meta’s rollout of default end-to-end encryption for personal messages and calls on Messenger and Facebook has also raised concerns among child welfare activists.

In a Dec. 7 press release, the Canadian Centre for Child Protection stated that Meta’s decision means “millions of child sexual abuse and exploitation cases will cease to be reported.”

Since 2020, Meta has forwarded 74.4 million reports of suspected child sexual abuse and exploitation to NCMEC, as legally required, the centre said. These reports have triggered numerous investigations by law enforcement.

Meta’s decision means that law enforcement “will lose its ability to effectively monitor these crimes unfolding across large swaths of their platforms, including Facebook and Instagram.”

“NCMEC, which processes Meta’s child exploitation reports, has estimated these actions could cause as much as 70 percent of all reportable cases on its services to go undetected.”