AI child pornography is already here and it's devastating

Sexual exploitation of children over the internet is a major problem that's getting worse courtesy of artificial intelligence, which can aid the production of child sexual abuse material. Meanwhile, the tools for dealing with an influx of AI-generated child pornography are already inadequate.

That’s according to a new report by the Stanford Internet Observatory Cyber Policy Center and experts interviewed by the Deseret News, who all conclude the problem isn’t a prediction of future harm, but something that exists and is poised to explode unless effective countersteps are taken.

“It’s no longer an imminent threat. This is something that is happening,” Tori Rousay, corporate advocacy program manager and analyst at the National Center on Sexual Exploitation, told Deseret News.

Generative AI can be used to create sexually exploitive pictures of children. The National Center for Missing and Exploited Children said it has received more than 82 million reports of child sex abuse material online and more than 19,000 victims have been identified by law enforcement. It’s not clear how many of the images are AI-manipulated from photos of real child victims, since the technology can be used to make images depicting children performing sex acts or being abused. The problem is so serious that at the end of March the FBI issued a public service announcement to remind would-be perpetrators that even images of children created with generative AI are illegal and will be prosecuted.

The CyberTipline, managed by the National Center for Missing and Exploited Children under congressional authorization, takes tens of millions of tips a year from digital platforms like Facebook and Snapchat, then forwards them to law enforcement, where fewer than 1 in 10 result in prosecution for various reasons, as The Washington Post recently reported.

Sometimes, though, the tips help bust up networks involved in sharing child sexual abuse material, which is referred to simply as CSAM by law enforcement, child advocates and others.

The Stanford report calls the CyberTipline "enormously valuable," leading to rescuing children and charging offenders with crimes. But it calls law enforcement "constrained" when it comes to prioritizing the reports so they can be investigated. There are numerous challenges. Reports vary in quality and in the information provided. The National Center for Missing and Exploited Children has struggled to update its technology in ways that help law enforcement triage tips. And the center's hands are somewhat tied when working with platforms to find child pornographic images. In 2016, a federal appeals court ruled the center can accept offered reports, but "may not tell platforms what to look for or report, as that risks turning them into government agents, too, converting what once were voluntary private searches into warrantless government searches" the courts can toss out if someone is charged.

So platforms decide whether to police themselves to prevent sexual exploitation of children. If they do, the Stanford study further notes that “another federal appeals court held in 2021 that the government must get a warrant before opening a reported file unless the platform viewed that file before submitting the report.”

AI is making it worse.

“If those limitations aren’t addressed soon, the authors warn, the system could become unworkable as the latest AI image generators unleash a deluge of sexual imagery of virtual children that is increasingly ‘indistinguishable from real photos of children,’” the Post reported.

“These cracks are going to become chasms in a world in which AI is generating brand-new CSAM,” Alex Stamos, a Stanford University cybersecurity expert who co-wrote the report, told the Post. He’s even more worried about potential for AI child sex abuse material to “bury the actual sex abuse content and divert resources from children who need rescued.”

Rousay bristles at the idea that because the images are generated by AI, there’s no harm since the kids pictured aren’t real. For one thing, “there’s no way to 100% prove that there’s not actual imagery of abuse of anybody in that AI generator” without having the actual data that trained the AI, she said. And by law, if you can’t tell the difference between the child in a generated image and actual abuse imagery, it’s prosecutable. “It’s still considered to be CSAM because it looks just like a child.”

Rousay isn't the only one who sees "it's not real" indifference as misplaced.

Why child sexual abuse material is always dangerous

“Artificially generated pornographic images are harmful for many reasons, not least the unauthorized use of images of real people to create such ‘fake’ images. AI isn’t simply ‘made up,’ but is rather the technological curating and repurposing of large datasets of images and text — many images shared nonconsensually. So the distinction between ‘real’ and ‘fake’ is a false one, and difficult to decipher,” said Monica J. Casper, sociology professor, special assistant on gender-based violence to the president of San Diego State University and chair of the school’s Blue Ribbon Task Force on Gender-Based Violence.

She told Deseret News, “Beyond this issue is the worldwide problem of child sexual abuse, and the ways that any online images can perpetuate violence and abuse. Children can never consent to sexual activity, though laws vary nationally and internationally, with some setting the age of consent anywhere from 14 to 18. Proliferating explicit and nonconsensual images will make it even harder for abusers to be found and prosecuted.”

The problem is global and so is its recognition. In February, the United Nations special rapporteur on sale and sexual exploitation of children, Mama Fatima Singhateh, issued a statement that read in part: “The boom in generative AI and eXtended Reality is constantly evolving and facilitating the harmful production and distribution of child sexual abuse and exploitation in the digital dimension, with new exploitative activities such as the deployment of end-to-end encryption without built-in safety mechanisms, computer-generated imagery including deepfakes and deepnudes, and on-demand live streaming and eXtended Reality of child sexual abuse and exploitation material.”

She said the volume of child sexual abuse material reported since 2019 has increased 87%, based on WeProtect Global Alliance’s Global Threat Assessment 2023.

Singhateh called for a “core multilateral instrument dedicated exclusively to eradicating child sexual abuse and exploitation online, addressing the complexity of these phenomena and taking a step forward to protect children in the digital dimension.”

AI-generated images of child sexual abuse are harmful on multiple levels and help normalize the sexualization of minors, Rousay said.

Nor does a deepfake image mean a real child won’t be victimized. “What we do know from CSAM offenders is they have a propensity to hands-on abuse. They are more likely to be hands-on offenders” who harm children, she said.

That’s a worry many experts share. “We often talk about addiction to harmful and self-destructive habits beginning with some sort of a ‘gateway.’ To me, value assertion aside, enabling AI to exploit children is complicit in providing a gateway to a devastatingly harmful addiction,” said Salt Lake City area therapist and mental health consultant Jenny Howe. “Why do we have limits and rules on substances? To help provide a boundary which in turn protects vulnerable people. This would open up an avenue to harm, not detract from child exploitation,” she said of AI-generated images of children being sexually exploited.

Struggling to tame AI

Rousay said everyone concerned about child sexual abuse and exploitation, including law enforcement, child advocates and lawmakers, is trying to figure out how to handle the new threat created by AI's recent, dramatic proliferation. Experts struggle with terminology for AI-generated images and with how the issue should be framed. Child sexual abuse material creation also takes many forms, including abuse images of real children and images made by feeding photos of real child abuse into a generator so that the resulting child is not identifiable. Offenders sometimes turn innocuous pictures of children into exploitative images by combining them with photos of adults committing sex acts, resulting in what Rousay calls "photorealistic CSAM."

“It doesn’t have to be actual images of abuse, but you can still create a child that is in explicit situations or does not have any clothes on based on what is in the AI generator,” Rousay said.

Differences in state laws also create challenges. “It’s not a mess,” said Rousay, “but everyone’s trying to figure this out. Trying their best. It’s very, very new.”

AI further muddies the issue of age. It's obvious when an image portrays a 5-year-old. A 16- or 17-year-old is a minor, too, and the sexual exploitation is just as illegal, but it's easier to claim the portrayal is of an adult, said Rousay.

The hope is we can find ways to prosecute, she added. While technology has evolved, bringing increased access to child sexual abuse material, experts believe that platforms and others have an obligation to step up and help combat what obviously amounts to child sexual exploitation enabled by technology.

Will Congress act?

That AI generates child sexual abuse material images is well known. In September, 50 state-level officials sent a letter to Congress asking lawmakers to act immediately to tackle AI’s role in child sexual exploitation.

Congress is pondering what to do. Among the legislative actions being considered:

  • The Kids Online Safety Act, proposed in 2022, would require digital platforms to “exercise reasonable care” to protect children, including reining in features that could make depression, bullying, sexual exploitation and harassment worse.
  • Altering Section 230 liability protection for online platforms under the 1996 Communications Decency Act. The act says digital platforms can’t be sued as publishers of content. As PBS reported, “Politicians on both sides of the aisle have argued, for different reasons, that Twitter, Facebook and other social media platforms have abused that protection and should lose their immunity — or at least have to earn it by satisfying requirements set by the government.”
  • The REPORT Act, which focuses on child sexual abuse material, passed the Senate by unanimous consent, but the House has not acted. It amends federal provisions regarding reporting of suspected child sexual exploitation and abuse offenses. REPORT stands for Revising Existing Procedures on Reporting via Technology.

The Stanford report has its own call to action for Congress, saying funding for the CyberTipline should be increased and rules clarified so tech companies can report child sexual abuse material without bringing liability on themselves. It also says laws are needed to deal with AI-generated child sexual abuse material. Meanwhile, the report adds that tech companies must commit resources to finding and reporting child sexual abuse material. The report recommends the National Center for Missing and Exploited Children invest in better technology, as well.

The final ask is that law enforcement agencies train staff to properly investigate child sexual abuse material reports.

What others can do

Sexual exploitation using technology reaches into different communities and age groups.

Rousay cites the example of middle school and high school boys downloading pictures of female classmates from social media and using AI to strip them of clothes "as a joke." But it's not funny, and it is abuse that can be prosecuted, she said. "The girls are still victimized and their lives turned upside down. It's very traumatizing and impactful."

The apps used were designed to do other things, but were readily available from an app store. Such apps should be restricted, she said.

Parents and other adults need to help children understand how harmful sexual exploitation and abuse is, including that generated by AI, according to Rousay. “That would be really beneficial because I think there is some kind of dissonance, like this is not real because it’s not an actual child. But you don’t know the abuse that went into the generator; you can’t tell me that those images were taken legally with consent and legal age.”

She said talking about what's known about child sexual abuse material offenders would help, too, including their tendency toward hands-on offenses.

Child sexual abuse material is something victims live with forever, Rousay said. “We know that as adults the disclosure rate is poor because there is a stigma, I guess, of talking to people about your experience.” Beyond that, pornographic images can be shared years after the fact “and it’s really hard to get it taken down. Plus, you’re basically asking the victim to go and find their image and get it from the platform. It’s horrible.”

Images may linger online forever.

