“An offender can take a picture of your child from a soccer game and convert it into a child (pornographic) video.” — Corrine St. Thomas Stowers, former Supervising Intelligence Analyst
The term “deepfake” was coined in 2017 by an internet user who helped spark a particularly odious development in virtual technology. By taking open-source images of celebrities and inserting them into explicit images and video, early adopters launched a new and lucrative industry.
Since then, with the rapid growth of artificial intelligence technology, or AI, the problem has only gotten worse. According to Italian technology company Deeptrace, pornography accounted for 96 percent of deepfake videos online in 2019.
Sadly, children have become prime victims of abuse, enticement, exploitation, coercion and so-called sextortion.
Getting the word out
April is National Child Abuse Prevention Month and in recognition, SafeOC, a local site that focuses on safety issues, is working with experts to raise awareness about the dangers of online child exploitation.
SafeOC highlighted the issue in its April newsletter in collaboration with Corrine St. Thomas Stowers, a professor and former supervising intelligence analyst. SafeOC will continue to provide updates to spread awareness and encourage vigilance.
“Child sex abuse material has gone up exponentially and horrifically,” St. Thomas Stowers said.
And that was before advances in AI became widely available.
St. Thomas Stowers, formerly a cyber analyst in the Exploited Children’s Division with the National Center for Missing and Exploited Children (NCMEC), has seen the explosion firsthand. The NCMEC, which records tips of potential online child exploitation, reports that the group’s cyber tipline hit a record 36.2 million complaints last year.
When St. Thomas Stowers started, she said, there were about 100 tips a day.
“Now there are 1,000 or more,” she said. “You can’t even make a dent in it.”
Donna Rice Hughes, who founded Enough Is Enough, a leading nonprofit in the fight against internet child exploitation, says web sharing, “exponentially grew child sexual abuse material, and with AI you just blew the top off Pandora’s box. And you can’t put it back in once it’s out in cyberspace.”
What is a Deepfake?
Deepfakes use digital technology and AI tools to create text, images and video that can mimic something “real,” or, as is increasingly the case, produce something entirely new and often much worse.
In 2019, undressing or nudify apps were introduced to create naked images and videos, usually without consent, from an otherwise benign photograph of a person.
Time magazine reported that in 2023, the social network analysis company Graphika found “24 million people visited undressing websites.”
Nowadays, St. Thomas Stowers says, “an offender can take a picture of your child from a soccer game and convert it into child sexual abuse material (including images and videos).”
That content is then shared and/or used to coerce, extort, blackmail or exploit a child, potentially causing irreparable psychological and physical harm. In certain cases, child abusers have used bogus images of their targets to force them into filming their own abuse, beginning a cycle that can last for years.
The FBI states it “has seen a huge increase in the number of cases involving children and teens being threatened and coerced into sending explicit images online — a crime called sextortion.”
This form of extortion is one of the fastest growing sectors of cybercrime, with boys aged 13 to 17 a particularly vulnerable group, according to a report by We Protect Global Alliance, an international group battling online exploitation.
According to the FBI and cybersecurity experts, since 2021, at least 30 suicides of teen boys have been linked to sextortion with threats of revealing “sensitive material.” Given the extreme shame victims often feel, that figure is likely a small fraction of the true total.
Making children perpetrators
Abuse is not just an adult-on-juvenile problem, either. Tragically, exploitation has grown to include child-on-child victimization.
In New Jersey, a 2023 case made national headlines when a teenager used a commercially available undressing site to create more than 30 nude images of girls at his school before sharing the manipulated images in various group chats.
Similar instances have been reported in Washington, Texas, and California. Late last year, two Pennsylvania students were charged with 59 counts of sexual abuse of children for creating and distributing deepfake images of their female classmates.
A survey by the Center for Democracy & Technology found that 15 percent of students were aware of at least one “Deepfake that depicts an individual associated with their school in a sexually explicit or intimate manner.”
According to THORN, another advocate for child safety online, 1 in 17 teens say they have been a target of deepfakes.
New versions of generative AI can invent images, which has led to vile scenarios involving infants and toddlers in horrific depictions, according to St. Thomas Stowers. Consequently, the line between what’s real and what isn’t can become all but indistinguishable. This in turn has led to legal and ethical debates about how to treat images that aren’t “real.”
According to St. Thomas Stowers and Rice Hughes, these arguments sidestep an essential connection between the virtual and the real: those who create virtual abuse are likely to later commit “hands-on abuse.”
“We find these offenses go hand in hand,” St. Thomas Stowers said.
What to do
A bipartisan bill, S.4569, or the Take It Down Act, sponsored by Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) was passed by the Senate in December and has reached the House.
Behind the Badge reported on the bill in December.
Among its provisions, the bill pertains to “nonconsensual intimate visual depictions,” including both authentic photos shared without consent and forgeries produced by artificial intelligence or other technological means.
The bill would require platforms to remove such content and would punish those who publish intimate images of either adults or minors with a fine or up to three years in prison.
The President has said he will sign the bill, although cuts to leadership and staff at the Federal Trade Commission, which would enforce the act, could reduce the bill’s efficacy.
As a result, states need to remain at the forefront of battling deepfakes and child exploitation.
In November, the National Conference of State Legislatures reported, “lawmakers in at least 17 states enacted laws that specifically refer to online impersonation done with an intent to intimidate, bully, threaten or harass a person through social media sites, email or other electronic or online communications.”
California went further with Assembly Bill 1831, which aims to close a loophole that excludes AI-generated synthetic material that doesn’t depict actual people.
Assemblyman Marc Berman (D-Palo Alto), co-author of the bill, notes that law enforcement has difficulty prosecuting individuals due to the current state of the law.
“People say, ‘Well, there’s no victim,’ because the image is not of an actual child, but I would argue that all the thousands of children whose images were used, scraped off the internet, scraped off of school websites, scraped off of public social media profiles. … All of those children are abused when their images are used to create this terrible content,” Berman stated. “It’s so important that we get ahead of this issue and make it as clear as day that this content is illegal to create, possess, distribute, or sell.”
Meanwhile it is vital to equip law enforcement to do its job, experts say.
“There’s no putting the genie back in the bottle,” Rice Hughes said. “But you can aggressively fund and give police the power to enforce existing law.”
Laws aren’t enough
When it comes to protecting our children — job No. 1 of a responsible caregiver — parents simply must take control of their devices and the family’s communication habits. The consequences of failing to stay vigilant and educated are too terrifying to ignore.
To combat online dangers, websites like SafeOC have released guidelines for parents.
The first thing parents should do is turn on parental controls on all devices to monitor and restrict what their children do online.
New devices typically include instructions for setting device controls. In addition, free online guides and tutorials can take you through steps to set up and customize controls.
Adults should also investigate parental control apps and services, which can help monitor activity, filter web content, track location, log keystrokes, record screens and more. Independent parental control app reviews are available online.
Other reviews help parents rate the offerings of different apps on specific areas of interest and concern.
It is vital to talk with your children, because for every method of prevention a parent can dream up, a child or teen can usually find multiple workarounds. Experts say the best approach is an honest discussion, complete with expectations and consequences.
A former Orange County School Resource Officer developed Cybersafetycop, a website with helpful information, tips, and contacts. Parents can also consider an internet and mobile device contract to be printed and signed that sets limits and expectations.
Dozens of internet safety organizations have sites that parents can peruse and share with their children. If it seems time consuming and confusing, consider the stakes.
None of it is easy, and nothing could be more worthwhile.
As St. Thomas Stowers said, the amount of exploitation and abuse “is staggering.”
“The sad reality is victimization is happening and juveniles are still targeted,” she said. “And the consequences can be psychologically devastating.”
Sign up for the ReadyOC newsletter and the SafeOC newsletter to receive local updates, public safety alerts, and tips.