To Keep Kids Safe, the Internet Needs a Bouncer at the Door

More than 30 state attorneys general are suing Meta, the parent company of Facebook and Instagram, for luring kids younger than 13 onto the social media platforms and collecting their data—in violation of federal children’s privacy law.

The states argue Meta should use more robust age verification methods. But the platforms and their allies claim that such requests threaten free expression and personal privacy.

This is a self-interested argument. It is grounded not in fact, but in a desire to attract as many users as possible.

Age verification measures don’t violate anyone’s right to privacy. They’re hardly more invasive than the widespread data collection and tracking technologies that online platforms already utilize. They do, however, protect children from sexual exploitation.

The internet is often likened to a town square, where all members of society can converse and interact. But in reality, it’s more like an enormous masquerade ball, where anyone can assume a false identity.

That poses a special risk to children, who are frequently victims of sexual abuse online. One increasingly common form is “sextortion”—where an adult pretends to be a child or teenager to solicit explicit photos and blackmail their victims.

Sextortion usually happens on social media or gaming platforms, such as Facebook or Discord. An adult posing as a child or teen befriends a minor online and gains their trust. They soon begin requesting sexually explicit photos, often by sending such images themselves. Once the victim reciprocates, the abuser viciously blackmails them, demanding money or more sexually explicit material.

Rates of online sextortion have skyrocketed in recent years. In 2022, 80,000 incidents were reported to the National Center for Missing and Exploited Children’s CyberTipline, a centralized reporting system for the sexual exploitation of children. That represents an 82 percent increase over the year prior. According to experts, actual sextortion rates are much higher than reported, because many victims are blackmailed into staying silent.

Sextortion can also lead to child sexual abuse material proliferating across the internet. The CyberTipline received nearly 32 million individual reports of such material from online platforms in 2022 alone. That’s 10 million more than it received three years earlier, and, like the sextortion figures, likely just the tip of the iceberg.

A 12-year-old boy looks at an iPhone screen on October 27, 2023 in Swansea, Wales. Matt Cardy/Getty Images

This material is far from confined to the “dark web” or shady chat rooms. In 2022, 95 percent of child sexual abuse material reports from electronic service providers came from six of the most popular online platforms: Facebook, Google, Instagram, Snapchat, TikTok, and WhatsApp.

Stricter age verification measures would help protect kids from this abuse. These can range from requirements that users upload government-issued identification, to software that ascertains a person’s age by analyzing their facial features. Successful implementation would make it much harder for predators to pose as minors—and ensure that if they still manage to do so, social media companies will have information on record that can help authorities identify them.

Multiple U.S. states are fighting to protect children online using these methods. Louisiana, Utah, Arkansas, and Mississippi have mandated age verification on porn sites, for example.

Some countries are going further, imposing age verification requirements on social media platforms. French lawmakers, alarmed by a growing body of research showing that social media harms children’s mental health, recently barred platforms from letting minors under 15 register without their parents’ permission. Platforms that fail to put appropriate verification safeguards in place face hefty fines.

Critics contend that age verification violates the right to privacy. But “online privacy,” as they describe it, hasn’t existed for years. Internet service providers already collect and sell users’ data. There are hundreds of people-finder websites that serve as digital white pages, compiling information such as addresses, phone numbers, and birth dates on millions of people. These sites can sell that personal information without your consent.

Meanwhile, third-party web tracking platforms collect information about every internet user on a wide range of sites. Google alone tracks nearly 40 percent of all web traffic, and Facebook tracks more than 15 percent. And 84 percent of internet users have a unique “browser fingerprint” that can be used to identify them. That’s to say nothing of geolocation software on smartphones, which constantly monitors users’ every move.

At the end of the day, this isn’t about privacy. It’s about protecting vulnerable kids—and prioritizing the actual documented harms that so many children experience online daily.

Asking adults to quickly and simply verify their identity doesn’t violate civil liberties—but it can shield children from people who want to harm them.

Teresa Huizar is CEO of National Children’s Alliance (nationalchildrensalliance.org), America’s largest network of care centers for child abuse victims.

The views expressed in this article are the writer’s own.