A booming industry of AI age scanners, aimed at children’s faces

In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child’s face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children’s school as part of a campaign called “Share to Protect.”

The company, Yoti, had developed an AI tool that could estimate a person’s age by analyzing their facial patterns and contours. But to make it more accurate — and to bolster the company’s clientele of government agencies and tech firms — its developers needed more photos of kids.

Riaan van der Bergh recalled dutifully scanning his daughter and son, ages 11 and 10, in their suburban Johannesburg living room one afternoon, telling them the technology could help keep kids safe on a perilous web. But other parents, he said, hated the idea with an “extreme passionate fear.”

The skepticism was “overwhelming,” he added, “especially from the moms, who said, ‘No way, it’s my children.’”

With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as “age assurance” has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web.

The companies say their age-check tools could give parents a better sense of control and peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children — and everyone else — to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused.

Companies such as Yoti, Incode and VerifyMyAge increasingly work as digital gatekeepers, asking users to record a live “video selfie” on their phone or webcam, often while holding up a government ID, so the AI can assess whether they’re old enough to enter.

Some of the biggest social networks, such as Facebook, Instagram and TikTok, now use age-check tools to detect and restrict their youngest users. OpenAI uses them for its ChatGPT chatbot; so, too, do a number of online gaming and adult-content sites, including Pornhub and OnlyFans.

The systems’ prominence has surged alongside concerns that the internet, and particularly social media, could be damaging America’s youth — a crisis the U.S. Surgeon General deemed so dire he proposed cigarette-style warning labels for the platforms he said threatened “significant mental health harms.”

Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine: Employees at Incode, a San Francisco firm that runs more than 100 million verifications a year, now internally track state bills and contact local officials to, as senior director of strategy Fernanda Sottil said, “understand where … our tech fits in.”

But while the systems are promoted for safeguarding kids, they can only work by inspecting everyone — surveying faces, driver’s licenses and other sensitive data in vast quantities. Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.’”

Yoti and Incode have said they take privacy seriously, including by deleting images after a person’s face is analyzed. But beyond privacy concerns, critics worry that adults could be wrongfully blocked from websites for failing an age check because of a disability or technical snag, like not having an ID. Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.

The tools’ supporters acknowledge that age checks could fuel a profound expansion in government oversight of online life. But critics argue that lawmakers hoping to shield kids could instead expose users of all ages to terrible risk, forcing them to hand over intimate details of their lives to companies that are largely unproven, unregulated and unknown.

“To protect them, you have to know who the children are,” said Brenda Leong, an attorney at Luminos Law who specializes in biometrics. “But one of the things you want to protect about them is their privacy. And the more you learn about them, the more their privacy is at risk.”

The baby test

More than two decades after the Children’s Online Privacy Protection Act first mandated that internet companies treat children differently from adults, most apps and websites still assume that all of their users, when asked their age, will tell the truth. Any 5-year-old able to select a fake birthday or tap “yes” on a box can see practically the entire web.

A small crew of tech firms in recent years began proposing a more rigorous route. They repurposed a technique used in facial recognition, an identification tool popular with police, by feeding millions of face photos into an AI system that learned to pinpoint aging’s tiniest clues.
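In machine-learning terms, what the article describes is supervised age regression: a model is trained on face photos labeled with ages until it can predict an age for a new face. The sketch below is purely illustrative and is not Yoti’s or Incode’s actual system; the model choice (a ResNet-18 with a one-number output), the loss function, and the hypothetical `FaceAgeDataset` filename convention are all assumptions.

```python
# Minimal sketch of supervised age regression on labeled face photos.
# NOT any vendor's real pipeline; dataset layout and model are assumptions.
import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
from PIL import Image

class FaceAgeDataset(Dataset):
    """Hypothetical dataset: face crops stored as '<age>_<id>.jpg'."""
    def __init__(self, root):
        self.paths = [os.path.join(root, f) for f in os.listdir(root)]
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        path = self.paths[i]
        # The age label is read from the filename in this toy layout.
        age = float(os.path.basename(path).split("_")[0])
        return self.tf(Image.open(path).convert("RGB")), torch.tensor([age])

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # regression head: one age value

loader = DataLoader(FaceAgeDataset("faces/"), batch_size=32, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error in years, the metric NIST reports

for images, ages in loader:  # one training pass, for illustration
    opt.zero_grad()
    loss = loss_fn(model(images), ages)  # predicted age vs. labeled age
    loss.backward()
    opt.step()
```

A setup like this also shows why the companies wanted more children’s photos: a regressor can only learn age cues for faces like the ones in its training data, so a collection skewed toward adults yields weaker estimates for kids.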

For the most part, these systems have worked. In May, the National Institute of Standards and Technology (NIST), a federal laboratory, said it tested six age estimators on 11 million government photos — from border-crossing checkpoints, consulates and police mug shots — and found that they were typically accurate within about three years. Incode’s algorithm was especially precise with infants, the test found, triangulating a baby’s age within four months.

An Instagram spokeswoman said the social network’s age checks had stopped 96 percent of the teens who tried to change their accounts to make them look over 18. And Yubo, a social network built around live video chats, uses the tool to split users into age groups, the platform’s calling card.

When asked to scan their faces to create a new account, one in 10 users walks away, but “we have accepted the trade-off,” said Marc-Antoine Durand, the company’s operations chief. “In the long term there is more confidence from users and more trust.”

Age-check gateways have quickly multiplied all over the web, and many offer unparalleled speed. Razan Altiraifi, a 37-year-old in North Carolina who recently scanned her face to access TikTok, said it concluded she was an adult so quickly that she almost felt self-conscious.

“It did feel a little creepy,” she said, “but you can’t beat the convenience.”

Yoti last year began advertising a future in which its age estimator could be stationed outside casinos and nightclubs, where it would never “get fatigued on a long shift … [or] show favor to personal friends.” NIST suggested the tools might one day help companies gather “population age statistics” for customers or assist with “age-tailored” digital ads.

But the technology also has its quirks. Though NIST’s tests found the systems were resilient to humanity’s wide array of facial variations, such as bushy eyebrows or sun-damaged skin, their reliability eroded when a person showed certain facial expressions or wore eyeglasses. Error rates for girls and women were also higher than for boys and men, and the testers haven’t determined why.

“Maybe it’s cosmetics or hairstyles or bone structure,” said Patrick Grother, a biometrics researcher at NIST. “The truth is, we don’t know.”

The AI, they found, faces the same challenge that humans do: Some people just look older or younger than they are. Puberty kicks in at different ages, and cultural mores can shape how kids look and grow. Someone with “good living or extra help” will look far younger than someone who is malnourished or sleep deprived, said Julie Dawson, Yoti’s chief policy and regulatory officer.

Those blurred results could undermine the tools’ usefulness. The estimators’ three-year error rates, one federal judge noted, meant that some 16-year-olds would be able to access websites that 20-year-olds could not.

Violet Elliott, a 25-year-old woman with dwarfism, had her TikTok account banned and threatened with deletion in February after the app falsely said she was under 13. Her account, where she posts about the discrimination she faces, was reinstated after three days, Elliott said, but more than 500 videos disappeared.

TikTok restored some of the videos after questions from The Washington Post, and a company official blamed the ban on human error. But Elliott worried the technology might only add to the problem.

These checks fail to consider the thousands of disabled people “who may not fit the stereotypical image of an adult,” Elliott said. “AI is not equipped to understand the complexities of human life.”

‘Darker corners’

Louisiana became a national model for government age-verification demands when it passed a law in 2022 requiring age checks for any website deemed “harmful to minors” or designed to “pander to the prurient interest.” Eighteen states have since followed, including Tennessee, whose law goes further, requiring a fresh age check for every 60 minutes a user spends on a site.

Alabama’s law demands explicit sites warn visitors that porn “desensitizes brain reward circuits” and “increases the demand for … child pornography.” (An appeals court struck down a similar warning in a Texas law, calling it unscientific and unconstitutional.) Other states have expanded beyond just adult content: Florida now requires social networks to get parents’ permission before letting 15-year-olds create their own profiles.

But the laws have kicked off constitutional battles in federal courts, where judges have cautioned they could subject adults to undue monitoring and suppress protected speech. The Supreme Court said last month it would hear an appeal from a porn-industry group involving the Texas law.

Mississippi’s law was halted by a judge last month who said parents could use less restrictive “supervisory technologies” to monitor their children. A judge blocked Indiana’s law in June, noting minors could “just search terms like ‘hot sex’” on Google, no age scan required.

Some judges warned the costs of these government-mandated checks could be “prohibitively” expensive for companies. The Indiana judge said Pornhub, whose global traffic outranks LinkedIn and Netflix, could be charged more than $13 million a day. (Yoti said its age-check costs typically range from 10 to 25 cents a face but can drop lower for websites with “very high volumes.”)

Sarah Bain, a partner at the private-equity firm that owns Pornhub’s parent company, Aylo, said the rules have forced companies like hers to collect unwanted information and driven viewers to less-policed websites. Since Pornhub started complying with Louisiana’s law, its traffic in the state has plunged 80 percent, even as traffic to foreign websites and others that don’t follow the law has climbed.

“These people did not stop looking for porn,” Bain said. “They just migrated to darker corners of the internet.”

In 11 age-checking states — Arkansas, Idaho, Kansas, Kentucky, Mississippi, Montana, Nebraska, North Carolina, Texas, Utah and Virginia — Pornhub has begun blocking access to everyone as a form of protest. Visitors are instead shown a video in which adult performer Cherie DeVille says that “giving your ID card every time you want to visit an adult platform” will put “your privacy at risk.”

Companies like Meta and Aylo have argued the checks should be done by big device and app-store makers, like Apple and Google. But any central storehouse for data would raise its own risks, said Jason Kelley, a director at the advocacy group Electronic Frontier Foundation.

“All these extremely sensitive pieces of information, linked to people’s faces?” he said. For a hacker, “that’s the best [treasure trove] I can imagine.”

‘Nanny state’

More than 70 percent of U.S. adults (and 56 percent of teens) say they support verifying users’ ages before they can use social media, a Pew Research survey found last year. But not everyone is so excited about the reality of switching on these tools.

In states with age-check laws, people have worked to dodge the scans by using tools, known as virtual private networks, that can mask their location. And when Yoti last year asked the Federal Trade Commission to approve age estimators as a new way to obtain parental consent, the agency fielded more than 300 public comments decrying the “nanny state” technology as a “breach of privacy for families who would be forced to submit.”

“‘Facial age estimation technology’ is the most dystopian phrase I’ve ever heard,” said one commenter, Thomas Hale. The FTC denied the proposal but said it could be refiled after further tests.

Among underage users, the systems have encountered a different kind of resistance. On message boards and subreddits, some users have shared tips on how to print out fake IDs, buy other people’s selfie videos or apply makeup that might make them look sufficiently adult.

To see through such tricks, the companies run “liveness checks” on users’ videos that can detect whether the face is really just a printout or someone’s sleeping dad. TikTok is dotted with videos of young people trying, and failing, to sneak in: One user recorded themselves trying to fool an age estimator with a selfie on their laptop screen before ultimately giving up, declaring it “SO DEPRESSING.”

But the most critical ingredient has been the companies’ growing photo collections of kids in the real world. Sottil, the Incode executive, said the company had paid a contractor to gather kids’ facial photos, with their parents’ permission, across Africa, Asia, Europe and Latin America, sometimes in exchange for Amazon gift cards.

Yoti, which collected face data in Nairobi years before “Share to Protect,” had hoped the South Africa project would be well received by parents because of its stated focus on keeping kids safe. “People share photos of their children all the time,” Dawson, the chief policy officer, said at the time.

Kate Farina, a founder of Be in Touch, a South African advocacy group that helped Yoti with the project, said parents were given a packet asking for their consent and explaining the photos would be “kept for just as long as the system needs it.” Teachers were instructed to line up participating students and then go face by face, uploading a scan for each child.

But as Riaan van der Bergh remembered it, many parents refused to sign up because of their “fear of the unknown.” The offer of money for each photo backfired, he added, making it all seem uncomfortably transactional.

The company, he remembered, received a little more than a thousand photos, far short of its 50,000-face goal. (Yoti declined to give an exact number, saying only that the campaign was a success.)

“Some people immediately said, ‘Oh no, I’m selling my kid,’” he recalled, to which he tried to offer some reassurance.

“No,” he told them. “You’re selling your kids’ data.”

Source: washingtonpost.com
