The Federal Trade Commission rebuked social media and streaming companies including YouTube, Amazon and Facebook on Thursday, accusing them of failing to adequately protect users from privacy intrusions and safeguard children and teens on their sites.
In a sprawling 129-page staff report, the agency summed up a years-long study into industry practices by criticizing the companies for not “consistently prioritizing” users’ privacy, for broadly scooping up data to power new artificial intelligence tools and for refusing to confront potential risks to kids.
FTC Chair Lina Khan, a Democrat whose aggressive oversight of the tech giants has drawn plaudits from liberals and conservatives alike, said the report shows how companies’ practices “can endanger people’s privacy, threaten their freedoms and expose them to a host of harms,” adding that the findings on child safety were “especially troubling.”
In 2020, the FTC demanded that nine social networks and video streaming providers hand over information on how they collect, use and sell people’s personal data, how their products are powered by algorithms and how their policies affect kids and teens.
The agency was able to compel information from companies whose practices lawmakers and regulators have often criticized as being too opaque. They included Amazon, Facebook (now Meta), Google-owned YouTube, Twitter (now X), Snap, TikTok owner ByteDance, Discord, Reddit and Meta-owned WhatsApp. (Amazon founder Jeff Bezos owns The Washington Post.)
FTC employees wrote that the report described “general findings” across those studied but noted that not all of them applied to every company in every instance. Still, agency staffers described numerous pervasive patterns they said exposed users to harm or left them in the dark about how their data was being used to make money for the companies.
According to the report, the companies have collected troves of data on users and nonusers, often in “ways consumers might not expect,” and many of the guardrails put in place to protect that information were erected only in response to global regulations. While the companies are increasingly mining that data to launch AI products, the agency found, consumers typically lacked “any meaningful control over how personal information was used” for them.
The findings, staffers wrote, revealed “an inherent tension between business models that rely on the collection of user data and the protection of user privacy.” The agency’s Democratic leadership has spoken out before against “commercial surveillance” practices they say have come to dominate Silicon Valley.
An FTC official, who briefed reporters on the condition of anonymity to discuss the findings, declined to comment on how the study might shape the agency’s enforcement but said it showed that many of the issues they anticipated ran much deeper than expected.
According to the report, many of the companies studied “bury their heads in the sand when it comes to children” on their sites. Many claimed that because their products were not directly targeted at children and their policies did not allow children on their sites, they knew nothing of children being present on them. “This is not credible,” agency staffers wrote.
Child safety advocates have long expressed concern that under the existing federal child privacy law, known as the Children’s Online Privacy Protection Act, or COPPA, companies can avoid accountability by claiming not to have knowledge that children are accessing their sites.
Concerns about companies failing to protect younger users were particularly pronounced for teens, whom many platforms simply treated like “traditional adult users” and typically did not afford the same protections as young children, the agency wrote.
The FTC official declined to comment on Instagram’s newly released safety tools for teens but said companies can’t be relied upon to regulate themselves.
The report recommended that Congress both pass comprehensive federal privacy legislation covering all consumers and extend existing guardrails for children to teens.
Since the study began four years ago, the social media market has become more fractured and decentralized as upstarts such as TikTok challenge long-standing leaders and platforms such as Telegram cater to increasingly niche audiences. Asked whether the agency’s analysis was still relevant, the FTC official said it was difficult to obtain information from the internet companies even with the agency’s investigative authority.
The official added that the practices highlighted in the report are tied to the companies’ business models, which have not changed.
While the study began during the Trump administration, the FTC under Khan has dialed up its enforcement against the tech sector over data privacy and child safety complaints, including by launching sprawling efforts to update privacy regulations.
The study’s release arrives as lawmakers at the federal and state levels push to pass expanded protections for children’s privacy and safety. Dozens of states have passed laws to that effect over the past year, and a key House committee advanced a pair of bills Wednesday that would mark the most significant update to child online safety laws in decades.
But those efforts face opposition from tech industry and business groups that say they trample on users’ free speech rights, force companies to collect more data and stifle innovation.
This is a developing story.