Microsoft Says AI Deepfake Abuse Should Be Illegal

Artificial intelligence seems to be everywhere these days, doing good by helping doctors detect disease and doing bad by helping fraudsters bilk unsuspecting victims. Now, Microsoft says the US government needs new laws to hold people who abuse AI accountable.

In a blog post Tuesday, Microsoft said US lawmakers need to pass a "comprehensive deepfake fraud statute" targeting criminals who use AI technologies to steal from or manipulate everyday Americans.

"AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation -- especially to target kids and seniors," Microsoft President Brad Smith wrote. "The greatest risk is not that the world will do too much to solve these problems. It's that the world will do too little."

Microsoft's plea for regulation comes as AI tools spread across the tech industry, giving criminals easy access to technology that helps them gain the confidence of their victims. Many of these schemes abuse legitimate tools designed to help people write messages, do research for projects, and create websites and images. In the hands of fraudsters, those same tools can produce fake forms and believable websites that fool and steal from users.

"The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI," Smith wrote. But he said governments need to establish policies that "promote responsible AI development and usage."

Already behind

Though AI chatbot tools from Microsoft, Google, Meta and OpenAI have been made broadly available for free only over the past couple of years, the data about how criminals are abusing them is already staggering. 

Earlier this year, AI-generated pornography of global music star Taylor Swift spread "like wildfire" online, gaining more than 45 million views on X, according to a February report from the National Sexual Violence Resource Center.

"While deepfake software wasn't designed with the explicit intent of creating sexual imagery and video, it has become its most common use today," the organization wrote. Yet, despite widespread acknowledgement of the problem, the group notes that "there is little legal recourse for victims of deepfake pornography." 

Meanwhile, a report this summer from the Identity Theft Resource Center found that fraudsters are increasingly using AI to help create fake job listings as a new way to steal people's identities. 

"The rapid improvement in the look, feel and messaging of identity scams is almost certainly the result of the introduction of AI-driven tools," the ITRC wrote in its June trend report.

That's all on top of the rapid spread of AI-manipulated online posts attempting to tear away at our shared understanding of reality. One recent example appeared shortly after the attempted assassination of former President Donald Trump earlier in July. Manipulated photos spread online that appeared to depict Secret Service agents smiling as they rushed Trump to safety. The original photograph shows the agents with neutral expressions.

Even this week, X owner Elon Musk shared a video that used a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to denigrate President Joe Biden and refer to Harris as a "diversity hire." X's rules prohibit users from sharing manipulated content, including "media likely to result in widespread confusion on public issues, impact public safety, or cause serious harm." Musk has defended his post as parody.

For his part, Microsoft's Smith said that while many experts have focused on deepfakes used in election interference, "the broad role they play in these other types of crime and abuse needs equal attention."

Source: cnet.com
