
See why AI detection tools can fail to catch election deepfakes

Content created by artificial intelligence is flooding the web, making it less clear than ever what's real this election. From former president Donald Trump falsely claiming images from a Vice President Kamala Harris rally were AI-generated to a spoofed robocall of President Joe Biden telling voters not to cast their ballots, the rise of AI is fueling rampant misinformation.

Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or “deepfakes.” Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil.

But the science of detecting manipulated content is in its early stages. An April study by the Reuters Institute for the Study of Journalism found that many deepfake detector tools can be easily duped with simple software tricks or editing techniques.

Meanwhile, deepfakes and manipulated video are proliferating.

This video of Harris resurfaced on X the day Biden dropped out of the race, quickly gaining over 2 million views. In the clip, she seems to ramble incoherently. But it’s digitally altered.

How can you know for sure? The Washington Post talked with AI experts, computer scientists and representatives from deepfake detection companies to find out how the technology works and where it falls short.

Here are several key techniques deepfake detectors use to analyze content.

First, a video clip, sound bite or image is uploaded into a deepfake detection tool. There, the content is judged by a panel of expert algorithms trained to look for indicators of authenticity.
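In code, that panel-of-algorithms design might look like the sketch below: a registry of independent scorers, each returning a suspicion score between 0 and 1. The detector names and structure here are illustrative assumptions, not any vendor's actual API.

```python
# A sketch of the panel pattern: a registry of independent detectors, each
# returning a suspicion score in [0, 1]. Names and structure are illustrative.
from typing import Any, Callable

DETECTORS: dict[str, Callable[[Any], float]] = {}

def detector(name: str):
    """Register a scoring algorithm under a readable name."""
    def register(fn):
        DETECTORS[name] = fn
        return fn
    return register

@detector("face_swap")
def face_swap_score(media: Any) -> float:
    return 0.0  # stub; a real model would inspect the media here

def analyze(media: Any) -> dict[str, float]:
    # Run every registered algorithm and collect per-cue scores.
    return {name: fn(media) for name, fn in DETECTORS.items()}
```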

One algorithm inspects the area around the face, looking for evidence that it was swapped onto another person’s body.

This clip of Harris did not use a face swap, so the outline is not particularly suspicious.
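A minimal sketch of the idea behind that boundary check, assuming OpenCV's stock face detector: compare high-frequency noise just inside the face region with the surrounding pixels, since a swapped, re-blended face often carries different noise statistics. Real detectors use trained models (Face X-Ray is one published example); this is only an illustration.

```python
# Illustrative boundary check: does high-frequency noise inside the detected
# face match the pixels just around it? Requires opencv-python and numpy.
import cv2
import numpy as np

def boundary_inconsistency(img_bgr: np.ndarray) -> float | None:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found, nothing to judge
    x, y, w, h = faces[0]
    # High-frequency residual: image minus a blurred copy of itself.
    residual = gray.astype(np.float32) - cv2.GaussianBlur(gray, (5, 5), 0)
    pad = max(4, w // 8)
    inner = residual[y:y + h, x:x + w]
    outer = residual[max(0, y - pad):y + h + pad, max(0, x - pad):x + w + pad]
    # A swapped, re-blended face often has noise statistics that differ
    # from its surroundings; the further this ratio is from 1, the worse.
    return float(inner.std() / (outer.std() + 1e-6))
```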

Another tracks the lips for abnormal movements and unrealistic positioning.

Some frames in this clip show unnatural lip movements.
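To illustrate the lip check, here is a rough sketch using MediaPipe's FaceMesh landmarks: measure how far the mouth opens in each frame and flag frame-to-frame jumps too abrupt for natural speech. The landmark indices and threshold are assumptions for illustration; production systems use trained temporal models.

```python
# Illustrative lip-motion check using MediaPipe FaceMesh (landmarks 13/14
# sit on the inner upper/lower lip). The threshold is a guess, not tuned.
import cv2
import mediapipe as mp
import numpy as np

def mouth_openings(video_path: str) -> np.ndarray:
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    openings = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            openings.append(abs(lm[13].y - lm[14].y))  # normalized mouth gap
    cap.release()
    return np.array(openings)

def abnormal_jumps(openings: np.ndarray, z_thresh: float = 4.0) -> np.ndarray:
    deltas = np.abs(np.diff(openings))
    z = (deltas - deltas.mean()) / (deltas.std() + 1e-9)
    return np.where(z > z_thresh)[0]  # frames where the lips jump unnaturally
```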

A third algorithm analyzes the sound, scanning for unusual vocal frequencies, pauses and stutters.

This clip has some strange audio elements.
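Two of those audio cues can be approximated with the librosa library, as in the sketch below: pitch (vocal-frequency) variability, which synthetic voices often lack, and the ratio of silence, since generated speech can miss natural pauses. The statistics are illustrative, not how commercial detectors actually score audio.

```python
# Illustrative audio cues with librosa: pitch variability and pause ratio.
import librosa
import numpy as np

def audio_cues(path: str) -> dict[str, float]:
    y, sr = librosa.load(path, sr=16000)
    # Fundamental-frequency track; synthetic voices are often unusually flat.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    # Pause structure from frame energy; generated speech may lack pauses.
    rms = librosa.feature.rms(y=y)[0]
    return {
        "pitch_std_hz": float(np.std(f0)),
        "silence_ratio": float(np.mean(rms < 0.1 * rms.max())),
    }
```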

Moving down to the pixel level, an algorithm examines imagery for patterns of visual “noise” that deviate from other areas.

The red indicates suspicious visual “noise” in Harris’s face and hands.
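A toy version of that noise analysis, assuming OpenCV and NumPy: compute a high-frequency residual, then measure how much each block's noise level deviates from the image-wide norm. Deployed tools learn these "noiseprints" with neural networks rather than hand-coded statistics.

```python
# Illustrative local-noise map: flag blocks whose high-frequency noise level
# deviates from the image-wide median.
import cv2
import numpy as np

def noise_deviation_map(img_bgr: np.ndarray, block: int = 32) -> np.ndarray:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    residual = gray.astype(np.float32) - cv2.medianBlur(gray, 3).astype(np.float32)
    h, w = gray.shape
    levels = np.zeros((h // block, w // block), dtype=np.float32)
    for i in range(levels.shape[0]):
        for j in range(levels.shape[1]):
            patch = residual[i * block:(i + 1) * block, j * block:(j + 1) * block]
            levels[i, j] = patch.std()  # local noise strength
    # High values mark regions (like a pasted face or hands) whose noise
    # does not match the rest of the frame.
    return np.abs(levels - np.median(levels)) / (levels.std() + 1e-9)
```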

The next algorithm compares how pixels move between video frames, looking for subtle cues like unnatural jumps or the absence of motion blur present in authentic videos.

This analysis of her movements is inconclusive.
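The motion check can be sketched with OpenCV's dense optical flow: track the average flow magnitude between consecutive frames, where sudden spikes or unnaturally frozen stretches warrant suspicion. This is a simplification of what trained video detectors do.

```python
# Illustrative motion check: dense optical flow between consecutive frames.
import cv2
import numpy as np

def flow_magnitudes(video_path: str) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev_gray = gray
    cap.release()
    # Sudden spikes, or stretches with no motion at all, merit a closer look.
    return np.array(mags)
```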

Then an algorithm tries to remake the image using the diffusion technique that powers today’s AI. The resulting image shows what diffusion was not able to reconstruct.

Much of the original Harris clip can be seen in this output, meaning the visuals probably were not wholly generated by AI.
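This step mirrors the published "diffusion reconstruction error" idea: regenerate the image with a diffusion model and measure what it could not reproduce. The sketch below shows only the scoring half; `reconstruct` stands in for a real inversion pipeline (one could be built with the diffusers library) and is not an actual function.

```python
# Scoring half of the diffusion-reconstruction idea. The callable passed in
# is hypothetical; a real pipeline would round-trip the image through a
# diffusion model.
import numpy as np

def reconstruction_error(original: np.ndarray, reconstruct) -> tuple[np.ndarray, float]:
    recon = reconstruct(original)  # hypothetical diffusion round-trip
    err = np.abs(original.astype(np.float32) - recon.astype(np.float32))
    # Low error: the image sits comfortably on the model's manifold and is
    # more likely AI-generated. High error: probably not wholly generated.
    return err.mean(axis=-1), float(err.mean())
```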

Finally, one algorithm looks for a checkerboard pattern in the pixel distribution, a hallmark of generative adversarial networks (GANs), an older image-generation technique.

No checkerboard here means the older generative technique was not used.
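The checkerboard left by GAN upsampling layers shows up as regularly spaced peaks in an image's 2D frequency spectrum, so a crude check can look for unusual spectral peak energy, as in this illustrative sketch.

```python
# Illustrative spectral check: GAN upsampling can leave periodic artifacts
# that concentrate energy in regularly spaced high-frequency peaks.
import cv2
import numpy as np

def spectrum_peak_score(img_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    # Compare the strongest 0.5 percent of frequencies to the median level;
    # an unusually high ratio hints at periodic (checkerboard) structure.
    peaks = spectrum[spectrum > np.percentile(spectrum, 99.5)]
    return float(peaks.mean() / (np.median(spectrum) + 1e-9))
```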

Each algorithm’s verdict is then combined by the detection tool into a final decision on whether the content is likely to be real or fake.

This video was deemed highly suspicious.
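As a sketch of that final fusion step, assume each algorithm emitted a score between 0 and 1: a weighted average compared against a threshold yields the verdict. The scores, weights and threshold below are invented for the example.

```python
# Illustrative fusion: weighted average of per-algorithm scores against a
# threshold. All numbers below are invented for the example.
def fuse(scores: dict[str, float], weights: dict[str, float],
         threshold: float = 0.6) -> tuple[float, str]:
    total = sum(weights[k] for k in scores)
    fused = sum(scores[k] * weights[k] for k in scores) / total
    return fused, ("likely fake" if fused >= threshold else "likely real")

scores = {"face_swap": 0.4, "lips": 0.9, "audio": 0.8, "noise": 0.9,
          "flow": 0.6, "diffusion": 0.5, "gan": 0.2}
weights = {name: 1.0 for name in scores}
print(fuse(scores, weights))  # -> (0.614..., 'likely fake')
```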

If deepfake detection tools functioned properly, they could provide real-time fact-checking on platforms like Instagram, TikTok and X, eradicating AI-generated fake political ads, deceptive marketing ploys and misinformation before they take hold.

Policymakers from Washington to Brussels have grown increasingly concerned about the impact of deepfakes and are rallying around detectors as a solution. Europe’s landmark AI legislation attempts to stem the impact of fake imagery through mandates that would help the public identify deepfakes, including through detection technology. The White House and top E.U. officials have been pressuring the tech industry to invest in new ways to detect AI-generated content in an effort to create online labels.

But deepfake detectors have significant flaws. Last year, researchers from universities and companies in the United States, Australia and India analyzed detection techniques and found their accuracy ranged from 82 percent to just 25 percent. That means detectors often misidentify fake or manipulated clips as real — and flag real clips as fake.

Hany Farid, a computer science professor at the University of California at Berkeley, said the algorithms that fuel deepfake detectors are only as good as the data they train on. The datasets are largely composed of deepfakes created in a lab environment and don’t accurately mimic the characteristics of deepfakes that show up on social media. Detectors are also poor at spotting abnormal patterns in the physics of lighting or body movement, Farid said.

Detectors are better at spotting images that are common in their training data, researchers at the Reuters Institute for the Study of Journalism said. That means detectors may accurately flag deepfakes of Russian President Vladimir Putin while struggling with images of Estonian President Alar Karis, for example.

They also are less accurate when images contain dark-skinned people. And if people manipulate AI-generated photos using Photoshop techniques such as blurring or file compression, they can fool the tools. Deepfakers are also adept at creating images that are one step ahead of detection technology, AI experts said.

A simple trick to fool deepfake detectors: Lower the image quality
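The trick is trivial to reproduce: a single low-quality JPEG re-encode, as in this sketch, wipes out much of the high-frequency evidence detectors rely on while leaving the fake visually convincing.

```python
# One low-quality JPEG re-encode strips much of the high-frequency evidence
# detectors depend on. The quality value here is illustrative.
import cv2

def degrade(path_in: str, path_out: str, quality: int = 25) -> None:
    img = cv2.imread(path_in)
    cv2.imwrite(path_out, img, [cv2.IMWRITE_JPEG_QUALITY, quality])
```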

Since detectors are far from perfect, humans can employ old-school methods to spot fake images and video online, experts said. Zooming in on photos and videos allows people to check for abnormal artifacts, such as disfigured hands or odd shapes in background details like floor tiles or walls. In fake audio, the absence of pauses and vocal inflections can make a person sound robotic.

It’s also crucial to analyze suspected deepfakes for contextual clues. In the Harris video, for example, the lectern sign reads “Ramble Rants,” which is the account name of the deepfaker.

In the authentic image, the lectern sign reads "Fighting for"; in the manipulated clip, it reads "Ramble Rants."

Dozens of companies in Silicon Valley have dedicated themselves to spotting AI deepfakes, but most methods have fallen short.

Until now, the industry’s biggest hope was watermarking, a process that embeds images with an invisible mark that only computers can detect. But watermarks can be easily tampered with or duplicated, confusing the software meant to read them.
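The fragility is easy to demonstrate with a naive scheme: a watermark hidden in pixel least-significant bits reads back perfectly from the raw file but is destroyed by one JPEG re-encode. Production watermarks are far sturdier, yet as the reporting above notes, they can still be tampered with or duplicated.

```python
# A naive least-significant-bit watermark: readable from the raw pixels,
# destroyed by a single JPEG re-encode. Real schemes are sturdier, but the
# failure mode is the same in spirit.
import cv2
import numpy as np

def embed_lsb(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
    out = img.copy()
    flat = out.reshape(-1)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # write into LSBs
    return out

def read_lsb(img: np.ndarray, n: int) -> np.ndarray:
    return img.reshape(-1)[:n] & 1

mark = np.random.randint(0, 2, 64, dtype=np.uint8)
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stamped = embed_lsb(img, mark)
assert (read_lsb(stamped, 64) == mark).all()  # intact in the raw image
ok, buf = cv2.imencode(".jpg", stamped, [cv2.IMWRITE_JPEG_QUALITY, 90])
recovered = read_lsb(cv2.imdecode(buf, cv2.IMREAD_COLOR), 64)
print((recovered == mark).mean())  # ~0.5: the mark is effectively erased
```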

Last year, leading companies including OpenAI, Google, Microsoft and Meta signed a voluntary pledge that requires them to develop tools to help the public detect AI-generated images. But tech executives are skeptical about the project.

“I received mixed answers from Big Tech,” Věra Jourová, a top European Union official, told The Post at an Atlantic Council event. “Some platforms told me this was impossible.”

The inability to detect AI-generated images can have real-world consequences. In late July, an image of a Sikh man urinating into a cup at a Canadian gas station went viral on X, fueling anti-immigrant rhetoric. But according to a post on X, the owner of the gas station claimed the incident never happened.

The Post uploaded the image into a popular deepfake detection tool from the nonprofit AI company TrueMedia. The image was labeled as having “little evidence of manipulation,” indicating it may be real.

Later, The Post received an email saying a human analyst at the company had found “substantial evidence of manipulation.” Oren Etzioni, founder of TrueMedia, said its AI detectors are “not 100 percent accurate” and rely on human analysts to review results. Corrected results are used to “further train the AI detectors and improve performance,” he said.

Farid said such inconsistent results are dangerous because people are “weaponizing” them to alter society’s concept of reality.

“It’s making it so that we don’t trust or believe anything or anybody,” he said. “It’s a free-for-all now.”

Source: washingtonpost.com
