How to Fight Deepfakes: Tech Solutions


The editorial team

Deepfakes—synthetic videos or voices generated by artificial intelligence (AI)—have become a powerful tool for disinformation and organized crime. Women are the primary victims, especially through non-consensual pornographic exploitation. The tech sector is developing new solutions to protect citizens from scams and sophisticated attacks.

A Growing Threat to Women

While deepfakes are often linked to financial scams or political manipulation, their most common and alarming use directly targets women. A study by cybersecurity firm Deeptrace found that over 96% of deepfake videos are pornographic and feature women without their consent. These videos have amassed more than 134 million views, primarily targeting celebrities but also anonymous victims, falsely associating them with explicit content.

Deeptrace also reported a surge in deepfake videos, which nearly doubled from 7,964 to 14,678 between 2018 and 2019. Several online platforms, including Reddit, 4chan, and 8chan, have hosted communities dedicated to creating and sharing such content. In response, some companies have taken action—Reddit, for example, banned the r/deepfakes subreddit, which was dedicated to these videos.

Deepfakes Are Becoming More Sophisticated—Even Imitating Family or Colleagues

The case of Laurel, a California resident, illustrates the growing danger. One day, she received a phone call where the voice sounded exactly like her mother, Debby Bodkin, claiming she had been in an accident and was at the hospital. Suspicious, Laurel immediately called her mother, who was actually safe at work. The scam was intended to deceive Laurel’s 93-year-old grandmother, Ruthy.

“It’s not the first time scammers have called grandma,” Debby told AFP. The ability to manipulate voices and images has become a common tool for criminals. A shocking case occurred in early February when Hong Kong police revealed that an employee of a multinational company transferred $25 million to fraudsters after attending a video conference where AI-generated avatars of his colleagues appeared convincingly real.

Technology in the Hands of Scammers

Initially appearing on social media, deepfakes have been used to distort the image of public figures for disinformation purposes but are now also enabling financial fraud. A February report from iProov, a firm specializing in biometric identity verification, found that only 0.1% of Americans and Britons tested could correctly identify a manipulated video or image.

Vijay Balasubramaniyan, CEO of Pindrop Security, which specializes in voice authentication, told AFP: “Before, it took 20 hours (of voice recording) to recreate your voice. Now, it’s five seconds.”

How to Detect Deepfakes: Tech Industry Solutions

As deepfakes become more advanced, several companies are developing real-time detection tools. Firms like Reality Defender and Intel, with its FakeCatcher tool, are leading the way. FakeCatcher detects deepfakes by analyzing subtle blood vessel changes in a person’s face, helping to distinguish real footage from AI-generated content.
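The blood-flow idea behind FakeCatcher is, in essence, remote photoplethysmography: a real face shows faint, periodic color changes as blood circulates, while generated footage typically does not. The sketch below is a toy illustration of that principle—not Intel's actual pipeline—using NumPy to recover a pulse-like frequency from the average green-channel brightness of synthetic face frames; the function name, frame shapes, and frequency band are all assumptions made for the example.

```python
import numpy as np

def pulse_frequency(frames, fps=30.0):
    """Estimate a dominant pulse-like frequency (Hz) from the mean
    green-channel brightness of a sequence of face-region frames.

    frames: array of shape (n_frames, height, width, 3), RGB.
    Blood flow modulates skin color slightly, so a real face should
    show a spectral peak in the human pulse band.
    """
    green = frames[:, :, :, 1].mean(axis=(1, 2))   # one value per frame
    green = green - green.mean()                   # remove the DC offset
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    # Restrict to a plausible human pulse band (0.7-4 Hz, i.e. 42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a 1.2 Hz (72 bpm) "pulse" hidden in the green channel.
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(round(pulse_frequency(frames, fps), 1))  # → 1.2
```

A production detector would do far more (face tracking, illumination compensation, a trained classifier over the recovered signal), but the core intuition is this spectral peak: present in genuine video, absent or inconsistent in synthesized faces.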

Pindrop analyzes 8,000 sound fragments per second to verify whether a voice is human. Nicos Vekiarides, CEO of Attestiv, a company specializing in detecting AI-generated content, warned that the issue “is becoming a global cybersecurity threat.”

“In the beginning, we saw people with six fingers on one hand, but progress has made it harder and harder to tell (deepfakes) with the naked eye.”
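Frame-by-frame audio analysis of the kind Pindrop describes can be illustrated with a toy example. This is not Pindrop's method, and the windowing here is far coarser than 8,000 fragments per second; it only shows the general mechanic—slicing a signal into short windows and computing a standard per-window feature (spectral flatness) that a trained classifier could then consume. All names and parameters below are invented for the sketch.

```python
import numpy as np

def spectral_flatness(frame):
    """Spectral flatness: geometric / arithmetic mean of the power
    spectrum. Near 1 for noise-like frames, near 0 for tonal frames."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(power))) / np.mean(power)

def frame_features(signal, frame_len=160):
    """Slice audio into short frames (160 samples = 10 ms at 16 kHz,
    i.e. 100 fragments per second) and compute one feature per frame."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return np.array([spectral_flatness(f) for f in frames])

# Demo: a pure tone is far more "tonal" than white noise.
rate = 16000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 220 * t)                   # low flatness
noise = np.random.default_rng(0).normal(size=rate)   # high flatness
print(frame_features(tone).mean() < frame_features(noise).mean())  # → True
```

Real voice-liveness systems combine many such low-level features with learned models; no single statistic separates cloned from genuine speech on its own.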

With remote work and virtual communication becoming the norm, the risk of identity fraud is rising, making AI detection tools essential for industries like finance and insurance.

Solutions for Individuals

Some companies are also developing tools for everyday users. For example, Chinese manufacturer Honor introduced the Magic7 smartphone, which can detect and flag AI use in real time during video calls. Meanwhile, UK startup Surf Security launched a web browser (currently for businesses) that alerts users when a video or voice appears to be AI-generated. Attestiv already offers a free version of its detection tool to thousands of individual users.

Pindrop is collaborating with telecom providers and plans a major announcement within six months to “protect end consumers.”

“Generative AI Has Blurred the Line Between Humans and Machines”

Siwei Lyu, a computer science professor at the University at Buffalo (New York), believes that deepfakes will eventually be managed like spam—once a major problem, now controlled through effective filters. “Generative AI has blurred the line between humans and machines,” says Vijay Balasubramaniyan. He predicts that “companies that restore this distinction will become huge,” and the industry could be worth billions.

The digital world is entering a new era of cybersecurity, where combating deepfakes will be as crucial as filtering spam. The question remains: will current solutions be enough to slow the spread of AI-driven manipulation? And can we ever fully separate human reality from machine-generated deception?
