Deepfake makers: Why is it so hard to catch them?

NEW DELHI:

With AI tools becoming more accessible, deepfakes are a rising threat in audio, video and photo formats. But catching the actual perpetrators is next to impossible, thanks to cyber tools that allow people to obfuscate traces of their origin. Mint explains why:

How easy is it to create a deepfake today?

Deepfakes are more sophisticated than basic morphed content. As a result, they require more data—typically of facial and physical expressions—as well as powerful hardware and software tools. While this makes them harder to create, generative AI tools are making the process increasingly accessible. That said, true deepfakes that are hard to detect, such as the video that recently targeted actor Rashmika Mandanna, still require targeted effort, since accurately morphing facial expressions, movements and other video artifacts requires very sophisticated hardware and specialized skills.

Why are they so hard to detect?

Deepfake content is typically made to target a specific individual or a specific cause. Motives include spreading political misinformation, targeting public figures with sexual content, or posting morphed content of individuals with large social media followings for blackmail. Given how realistic they look, deepfakes can pass as real until forensic scrutiny is applied. Most deepfakes also replicate voice and physical movements very accurately, making them even harder to detect. This, coupled with the exponential reach of content on popular social media platforms, compounds the problem.

Has generative AI made deepfakes more accessible?

Yes. While generative AI has not yet given us tools to make accurate morphed videos and audio clips within seconds, we are getting there. Prisma's photo-editing app Lensa AI uses the Stable Diffusion model to morph selfies. Microsoft's Vall-E needs only three seconds of a user's speech to generate longer authentic-sounding speech.

What tech tactics do deepfake makers use?

Deepfakes are very hard to trace because of how the internet works. Most individuals who create deepfakes have specific malicious intent, and plenty of tools to hide the original content. Following the digital footprint can lead investigators to an internet protocol (IP) address, but that address is often deliberately planted by the perpetrator to mislead investigations and searches. Those who create deepfakes use advanced tactics to strip any digital signature of their location that could lead investigators to them—thus keeping their identity anonymous.

What can you do if you are the target?

On 7 November, Union minister of state for information technology (IT) Rajeev Chandrasekhar said people are encouraged to file FIRs and seek legal protection against deepfakes. Section 66D of the IT Act prescribes a jail term of up to three years and a fine of up to ₹1 lakh for 'cheating by impersonation'. Firms have been told to remove deepfakes within 36 hours of a report by users—or lose their safe harbour protection. While India does not have a specific law on deepfakes, there are several existing laws that can be tapped.