
DEEPFAKES: HOW THEY ARE MADE AND WAYS TO COMBAT THEM

INDIA-WIDE STATS ON DEEPFAKES

  • 64% of respondents in India say AI has made it harder to spot online scams
  • In the past 12 months, 75% of people say they’ve seen deepfake content, 38% have encountered a deepfake scam, and 18% have been a victim of a deepfake scam

Of the people who encountered or were the victim of a deepfake scam:
  • 57% said they came across a video, image, or recording of a celebrity and thought it was real, with 31% losing money to a scam
  • 40% said they believe their voice was cloned and used to try and trick someone they know
  • 39% said they received a call, voicemail or voice note that sounded like a friend or loved one
  • 22% said they came across a video, image, or recording of a political candidate and thought it was real at first
  • 37% said their likeness was used to create sexually explicit content that was shared with others
  (Source: McAfee Cybersecurity AI Report, 2024)

    THESE ARE FOUR COMMON METHODS THAT USE AI TO CREATE DEEPFAKES

    Audio manipulation: Ranveer Singh and Aamir Khan


    Audio for a deepfake is generated using voice cloning, which involves AI models like Retrieval-based Voice Conversion (RVC) that use existing voice data. “To do this, 10-20 minutes of a person’s recorded voice needs to be uploaded to the AI tool, which analyses speech patterns and creates the target voice’s speech in minutes. Audio recordings of public figures are easily available, making it easy to clone their voices,” says Karan Shetty, an AI artist. In the case of Aamir and Ranveer, the AI tool was used to replace certain strategic words in their original videos to create the deepfake. Voice cloning is a menace that has led to multiple phone call scams recently, where the recipient mistakes the caller’s voice for someone familiar and gives in to a request, most often for money. Scammers use existing audio clips they have obtained to execute this scam.
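    To make the mechanics concrete, here is a minimal sketch of how voice cloning is typically driven in code. It assumes the open-source Coqui TTS library and its XTTS voice-cloning model rather than the commercial tools named above, and the file names are placeholders.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library
# (assumed installed via `pip install TTS`). File names are placeholders.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A reference clip of the target speaker lets the model mimic their timbre;
# commercial tools like RVC-based services work on the same principle.
tts.tts_to_file(
    text="Any sentence the speaker never actually said.",
    speaker_wav="reference_clip.wav",   # recorded voice of the target speaker
    language="en",
    file_path="cloned_output.wav",
)
```

    The same principle underlies the RVC and ElevenLabs workflows described above: the model learns the speaker’s voice from the reference audio and can then be made to say arbitrary text in that voice.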
    "Usually, viewers watch videos on social media on their mobile phones. In busy, bustling surroundings, it's not easy to detect that the voice they hear is robotic and generated using AI. Some AI tools, like ElevenLabs, are used to generate comparatively clean audio," says Shubham Agarwal, AI artist

    Video manipulation: Rashmika Mandanna, Alia Bhatt and Kajol


    Deepfake videos are generated via AI tools that use facial reenactment. This involves analysing pre-recorded video footage of a person and then applying their facial expressions to someone else via AI-face swap tools.

    ">



    This is how the Rashmika Mandanna, Alia Bhatt and Kajol deepfakes were created. Some of the AI tools that are used for this include DeepFaceLab, Face2Face, and Avatarify.
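    The sketch below illustrates the basic face-swap step in heavily simplified form, using only OpenCV rather than the dedicated tools above: it detects a face in a source image and a target frame, then blends the source face onto the target. Real deepfake tools replace this crude paste with trained neural networks that transfer expressions frame by frame; all file names here are placeholders.

```python
# Crude face-swap illustration using only OpenCV and NumPy
# (pip install opencv-python numpy). This merely pastes one detected face
# onto another; tools like DeepFaceLab use trained networks instead.
import cv2
import numpy as np

# Haar cascade face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return (x, y, w, h) of the first detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0]  # assumes at least one face is found

source = cv2.imread("source_person.jpg")   # face to copy (placeholder)
target = cv2.imread("target_frame.jpg")    # frame to paste it into (placeholder)

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face to the target face's size and blend it in place.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
mask = np.full(face_patch.shape, 255, dtype=np.uint8)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face_patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped_frame.jpg", swapped)
```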

    Original video using AI lip sync: Barack Obama


    This method creates an entirely new video with lip sync from existing footage of a person. “For this, an AI tool like Argil.ai is used; it lets you upload a few minutes of video. You can then give speech prompts that will generate a completely new video, where the person is seen speaking the lines given in the prompt. The AI tool allows a perfectly lip-synced video to be created. The voice cloning tech allows perfect replication of the voice as well,” says AI artist Shubham Agarwal. A deepfake video of Barack Obama was generated using this tech.
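    Open-source projects expose the same capability outside of web tools. The snippet below is a rough sketch of how the widely used Wav2Lip lip-sync project is typically run; the checkpoint and media file names are placeholders, and the cloned audio could come from a voice-cloning step like the one sketched earlier.

```python
# Rough sketch of running the open-source Wav2Lip lip-sync project
# (https://github.com/Rudrabha/Wav2Lip) from Python. All paths are
# placeholders; the script and its flags come from that project's README.
import subprocess

subprocess.run(
    [
        "python", "inference.py",                # Wav2Lip's inference script
        "--checkpoint_path", "wav2lip_gan.pth",  # pretrained lip-sync model weights
        "--face", "speaker_footage.mp4",         # existing video of the person
        "--audio", "cloned_speech.wav",          # audio the person never actually spoke
        "--outfile", "lip_synced_result.mp4",    # fabricated output video
    ],
    check=True,
)
```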

    Audio and video manipulation: Mark Zuckerberg


    This is the most sophisticated form of deepfake, where AI-generated audio and video are combined seamlessly using video editing software. “For this, different tools like voice cloning and face swapping are used simultaneously,” says AI artist Karan Shetty. For the Mark Zuckerberg deepfake in 2019, his face was manipulated using footage from 2017, while his voice was manipulated to make it seem as if he had said something he hadn’t. Apart from the audio, other indicators that the video was fake included unusual eye-blinking patterns and mouth movements.

    "Investigating a deepfake video is a lengthy process. It takes around two months to obtain all permissions and investigate the matter. It also depends on how harmful the content is. If the matter involves anti-national, antisocial-related activities, the process is fast-tracked. In cases of individual defamation, permission can take longer, as accessing user accounts can lead to privacy issues," says Sujitkumar Gunjkar, cybercrime officer, Maharashtra Police.

    HOW TO IDENTIFY A DEEPFAKE VIDEO

    • FACIAL EXPRESSIONS: Pay attention to facial expressions, lip movement, the cheek and forehead areas, and the blinking of eyes. An AI video will give away inconsistencies (a blink-rate check is sketched after this list)
    • BODY MOVEMENT: Look for discrepancies between the proportions of the body and face, or between facial expressions and body movements
    • ROBOTIC VOICE: AI-manipulated audio will sound somewhat different from an actual person’s voice. Often, in a video deepfake, the lip movements may not match the words being spoken
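    As a worked example of the blinking cue, the sketch below counts blinks in a video by tracking an eye-aspect ratio with Google's MediaPipe FaceMesh. The landmark indices and threshold are commonly used approximations rather than calibrated values, and the file name is a placeholder; an unusually low or irregular blink rate is only a hint, not proof, of manipulation.

```python
# Blink-rate sketch using MediaPipe FaceMesh and OpenCV
# (pip install mediapipe opencv-python). Indices and thresholds are
# illustrative approximations, not a calibrated deepfake detector.
import cv2
import mediapipe as mp

EYE = [33, 160, 158, 133, 153, 144]  # assumed landmark indices around one eye (p1..p6)
EAR_BLINK_THRESHOLD = 0.2            # eye treated as closed below this ratio

def eye_aspect_ratio(landmarks):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|): drops sharply when the eye shuts.
    p = [(landmarks[i].x, landmarks[i].y) for i in EYE]
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

cap = cv2.VideoCapture("suspect_video.mp4")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
blinks, frames, eye_closed = 0, 0, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        continue
    ear = eye_aspect_ratio(result.multi_face_landmarks[0].landmark)
    if ear < EAR_BLINK_THRESHOLD and not eye_closed:
        blinks, eye_closed = blinks + 1, True
    elif ear >= EAR_BLINK_THRESHOLD:
        eye_closed = False

cap.release()
minutes = frames / fps / 60
print(f"{blinks} blinks over {minutes:.1f} min "
      f"(people typically blink around 15-20 times per minute)")
```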

    LAWS THAT ARE APPLICABLE TO DEEPFAKES
    While there is no specific law against deepfake videos and photos, deepfakes and voice cloning scams come under sections 66C and 66D of the IT Act and Section 420 of the IPC
    SECTION 66C - Fraudulent use of the electronic signature, password, or any other unique identification feature of another person
    SECTION 66D - Cheating by impersonation, using any communication device or computer resource
    SECTION 420 OF IPC - Cheating and dishonestly inducing delivery of property

    "Depending on the intention of the crime committed via deepfake, one can be charged with defamation, threatening, extortion, or impersonation. A person's voice is data, and cloning of a voice is part of the forgery of electronic records," says Prashant Mali, data protection lawyer.

