Deepfakes, voice clones and AI images amplified disinformation on India-Pak conflict
During the 2025 India-Pakistan conflict, a new threat emerged on social media: the surge of disinformation powered by artificial intelligence (AI). As tensions intensified, AI-generated videos of political leaders, manipulated battlefield footage, and cloned voices flooded platforms, spreading false narratives at an unprecedented pace.
Fact-checkers and digital forensic teams found themselves racing against synthetic content designed to inflame sentiments, with some deepfakes gaining millions of views before being debunked. Platforms like X, WhatsApp, Facebook, and YouTube were inundated with content that felt urgent and authentic, yet much of it was fabricated. What stood out was not just the volume of disinformation, but the manner in which it was created and consumed. Emboldened by AI tools, bad-faith actors moved swiftly to produce a steady supply of convincing but counterfeit visuals and audio. Such fake content was inserted into breaking news on the military escalations between India and Pakistan with the aim of exploiting public outrage before the facts could catch up.
Here, we break down how AI-enabled disinformation took shape during the conflict and how you can spot AI-generated content.
One of the earliest examples came not during Operation Sindoor itself, but in the days following the terror attack in Pahalgam. Lieutenant Vinay Narwal and his wife Himanshi were on their honeymoon. Vinay, a young naval officer, was held at gunpoint, questioned about his religion, and shot dead.
The next day, photos of Himanshi, wearing red bangles and sitting in silent shock beside his body, were widely shared. The image quickly became iconic, used to symbolise grief, patriotism, and loss. But it was also politicised. It was framed across social media and certain media outlets to evoke the pain of a “newlywed Hindu woman” allegedly suffering at the hands of “Muslim attackers.” The image was weaponised to stoke communal sentiments.
“The first major instance we noticed,” said Nivedita Niranjankumar, News Editor at BOOM, “was a ‘Ghibli-style’ AI image of Himanshi beside her husband. It started circulating almost immediately after the photo went viral.”
But when Himanshi released a public appeal saying, “We don’t want people going against Muslims or Kashmiris. We want peace, and only peace”, the narrative changed. Her plea for harmony triggered a vicious backlash. What had started as widespread sympathy turned into online abuse. Right-wing accounts attacked her, accusing her of betraying the national mood. She was doxxed and trolled relentlessly.
The emergence of ‘Ghibli-style’ images marks another troubling trend in AI-generated art. Studio Ghibli, the Japanese animation studio renowned for films such as Spirited Away (2001) and Howl’s Moving Castle (2004), enjoys near-cult status among movie buffs. After the trend of AI-generated images mimicking Ghibli's style reached fever pitch, a 2016 video clip of Ghibli co-founder Hayao Miyazaki strongly condemning the use of AI models in animation resurfaced and went viral. In the video, Miyazaki, a beloved figure to many, says, “I’m utterly disgusted … I would never wish to incorporate this technology into my work. I strongly feel this is an insult to life itself.”
A model for female grief
While some of the AI images of Himanshi might have appeared harmless at first, others revealed a disturbing pattern. Later visuals showed Himanshi crying and wailing—images that were entirely fabricated to match a formulaic portrayal of female grief. “I think this fits a very gendered disinformation narrative. A woman is expected to express grief in a conspicuous manner in a terrible situation. In this instance, when that did not happen, a narrative was instead manufactured to fit that gendered expectation,” Nivedita said.
Nivedita believes that this early wave of manipulation played a role in the backlash Himanshi later faced. “Her image became the emotional face of the Pahalgam terror attack. But when she didn’t conform to the public’s idea of a heartbroken widow, people turned on her. It set the stage for the trolling and doxxing she faced later.”
A major driver of these AI visuals, she explained, was the absence of actual footage from the attack. “There were no visuals of the shooting, of people running or of the attackers. So people filled the gap with AI. Even entering basic prompts like ‘terrorists in a meadow’ or ‘Pahalgam attack’ created realistic images. That’s how accessible these tools are now.”
Unlike earlier disinformation campaigns, which relied on photoshopped images or old videos, this time AI tools were used to amplify a real event. “It didn’t invent the tragedy,” Nivedita pointed out. “But it dramatised it, visually and emotionally, to fit political agendas.”
Then came Operation Sindoor.
Weeks after the conflict, Prime Minister Narendra Modi declared, “Those who tried to erase the sindoor of our sisters have been razed to the ground.” Even India’s codename for the retaliation against Pakistan was framed within the Hindu symbolism of vermilion (sindoor), worn by married Hindu women and traditionally erased after a woman’s husband passes away.
Fabricating a narrative of Pakistan’s defeat
According to Nivedita, the disinformation around Operation Sindoor was designed to serve a singular purpose in India: to build a strong narrative around India’s military success and Pakistan’s defeat. “The deepfakes and AI-generated visuals were used to manufacture a sense of India’s triumph and Pakistan’s admission of loss,” she said.
Fact-checking organisations such as Alt News and BOOM Live played a critical role in countering disinformation while Operation Sindoor was underway.
A Washington DC-based think tank, the Center for the Study of Organized Hate (CSOH), released a detailed report analysing disinformation tactics in both India and Pakistan. The report warns that “The weaponisation of misinformation and disinformation during this conflict is not an isolated phenomenon, but part of a broader global trend in hybrid warfare.” The report offers a comprehensive list of instances when AI-generated content was used to depict concocted events. The think tank also noted that AI-generated content marked a significant escalation in disinformation tactics.
One of the first deepfakes was of Pakistan’s Director General of Inter-Services Public Relations (ISPR), Ahmed Sharif Chaudhry, admitting that two Pakistani JF-17 jets had been shot down by India. The video spread rapidly on the night of May 7, when Operation Sindoor began, even making it to prime-time TV debates before being debunked.
Soon after, another deepfake appeared, this time showing Pakistan’s Prime Minister Shehbaz Sharif conceding defeat and lamenting a lack of support from China and the UAE. Even though it was a hoax, the clip was shared by right-wing influencers and news outlets like Sudarshan News.
The most viral image during the operation was of a supposedly bombed Rawalpindi stadium, shared by X user Amitabh Chaudhary. The AI-generated image racked up over 9.6 million views but was not labelled or fact-checked on the platform. Another video falsely claimed that US President Donald Trump had issued a statement supporting a ceasefire because of Indian military success. This video used AI-generated narration and a simulated backdrop to appear legitimate.
Indian leaders weren’t spared either. Deepfakes of Modi, Home Minister Amit Shah and External Affairs Minister S Jaishankar declaring victory in the operation were also circulated. Several X accounts even shared an AI-generated image of Modi directing Pakistan Army Chief General Asim Munir to sign a ceasefire agreement.
The challenge for fact-checkers wasn’t just the volume, but also the believability. “We could tell some videos were fake by the way people in the videos spoke,” Nivedita explained. “But most viewers in India don’t know how Pakistani leaders sound. Many Pakistani news channels are blocked here. So for an average user, the fakes looked real.”
Grok
Another worrying trend during the conflict was the increasing reliance on AI tools like Grok for fact-checking, despite the technology not being equipped for it.
“We went through hundreds of replies where people were asking Grok to verify claims. In the early days of both the Pahalgam terror attack and Operation Sindoor, many of its responses were inaccurate or misleading,” said Nivedita. “We documented at least six clear instances where Grok provided twisted or entirely false summaries. People are turning to AI for verification, but the tech simply isn’t there yet. AI cannot replace human fact-checkers.”
One such instance involved a widely circulated photo of journalist Rana Ayyub standing with filmmaker Pooja Bhatt and actor Alia Bhatt. When an X user asked Grok, “Who is the lady in the red dress?”, the chatbot misidentified Rana Ayyub as Jyoti Rani Malhotra, a YouTuber arrested for allegedly spying for Pakistan during Operation Sindoor.
A quick search of “Is it true (@grok)” on X reveals hundreds of similar prompts, with users relying on the bot for real-time verification. In another case, Grok incorrectly claimed that two digitally altered photos of Congress leader Rahul Gandhi, purportedly posing with Jyoti Rani Malhotra, were authentic. BOOM later fact-checked the images: one was of Rahul Gandhi with Uttar Pradesh MLA Aditi Singh, and the other showed him with a party supporter during the Bharat Jodo Yatra.
Deepfakes are easy to make
While disinformation is nothing new, the rise of AI has transformed its scale, speed, and believability. “The disinformation or misinformation that you now see is on steroids, thanks to AI,” said Pamposh Raina, who heads the Deepfakes Analysis Unit (DAU), an initiative under the Misinformation Combat Alliance (MCA). The MCA is an industry body of fact-checkers and media houses in India. “Earlier, you could perhaps tell the difference between what is fake and what is not. Early deepfakes looked somewhat rudimentary.”
This shift towards more believable AI content becomes especially dangerous during high-stakes moments like elections or military escalations, when public attention is heightened and emotions are running high. “While deepfake videos are still expensive to produce, AI-manipulated content, especially using original video footage overlaid with cloned audio, is rampant and far cheaper,” Pamposh explained.
Tools like Google’s Veo 3 now allow anyone to create hyper-realistic AI-generated videos in minutes, with just a few prompts. While deepfake images are already widespread on social media, video generation, once a technical challenge, is rapidly catching up. Visual deepfakes can now mimic everything from accurate hand movements to flawless backgrounds. And audio cloning tools are increasingly capable of replicating real voices. “We have tested some of these tools, and the results are mind-boggling,” Nivedita said.
At present, the DAU focuses on countering AI-generated misinformation, specifically audio and video content. Its role is to verify, not fact-check, such material: verification simply confirms whether the content in question is authentic or whether it has been generated or manipulated using AI.
One trend Pamposh’s team has observed is the use of synthetic audio to alter real videos, creating the illusion that public figures said things they never did. Take, for example, the video of Pakistan’s Shehbaz Sharif admitting defeat. “On closer inspection, we found signs of tampering. The lip-sync was forced, and in one frame the moustache was visible, while in another, it had vanished,” Pamposh said. “These inconsistencies, like mismatched skin tones around the mouth, unnatural teeth, or glitchy movement during head turns, are red flags.”
How to spot a fake
Such inconsistencies are common in AI-manipulated content. “If you see a difference in a person's skin tone from the nose all the way down to the chin,” Pamposh explained, “the mouth area will be regenerated to match a particular audio. Sometimes the teeth might look off.” Algorithms also falter with head movement.
Even seemingly minor glitches can be revealing. “If a person is near a mic, you might see that the microphone appears to be compressed by their chin or jaw. That's a giveaway, because it is a sign that the area around the mouth has been algorithmically regenerated,” Pamposh further noted.
She admits these signs aren’t always easy for the general public to catch, but some basic checks help. “Take screenshots and run a reverse image search,” she suggested. Slowing down a video to look for possible lip-sync flaws can also reveal manipulated content.
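For readers comfortable with a little scripting, both of these checks can be partly automated. The sketch below is a minimal illustration, not any fact-checker's actual workflow: it assumes the OpenCV library (opencv-python) is installed, uses placeholder file names, and simply pulls periodic stills from a clip (which can then be put through a reverse image search) while writing a slowed-down copy for frame-by-frame lip-sync inspection.

```python
# Minimal sketch of two manual checks described above, automated with OpenCV:
# 1) save periodic screenshots for a reverse image search;
# 2) write a slowed-down copy of the clip for lip-sync inspection.
# File names and intervals are illustrative, not a prescribed workflow.
import cv2

SOURCE = "suspect_clip.mp4"      # hypothetical input file
FRAME_EVERY_SECONDS = 2          # grab one still every 2 seconds
SLOWDOWN_FACTOR = 4              # play back at quarter speed

cap = cv2.VideoCapture(SOURCE)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Slowed copy: identical frames written at a lower frame rate, so playback is slower.
writer = cv2.VideoWriter(
    "suspect_clip_slow.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps / SLOWDOWN_FACTOR,
    (width, height),
)

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)
    # Periodic screenshots to run through a reverse image search.
    if frame_index % int(fps * FRAME_EVERY_SECONDS) == 0:
        cv2.imwrite(f"still_{frame_index:05d}.jpg", frame)
    frame_index += 1

cap.release()
writer.release()
```

The stills can be uploaded to any reverse image search engine, while the slowed copy makes it easier to catch the mouth-area glitches and forced lip-sync that Pamposh describes.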
According to Nivedita, basic digital hygiene still applies: verify the source, cross-check the information, and use search engines to confirm whether a claim is being reported elsewhere.
Voice cloning is difficult to detect
Audio deepfakes, however, are especially hard to detect, Nivedita warned. Many available detection tools are limited in what they can catch, especially when noise or background music is used to mask the cloning. “You can’t just believe what you see or hear anymore,” she said. “You have to verify everything.”
Unsurprisingly, one of the most common forms of AI-manipulated content today involves synthetic audio layered over real video footage, said Pamposh. “We are increasingly seeing cases where genuine footage is manipulated to make it seem like someone said something they never did, simply by overlaying a cloned voice,” she added.
This kind of manipulation has become easier with the proliferation of free, user-friendly AI tools. According to Pamposh, such tools let a user generate voice samples that come remarkably close to the real thing. “A perfect clone might be hard to achieve, but even an approximate match can sound convincingly authentic, especially to the untrained ear,” she pointed out.
Pamposh also highlighted how cloning someone’s voice doesn’t require much raw material, particularly for public figures. “With someone like Prime Minister Modi, for instance, there’s a vast archive of speeches and recordings. That makes it relatively easy for these tools to generate audio that mimics his voice,” she said.
Audio, unlike video, offers fewer cues, and that is what makes it so dangerous. “Cloned voices are harder to detect than visual deepfakes,” said Nivedita. “We saw a notable spike in fake audio content during the Madhya Pradesh elections. And many detection tools struggle to flag them, especially when the clips are short or laced with background noise.”
Still, even convincing voice clones often have subtle inconsistencies. “You have to listen for telltale signs: variations in accent, rhythm, tone, or overall delivery,” Pamposh said. Comparing the questionable clip with known, verified audio can help expose the deception.
The limitations of free tools only add to the challenge. “You have to stay persistent. Run the audio through multiple tools, trim it down into smaller segments and keep testing them for inconsistencies,” Nivedita added.
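The trimming step itself is easy to script. Below is a minimal sketch under the assumption that the pydub library and ffmpeg are installed; the file name and segment lengths are placeholders, and running the resulting pieces through detection tools remains a manual, repetitive exercise.

```python
# Minimal sketch of the trimming step described above: cut a suspect clip into
# short, overlapping segments so each piece can be tested separately with
# whatever audio-detection tools are available. Names and durations are illustrative.
from pydub import AudioSegment

SOURCE = "suspect_audio.mp3"     # hypothetical input file
SEGMENT_MS = 10_000              # 10-second segments
OVERLAP_MS = 2_000               # 2-second overlap so no cue falls exactly on a cut

audio = AudioSegment.from_file(SOURCE)

start = 0
index = 0
while start < len(audio):                        # len() is the duration in milliseconds
    segment = audio[start:start + SEGMENT_MS]    # slicing is by milliseconds
    segment.export(f"segment_{index:02d}.wav", format="wav")
    start += SEGMENT_MS - OVERLAP_MS
    index += 1

print(f"Wrote {index} segments; test each one separately.")
```

Shorter, cleaner segments also make it easier to compare a suspicious stretch against verified recordings of the same speaker, as Pamposh suggests.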
AI literacy
As deepfakes become increasingly convincing, Pamposh believes one critical defence system is being overlooked: media literacy. “Deepfakes are getting more sophisticated, but people don’t always know what to look for,” she said. “Some things that look AI-generated may not be and vice versa. Sometimes, it’s just someone using a fancy phone and a few tweaks to mislead others.”
According to her, AI literacy is not a future challenge; it is an urgent, current need. “I am not going to call this the next wave of the problem, because the problem is already here. We are not dealing with something on the horizon. We are in the middle of it,” Pamposh said.
She pointed out that conversations about generative AI (or gen AI) often remain confined to echo chambers. “We live in big cities, we work in media or tech-adjacent spaces, we are curious, we read up,” she said. “But a bulk of our population doesn’t operate in these circles. They’re not even aware that what they’re seeing or hearing could be completely synthetic.” Generative AI refers to artificial intelligence models that can produce new content such as images, audio, video, or text based on patterns learned from vast datasets.
Meanwhile, bad faith actors are evolving fast, outpacing detection mechanisms. “Now they are inserting things like subtle breathing sounds into synthetic audio to make it seem more human,” she noted. “Why? Because detection tools don’t perform well when background noise or music is present. It throws off the system. Audio detection is incredibly difficult because you don’t have the visual cues you can cross-check as you can with videos.”
This makes media and AI literacy absolutely essential, not just for journalists and fact-checkers, but for everyone. “People need to understand that not everything they hear or see is true anymore,” Pamposh said. “And it’s not just AI. AI has been in our lives for a long time. What we are really dealing with now is the weaponisation of generative AI.”
Her work at DAU is aimed at building defences. “We are a project dedicated to verifying AI-generated misinformation,” she explained. “Fact-checkers from across India and beyond escalate suspicious content to us. We use a mix of human expertise and detection tools to analyse whether something is synthetic or authentic.”
“Whenever we publish a report, we document our methodology in detail,” she further said. “You can go to our website and see exactly how we reached our conclusions. We want people to replicate that thinking, to recognise the signs when they encounter a suspicious video or audio clip.”
Yet, even as awareness grows, digital platforms continue to fall short of their responsibilities. Many now require AI-generated content to carry visible labels, but in practice, enforcement is inconsistent. “AI labelling is a requisite, especially on platforms like Instagram, where a lot of AI-generated visuals are shared,” said Nivedita, adding, “But it’s not being implemented the way it should be.”
She explained that while policies have evolved within social media companies, implementation hasn’t kept pace. “Right now, AI labelling is more aspirational than enforced. Platforms need to ensure that all AI-generated content, harmful or not, is clearly labelled so users can make informed decisions.”
But the responsibility doesn’t lie solely with tech platforms. Media organisations, too, must step up. “We at BOOM have repeatedly asked newsrooms to clearly mark AI-generated visuals, even if it’s just a representational image,” Nivedita said. “Some do it, but many still don’t. It’s a point we keep raising in meetings.”