#FactCheck - AI-Manipulated Video Falsely Claims ₹50 Crore Deal Involving Bhupen Bora
Executive Summary
A purported news clip circulating on social media claims that the Bharatiya Janata Party (BJP) purchased Bhupen Bora, a leader of the Indian National Congress, for ₹50 crore as part of a political deal in Assam. The viral clip further alleges that the transaction took place under the leadership of Assam Chief Minister Himanta Biswa Sarma and included an agreement to induct several Congress leaders into the BJP.
However, research by CyberPeace found the viral claim to be false and revealed that the original news video had been manipulated using AI and shared with misleading claims.
Claim
On February 18, 2026, a user shared the viral video on Facebook, claiming that the Assam BJP had bought a Congress leader who had lost the last three elections for ₹50 crore, and that the alleged deal led by Himanta Biswa Sarma had drawn public criticism.

Fact Check:
To verify the authenticity of the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the research, we found the original version of the video published on the website of Aaj Tak on February 16, 2026. In the original report, the anchor is only seen reporting on Bhupen Bora’s resignation from the party. The report does not mention any alleged financial transaction or political deal, contrary to the claims made in the viral clip.

In the next stage of the research, the viral video was analysed using the AI detection tool AURGIN AI, which identified the video as AI-generated.

Conclusion
Our research found that users had manipulated the original news broadcast using AI and shared it with misleading claims. The viral clip does not show any real financial deal between Bhupen Bora and the Assam Chief Minister.
Related Blogs

Introduction
When a tragedy strikes, moments are fragile, people are vulnerable, emotions run high, and every second counts. In such critical situations, information becomes as crucial as food, water, shelter, and medication. Yet a single unverified report can trigger stampedes and chaos. Alongside the tragedy, whether natural or man-made, emerges another threat: misinformation. People, desperate for answers, cling to whatever they can find.
Tragedies can take many forms. These may include natural disasters, mass accidents, terrorist activities, or other emergencies. During the 2023 earthquakes in Turkey, misinformation spread on social media claiming that the Yarseli Dam had cracked and was about to burst. People believed it and began migrating from the area. Panic followed, and search and rescue teams stopped operations in that zone. Precious hours were lost. Later, it was confirmed to be a rumour. By then, the damage was already done.
Similarly, after the recent plane crash in Ahmedabad, India, numerous rumours and WhatsApp messages spread rapidly. One message claimed to contain the investigation report on the crash of Air India flight AI-171. It was later called out by PIB and declared fake.
These examples show how misinformation can take control of already painful moments. During emergencies, when emotions are intense and fear is widespread, false information spreads faster and hits harder. Some people share it unknowingly, while others do so to gain attention or push a certain agenda. But for those already in distress, the effect is often the same. It brings more confusion, heightens anxiety, and adds to their suffering.
Understanding Disasters and the Role of Media in Crisis
A disaster can be defined as a natural or human-caused event that transforms the usual life of a society into a crisis far beyond its existing response capacity. Its effects range from minimal to severe, from mere disruption of daily routines to the inability to meet basic needs such as food, water and shelter. Hence, a disaster is not just a sudden event; it becomes a disaster when it overwhelms a community's ability to cope.
To cope with such situations, there is an organised approach called disaster management. It includes preventive measures, minimising damage and helping communities recover. Governments and other public institutions were once the main actors in disaster management, but today many other entities play a role, including academic institutions, media outlets and even ordinary people.
Communication is a vital element of disaster management; done correctly, it saves lives. Vulnerable people need to know what is happening, what they should do and where to seek help. But today's instantaneous communication also carries risk.
Research shows that the media often fails to focus on disaster preparedness. For example, studies found that during the 2019 Istanbul earthquake, the media focused more on dramatic scenes than on educating people. Similar trends were seen during the 2023 Turkey earthquakes. Rather than helping people prepare or stay calm, much of the media coverage amplified fear and sensationalised suffering. This shows a shift from preventive, helpful reporting to reactive, emotional storytelling. In doing so, the media sometimes fails in its duty to support resilience and, worse, can become a channel for spreading misinformation during already traumatic events. However, fighting misinformation is not merely a moral obligation; spreading it is penalised under official disaster management law. Section 54 of the Disaster Management Act, 2005 states that "Whoever makes or circulates a false alarm or warning as to disaster or its severity or magnitude, leading to panic, shall, on conviction, be punishable with imprisonment which may extend to one year or with a fine."
AI as a Tool in Countering Misinformation
AI has emerged as a powerful mechanism to fight against misinformation. AI technologies like Natural Language Processing (NLP) and Machine Learning (ML) are effective in spotting and classifying misinformation with up to 97% accuracy. AI flags unverified content, leading to a 24% decrease in shares and 7% drop in likes on platforms like TikTok. Up to 95% fewer people view content on Facebook when fact-checking labels are used. Facebook AI also eliminates 86% of graphic violence, 96% of adult nudity, 98.5% of fake accounts and 99.5% of content related to terrorism. These tools help rebuild public trust in addition to limiting the dissemination of harmful content. In 2023, support for tech companies acting to combat misinformation rose to 65%, indicating a positive change in public expectations and awareness.
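The NLP and ML techniques mentioned above are typically supervised text classifiers trained on labelled examples. As a minimal, purely illustrative sketch (not any platform's actual system, and the sample texts below are invented for demonstration), a bag-of-words Naive Bayes model can score an incoming message as more or less likely to be unreliable:

```python
# Toy sketch of ML-style misinformation flagging: add-one-smoothed
# Naive Bayes over a tiny hand-labelled sample. Real systems use far
# larger datasets and richer models; this only shows the mechanism.
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) with label in {'reliable', 'suspect'}."""
    counts = {"reliable": Counter(), "suspect": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the more likely label via log prior + smoothed log likelihoods."""
    vocab = set(counts["reliable"]) | set(counts["suspect"])
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training examples, for illustration only
samples = [
    ("dam has cracked evacuate now forward this", "suspect"),
    ("officials confirm dam is intact no evacuation ordered", "reliable"),
    ("secret report leaked forward before deleted", "suspect"),
    ("ministry press release lists verified helpline numbers", "reliable"),
]
counts, totals = train(samples)
print(classify("forward this secret report now", counts, totals))  # → suspect
```

In production, such a score would not remove content outright but feed the kind of flagging and label pipeline described above, with human fact-checkers in the loop.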
How to Counter Misinformation
Experts should step up in such situations. Social media has allowed many so-called experts to spread fake information without any real knowledge, research, or qualification. In such conditions, real experts such as authorities, doctors, scientists, public health officials, researchers, etc., need to take charge. They can directly address the myths and false claims and stop misinformation before it spreads further and reduce confusion.
Responsible journalism is crucial during crises. In times of panic, people look at the media for guidance. Hence, it is important to fact-check every detail before publishing. Reporting that is based on unclear tips, social media posts, or rumours can cause major harm by inciting mistrust, fear, or even dangerous behaviour. Cross-checking information, depending on reliable sources and promptly fixing errors are all components of responsible journalism. Protecting the public is more important than merely disseminating the news.
Focus on accuracy rather than speed. News spreads in a blink in today's world, and media outlets and influencers often come under pressure to publish first. But in tragic situations like natural disasters and disease outbreaks, being first matters far less than being accurate: a single piece of misinformation can spark mass panic, slow down emergency efforts and lead people to make rash decisions. Taking a little more time to check the facts ensures that the information being shared is helpful, not harmful. Accuracy may save numerous lives during tragedies.
Misinformation spreads quickly, and it can only be countered if people learn to critically evaluate what they hear and see. This entails being able to spot biased or deceptive headlines, cross-check claims and identify reliable sources. Digital literacy is of utmost importance; it makes people less susceptible to fear-based rumours, conspiracy theories and hoaxes.
Disaster preparedness programs should include awareness about the risks of spreading unverified information. Communities, schools and media platforms must educate people on how to respond responsibly during emergencies by staying calm, checking facts and sharing only credible updates. Spreading fake alerts or panic-inducing messages during a crisis is not only dangerous, but it can also have legal consequences. Public communication must focus on promoting trust, calm and clarity. When people understand the weight their words can carry during a crisis, they become part of the solution, not the problem.
References:
- https://dergipark.org.tr/en/download/article-file/3556152
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
- https://english.mathrubhumi.com/news/india/fake-whatsapp-message-air-india-crash-pib-fact-check-fcwmvuyc

Executive Summary:
A viral video (archived link) circulating on social media claims that Vice Admiral AN Pramod stated India would seek assistance from the United States and President Trump if Pakistan launched an attack, portraying India as dependent rather than self-reliant. Research traced the extended footage to the Press Information Bureau’s official YouTube channel, published on 11 May 2025. In the authentic video, the Vice Admiral makes no such remark and instead concludes his statement with, “That’s all.” Further analysis using the AI Detection tool confirmed that the viral clip was digitally manipulated with AI-generated audio, misrepresenting his actual words.
Claim:
An X user posted the viral video with the caption:
”India sells itself as a regional superpower, but its Navy Chief’s own words betray that image. If Pakistan attacks, their plan is to involve Trump, not fight back. This isn’t strategic partnership; it’s dependency in uniform”.
In the video, the Vice Admiral was heard saying:
“We have worked out among three services, this time if Pakistan dares take any action, and Pakistan knows it, what we are going to do. We will complain against Pakistan to the United States of America and President Trump, like we did earlier in Operation Sindoor.”

Fact Check:
Upon conducting a reverse image search on key frames from the video, we located the full version of the video on the official YouTube channel of the Press Information Bureau (PIB), published on 11 May 2025. In this video, at the 59:57-minute mark, the Vice Admiral can be heard saying:
“This time if Pakistan dares take any action, and Pakistan knows it, what we are going to do. That’s all.”

Further analysis was conducted using the Hive Moderation tool to examine the authenticity of the circulating clip. The results indicated that the video had been artificially generated, with clear signs of AI manipulation. This suggests that the content was not genuine but rather created with the intent to mislead viewers and spread misinformation.

Conclusion:
The viral video attributing remarks to Vice Admiral AN Pramod about India seeking U.S. and President Trump’s intervention against Pakistan is misleading. The extended speech, available on the Press Information Bureau’s official YouTube channel, contained no such statement. Instead of the alleged claim, the Vice Admiral concluded his comments by saying, “That’s all.” AI analysis using Hive Moderation further indicated that the viral clip had been artificially manipulated, with fabricated audio inserted to misrepresent his words. These findings confirm that the video is altered and does not reflect the Vice Admiral’s actual remarks.
Claim: Viral video claiming Vice Admiral AN Pramod said that if Pakistan attacks again, India will complain to the US and President Trump.
Claimed On: Social Media
Fact Check: False and Misleading

Introduction
In April 2025, security researchers at Oligo Security exposed a substantial and wide-ranging threat impacting Apple's AirPlay protocol and its use via third-party Software Development Kits (SDKs). According to the research, the newly discovered set of vulnerabilities, titled "AirBorne", had the potential to enable remote code execution, permission escapes, and private data leakage across a wide range of Apple and third-party AirPlay-compatible devices. With well over 2.35 billion active Apple devices globally and tens of millions of third-party products that incorporate the AirPlay SDK, the scope of the problem is enormous. These wireless vulnerabilities pose not only a technical threat but, increasingly, an enterprise- and consumer-level security concern.
Understanding AirBorne: What’s at Stake?
AirBorne is the title given to a set of 23 vulnerabilities identified in the AirPlay communication protocol and its related SDK utilised by third-party vendors. Seventeen have been given official CVE designations. The most severe among them permit Remote Code Execution (RCE) with zero or limited user interaction. This provides hackers the ability to penetrate home networks, business environments, and even cars with CarPlay technology onboard.
Types of Vulnerabilities Identified
AirBorne vulnerabilities support a range of attack types, including:
- Zero-Click and One-Click RCE
- Access Control List (ACL) bypass
- User interaction bypass
- Local arbitrary file read
- Sensitive data disclosure
- Man-in-the-middle (MITM) attacks
- Denial of Service (DoS)
Each vulnerability can be used individually or chained together to escalate access and broaden the attack surface.
Remote Code Execution (RCE): Key Attack Scenarios
- macOS – Zero-Click RCE (CVE-2025-24252 & CVE-2025-24206) These weaknesses enable attackers to run code on a macOS system without any user action, as long as the AirPlay receiver is enabled and configured to accept connections from anyone on the same network. The threat of wormable malware propagating via corporate or public Wi-Fi networks is especially concerning.
- macOS – One-Click RCE (CVE-2025-24271 & CVE-2025-24137) If AirPlay is set to "Current User," attackers can exploit these CVEs to deploy malicious code with a single click by the user. This raises the level of threat in shared office or home networks.
- AirPlay SDK Devices – Zero-Click RCE (CVE-2025-24132) Third-party speakers and receivers through the AirPlay SDK are particularly susceptible, where exploitation requires no user intervention. Upon compromise, the attackers have the potential to play unauthorised media, turn microphones on, or monitor intimate spaces.
- CarPlay Devices – RCE Over Wi-Fi, Bluetooth, or USB CVE-2025-24132 also affects CarPlay-enabled systems. Under certain circumstances, nearby attackers can take advantage of predictable Wi-Fi credentials, intercept Bluetooth PINs, or use USB connections to take over dashboard features, which may distract drivers or allow eavesdropping on in-car conversations.
Other Exploits Beyond RCE
AirBorne also opens the door for:
- Sensitive Information Disclosure: Exposing private logs or user metadata over local networks (CVE-2025-24270).
- Local Arbitrary File Access: Letting attackers read restricted files on a device (CVE-2025-24270 group).
- DoS Attacks: Exploiting NULL pointer dereferences or misformatted data to crash processes like the AirPlay receiver or WindowServer, forcing user logouts or system instability (CVE-2025-24129, CVE-2025-24177, etc.).
How the Attack Works: A Technical Breakdown
AirPlay communicates on port 7000 using HTTP and RTSP, with payloads typically encoded in Apple's property list (plist) format. The exploits stem from improper handling of these plists, especially when type checks are skipped or untrusted data is assumed to be well-formed. For instance, CVE-2025-24129 illustrates how a malformed plist can produce type confusion that crashes a process or, depending on configuration, executes code.
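The type-confusion pattern described above can be illustrated with a small sketch. This is not Apple's actual AirPlay code, and the field names `deviceName` and `volume` are hypothetical; the point is that a handler which validates a plist field's type before using it fails safely, whereas one that assumes the type is the class of bug AirBorne exploits:

```python
# Illustrative sketch (not Apple's implementation): defensively parsing
# a plist payload of the kind AirPlay exchanges over port 7000. The
# AirBorne bugs reportedly involve handlers that assume a field has one
# type when an attacker can supply another.
import plistlib

def parse_airplay_request(payload: bytes) -> dict:
    """Parse a plist payload, validating field types before use.

    'deviceName' and 'volume' are hypothetical fields for illustration.
    """
    data = plistlib.loads(payload)
    if not isinstance(data, dict):
        raise ValueError("top-level plist must be a dict")
    name = data.get("deviceName")
    if not isinstance(name, str):  # reject type confusion, e.g. a dict here
        raise ValueError("deviceName must be a string")
    volume = data.get("volume", 0.0)
    if not isinstance(volume, (int, float)):
        raise ValueError("volume must be numeric")
    return {"deviceName": name, "volume": float(volume)}

# A well-formed request parses cleanly...
good = plistlib.dumps({"deviceName": "Living Room", "volume": 0.5})
print(parse_airplay_request(good))

# ...while a type-confused one is rejected up front instead of being
# passed along to code that assumes a string.
bad = plistlib.dumps({"deviceName": {"unexpected": "dict"}})
try:
    parse_airplay_request(bad)
except ValueError as e:
    print("rejected:", e)
```

Skipping the `isinstance` checks above is the moral equivalent of the flawed handling: the malformed value flows onward until some later code dereferences it as the wrong type.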
An attacker must be on the same Wi-Fi network as the targeted device, whether via a compromised laptop, shared public wireless, or an insecure corporate connection. Once in proximity, the attacker can use the AirBorne bugs to hijack AirPlay-enabled devices. From there, malicious code can be deployed to spy on users, maintain long-term network access, or spread to other devices on the network, potentially building a botnet or stealing sensitive data.
The Espionage Angle
Most third-party AirPlay-compatible devices, including smart speakers, contain built-in microphones. In theory, that leaves the door open for such devices to become eavesdropping tools. While Oligo did not demonstrate a functional espionage exploit, the risk underscores the gravity of the situation.
The CarPlay Risk Factor
Beyond smart home appliances, Oligo also found AirBorne vulnerabilities affecting Apple CarPlay. When exploited, these could enable attackers to take over a car's entertainment system. Fortunately, such attacks require direct pairing over USB or Bluetooth and are far less practical. Even so, they illustrate how connected systems remain at risk in settings ranging from homes to cars.
How to Protect Yourself and Your Organisation
- Immediate Actions:
- Update Devices: Ensure all Apple devices and third-party gadgets are upgraded to the latest software version.
- Disable AirPlay Receiver: If AirPlay is not in use, disable it in system settings.
- Restrict AirPlay Access: Use firewalls to block port 7000 from untrusted IPs.
- Set AirPlay to “Current User” to limit network-based attacks.
- Organisational Recommendations:
- Communicate the patch urgency to employees and stakeholders.
- Inventory all AirPlay-enabled hardware, including in meeting rooms and vehicles.
- Isolate vulnerable devices on segmented networks until updated.
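To help with the inventory step above, a quick way to spot AirPlay receivers on a network segment is to check which hosts accept TCP connections on port 7000, the AirPlay control port mentioned earlier. The sketch below is a minimal example; the host list is a placeholder you would replace with your own device IPs, and a closed port here does not guarantee a device is safe (AirPlay can also be reached over other transports):

```python
# Minimal sketch: check whether hosts accept TCP connections on port
# 7000 (the AirPlay control port) to help inventory AirPlay receivers.
import socket

def airplay_port_open(host: str, port: int = 7000, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

for host in ["127.0.0.1"]:  # placeholder: replace with your device IPs
    status = "open" if airplay_port_open(host) else "closed/filtered"
    print(f"{host}: port 7000 {status}")
```

Hosts that report the port open are candidates for patching or for isolation on a segmented network until updated.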
Conclusion
The AirBorne vulnerabilities illustrate that even mature systems such as Apple's are not immune from foundational security weaknesses. The extensive deployment of AirPlay across devices, industries, and ecosystems makes these vulnerabilities a systemic threat. Oligo's discovery has served to catalyse immediate response from Apple, but since third-party devices remain vulnerable, responsibility falls to users and organisations to install patches, implement robust configurations, and compartmentalise possible attack surfaces. Effective proactive cybersecurity hygiene, network segmentation, and timely patches are the strongest defences to avoid these kinds of wormable, scalable attacks from becoming large-scale breaches.
References
- https://www.oligo.security/blog/airborne
- https://www.wired.com/story/airborne-airplay-flaws/
- https://thehackernews.com/2025/05/wormable-airplay-flaws-enable-zero.html
- https://www.securityweek.com/airplay-vulnerabilities-expose-apple-devices-to-zero-click-takeover/
- https://www.pcmag.com/news/airborne-flaw-exposes-airplay-devices-to-hacking-how-to-protect-yourself
- https://cyberguy.com/security/hackers-breaking-into-apple-devices-through-airplay/