#FactCheck: Viral Video Claiming IAF Air Chief Marshal Acknowledged Loss of Jets Found Manipulated
Research Wing, Innovation and Research
Published on Aug 14, 2025
Executive Summary:
A video circulating on social media falsely claims to show Indian Air Chief Marshal AP Singh admitting that India lost six jets and a Heron drone during Operation Sindoor in May 2025. Analysis reveals that the footage was digitally manipulated: an AI-generated voice clone of Air Chief Marshal Singh was inserted into his recent speech, which was streamed live on August 9, 2025.
Claim:
A viral video (archived video) (another link) shared by an X user carries the caption: “Breaking: Finally Indian Airforce Chief admits India did lose 6 Jets and one Heron UAV during May 7th Air engagements.” The clip purports to show the Air Chief Marshal admitting these losses during Operation Sindoor.
Fact Check:
A reverse image search on key frames from the video led us to a clip posted by the official X handle of ANI. After watching the full clip, we found no mention of the alleged admission.
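For readers who want to replicate this step, below is a minimal sketch of how key frames can be pulled from a clip for reverse image searching. It assumes the OpenCV library (pip install opencv-python); the filename and sampling interval are placeholders.

```python
# Minimal sketch: extract key frames from a video for reverse image search.
# "viral_clip.mp4" is a placeholder filename.
import cv2

def extract_key_frames(video_path: str, every_n_seconds: int = 5) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            cv2.imwrite(f"frame_{frame_idx:06d}.jpg", frame)  # upload these to a reverse image search
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

print(extract_key_frames("viral_clip.mp4"))
```

Each saved frame can then be submitted to a reverse image search engine to trace the footage to its original source.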
On further research, we found an extended version of the video on the official YouTube channel of ANI, published on 9 August 2025. At the 16th Air Chief Marshal L.M. Katre Memorial Lecture in Marathahalli, Bengaluru, Air Chief Marshal AP Singh did not mention any loss of six jets or a drone in relation to the conflict with Pakistan. The discrepancies observed in the viral clip suggest that portions of the audio may have been digitally manipulated.
The audio in the viral video, particularly the segment at the 29:05 minute mark alleging the loss of six Indian jets, appears manipulated, with noticeable inconsistencies in tone and clarity.
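One simple way to examine such a segment is to plot its spectrogram and look for abrupt spectral seams or unnaturally clean transitions where a cloned voice may have been spliced in. The sketch below assumes the librosa and matplotlib libraries and a placeholder audio filename; it is an illustrative starting point, not a forensic pipeline.

```python
# Minimal sketch: visualise the audio around a suspect timestamp to look for
# splices (abrupt spectral changes, unnatural pauses). "speech_audio.wav" and
# the 29:05 offset are placeholders for the extracted audio track.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

offset = 29 * 60 + 5 - 10          # start 10 s before the 29:05 mark
y, sr = librosa.load("speech_audio.wav", sr=None, offset=offset, duration=20)

S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
img = librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="hz")
plt.colorbar(img, format="%+2.0f dB")
plt.title("Spectrogram around the suspect segment")
plt.savefig("suspect_segment.png")
```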
Conclusion:
The viral video claiming that Air Chief Marshal AP Singh admitted to the loss of six jets and a Heron UAV during Operation Sindoor is misleading. A reverse image search traced the footage to its original source, where no such remarks were made. Further, an extended version on ANI’s official YouTube channel confirmed that no reference was made to the alleged losses during the 16th Air Chief Marshal L.M. Katre Memorial Lecture. Additionally, the viral video’s audio, particularly around the 29:05 mark, showed signs of manipulation, with noticeable inconsistencies in tone and clarity.
The Digital Personal Data Protection Act, 2023 (DPDP Act) is an essential step towards protecting, prioritising, and promoting users’ privacy and data protection. The Act is designed to prioritise user consent in data processing while assuring uninterrupted services such as online shopping through intermediary platforms. It specifies that once a user provides consent, a platform can process the data until the user withdraws that consent. This ensures that the user retains full control over their data and how it is used.
A Keen Outlook
The Act also provides for purpose limitation in data processing. This prevents the misuse of data and ensures that processed data serves only the purpose for which it was originally obtained from the user.
Data Fiduciaries and Processing on Online Shopping Platforms: The Act places strong emphasis on users’ consent. Once provided, the Data Fiduciary can continue to process the data until the Data Principal specifically withdraws it.
Detailed Analysis
Consent as a Foundation: The Act makes the user's consent the backbone of data processing and sets clear boundaries: collection, processing, and storage must all comply with the user's consent.
Uninterrupted Data Processing: With user consent in place, intermediaries face no time restriction; processing may continue for as long as the user does not withdraw consent.
Consent and Order Fulfilment: Consent, once provided, covers all activities related to the specific purpose for which it was given, including subsequent actions such as order fulfilment.
Detailed Analysis
Purpose-Limited Consent: The consent given is purpose-limited; the platform cannot repurpose the obtained data for unrelated uses.
Seamless User Experience: Because consent covers the full transaction, users are spared the annoyance of repeated consent requests during ongoing activities. A minimal sketch of such a purpose-limited consent check appears below.
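To make the idea concrete, here is a minimal sketch of a purpose-limited consent check. The data model and names are hypothetical illustrations, not an implementation prescribed by the DPDP Act.

```python
# Hypothetical sketch of a purpose-limited consent record (illustrative only;
# not an implementation prescribed by the DPDP Act).
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str           # the specific purpose consent was given for
    withdrawn: bool = False

    def permits(self, requested_purpose: str) -> bool:
        # Processing is allowed only while consent stands, and only for
        # the purpose it was originally given for.
        return not self.withdrawn and requested_purpose == self.purpose

consent = ConsentRecord(user_id="u123", purpose="order_fulfilment")
print(consent.permits("order_fulfilment"))      # True: covered by the consent
print(consent.permits("targeted_advertising"))  # False: purpose-limited
consent.withdrawn = True
print(consent.permits("order_fulfilment"))      # False: withdrawal stops processing
```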
Data Retention and Erasure on Online Platforms: Platforms must ensure data minimisation once the data's utilisation period ends. This obligation extends to any third-party processors they engage.
Detailed Analysis
Minimisation and Security Assurance: Compulsory data removal after the utilisation period reduces the volume of data platforms hold, which in turn minimises the risk to that data. A retention-purge routine of the kind sketched below is one way platforms operationalise this.
Third-Party Accountability: The same erasure obligations extend to third-party processors, protecting user privacy across the processing chain.
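As an illustration of the retention obligation, the following sketch purges records whose assumed retention window has lapsed. The schema and the 90-day window are hypothetical; a real system would also have to propagate deletion to third-party processors.

```python
# Hypothetical retention purge: drop records older than an assumed 90-day
# utilisation period. Schema and window are placeholders for illustration.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records still within their retention window."""
    now = datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

records = [
    {"user_id": "u1", "collected_at": datetime.now(timezone.utc) - timedelta(days=10)},
    {"user_id": "u2", "collected_at": datetime.now(timezone.utc) - timedelta(days=120)},
]
print(len(purge_expired(records)))  # 1: the 120-day-old record is dropped
```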
Influence from Global Frameworks
The Act draws on global trends and similar legislation, such as the European Union's GDPR. Here are some notable changes that intermediaries and social media platforms have experienced after the implementation of the DPDP Act, 2023:
Solidified Consent Mechanisms: Platforms and intermediaries must ensure that users' consent is categorically given, informed, and specific to the purpose for which the data is obtained. This should lead to more user-friendly consent forms and prompts.
Data Minimisation: Platforms should collect only the data necessary for the specified purpose and not retain information beyond its utility.
Transparency and Accountability: Platforms that collect data must ensure transparency in their data collection, processing, and sharing practices. This involves more detailed policies and regular audits.
Data Portability: Users have the right to request a copy of their data in a usable, machine-readable format, allowing them to switch platforms effectively (see the sketch after this list).
Right to Erasure: Users can request the deletion of their data, also referred to as the “right to be forgotten”.
Prescribed Reporting: In the event of a data breach, intermediary platforms are required to report the incident to the regulatory authorities within a specified timeline.
Data Protection Officers: Given the rise in data breaches, large platforms must appoint data protection officers responsible for ensuring compliance with data protection guidelines.
Disciplined Policies: Non-compliance can lead to substantial fines, making investment in data protection measures indispensable.
Third-Party Audits: Intermediaries must undergo security audits by external auditors to ensure they meet compliance expectations.
Third-Party Information Sharing Restrictions: Sharing users' personal data with third parties (such as advertisers) is subject to more detailed and disciplined guidelines and requires user consent.
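As a concrete illustration of data portability, the sketch below serialises a user's records into machine-readable JSON. The schema and field names are hypothetical, chosen only to show the shape of such an export.

```python
# Hypothetical data-portability export: a user's records serialised to
# machine-readable JSON. Schema and field names are illustrative only.
import json

def export_user_data(user_id: str, store: dict) -> str:
    """Return the user's records as a portable JSON document."""
    payload = {
        "user_id": user_id,
        "profile": store.get("profile", {}),
        "orders": store.get("orders", []),
    }
    return json.dumps(payload, indent=2)

store = {"profile": {"name": "A. User"}, "orders": [{"id": 1, "item": "book"}]}
print(export_user_data("u123", store))
```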
Conclusion
The Digital Personal Data Protection Act, 2023 prioritises user consent, ensuring uninterrupted services and purpose-limited data processing. It aims to prevent data misuse while emphasising seamless user experiences and data minimisation. Drawing inspiration from global frameworks like the EU's GDPR, it introduces solidified consent mechanisms, transparency, and accountability. Users gain rights such as data portability and data deletion requests. Non-compliance results in significant fines. This legislation sets a new standard for user privacy and data protection, empowering users and holding platforms accountable. In an evolving digital landscape, it plays a crucial role in ensuring data security and responsible data handling.
With the rise of AI deepfakes and manipulated media, it has become difficult for the average internet user to know what to trust online. Synthetic media can have serious consequences, from virally spreading election disinformation and medical misinformation to revenge porn and financial fraud. Recently, a Pune man lost ₹43 lakh when he invested money based on a deepfake video of Infosys founder Narayana Murthy. In another case, that of Babydoll Archi, a woman from Assam had her likeness deepfaked by an ex-boyfriend to create revenge porn.
Image and video manipulation used to leave observable traces. Online sources may advise examining the edges of objects in an image, checking for inconsistent patterns and lighting differences, observing a speaker's lip movements in a video, or counting the fingers on a person's hand. Unfortunately, as the technology improves, such folk advice may no longer help users identify synthetic and manipulated media.
The Coalition for Content Provenance and Authenticity (C2PA)
One interesting project in the area of trust-building under these circumstances has been the Coalition for Content Provenance and Authenticity (C2PA). Formed in 2021 by Adobe and Microsoft, C2PA is a collaboration between major players in AI, social media, journalism, and photography, among others. It set out to create a standard that lets publishers of digital media prove its authenticity and track changes as they occur.
When photos and videos are captured, they generally store metadata like the date and time of capture, the location, the device it was taken on, etc. C2PA developed a standard for sharing and checking the validity of this metadata, and adding additional layers of metadata whenever a new user makes any edits. This creates a digital record of any and all changes made. Additionally, the original media is bundled with this metadata. This makes it easy to verify the source of the image and check if the edits change the meaning or impact of the media. This standard allows different validation software, content publishers and content creation tools to be interoperable in terms of maintaining and displaying proof of authenticity.
Source: C2PA website
The standard is intended to be used on an opt-in basis and can be likened to a nutrition label for digital media. Importantly, it does not limit the creativity of fledgling photo editors or generative AI enthusiasts; it simply provides consumers with more information about the media they come across.
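To illustrate the provenance idea described above, here is a toy sketch that binds metadata to a file's cryptographic hash and appends a new entry for every edit. It shows the concept only; the real C2PA standard uses cryptographically signed manifests embedded in the asset, not a plain JSON log like this.

```python
# Toy provenance log in the spirit of C2PA: each entry binds an action to the
# exact bytes of the file via a SHA-256 hash. Illustrative only; real C2PA
# manifests are signed and embedded in the asset itself.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_action(manifest: list, path: str, action: str, author: str) -> None:
    manifest.append({
        "asset_hash": sha256_of(path),  # ties the entry to this exact file
        "action": action,               # e.g. "captured", "cropped"
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

manifest: list = []
record_action(manifest, "photo.jpg", "captured", "camera-01")  # placeholder file
# ...an editor saves changes back to photo.jpg...
record_action(manifest, "photo.jpg", "cropped", "editor-app")
print(json.dumps(manifest, indent=2))
```

A verifier can recompute the hash of the file it received and compare it against the latest entry; any undisclosed change breaks the chain.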
Could C2PA be Useful in an Indian Context?
The World Economic Forum's Global Risks Report 2024 identifies India as a significant hotspot for misinformation. The recent AI Regulation report by MeitY indicates an interest in tools for watermarking AI-generated synthetic content, for ease of detecting and tracking harmful outcomes. C2PA could be useful in this regard, as it takes a holistic approach to tracking media manipulation, even in cases where AI is not the medium.
Currently, 26 India-based organisations, such as the Times of India and Truefy AI, have signed up to the Content Authenticity Initiative (CAI), a community that contributes to the development and adoption of tools and standards like C2PA. However, people increasingly use platforms like WhatsApp and Instagram as sources of information; both are owned by Meta and have not yet implemented the standard in their products.
India also has low digital literacy rates and low resistance to misinformation. Part of the challenge will be teaching people how to read this nutrition label, empowering them to make better decisions online. As such, C2PA is just one part of an online trust-building strategy; education around digital literacy and policy around organisational adoption of the standard must also be part of it.
The standard is also not foolproof. Current iterations may still struggle when presented with screenshots of digital media and other non-technical digital manipulation. Linking media to their creator may also put journalists and whistleblowers at risk. Actual use in context will show us more about how to improve future versions of digital provenance tools, though these improvements are not guarantees of a safer internet.
The largest advantage of C2PA adoption would be the democratisation of fact-checking infrastructure. Since media is shared at a significantly faster rate than it can be verified by professionals, putting the verification tools in the hands of people makes the process a lot more scalable. It empowers citizen journalists and leaves a public trail for any media consumer to look into.
Conclusion
From basic colour filters that make a scene more engaging, to removing a crowd from a social media post, to editing together videos of a politician to make it sound like they are singing a song, we have grown accustomed to seeing the media we consume altered in some way. C2PA is just one way to bring transparency to how media is altered. It is not a one-stop solution, but it is a viable starting point for creating a fairer, more democratic internet and increasing trust online. While there are risks to its adoption, it is promising to see organisations across different sectors collaborating on this project to be more transparent about the media we consume.
A fake photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone, with no trace of Gandhi. The incident is claimed to have taken place on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image and made it viral.
Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media handles shared it to suggest Kohli's interest in politics. The photo was shared on various platforms, including some online news websites.
Fact Check:
After we came across the viral image posted by social media users, we ran a reverse image search on it. It led us to the original image, posted by an Instagram account named virat__.forever_ on 21 March.
The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”
Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image appeared after the original image, which was posted on March 21.
Therefore, it is apparent that the viral image was altered from the original image shared on March 21.
Conclusion:
To sum up, the viral image is an altered version of the original. The original image's caption shows cricketer Virat Kohli relaxing before a Jio advertisement shoot, not watching any politician's interview. This shows that in the age of social media, where false information spreads quickly, critical thinking and fact-checking are more important than ever. It is crucial to verify content before sharing it, to avoid spreading false stories.