#FactCheck: Viral Video Claiming IAF Air Chief Marshal Acknowledged Loss of Jets Found Manipulated
Executive Summary:
A video circulating on social media falsely claims to show Indian Air Chief Marshal AP Singh admitting that India lost six jets and a Heron drone during Operation Sindoor in May 2025. Our analysis found that the footage was digitally manipulated by inserting an AI-generated voice clone of Air Chief Marshal Singh into his recent speech, which was streamed live on August 9, 2025.
Claim:
A viral video (archived video) (another link) shared by an X user carries the caption: “Breaking: Finally Indian Airforce Chief admits India did lose 6 Jets and one Heron UAV during May 7th Air engagements.” The post claims that the Air Chief Marshal admitted to these losses during Operation Sindoor.

Fact Check:
A reverse image search on key frames from the video led us to a clip posted by ANI’s official X handle. After watching the full clip, we found no mention of the alleged admission.

Further research led us to an extended version of the video on ANI’s official YouTube channel, published on 9 August 2025. At the 16th Air Chief Marshal L.M. Katre Memorial Lecture in Marathahalli, Bengaluru, Air Chief Marshal AP Singh did not mention any loss of six jets or a drone in relation to the conflict with Pakistan. The discrepancies observed in the viral clip suggest that portions of the audio were digitally manipulated.

The audio in the viral video, particularly the segment around the 29:05 mark alleging the loss of six Indian jets, appeared manipulated and displayed noticeable inconsistencies in tone and clarity.
Conclusion:
The viral video claiming that Air Chief Marshal AP Singh admitted to the loss of six jets and a Heron UAV during Operation Sindoor is misleading. A reverse image search traced the footage to its original source, in which no such remarks were made. Further, an extended version on ANI’s official YouTube channel confirmed that, during the 16th Air Chief Marshal L.M. Katre Memorial Lecture, no reference was made to the alleged losses. Additionally, the viral video’s audio, particularly around the 29:05 mark, showed signs of manipulation, with noticeable inconsistencies in tone and clarity.
- Claim: Viral Video Claiming IAF Chief Acknowledged Loss of Jets Found Manipulated
- Claimed On: Social Media
- Fact Check: False and Misleading
Executive Summary
A video of senior Congress leader Shashi Tharoor is widely circulating on social media, allegedly showing him praising Pakistan’s diplomatic stance over the ICC T20 World Cup issue. Many users are sharing the clip believing it to be genuine. However, research by CyberPeace found the claim to be false. The viral video of Tharoor is a deepfake, and the Congress leader himself has described it as fabricated and fake.
Claim
A Facebook page named “Vok Sports” shared the video on February 11, 2026, claiming that Tharoor praised Pakistan. In the viral clip, he is purportedly heard saying in English that Pakistan’s diplomatic handling of the matter was “brilliant” and that it had outmanoeuvred the Indian cricket board, adding that good diplomacy could make a weak nation appear powerful.
The video was widely shared by social media users as authentic. (Archive links and post details provided.)
Fact Check
To verify the claim, we first scanned Tharoor’s official X (formerly Twitter) handle. We found a post dated February 12 in which he responded to a Pakistani journalist who had shared the video. Tharoor stated that the clip was AI-generated “fake news,” adding that neither the language nor the voice in the video was his.

A reverse image search using Google Lens led us to a video uploaded on February 10, 2026, by India Today on its official YouTube channel. The visuals in this original video exactly matched those seen in the viral clip showing Tharoor speaking to the media. However, upon analysing the original footage, we found that Tharoor was speaking in Hindi about the controversy surrounding the T20 World Cup. He stated that politics should not be mixed with cricket or sports and did not praise Pakistan or the Pakistan Cricket Board at any point. This indicates that the audio in the viral clip had been manipulated and replaced. In the original video, Tharoor said that politicians should conduct politics separately, diplomats should handle diplomacy, and cricket players should focus on the game, expressing hope that cricket would move forward with the match.
- https://www.youtube.com/watch?v=GkA1mLlAT8Q&t=3s

To further verify the authenticity of the video, several AI detection tools were used. Analysis through Aurigin.ai suggested a 78 percent probability that the audio in the viral clip was AI-generated.

Conclusion
CyberPeace confirmed that the viral video is a deepfake. Tharoor did not praise Pakistan’s diplomatic stance during the T20 World Cup controversy, and the circulating clip has been digitally manipulated.

Executive Summary:
In early 2024, an AI-driven phishing attack on a large Indian financial institution illustrated how quickly such threats are evolving. This case study documents the attack techniques, the impact on the institution, the response undertaken, and the eventual outcome. It also examines the challenges of building better protection against, and raising awareness of, automated threats.
Introduction
As AI technology advances, its use in cybercrime against financial institutions worldwide has grown significantly. This report analyses a serious incident from early 2024, in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI’s strengths in data analysis and persuasion, leading to a severe compromise of the bank’s internal systems.
Background
The targeted financial institution, one of the largest banks in India, had a strong record on cybersecurity. However, AI-based methods posed new threats that its existing defences could not fully counter. The attackers concentrated on the bank’s senior executives, since compromising them offered access to internal systems and financial information.
Attack Execution
The attackers used AI to generate messages that were near-exact replicas of internal emails exchanged between employees. Drawing on Facebook and Twitter content, blog entries, LinkedIn connection histories, and the email style of the bank’s executives, the AI produced highly specific emails. Many featured official formatting, internal terminology, and the CEO’s writing style, making them very convincing.
The phishing emails contained links that led users to a fake internal portal designed to harvest login credentials. Because of the emails’ sophistication, the targeted individuals believed them to be genuine and entered their credentials, giving the attackers access to the bank’s network.
Impact
The attack affected the bank on every front. Numerous executives surrendered their passwords to the fake emails, compromising several financial databases containing customer account and transaction information. The breach allowed the criminals to take down a number of the bank’s online services, disrupting operations for the bank and its customers for several days.
The bank also suffered a devastating blow to customer trust, as the breach exposed its vulnerability to contemporary cyber threats. Beyond the immediate work of containing the breach, the institution faced a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used to Generate the Phishing Emails
- The attack used powerful natural language processing (NLP) technology, most probably built on a large-scale transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large text corpora, the attackers could feed them conversation samples from social networks, emails, and corporate language to produce highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took the nature of prior interactions into account, writing follow-up emails that were perfectly in line with the earlier discourse.
- Style Mimicry: Given samples of the CEO’s emails, the AI replicated the CEO’s writing, extrapolating elements such as tone, language, and the format of the signature line.
- Adaptive Learning: The AI adapted based on mistakes and feedback, tweaking subsequent emails and making detection harder.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this was a spear-phishing attack in which specific individuals were targeted with tailored emails. Machine-learning algorithms supplied social engineering cues that significantly increased the chances that particular individuals would respond to particular emails.
Key Technical Features:
- Targeted Data Harvesting: The attackers identified the organisation’s employees and tailored messages to them by scraping public profiles and messaging platforms.
- Behavioural Analysis: The AI used recent behaviour patterns from social networking sites and other online platforms to predict the actions end users were likely to take, such as clicking links or opening attachments.
- Real-Time Adjustments: When a response to a phishing email was detected, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. These techniques modified the content of the emails so that spam filters could not easily detect them, while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight changes to different aspects of the message, producing multiple versions of the phishing email tuned to bypass different filtering algorithms.
- Polymorphic Attacks: The phishing attack used polymorphic code, so the actual payloads behind the links changed frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: The attackers also used AI to generate and deploy phantom domains: short-lived websites that appear legitimate but were created specifically for this phishing attack, further complicating detection.
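Dynamic content alteration defeats exact-match signatures, which is why defenders often turn to fuzzy matching against known lures instead. As a generic illustration of that idea (a standard near-duplicate technique, not the defence used in this incident; the example texts and the threshold are arbitrary), a minimal sketch using word-shingle Jaccard similarity:

```python
# Near-duplicate detection via word shingles + Jaccard similarity.
# Illustrative only: shingle size and the flagging threshold are
# arbitrary choices, and the "lure" texts below are made up.

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles from lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0

known_lure = "Please verify your account credentials on the internal portal today"
variant    = "Kindly verify your account credentials on the internal portal now"

sim = jaccard(shingles(known_lure), shingles(variant))
# A similarity above a tuned threshold (say 0.5) would flag the
# variant even though it is not byte-identical to the known lure.
print(f"{sim:.2f}")
```

Because most shingles survive small word substitutions, each AI-generated variant stays close to the original lure in shingle space even when exact-match filters no longer fire.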
4. Exploitation of Human Vulnerabilities
The attack’s success depended not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI identified the psychological principles, chiefly urgency and familiarity, that would maximise the chance of recipients opening the phishing emails.
- Multi-Layered Deception: The AI used a two-tiered approach: once a target opened the first email, a second followed under the pretext of being a follow-up from a genuine company or person.
Response
On discovering the breach, the bank’s cybersecurity team sprang into action to limit the fallout. It reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the attack’s origin and block further intrusions. The bank also moved immediately to strengthen its security, for instance by tightening email filtering and hardening authentication procedures.
Recognising the risks, the bank launched a wide-scale cybersecurity awareness programme. It focused on educating employees about AI-driven phishing and on the need to verify a sender’s identity before responding.
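One concrete layer of the email filtering mentioned above is rejecting messages that fail sender-authentication checks, which a receiving mail server records in the Authentication-Results header (RFC 8601). A minimal sketch of such a policy, assuming a header in that standard format; the addresses and the flagging rules are illustrative, not the bank’s actual configuration:

```python
# Defensive mail-filtering sketch: flag messages whose SPF/DKIM/DMARC
# verdicts (as recorded by the receiving MTA in Authentication-Results)
# did not pass. The policy below is an illustrative assumption.
import re
from email import message_from_string

def auth_results(raw_email: str) -> dict:
    """Extract spf/dkim/dmarc verdicts from the Authentication-Results header."""
    msg = message_from_string(raw_email)
    header = msg.get("Authentication-Results", "")
    pairs = re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header, re.IGNORECASE)
    return {k.lower(): v.lower() for k, v in pairs}

def is_suspicious(raw_email: str) -> bool:
    """Treat a missing or failing DMARC verdict, or a hard SPF/DKIM fail, as suspicious."""
    results = auth_results(raw_email)
    if results.get("dmarc") != "pass":
        return True
    return "fail" in (results.get("spf"), results.get("dkim"))

raw = (
    "Authentication-Results: mx.example.com; spf=fail; dkim=none; dmarc=fail\n"
    "From: ceo@bank.example\n"
    "Subject: Urgent: verify your credentials\n\n"
    "Click here."
)
print(is_suspicious(raw))  # True: the failing DMARC verdict flags the message
```

Such checks catch spoofed sender domains but not lookalike or "phantom" domains with valid authentication of their own, which is why header checks are only one layer alongside content filtering and user training.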
Outcome
Although the bank regained full functionality after the attack without critical long-term operational damage, the incident raised serious issues. The institution’s reported losses included compensation for affected customers and the cost of new cybersecurity measures. More significantly, customers and shareholders began to doubt the organisation’s capacity to safeguard information in an era of advanced AI-driven cyber threats.
This case shows why financial firms must align their security plans with emerging threats. The attack is also a warning to other organisations that they are not immune to AI-assisted attacks of this kind and should take appropriate countermeasures.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of modern attackers’ capabilities. As AI technology progresses, so do cyberattacks. Financial institutions and other organisations can keep pace only by adopting adequate AI-aware cybersecurity solutions for their systems and data.
Moreover, this case underlines the importance of training employees to prevent successful cyberattacks. Organisation-wide cybersecurity awareness, secure employee behaviour, and practices that enable staff to recognise and report likely AI-assisted offences all help minimise the risk of such attacks.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response products capable of mitigating AI-powered cyber threats in real time.
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require multi-factor authentication and other tightened identity checks.
- Collaboration with CERT-In: Maintain continued engagement with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its equivalents to monitor new threats and receive validated recommendations.
- Public Communication Strategies: Establish effective communication plans to keep customers informed and maintain their trust when the organisation faces a cyber threat.
By implementing these measures, financial institutions can be better prepared for the new threats that AI-enabled attacks pose to essential financial assets in today’s complex IT environments.
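The stricter-authentication recommendation usually means adding a second factor on top of passwords. A minimal sketch of time-based one-time password verification (TOTP, RFC 6238), built only on the standard library; a real deployment would rely on a vetted MFA product rather than hand-rolled code:

```python
# TOTP (RFC 6238) verification sketch illustrating a second
# authentication factor. Standard-library only; illustrative, not
# production-ready (no rate limiting, replay protection, etc.).
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Compute the TOTP code for a base32 secret at a given Unix time."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(timestamp) // step)          # HOTP counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, now: float, window: int = 1) -> bool:
    """Accept codes within +/- `window` 30-second steps of clock drift."""
    return any(
        hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )

secret = base64.b32encode(b"bank-demo-secret").decode()
print(verify(secret, totp(secret, 1_000_000.0), 1_000_000.0))  # True
```

Because the code is derived from a shared secret and the current time, a phished password alone no longer grants access: the attacker would also need the victim's current one-time code within its short validity window.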

Executive Summary:
A viral online image claims to show Arvind Kejriwal, Chief Minister of Delhi, welcoming Elon Musk during his visit to India to discuss Delhi’s administrative policies. However, the CyberPeace Research Team has confirmed that the image is a deepfake, created using AI technology. The claim that Elon Musk visited India to discuss Delhi’s administrative policies is false and misleading.


Claim
A viral image claims that Arvind Kejriwal welcomed Elon Musk during his visit to India to discuss Delhi’s administrative policies.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search using the InVID reverse image search tool. The search traced the image to various unrelated sources featuring Arvind Kejriwal and Elon Musk separately, but none depicted them together or referenced any such event. The viral image also displayed visible inconsistencies, such as lighting disparities and unnatural blending, which prompted further investigation.
Using advanced AI detection tools like TrueMedia.org and Hive AI Detection tool, we analyzed the image. The analysis confirmed with 97.5% confidence that the image was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the merging of facial features and the alignment of clothes and background, which were artificially generated.
Moreover, a review of official statements and credible reports revealed no record of Elon Musk visiting India to discuss Delhi’s administrative policies. Neither Arvind Kejriwal’s office nor Tesla or SpaceX made any announcement regarding such an event, further debunking the viral claim.
Conclusion:
The viral image claiming that Arvind Kejriwal welcomed Elon Musk during his visit to India to discuss Delhi’s administrative policies is a deepfake. Reverse image search and AI detection tools confirm the image was manipulated using AI technology. Additionally, no credible source supports the claim. The CyberPeace Research Team confirms the claim is false and misleading.
- Claim: Arvind Kejriwal welcomed Elon Musk to India to discuss Delhi’s administrative policies, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading