#FactCheck - Deepfake Video of Tharoor Misleads Social Media Users
Executive Summary
A video of senior Congress leader Shashi Tharoor is circulating widely on social media, allegedly showing him praising Pakistan’s diplomatic stance over the ICC T20 World Cup issue. Many users are sharing the clip believing it to be genuine. However, research by CyberPeace found the claim to be false. The viral video is a deepfake, and Tharoor himself has described it as fabricated and fake.
Claim
A Facebook page named “Vok Sports” shared the video on February 11, 2026, claiming that Tharoor praised Pakistan. In the viral clip, he is purportedly heard saying in English that Pakistan’s diplomatic handling of the matter was “brilliant” and that it had outmanoeuvred the Indian cricket board, adding that good diplomacy could make a weak nation appear powerful.
The video was widely shared by social media users as authentic. (Archive links and post details provided.)
Fact Check
To verify the claim, we first scanned Tharoor’s official X (formerly Twitter) handle. We found a post dated February 12 in which he responded to a Pakistani journalist who had shared the video. Tharoor stated that the clip was AI-generated “fake news,” adding that neither the language nor the voice in the video was his.

A reverse image search using Google Lens led us to a video uploaded on February 10, 2026, by India Today on its official YouTube channel. The visuals in this original video exactly matched those in the viral clip showing Tharoor speaking to the media. In the original footage, however, Tharoor was speaking in Hindi about the controversy surrounding the T20 World Cup. He said that politics should not be mixed with cricket or sports, and at no point did he praise Pakistan or the Pakistan Cricket Board: politicians should conduct politics separately, diplomats should handle diplomacy, and cricket players should focus on the game, and he expressed hope that cricket would move forward with the match. This indicates that the audio in the viral clip had been manipulated and replaced.
- https://www.youtube.com/watch?v=GkA1mLlAT8Q&t=3s

To further verify the authenticity of the video, several AI detection tools were used. Analysis through Aurigin.ai suggested a 78 percent probability that the audio in the viral clip was AI-generated.

Conclusion
CyberPeace confirmed that the viral video is a deepfake. Tharoor did not praise Pakistan’s diplomatic stance during the T20 World Cup controversy, and the circulating clip has been digitally manipulated.
Related Blogs

Introduction
Words come easily, but not necessarily the consequences that follow. Imagine a 15-year-old on the internet hoping the world will be kind to him and help him gain confidence; instead, someone chooses to be cruel, and the child becomes the victim of a pervasive form of cyberbullying: online trolling. Trolling can have serious repercussions, including anxiety, depression, social isolation, eating disorders, substance abuse, conduct issues, body dysmorphia, negative self-esteem, and, in tragic cases, self-harm and suicide attempts in vulnerable individuals. This is just one example; hate speech and online abuse can touch anyone, regardless of age, background, or status. The damage may take different forms, but its impact is far-reaching. In today’s digital age, hate speech spreads rapidly through online platforms, often amplified by AI algorithms.
Today, 18 June, marks the International Day for Countering Hate Speech. If we have ever been unkind to someone on the internet, let us pledge never to repeat that behaviour; and if we have been a victim, let us stand against the perpetrator and report it.
This year, the theme for the International Day for Countering Hate Speech is “Hate Speech and Artificial Intelligence Nexus: Building coalitions to reclaim inclusive and secure environments free of hatred.” UN Secretary-General António Guterres said in his statement, “Today, as this year’s theme reminds us, hate speech travels faster and farther than ever, amplified by Artificial Intelligence. Biased algorithms and digital platforms are spreading toxic content and creating new spaces for harassment and abuse.”
Coded Convictions: How AI Reflects and Reinforces Ideologies
Algorithms have swiftly taken the place of human judgement: they shape users’ tastes quietly and invisibly, and they have become a core component of social media interaction and content distribution. While these tools are designed to improve the user experience, they frequently, if inadvertently, spread divisive ideologies and push extremist propaganda. This amplification can strengthen the reach of extremist organisations, spread misinformation, and deepen societal tensions. The phenomenon, known as “algorithmic radicalisation,” demonstrates how social media platforms’ selective content curation can draw people down ideological rabbit holes and shape their views. AI-driven algorithms often prioritise engagement over ethics, enabling divisive and toxic content to trend and placing vulnerable groups, especially youth and minorities, at risk. The UN’s Strategy and Plan of Action on Hate Speech, launched on June 18, 2019, recognises that while AI holds promise for early detection and prevention of harmful speech, it also demands stringent human rights safeguards. Without regulation, these tools can themselves become purveyors of bias and exclusion.
India’s Constitutional Resolve and Civilizational Ethos against Hate
India has always taken pride in being inclusive and united rather than divided, and its stand on hate speech is no different: like the United Nations, India is committed to these values. Although India has won many battles against hate speech, the war is not over, and it is now more prominent than ever owing to advances in communication technologies. In India, the right to freedom of speech and expression is protected under Article 19(1)(a), but its exercise is subject to reasonable restrictions under Article 19(2). Landmark rulings such as Ramji Lal Modi v. State of U.P. and Amish Devgan v. Union of India have clarified that speech can be curbed if it incites violence or undermines public order. Section 69A of the IT Act, 2000, empowers the government to block content, and these principles are also reflected in Section 196 of the BNS, 2023 (formerly Section 153A IPC) and Section 299 of the BNS, 2023 (formerly Section 295A IPC). Platforms are also required to trace the creators of harmful content, remove it within a reasonable time, and fulfil their due-diligence obligations under the IT Rules.
There is no denying that India needs to be normatively well-equipped to tackle hate propaganda and divisive forces. At the same time, India’s rich culture and history, rooted in philosophies of Vasudhaiva Kutumbakam (the world is one family) and pluralistic traditions, have long stood as a beacon of tolerance and coexistence. By revisiting these civilizational values, we can resist divisive forces and renew our collective journey toward harmony and peaceful living.
CyberPeace Message
The ultimate goal is to create internet and social media platforms that are better, safer, and more harmonious for every individual, irrespective of their social and cultural background. CyberPeace stands resolute in promoting digital media literacy and cyber resilience, and in consistently pushing for greater accountability from social media platforms.
References
- https://www.un.org/en/observances/countering-hate-speech
- https://www.artemishospitals.com/blog/the-impact-of-trolling-on-teen-mental-health
- https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
- https://www.techpolicy.press/indias-courts-must-hold-social-media-platforms-accountable-for-hate-speech/

Introduction
Generative Artificial Intelligence (GenAI) is changing the employee workday: its use is no longer limited to writing emails or debugging code, but now extends to analysing contracts, generating reports, and much more. AI tools have become commonplace in everyday work, but the speed at which companies have adopted them has created a new kind of risk. Unlike threats from an outside attacker, Shadow AI originates inside an organisation, when a legitimate employee uses unapproved AI tools to work more efficiently and productively. In many cases, the employee is unaware of the security, data privacy, and compliance risks involved in using such tools for their job duties.
What Is Shadow AI?
Shadow AI is the use of AI tools at work — chatbots, assistants, or other software — without the knowledge or permission of the employer. Examples of shadow AI include:
- Using personal ChatGPT or other chatbot accounts to complete office tasks
- Uploading business-related documents to online AI services for analysis or summarisation
- Copying proprietary source code into an online AI model for debugging
- Installing browser extensions and add-ons not approved by IT or security personnel
How Shadow AI Is Harmful
1. Uncontrolled Data Exposure
When employees input information into unapproved AI tools, that information moves outside the company’s controls. This can include employee and third-party personal information, private company information (such as source code or contracts), and internal strategies. Once data has been entered into such a tool, the company loses all ability to monitor how it is stored, processed, or retained: a data leak exists without any malicious cyberattack. The biggest risk is not malice but the loss of control and governance over sensitive data.
2. Regulatory and Legal Non-Compliance
Data protection laws such as the GDPR, India’s Digital Personal Data Protection (DPDP) Act, HIPAA, and other sectoral laws require businesses to process data lawfully, minimise the data they use, and remain accountable for their actions. Shadow AI often results in unlawful processing of personal data: there is no legal basis for the processing, cross-border data transfers are unauthorised, and no contractual protections are in place with the AI service providers. Regulators do not accept employee convenience as an excuse for non-compliance, and the organisation remains ultimately responsible for any violations.
3. Loss of Intellectual Property
Employees frequently use AI tools to speed up tasks involving proprietary information—debugging code, reviewing contracts, or summarising internal research. When done using unapproved AI platforms, this can expose trade secrets and intellectual property, eroding competitive advantage and creating long-term business risk.
Real-Life Example: Samsung’s ChatGPT Data Leak
In 2023, a case study exemplifying the Shadow AI risk occurred when Samsung Electronics placed a temporary ban on employee access to ChatGPT and other AI tools after reports from engineers revealed they were using ChatGPT to create debugging processes for internal source code and to summarise meeting notes. Consequently, confidential source code related to semiconductors was inadvertently uploaded onto a public AI platform. While there were no known incursions into the company’s system due to this incident, Samsung faced a significant challenge: once sensitive information is input into a public AI tool, it exists on external servers that are outside of the company’s purview or control.
As a result of this incident, Samsung restricted employee use of ChatGPT on corporate devices, issued internal communications prohibiting the sharing of corporate data with public AI tools, and accelerated its discussions on adopting secure, enterprise-level AI solutions.
What Organisations Are Doing Today
Many organisations respond to Shadow AI risk by:
- Blocking access at the network level
- Circulating warning emails or policies
While these actions may reduce immediate exposure, they fail to address the root cause: employees still need AI to perform their jobs efficiently. As a result, bans often push AI usage underground, increasing Shadow AI rather than eliminating it.
Why Blocking AI Does Not Work—Governance Does
History has demonstrated that prohibition does not work; we saw this with attempts to block access to cloud storage, instant messaging, and collaboration tools. When employers block AI, employees turn to personal devices and accounts, which deprives employers of real-time visibility into how these technologies are used and creates friction with security and compliance teams trying to control which tools are in use. Prohibiting AI will not stop its adoption; it will only make that adoption less safe and less accountable. The challenge for effective organisations is therefore to shift from denial to governance-first AI strategies that control data usage, protection, and security, rather than merely restricting access to a list of specific tools.
Shadow AI: A Silent Legal Liability Under the GDPR
Shadow AI is not merely a problem for the IT department; it is a failure of governance, compliance, and law. When unapproved AI tools are used, the organisation processes personal data without a lawful basis (Article 6 of the General Data Protection Regulation (GDPR)), repurposes data beyond its original intent in breach of purpose limitation (Article 5(1)(b)), and routinely exceeds necessity in breach of data minimisation (Article 5(1)(c)). Such tools also involve international data transfers without authorisation, in breach of Chapter V, and violate Article 32 because no enforceable safeguards are in place. Most significantly, the failure to demonstrate oversight, logging, and control under Articles 5(2) and 24 constitutes a failure of accountability. From a regulatory perspective, therefore, Shadow AI is not accidental, and it is not defensible.
The Right Solution: Secure and Governed AI Adoption
1. Provide Approved AI Tools
Employers have an obligation to supply business-approved AI technology that helps workers be productive while maintaining strong protections: storing data separately, not using employee data to train models, and defining how long data is kept and the rules for deleting it. When employees are given verified, secure AI options that fit their work processes, their reliance on Shadow AI drops significantly.
2. Enforce Zero-Trust Data Access
The governance of AI systems must follow zero-trust principles: data access is granted only on a least-privilege basis, and user identity and context are verified continuously. This establishes context-aware controls to monitor and track all user activity, which will become especially important as agent-like AI systems grow increasingly autonomous and operate at machine speed, where even small configuration errors can result in rapid, large-scale data exposure.
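The zero-trust pattern described above — deny by default, least-privilege scopes, per-request context checks — can be sketched in a few lines. This is a minimal illustration, not a production authorisation system; the `Principal`, scope names, and context keys are all hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    """A human user or AI agent, granted only the scopes it needs."""
    name: str
    scopes: frozenset  # e.g. frozenset({"sales:read"}) -- least privilege

def authorize(principal: Principal, scope: str, context: dict) -> bool:
    """Deny by default: allow only if the scope was granted AND the
    request context (re-checked on every call, not once at login) passes."""
    if scope not in principal.scopes:
        return False
    # Continuous verification: hypothetical context signals such as a
    # managed device and completed MFA are evaluated per request.
    return context.get("device_managed", False) and context.get("mfa", False)

# An autonomous reporting agent holds only "sales:read" ...
agent = Principal("report-bot", frozenset({"sales:read"}))
# ... so a request for finance data is denied even with a trusted context.
ok = authorize(agent, "sales:read", {"device_managed": True, "mfa": True})
denied = authorize(agent, "finance:read", {"device_managed": True, "mfa": True})
```

The key design choice is that `authorize` is called on every access, so a compromised or misconfigured agent fails closed rather than inheriting a long-lived session.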
3. Apply DLP and Audit Logging
Robust data loss prevention (DLP) measures are essential to protect sensitive data being sent outside the organisation. Every end user or machine that accesses the data should be recorded in a comprehensive audit log indicating when and how the data was accessed. In combination with other controls, these measures create accountability, support regulatory compliance, and help detect and respond to incidents.
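To make the DLP-plus-audit idea concrete, here is a minimal sketch of an outbound check that scans text for sensitive patterns and writes a structured audit record for every decision. The patterns, user names, and destination are illustrative assumptions; real DLP engines use far richer detectors (dictionaries, fingerprints, ML classifiers):

```python
import json
import logging
import re
from datetime import datetime, timezone

# Hypothetical detectors: an email address and an API-key-like token.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("dlp.audit")

def scan_outbound(user: str, destination: str, text: str) -> bool:
    """Return True if the text may leave the organisation.
    Every decision, allowed or blocked, is written to the audit log."""
    findings = [name for name, rx in PATTERNS.items() if rx.search(text)]
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "destination": destination,
        "findings": findings,
        "allowed": not findings,
    }))
    return not findings

# A prompt containing an email address is blocked; a clean prompt passes.
scan_outbound("alice", "chat.example-ai.com", "Summarise: contact bob@corp.com")
scan_outbound("alice", "chat.example-ai.com", "Summarise the Q3 agenda")
```

Logging the allowed cases as well as the blocked ones is what turns this from a filter into an audit trail: investigators can reconstruct who sent what, where, and when.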
4. Maintain Visibility Across AI, Cloud, and SaaS
Security teams need unified visibility across AI tools, personal cloud applications, and SaaS platforms. Risks move across systems, and controls must follow the data wherever it flows.
Conclusion
Shadow AI exposes an organisation to the risk of data loss through leaks, regulatory fines, liability for the loss of intellectual property, and reputational damage, all of which can occur without any intent to cause harm. The way forward is not to block AI, but to adopt a clear framework built on governance, visibility, and secure enablement. This approach allows organisations to use AI with confidence, while ensuring trust, accountability, and effective oversight to protect data and support AI in reaching its full transformative potential. AI use is encouraged, but it must be done responsibly, ethically, and securely.
References
- https://bronson.ai/resources/shadow-ai/
- https://www.varonis.com/blog/shadow-ai
- https://www.waymakeros.com/learn/gdpr-hipaa-shadow-ai-compliance-nightmare
- https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
- https://www.usatoday.com/story/special/contributor-content/2025/05/23/shadow-ai-the-hidden-risk-in-todays-workplace/83822081007
Executive Summary
Amid heightened tensions following Israeli and US actions against Iran, a video is being widely shared on social media. The footage shows thick black smoke rising into the sky, suggesting a major explosion or attack. However, research conducted by CyberPeace found the viral claim to be misleading: the video is not recent and has no connection to the current Israel-Iran tensions. The footage is nearly a year old and shows a fire at a casino in Mexico, now being shared out of context.
Claim
Users circulating the video claim that it shows an attack on Tel Aviv, Israel. On March 1, 2026, a user on X shared the clip with the caption, “Iran has drained the oil out of Tel Aviv,” implying a devastating retaliatory strike. (Post and archive links provided above.)

Fact Check
To verify the authenticity of the video, we extracted key frames and performed a reverse image search using Google Lens. During the search, we found the same visuals in a Spanish media report published on January 16, 2025. This confirmed that the video predates the ongoing geopolitical developments.

According to the report, the footage shows a fire at the Royal Park Casino located inside the Cinépolis plaza in Culiacán, Mexico. Local outlet Meganoticias Culiacán reported on X that the casino was “completely burned down.” The structure reportedly collapsed following the blaze, and emergency responders confirmed that several people were injured. Further keyword searches led us to the same footage on the official YouTube channel of Milenio, uploaded on January 17, 2025. The report clearly states that the fire occurred at the Royal Yacht Casino in Mexico and is unrelated to any recent military developments.

Conclusion
Evidence gathered during our research clearly establishes that the viral video is not related to any missile attack by Iran on Israel. The claim is false. The footage is from a fire incident at a casino in Mexico and is being misleadingly shared in the context of current international tensions, potentially creating unnecessary panic and confusion.