#FactCheck - Philadelphia Plane Crash Video Falsely Shared as INS Vikrant Attack on Karachi Port
Executive Summary:
A video currently circulating on social media falsely claims to show the aftermath of an Indian Navy attack on Karachi Port, allegedly involving the INS Vikrant. Upon verification, it has been confirmed that the video is unrelated to any naval activity and in fact depicts a plane crash that occurred in Philadelphia, USA. This misrepresentation underscores the importance of verifying information through credible sources before drawing conclusions or sharing content.
Claim:
Social media accounts shared a video claiming that the Indian Navy’s aircraft carrier, INS Vikrant, attacked Karachi Port amid rising India-Pakistan tensions. Captions such as “INDIAN NAVY HAS DESTROYED KARACHI PORT” accompanied the footage, which shows a crash site with debris and small fires.

Fact Check:
A reverse image search traced the viral video to earlier uploads on Facebook and X (formerly Twitter) dated February 2, 2025. The footage is from a plane crash in Philadelphia, USA, involving a Mexican-registered Learjet 55 (tail number XA-UCI) that crashed near Roosevelt Mall.

Major American news outlets, including ABC7, reported the incident on February 1, 2025. According to NBC10 Philadelphia, the crash resulted in the deaths of seven individuals, including one child.

Conclusion:
The viral video claiming to show an Indian Navy strike on Karachi Port involving INS Vikrant is entirely misleading. The footage is from a civilian plane crash that occurred in Philadelphia, USA, and has no connection to any military activity or recent developments involving the Indian Navy. Verified news reports confirm the incident involved a Mexican-registered Learjet and resulted in civilian casualties. This case highlights the ongoing issue of misinformation on social media and emphasizes the need to rely on credible sources and verified facts before accepting or sharing sensitive content, especially on matters of national security or international relations.
- Claim: INS Vikrant attacked Karachi Port amid rising India-Pakistan tensions
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
A video of actor Salman Khan is being widely shared on social media with the claim that he posted a special video on the occasion of Eid. However, research by CyberPeace found the claim to be misleading. The viral video is not recent but dates back to 2019. Meanwhile, Salman Khan did share a different video with his family this year.
Claim:
On Facebook, a user shared the viral video on March 21, 2026, with the caption, “Salman Khan shared a special video on Eid.”
Post link and archive link:

Fact Check:
To verify the claim, we examined Salman Khan’s social media accounts. On his Instagram handle, we found a video posted on March 21, 2026, in which he is seen greeting fans from a bulletproof balcony along with his family on the occasion of Eid.

This video is completely different from the viral clip and has no connection to it. Further, we extracted keyframes from the viral video and conducted a reverse image search using Google Lens. During the research, we found the same video on Salman Khan’s Instagram account, where it was originally posted on June 5, 2019.
Post link:
https://www.instagram.com/p/ByVMS6alo76/?igsh=MTA3ZDBqdGlidmRhMQ%3D%3D
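The keyframe-extraction step described above can be sketched in code. The function below is a minimal illustration, assuming video frames have already been decoded into flat grayscale pixel lists (a real pipeline would use a decoder such as OpenCV); it keeps frames that differ substantially from the last selected keyframe, and the selected frames would then be submitted to a reverse image search tool such as Google Lens.

```python
def select_keyframes(frames, threshold=30.0):
    """Pick frames that differ noticeably from the last selected keyframe.

    `frames` is a list of flat grayscale pixel lists (one list per frame);
    returns the indices of the selected keyframes.
    """
    if not frames:
        return []
    keyframes = [0]          # always keep the first frame
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        # mean absolute pixel difference against the last keyframe
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keyframes.append(i)
            last = frame
    return keyframes
```

With four dummy frames where only the third changes sharply, `select_keyframes` returns `[0, 2]`: the opening frame plus the one scene change, which is what a fact-checker would search against.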

Conclusion:
The viral claim is misleading. The video being shared is not recent but from 2019. Salman Khan did share a video this year, but it is different from the one going viral.

The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance challenges, since risks can arise throughout the AI lifecycle, from initial design and coding through to deployment.
Recent regulatory frameworks, such as the European Union’s AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, recognise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.

The OECD conceptualises this development process as the AI system lifecycle. Each stage entails distinct technical and administrative procedures, and choices made during these stages dictate the goals and limits of an AI system. Further, the quality and representativeness of training data strongly shape how models behave after deployment.
Since this is an iterative rather than a linear procedure, risks can be introduced at each stage of the AI lifecycle. Models can be retrained on new data, and systems are regularly updated after deployment to address performance degradation, model errors, or unintended outputs. This iterative process means governance must address risks across the entire lifecycle, not just at deployment.
Where AI Risks Emerge
AI risks usually emerge earlier in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, the absence of transparency in automated decision-making, and risks to fundamental rights.

AI Governance Risk Landscape: Core Risk Categories Under International Frameworks
Risk categories jointly identified by the EU AI Act and UNESCO Recommendation on the Ethics of Artificial Intelligence
Outlining the risks throughout the AI lifecycle helps identify the areas where governance interventions are most necessary. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after development, when generative AI systems are deployed at scale on digital platforms.

AI System Lifecycle: Key Risks at Each Stage
Risks identified per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to help mitigate AI risks across the lifecycle. These tools are intended to ensure that AI systems meet standards of safety, accountability, and fairness both before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk evaluations, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union’s Artificial Intelligence Act (AI Act) is one of the most comprehensive governance frameworks, introducing a risk-based regulatory approach. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. Such requirements introduce regulatory checkpoints across the lifecycle of AI systems.
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.

Governance Overlay: Policy Interventions Across the AI Lifecycle
Regulatory tools mapped at each stage of AI development per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Several policy tools are directed at the risks that occur in the pre-deployment stages. For example, algorithmic impact assessments have been applied in various jurisdictions to measure the possible societal consequences of automated decision systems before implementation. Similarly, dataset documentation requirements, including dataset transparency requirements and model cards, aim to enhance accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
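As a rough illustration of what dataset documentation requirements look like in practice, the sketch below checks a datasheet-style record for completeness. The field names are illustrative assumptions, loosely inspired by "datasheets for datasets"-style proposals, and are not drawn from the text of any specific regulation.

```python
# Illustrative documentation fields a regulator or auditor might require;
# these names are assumptions, not taken from the EU AI Act itself.
REQUIRED_FIELDS = [
    "name",
    "provenance",          # where the data came from
    "composition",         # what the records contain
    "collection_process",  # how the data was gathered
    "known_limitations",   # gaps, biases, licensing constraints
]

def missing_fields(datasheet: dict) -> list:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not datasheet.get(f)]
```

A datasheet that lacks, say, a `known_limitations` entry would fail this check, surfacing the documentation gap before the dataset is used for training rather than after deployment.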
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle points to a critical structural gap in existing regulations. Many governance mechanisms are triggered only after AI systems are classified as “high risk” or after they are deployed in the real world, but the most serious sources of harm have their roots in earlier stages of the development process.
For example, biased or unbalanced training data is an almost inevitable source of discriminatory outcomes in automated decision systems. When such models are applied in areas like hiring, credit scoring, or the delivery of public services, these biases can quickly spread to large populations and undermine democratic rights. Likewise, a lack of transparency in model design can leave regulators and affected individuals unable to scrutinise how decisions are made. This reflects a broader timing gap in AI governance, where risks originate during design and development, but regulatory intervention typically occurs only after deployment.
Analysis
1. Key risks originate before deployment: As the lifecycle mapping shows, the data collection and model development phases present more significant governance risks than the deployment phase. Structural issues can be entrenched in AI systems even before they are deployed, owing to biased datasets, incomplete documentation of training sets, and opaque model designs.
2. Data governance is a primary point of vulnerability: Most instances of algorithmic discrimination are associated with training data that under-represents certain population groups or encodes historical bias. Since machine learning models optimise over the patterns present in their datasets, these biases can propagate through the whole lifecycle and be reproduced after deployment.
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: While various regulatory instruments target deployment and monitoring, fewer instruments systematically address the risks posed by the earlier design and development phases.
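The data-governance vulnerability described above can be illustrated with a simple representativeness check: comparing each group's share of the training data against its share of a reference population. The function, group labels, and 5% tolerance below are illustrative assumptions, not thresholds taken from any regulation.

```python
def underrepresented_groups(train_counts, population_shares, tolerance=0.05):
    """Flag groups whose share of the training data falls short of their
    share in a reference population by more than `tolerance`.

    train_counts: {group: number of training examples}
    population_shares: {group: expected share, summing to 1.0}
    """
    total = sum(train_counts.values())
    flagged = []
    for group, expected in population_shares.items():
        actual = train_counts.get(group, 0) / total
        if actual < expected - tolerance:
            flagged.append(group)
    return flagged
```

For instance, a dataset with 900 examples from group "A" and 100 from group "B", measured against a 50/50 reference population, flags "B" as under-represented, exactly the kind of early-stage signal a lifecycle audit would surface before training proceeds.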
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory frameworks should require systematic risk assessment from the start of AI development, especially at the problem-design and dataset-selection phases. This would help detect potentially harmful applications before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised dataset documentation frameworks can help regulators and auditors identify potential sources of bias or privacy risk.
3. Expand independent algorithmic auditing: AI systems should undergo regular third-party audits for fairness, robustness, and security vulnerabilities. Such audits are especially important for high-risk systems used in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored after deployment to detect model drift, unintended consequences, or misuse. Reporting mechanisms can help regulators track emerging risks and adapt governance frameworks accordingly.
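One common way to operationalise post-deployment drift monitoring is the population stability index (PSI), which compares the distribution of a model input or score in production against its training-time baseline. The sketch below assumes both distributions are already binned into matching proportion buckets; the 0.2 alert level mentioned in the comment is a widely used industry rule of thumb, not a regulatory standard.

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of bin proportions).

    Values near 0 mean the live distribution matches the baseline;
    values above roughly 0.2 are commonly treated as significant drift.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # floor empty bins to avoid log(0)
        a = max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi
```

Comparing a uniform baseline `[0.25] * 4` with a shifted live distribution `[0.1, 0.2, 0.3, 0.4]` yields a PSI of roughly 0.23, above the usual 0.2 alert line, which is the kind of signal a continuous-monitoring regime would require operators to report.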
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.
References
- OECD AI lifecycle
- OECD AI system lifecycle description
- OECD AI governance lifecycle framework
- EU AI Act overview
- EU AI Act risk categories
- UNESCO Recommendation on the Ethics of AI
- AI governance lifecycle analysis
- OECD AI policy tools database

Introduction
The information of hundreds of thousands of Indians who received the COVID-19 vaccine was leaked in a significant data breach and posted on a Telegram channel. Numerous reports claim that sensitive information, including phone numbers, gender, ID card details, and dates of birth, was leaked via Telegram and could be obtained simply by typing a person’s name into a Telegram bot.
What really happened?
According to a Malayalam news channel, records pertaining to mobile numbers registered on the CoWIN portal were accessible through the bot. It was also possible to determine which vaccine was given and where.
According to the reports, the list of individuals whose data was exposed includes BJP Tamil Nadu president K Annamalai, Congress MP Karti Chidambaram, and former BJP union minister for health Harsh Vardhan. Telangana’s minister of information and communication technology, Kalvakuntla Taraka Rama Rao, is also on the list.
MeitY stated in response to the data leak, “It is old data, we are still confirming it. We have requested a report on the matter.”
After the media reports, the bot was disabled, but experts said the incident raised serious concerns because the information could be used for identity theft, phishing, scams, and extortion calls. The Indian Computer Emergency Response Team (CERT-In), the government’s nodal cybersecurity body, has opened an investigation into the matter.
The central government declared the reports of a breach of the Covid vaccine beneficiary database to be “mischievous in nature” on Monday and claimed the ‘bot’ that purportedly accessed the confidential data was not directly accessing the CoWIN database.
According to an initial assessment by CERT-In, the government’s cybersecurity division, the bot might have been displaying information from “previously stolen data.”
The health ministry refuted the claim, asserting that no bots could access the information without first verifying with a one-time password.
“It is made clear that all of these rumours are false and malicious. The health ministry’s CoWIN portal is entirely secure and has sufficient data privacy protections. The security of the data on the CoWIN portal is being ensured in every way possible,” the health ministry said in a statement.
MeitY said the CoWIN application or database was not directly compromised, and the shared information appeared to have been taken from a previous intrusion. But the incident again highlights the growing danger of cyberattacks, particularly on official websites.

Recent cases of data leak
Dominos India 2021– Dominos India, a division of Jubilant FoodWorks, faced a cyberattack on May 22, 2021, which led to the disclosure of information from 180 million orders. The breach exposed order information, email addresses, phone numbers, and credit card information. Although Jubilant FoodWorks acknowledged a security breach, it refuted any illegal access to financial data.
Air India – A cyberattack that affected Air India in May 2021 exposed the personal information of about 4.5 million customers globally. Personal information recorded between August 26, 2011, and February 3, 2021, including names, dates of birth, contact information, passport information, ticket details, frequent flyer information from Star Alliance and Air India, and credit card information, was exposed in the breach.
Bigbasket – BigBasket, an online supermarket, had a data breach in November 2020, compromising the personal information of approximately 20 million consumers. Email IDs, password hashes, PINs, phone numbers, addresses, dates of birth, localities, and IP addresses were among the information released from an insecure database containing over 15 GB of customer data. BigBasket admitted to the incident and reported it to the Bengaluru Cyber Crime Department.
Unacademy – Unacademy, an online learning platform, experienced a data breach in May 2020, compromising the email addresses of approximately 11 million subscribers. While no sensitive information, such as financial data or passwords, was compromised, user data, including IDs, passwords, date joined, last login date, email IDs, names, and user credentials, was. The breach was detected when user accounts were uncovered for sale on the dark web.
2022 Card Data – On October 12, 2022, cybersecurity researchers from CloudSEK, an AI-driven Singapore-based firm, found a threat actor offering a database of 1.2 million cards for free on a dark web crime forum. This followed an earlier incident involving 7.9 million cardholder records reported on the BidenCash website. The data included information pertaining to State Bank of India (SBI) customers, and other well-known companies were among those targeted in high-profile data breach cases that have surfaced in recent years.

Conclusion
Data breach cases are increasing daily, and attackers frequently target the healthcare sector because health records make personal details easy to obtain. The recent CoWIN case compromised thousands of people’s data, and the All India Institute of Medical Sciences’ systems were breached by hackers only a few months earlier. According to the latest data, over 95% of adults have been vaccinated, even though the precise number of people affected by the CoWIN breach could not be determined.