# Fact Check: AI Video Falsely Shows Afghanistan Downing Pakistani Fighter Jet
Executive Summary:
Amid escalating tensions between Afghanistan and Pakistan, a video is being widely shared on social media claiming that Afghanistan has shot down a Pakistani fighter jet. The posts further allege that the incident marks the formal beginning of a war between the two countries. However, research conducted by CyberPeace found the viral claim to be false: the circulating video is not authentic but AI-generated.
Claim
On February 24, 2026, a user on X (formerly Twitter) shared the viral video with the caption: “Afghanistan has shot down a Pakistani fighter jet! Afghanistan announces that war with Pakistan has begun.”
- Original post link: https://x.com/JyotiDevSpeaks/status/2026348257186545914
- Archived link: https://ghostarchive.org/archive/7l00Y

Fact Check:
A careful review of the viral video revealed unusual visual patterns and artificial-looking effects, raising suspicions that it may have been created using artificial intelligence. We analyzed the video using the AI detection tool Hive Moderation, which indicated an 86 percent probability that the video was AI-generated.

To further verify the findings, we scanned the footage using another AI detection platform, Sightengine. The results showed a 99 percent likelihood that the video was AI-generated.
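The two detector scores above can be combined into a single verdict. A minimal sketch of that idea, assuming a conservative rule that flags a video only when every tool exceeds a threshold (the function, threshold, and rule are illustrative, not the workflow CyberPeace describes):

```python
def combine_detector_scores(scores, flag_threshold=0.8):
    """Combine per-tool AI-generation probabilities into one verdict.

    scores: dict mapping tool name -> probability in [0, 1].
    The video is flagged as likely AI-generated only when EVERY
    tool exceeds the threshold (a conservative AND rule).
    """
    if not scores:
        raise ValueError("need at least one detector score")
    likely_ai = all(p >= flag_threshold for p in scores.values())
    return {
        "likely_ai_generated": likely_ai,
        "min_score": min(scores.values()),
        "max_score": max(scores.values()),
    }

# Scores reported in this fact check: Hive 0.86, Sightengine 0.99
verdict = combine_detector_scores({"hive": 0.86, "sightengine": 0.99})
```

Requiring agreement from multiple independent detectors reduces the chance that a single tool's false positive drives the conclusion.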

To understand the broader context of the ongoing tensions, we conducted a keyword search and found a report published on February 22, 2026, by BBC Hindi. According to the report, Pakistan claimed it had targeted “seven terrorist hideouts and camps” along the Pakistan–Afghanistan border based on intelligence inputs. Meanwhile, a spokesperson for the Taliban government in Afghanistan stated that Pakistani airstrikes in Nangarhar and Paktika provinces resulted in the deaths of dozens of people, including women and children.
- https://www.bbc.com/hindi/articles/clyz8141397o
Conclusion
Our research confirms that the viral video claiming Afghanistan shot down a Pakistani fighter jet and formally declared war on Pakistan is fake. The footage is AI-generated and is being circulated with a false and misleading narrative.

Executive Summary
A collage of two images circulating on social media is falsely claiming that the street vendor who served jhalmuri to Prime Minister Narendra Modi during an election campaign in Jhargram was actually a personnel from the Special Protection Group (SPG). Research by the CyberPeace Research Wing found the claim to be false and misleading, indicating that it is being shared as part of election-related misinformation. The vendor and the SPG personnel seen in the viral collage are two different individuals.
Claim
An X (formerly Twitter) user, “@Jeetuburdak,” shared the viral collage on April 21, 2026, with the caption: “Another scam! The jhalmuri seller turned out to be an SPG commando.” The post quickly gained traction online.

Fact Check
A close examination of the two images used in the collage shows clear visual differences between the individuals. The person seen in SPG uniform does not match the street vendor who served food to the prime minister. Reverse image searches were conducted using multiple tools to trace the origin of the images. While no verifiable source was found linking the SPG personnel’s image to the vendor, several credible reports and videos featured the actual jhalmuri seller from the campaign event.
- https://x.com/ANI/status/2045859146508177911?s=20
- https://news24online.com/cities/kolkata/who-is-the-man-that-served-jhalmuri-to-pm-modi-know-his-daily-income-and-what-he-talked-about-with-pm/811123/


According to media reports, the prime minister briefly stopped at a roadside stall during the campaign in Jhargram and interacted with the vendor while enjoying jhalmuri. The vendor was later interviewed by multiple outlets, further confirming his identity as a local seller. Additionally, technical facial comparison analysis using online tools also indicated that the two individuals in the viral collage are not the same person.
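Automated facial and image comparison of the kind mentioned above often rests on perceptual hashing: similar images produce bit-hashes that differ in only a few positions. A toy sketch of that idea (not the specific tools used in this check), using small grayscale grids in place of real face crops:

```python
def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel, set when the
    pixel is brighter than the image's mean brightness. Visually
    similar images yield hashes with a small Hamming distance."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 grayscale grids standing in for the two face crops.
img_a = [[200, 200, 50, 50]] * 4
img_b = [[50, 50, 200, 200]] * 4   # mirrored pattern: a different image
dist = hamming(average_hash(img_a), average_hash(img_b))
```

A large Hamming distance, as here, indicates two different images; real pipelines apply the same comparison to normalised face crops.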

Conclusion
The claim that the jhalmuri vendor was an SPG commando is false and misleading. The viral collage shows two different individuals, and there is no evidence to support the allegation.
Introduction
Autonomous transportation, smart cities, remote medical care, and immersive augmented reality are just a few of the revolutionary applications made possible by the global rollout of 5G technology. However, along with this revolution in connectivity has come a record-breaking rise in vulnerabilities and threats, driven by software-defined networks, growing attack surfaces, and increasingly complex architectures. As work on next-generation 6G networks accelerates, with commercialisation expected around 2030, security issues are piling up, including those related to AI-driven networks, terahertz communications, and quantum computing attacks. For a nation like India, poised to become a global technological leader, securing next-generation networks is not merely a technical necessity but a strategic imperative. Initiatives such as the India-UK collaboration on telecom security in recent years underscore that international alliances are now essential to addressing these challenges.
Why Cybersecurity in 5G and 6G Networks is Crucial
With the launch of global 5G services and the rapid introduction of 6G technologies, the telecom sector is seeing a fundamental transformation. Besides expanding connectivity, future networks are also creating the building blocks for networked and highly intelligent environments. With its ultra-high speed of 10 Gbps, network slicing, and ultra-low latency, 5G provides new capabilities that are perfectly suited for mission-critical applications such as telemedicine, autonomous vehicles, and industrial IoT. Sixth-generation wireless technology is still in development and is expected to be roughly one hundred times faster than 5G. These advances, however, bring new challenges:
- Decentralised Infrastructure (edge computing nodes): Increased number of entry points for attack.
- Virtual Network Functions (VNFs): Greater vulnerability to configuration issues and software exploitation.
- Billions of IoT devices with different security states, thus forming networks that are more difficult to secure.
Although these challenges are unparalleled, the advancement in technology also creates new opportunities.
Understanding the Cyber Threat Landscape for 5G and 6G
The move to 5G and the upgrade to 6G open great opportunities, but also open doors for new cybersecurity risks. Open RAN usage offers flexibility and vendor choice but exposes the supply chain to untested third-party components and attacks. Vulnerabilities in the Service-Based Architecture (SBA) can be exploited to disrupt vital network services, resulting in outages or data breaches. Similarly, widespread adoption of edge computing to reduce latency creates multiple entry points for an attacker to target. Compounding the problem is the explosion of IoT device connections through 5G, which, if breached, can fuel massive botnets capable of conducting large-scale distributed denial-of-service (DDoS) attacks.
Challenges in 6G
- AI-Powered Cyberattacks: AI-native 6G networks are susceptible to adversarial machine learning and data- or model-poisoning attacks against the models used for both security and traffic optimisation.
- Quantum Threats: Post-quantum cryptography may be required if quantum computing renders current encryption algorithms outdated.
- Privacy Concerns with Digital Twins: while 6G promises real-time virtual replicas of the physical world, those replicas raise enormous privacy and data protection issues.
- Cross-Border Data Flow Risks: Secure interoperability frameworks and standardised data sovereignty are essential for the worldwide rollout of 6G.
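The quantum threat in the list above can be made concrete: Shor's algorithm breaks RSA and elliptic-curve cryptography outright, while Grover's search roughly halves the effective strength of symmetric keys. A back-of-the-envelope helper reflecting this standard textbook view (the function and its categories are illustrative, not a formal security assessment):

```python
def post_quantum_strength(algorithm, key_bits):
    """Rough effective security (in bits) against a large quantum computer.

    - "rsa" / "ecc": broken by Shor's algorithm -> 0 bits.
    - "symmetric" (e.g. AES): Grover's search roughly halves the
      effective key length.
    """
    if algorithm in ("rsa", "ecc"):
        return 0
    if algorithm == "symmetric":
        return key_bits // 2
    raise ValueError(f"unknown algorithm class: {algorithm}")

# AES-128 drops to ~64-bit effective security; AES-256 retains ~128.
aes128 = post_quantum_strength("symmetric", 128)
aes256 = post_quantum_strength("symmetric", 256)
rsa2048 = post_quantum_strength("rsa", 2048)
```

This is why post-quantum migration plans typically pair larger symmetric keys with entirely new asymmetric schemes rather than larger RSA moduli.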
A Critical Step Toward Secure Telecom: The India-UK Partnership
India's recent foray with the UK reflects its active role in shaping the future of telecom security. Major points of the UK-India Telecom Roundtable are:
- MoU between SONIC Labs and C-DOT: Dedicated to Open RAN and AI integration security in 4G/5G deployments. This will offer supply chain diversity without sacrificing resilience.
- Research Partnerships for 6G: Partnerships with UK institutions like CHEDDAR (Cloud & Distributed Computing Hub) and the University of Glasgow 6G Research Centre are focused on developing AI-driven network security solutions, green 6G, and quantum-resistant design.
- Telecom Cybersecurity Centres of Excellence: Building bilateral Centres of Excellence (CoEs) for telecom cybersecurity, ethical AI, and digital twin security models.
- Standardisation Efforts: Joint contribution to the ITU's IMT-2030 standards, ensuring that cybersecurity-by-design principles are integrated into worldwide 6G specifications.
- Future Initiatives:
- Application of privacy-enhancing technologies (PETs) for cross-sectoral data usage.
- Secure quantum communications to be used for satellite and submarine cable connections.
- Encouragement of native telecommunication stacks for strategic independence.
Global Policy and Regulatory Aspects
- India's Bharat 6G Vision: India aims to lead the global standardisation process through the Bharat 6G Alliance, with a vision of inclusive, secure, and sustainable connectivity.
- International Harmonisation:
- 3GPP and ITU's joint effort towards standardisation of 6G security.
- Cross-border privacy and cybersecurity compliance system designs to enable secure flows of data.
- Cyber Diplomacy for Telecom Security: Cross-border information-sharing architectures, threat intelligence exchange, and coordinated incident response schemes are essential to 6G security resilience globally.
Building a Secure and Resilient Future for 5G and 6G
Establishing a safe and future-proof 5G and 6G environment must be an end-to-end effort involving governments, industry, and technology vendors. Security should be integrated into the underlying architecture of the networks, not bolted on as an optional afterthought. Active engagement in international bodies is also required to establish consistent security and privacy standards across geographies. Public-private partnerships, including partnerships with academia, will drive innovation and the creation of advanced protection mechanisms. Simultaneously, building a competent talent pool in AI-based threat analysis and quantum-resistant cryptography will be essential to counter the sophisticated threats facing new telecom technologies.
Conclusion
With 6G on the way and 5G technologies already changing global connections, cybersecurity needs to remain a key focus. The partnership between India and the UK serves as an example of why the safe rise of tomorrow's networks depends on global collaboration, AI-driven security measures, and quantum preparedness. By combining security by design, supporting international standards, and encouraging cooperative innovation, the world can unlock the transformative potential of 5G and 6G. The result will be a digital future that is not only fast and inclusive but also resilient and trustworthy.
References:
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2105225
- https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/pages/default.aspx
- https://dot.gov.in/sites/default/files/Bharat%206G%20Vision%20Statement%20-%20full.pdf
- https://www.gsma.com/solutions-and-impact/technologies/security/wp-content/uploads/2024/07/FS.40-v3.0-002-19-July.pdf

Introduction
The growing popularity of social media platforms and increasing online interaction have created a breeding ground for the generation and spread of misinformation. Misinformation propagates faster and more easily on online social media platforms than through traditional news media such as newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse and indefinitely store massive volumes of data. Constant monitoring of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a large amount of misinformation spread on major platforms such as X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, causing widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top spots, with Instagram third and TikTok and X fourth and fifth. Social media platforms provide users with instant connectivity, allowing them to share information quickly with other users without requiring the permission of a gatekeeper such as an editor, as in the case of traditional media channels.
Between the elections held in more than 100 countries in 2024, the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip, the sheer volume of information, both true and false, has been immense. Identifying accurate information amid real-time misinformation is challenging, and traditional content moderation techniques alone may not be sufficient to curb it. Hence the call for a dedicated, real-time misinformation surveillance system, backed by AI with human oversight and balanced against the privacy of users' data, which could prove an effective mechanism to counter misinformation on larger platforms. Concerns regarding data privacy need to be prioritised before deploying such technologies on platforms with large user bases.
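The hybrid "AI plus human oversight" approach argued for above can be sketched as a simple triage rule: high-confidence classifier scores trigger automatic action, mid-range scores go to a human review queue, and everything else is left alone. The thresholds and labels below are illustrative assumptions, not any platform's actual policy:

```python
def triage(post_score, auto_threshold=0.95, review_threshold=0.6):
    """Route a post based on a misinformation-classifier score in [0, 1].

    - score >= auto_threshold: flag automatically for immediate action
    - review_threshold <= score < auto_threshold: queue for human review
    - otherwise: no action (protects legitimate speech from over-removal)
    """
    if post_score >= auto_threshold:
        return "auto_flag"
    if post_score >= review_threshold:
        return "human_review"
    return "no_action"

routes = [triage(s) for s in (0.98, 0.7, 0.2)]
```

Keeping a wide human-review band is a deliberate design choice: it concentrates scarce moderator attention on the genuinely ambiguous cases where automated errors would be most harmful.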
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternate perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only necessary data and adopting a consent-based approach protects user privacy and enhances transparency and trust. It also protects users from the stifling of dissent and from profiling for targeted ads.
- An independent oversight body can be created to monitor surveillance activities while ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. But it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The examples of the EU's Digital Services Act and Singapore's POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible, and ethical online ecosystem. Policy-driven AI solutions for real-time misinformation monitoring, balanced against ethics and privacy, are the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL