#FactCheck: Old Jerusalem Clash Video Falsely Shared as Chaos at Tel Aviv Airport
Executive Summary
A video is being widely shared on social media showing a group of people clashing near a counter. The clip is claimed to be from Ben Gurion Airport in Tel Aviv, Israel. Users allege that panic caused by Iranian missile threats has led people to try to flee the country, resulting in chaos and fights over flight tickets. However, research by the CyberPeace team found the claim to be false. Our findings reveal that the video is unrelated to the recent tensions and actually shows a separate incident from 2025.
Claim:
The viral video is being shared with the claim that chaos has erupted at Tel Aviv’s airport, with people trying to leave Israel due to Iranian attacks. An X user named “AjjuShane Experience (@AjjuShane)” shared the video with the caption: “We need tickets, we need flights, we want to leave Israel. We will not stay here until Iranian missiles crush us. Clashes are now happening at Tel Aviv’s Ben Gurion Airport.”
Post link:
- https://x.com/AjjuShane/status/2032584953112965238

Fact Check:
To verify the claim, we extracted keyframes from the video and conducted a reverse image search on Google. During this research, we found the same video on a Facebook page named Ynet, where it was shared on July 20, 2025.
- https://www.facebook.com/share/p/1NgTmpaZCs/
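The keyframe matching described above generally relies on perceptual hashing, which lets near-duplicate frames match even after re-encoding or compression. Below is a minimal sketch of one such technique, average hashing, assuming frames are plain 2D lists of grayscale values; production pipelines would use libraries such as OpenCV or ImageHash instead.

```python
# Average hash ("aHash"): downscale the frame, threshold each pixel at the
# mean, and pack the resulting bits into an integer. Visually similar frames
# then differ in only a few bits (a small Hamming distance).

def average_hash(frame, size=8):
    """frame: 2D list of grayscale values (0-255); returns a 64-bit int hash."""
    h, w = len(frame), len(frame[0])
    # Naive nearest-neighbour downscale to a size x size grid.
    small = [[frame[r * h // size][c * w // size] for c in range(size)]
             for r in range(size)]
    mean = sum(sum(row) for row in small) / (size * size)
    bits = 0
    for row in small:
        for px in row:
            bits = (bits << 1) | (1 if px >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Synthetic 16x16 "frames": a gradient, the same gradient slightly brightened
# (a near-duplicate), and a rotated gradient (a different image).
frame_a = [[col * 16 for col in range(16)] for _ in range(16)]
frame_b = [[col * 16 + 4 for col in range(16)] for _ in range(16)]
frame_c = [[row * 16 for _ in range(16)] for row in range(16)]

print(hamming(average_hash(frame_a), average_hash(frame_b)))  # small distance
print(hamming(average_hash(frame_a), average_hash(frame_c)))  # large distance
```

Search engines index such fingerprints at scale, which is why even cropped or re-uploaded copies of a viral clip can often be traced back to their original source.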

The video carried a caption in Hebrew. Upon translation, it stated that the incident took place at “Cinema City” in Jerusalem, where dozens of Jewish youths clashed with Arab cafeteria workers. The visuals showed youths vandalizing property and throwing objects at staff members, while staff retaliated. Some individuals sustained minor injuries, but no serious harm was reported. We also found the same video on the YouTube channel of The Times of India, published on July 20, 2025. The caption mentioned that anti-Arab riots broke out inside a Cinema City theatre in Jerusalem on July 19, showing youths vandalizing the premises and clashing with Arab employees.

Conclusion:
Our research clearly shows that the viral video is from 2025 and unrelated to any recent Iran-Israel tensions. It is being misleadingly shared as a recent incident from Tel Aviv airport.

Introduction
Artificial Intelligence (AI) is rapidly reshaping the digital world, transforming healthcare, finance, education, and cybersecurity. But alongside these legitimate uses, bad actors are weaponising the technology. Increasingly, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and accelerate social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of the past, these AI-generated messages are tailored to the victim's location, language, and professional background, considerably increasing the attack success rate. Example: OpenAI and Microsoft have reported that Russian and North Korean APTs employed LLMs to create customised phishing lures and notes supporting malware obfuscation.
- Malware Obfuscation and Script Generation- Large Language Models (LLMs) such as ChatGPT may be used by attackers to help write, debug, and camouflage malicious scripts. While most AI tools contain safety mechanisms to guard against abuse, threat actors often resort to "jailbreaking" to evade these protections. Once such constraints are lifted, a model can be used to develop polymorphic malware that alters its code structure to avoid detection, or to obfuscate PowerShell or Python scripts so that conventional antivirus software struggles to identify them. LLMs have also been employed to suggest techniques for backdoor installation, further facilitating stealthy access to compromised systems.
- Disinformation and Narrative Manipulation- State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially around elections, protests, and geopolitical disputes. With the help of LLMs, these actors can create massive volumes of fake news stories, deepfake interview transcripts, imitation social media posts, and bogus public comments on online forums and petitions. The localisation of content makes this strategy especially dangerous: messages are written with cultural and linguistic specificity, making them more credible and harder to detect. The ultimate aim is to sow societal unrest, manipulate public sentiment, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report, "Disrupting Malicious Uses of AI" (June 2025), which, together with Microsoft's "Staying Ahead of Threat Actors in the Age of AI," outlined how state-affiliated actors had been testing and misusing language models for malicious purposes. The reports named several advanced persistent threat (APT) groups, each attributed to a particular nation-state. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and scaling operations. Significantly, the report noted that the tools were not used to produce malware, but rather to support the preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the biggest worries is how malicious users can "jailbreak" AI models, tricking them into generating prohibited content through adversarial inputs. Common methods include:
- Roleplay: Prompting the AI to act as a persona, such as a professional criminal advisor, that ignores its safety rules
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Posing sensitive queries in less heavily moderated languages
- Prompt Injection: Embedding dangerous requests within innocent-looking questions
These methods have enabled attackers to bypass moderation tools, turning otherwise benign tools into instruments of cybercrime.
Conclusion
As AI systems evolve and become more accessible, their use by state-sponsored cyber actors poses an unprecedented threat to global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a force multiplier for adversarial campaigns. AI tools such as ChatGPT, created for benevolent purposes, can be repurposed to amplify phishing, propaganda, and social engineering attacks. Cross-border governance, ethical development practices, and strong cyber hygiene must be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-sync videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, Murf, etc, which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the most targeted sector for deepfake-induced fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: The usage of AI will allow cybercriminals to deploy malware that can adapt in real-time to carry out phishing attacks and inundate targets with increased speed and variations. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can be designed to overwhelm defences with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will increasingly need AI-based cybersecurity for detection and response, since generative AI is becoming essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image and voice-generation/manipulation technologies, enhanced authentication measures such as token-based authentication or other hardware-based measures, abnormal behaviour detection, multi-device push notifications, geolocation verifications, etc. can be used to improve prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented.
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
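To make the token-based MFA recommendation above concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) generator, the mechanism behind most authenticator apps. It is illustrative only; real deployments should rely on vetted libraries, constant-time comparison, and rate limiting.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

Because each code is valid only within a short window, a deepfaked voice or face alone cannot replay it; an attacker would also need the enrolled device holding the shared secret.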
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training, to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr
Introduction
Autonomous transportation, smart cities, remote medical care, and immersive augmented reality are just a few of the revolutionary applications made possible by the global rollout of 5G technology. However, along with this revolution in connectivity, a record rise in vulnerabilities and threats has emerged, driven by software-defined networks, growing attack surfaces, and increasingly complex architectures. As work on next-generation 6G networks accelerates, with commercialisation expected around 2030, security issues are mounting, including those related to AI-driven networks, terahertz communications, and quantum computing attacks. For a nation like India, poised to become a global technology leader, securing next-generation networks is not merely a technical necessity but a strategic imperative. Initiatives such as the recent India-UK collaboration on telecom security underscore how international alliances have become essential to addressing these challenges.
Why Cybersecurity in 5G and 6G Networks is Crucial
With the launch of global 5G services and the rapid introduction of 6G technologies, the telecom sector is seeing a fundamental transformation. Besides expanding connectivity, future networks are also creating the building blocks for networked and highly intelligent environments. With ultra-high speeds of up to 10 Gbps, network slicing, and ultra-low latency, 5G provides new capabilities perfectly suited for mission-critical applications such as telemedicine, autonomous vehicles, and industrial IoT. Sixth-generation (6G) technology, still in development, is expected to be roughly one hundred times faster than 5G. These advances, however, bring new challenges:
- Decentralised Infrastructure (edge computing nodes): Increased number of entry points for attack.
- Virtual Network Functions (VNFs): Greater vulnerability to configuration issues and software exploitation.
- Billions of IoT devices with different security states, thus forming networks that are more difficult to secure.
Although these challenges are unparalleled, the advancement in technology also creates new opportunities.
Understanding the Cyber Threat Landscape for 5G and 6G
The move to 5G and the upgrade to 6G open great opportunities, but also open doors for new cybersecurity risks. Open RAN adoption offers flexibility and vendor choice but exposes the supply chain to untested third-party components and attacks. Vulnerabilities in the Service-Based Architecture (SBA) can be exploited to disrupt vital network services, resulting in outages or data breaches. Similarly, widespread adoption of edge computing to reduce latency creates multiple entry points for attackers. Compounding the problem is the explosion of IoT device connections through 5G, which, if breached, can fuel botnets capable of conducting large-scale distributed denial-of-service (DDoS) attacks.
Challenges in 6G
- AI-Powered Cyberattacks: AI-native 6G networks are susceptible to adversarial machine learning and data/model poisoning attacks, targeting both security functions and traffic optimisation.
- Quantum Threats: Post-quantum cryptography may be required if quantum computing renders current encryption algorithms outdated.
- Privacy Concerns with Digital Twins: By offering real-time virtual replicas of the physical world, 6G digital twins may create enormous privacy and data protection issues.
- Cross-Border Data Flow Risks: Secure interoperability frameworks and standardised data sovereignty are essential for the worldwide rollout of 6G.
A Critical Step Toward Secure Telecom: The India-UK Partnership
India's recent partnership with the UK reflects its active role in shaping the future of telecom security. Major outcomes of the UK-India Telecom Roundtable are:
- MoU between SONIC Labs and C-DOT: Dedicated to Open RAN and AI integration security in 4G/5G deployments. This will offer supply chain diversity without sacrificing resilience.
- Research Partnerships for 6G: Partnerships with UK institutions like CHEDDAR (Cloud & Distributed Computing Hub) and the University of Glasgow 6G Research Centre are focused on developing AI-driven network security solutions, green 6G, and quantum-resistant design.
- Telecom Cybersecurity Centres of Excellence: Constructing two-way CoEs for telecom cybersecurity, ethical AI, and digital twin security models.
- Standardisation Efforts: Joint contributions to the ITU for the creation of IMT-2030 standards, ensuring that cybersecurity-by-design principles are integrated into worldwide 6G specifications.
- Future Initiatives:
- Application of privacy-enhancing technologies (PETs) for cross-sectoral data usage.
- Secure quantum communications to be used for satellite and submarine cable connections.
- Encouragement of native telecommunication stacks for strategic independence.
Global Policy and Regulatory Aspects
- India's Bharat 6G Vision: India aims to lead the global standardisation process through the Bharat 6G Alliance, with a vision of inclusive, secure, and sustainable connectivity.
- International Harmonisation:
- 3GPP and ITU's joint effort towards standardisation of 6G security.
- Cross-border privacy and cybersecurity compliance system designs to enable secure flows of data.
- Cyber Diplomacy for Telecom Security: Cross-border sharing of information architectures, threat intelligence sharing, and coordinated incident response schemes are essential to 6G security resilience globally.
Building a Secure and Resilient Future for 5G and 6G
Establishing a safe and future-proof 5G and 6G environment should be an end-to-end effort involving governments, industry, and technology vendors. Security should be integrated into the underlying architecture of the networks and not an afterthought feature to be optionally provided. Active engagement in international bodies to establish homogeneous security and privacy standards across geographies is also required. Public-private partnerships, including academia partnerships, will be the driver for innovation and the creation of advanced protection mechanisms. Simultaneously, creating a competent talent pool to manage AI-based threat analysis, quantum-resistant cryptography, and next-generation cryptographic methods will be required to combat the advanced menace of new telecom technologies.
Conclusion
With 5G technologies already changing global connections and 6G on the way, cybersecurity needs to remain a key focus. The partnership between India and the UK illustrates why the safe rise of tomorrow's networks depends on global collaboration, AI-driven security measures, and quantum preparedness. The world can unleash the transformative potential of 5G and 6G by combining security by design, supporting international standards, and encouraging innovation through cooperation. The result will be an online future that is not only fast and egalitarian but also robust and trustworthy.
References:
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2105225
- https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/pages/default.aspx
- https://dot.gov.in/sites/default/files/Bharat%206G%20Vision%20Statement%20-%20full.pdf
- https://www.gsma.com/solutions-and-impact/technologies/security/wp-content/uploads/2024/07/FS.40-v3.0-002-19-July.pdf