#FactCheck: Viral Video Does Not Show Congress Workers Protesting Against Rahul Gandhi
Executive Summary:
A video circulating on social media shows a group of people tearing Congress posters and raising controversial slogans. The clip is being shared with the claim that the individuals seen in the video are Congress party workers protesting against Rahul Gandhi and raising slogans against him. However, research by CyberPeace found the viral claim to be misleading. Our research revealed that the video dates back to February 21, 2026. On that day, members of the Bharatiya Janata Yuva Morcha (BJYM) staged a protest outside a Congress office, during which they raised slogans and tore Congress posters. The same video is now being circulated with a false narrative.
Claim
On February 24, 2026, a Facebook user shared the viral video with the caption: “Rebellion against Rahul Gandhi in Congress’ own stronghold! Party workers themselves tore posters and raised slogans — ‘Rahul Gandhi is a thief… a thief!’ This video exposes the internal truth of Congress. Congress itself is Muslim League.”

Fact Check
To verify the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the search, we found the same video uploaded on YouTube on February 21, 2026.
According to the description accompanying the video, BJP workers had staged a protest outside a Congress building. The report mentioned vandalism and stone-pelting during the protest, resulting in injuries to several individuals.
- https://www.youtube.com/watch?v=pW-13mSvJ2c
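The key-frame extraction step described above can be sketched in code. This is only a minimal illustration of the workflow, not the tool actually used: the frame-index helper is pure Python, while the saving routine assumes the third-party OpenCV package is installed and that a local copy of the clip exists (the filename is hypothetical).

```python
def sample_frame_indices(total_frames: int, fps: float, every_sec: float = 1.0) -> list[int]:
    """Pick one frame index per `every_sec` seconds of footage."""
    step = max(1, round(fps * every_sec))
    return list(range(0, total_frames, step))


def extract_key_frames(video_path: str, every_sec: float = 1.0) -> int:
    """Save sampled frames as JPEGs for reverse image search; returns the count saved.

    Requires OpenCV (`pip install opencv-python`).
    """
    import cv2  # imported here so the pure helper above stays dependency-free

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = 0
    for i in sample_frame_indices(total, fps, every_sec):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)  # jump to the sampled frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"frame_{i:06d}.jpg", frame)
            saved += 1
    cap.release()
    return saved
```

Each saved JPEG can then be uploaded to a reverse image search service such as Google Lens, as described above.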

Using this lead, we conducted a keyword search on Google and found a report published on February 21, 2026, by the Hindi news website Raj Express. The visuals in the report closely matched those seen in the viral clip.

According to the report, the protest in Bhopal was organized by the Bharatiya Janata Yuva Morcha in response to a T-shirt protest staged by the Youth Congress during an AI Summit held at Bharat Mandapam in New Delhi. The situation escalated when protesters marched toward the state Congress office in Shivaji Nagar. Police attempted to disperse the crowd using water cannons, but some protesters reportedly entered the Congress office premises, leading to tension.
Further, we found the same viral video on the official Facebook page of Indian National Congress - Madhya Pradesh, where it was posted on February 26, 2026. In the post, the Congress unit alleged that BJYM workers and BJP-affiliated individuals had entered the Congress office, vandalized property, and created chaos in the presence of police officials.

Conclusion
Our research found that the viral claim is misleading. The video is from February 21, 2026, when BJYM workers protested outside a Congress office and engaged in vandalism. The footage is now being falsely shared as evidence of an internal rebellion by Congress workers against Rahul Gandhi.

Introduction
GPS spoofing, formerly confined to conflict zones, has lately become a growing hazard for pilots and aircraft operators across the world, and several countries have faced such incidents. The Global Positioning System (GPS) is an essential part of aviation infrastructure, as it supersedes the traditional radio beams once used to guide planes toward landing. GPS spoofing occurs when a deceptive radio signal overrides a legitimate GPS satellite signal, so that the receiver gets false location information; this definition stems from the US Radio Technical Commission for Aeronautics, which provides specialized advice to government regulatory authorities. Although GPS signal interference of this character has existed for over a decade, this is the first time civilian passenger flights have faced such a significant danger. According to Agence France-Presse (AFP), false GPS signals mislead onboard systems and complicate the job of airline pilots flying near conflict areas. GPS spoofing may also result from military electronic warfare systems deployed in zones of regional tension, and it can cause significant upheaval in commercial aviation, affecting arrivals, departures, and passenger safety.
Spoofing might likewise involve one country's military sending false GPS signals to an enemy plane or drone to impede its ability to operate, with collateral impact on airliners flying nearby. Such collateral damage to commercial aircraft can occur as confrontations escalate and militaries transmit faulty GPS signals to thwart drones and other aircraft. This could lead to the loss of a civilian aircraft in an already high-risk area close to an active battle zone, with the potential to trigger an international crisis. Furthermore, GPS jamming is different from GPS spoofing: jamming blocks or obstructs GPS signals, whereas spoofing substitutes false ones, making it far more threatening.
Global Reporting
An International Civil Aviation Organization (ICAO) assessment released in 2019 indicated that there were 65 spoofing incidents across the Middle East in the preceding two years, according to the C4ADS report. At the beginning of 2018, Eurocontrol received more than 800 reports of Global Navigation Satellite System (GNSS) interference in Europe. GPS spoofing in Eastern Europe and the Middle East has caused deviations of up to 80 nm from the planned flight route, and affected aircraft have had to rely on radar vectors from Air Traffic Control (ATC). According to Forbes, the flight data intelligence website OPSGROUP, whose 8,000 members include pilots and controllers, has been reporting spoofing incidents since September 2023. Similarly, over 20 airliners and corporate jets flying over Iran diverted from their planned paths after misleading GPS signals transmitted from the ground overrode the aircraft's navigation systems.
In this context, malicious hackers, though still at large, have lately worked out how to override an airplane's critical Inertial Reference System (IRS), an essential piece of technology that manufacturers describe as the "brains" of an aircraft. Current IRS designs, however, are not prepared to counter this kind of attack. The IRS uses accelerometers, gyroscopes, and electronics to deliver accurate attitude, speed, and navigation data so that a plane can determine how it is moving through the airspace. GPS spoofing incidents can render the IRS ineffective, and in numerous cases all navigation capability is lost.
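The relationship between GPS and inertial navigation described above suggests one commonly discussed mitigation idea: cross-check the GPS fix against an inertially dead-reckoned position and flag large divergence. The sketch below is a deliberately simplified one-dimensional toy model of that idea (real strapdown inertial navigation integrates in three dimensions with attitude and gravity compensation); it is an illustration, not an avionics implementation.

```python
def dead_reckon(p0: float, v0: float, accels: list[float], dt: float) -> list[float]:
    """Integrate accelerometer samples into a position track (1-D toy model)."""
    p, v = p0, v0
    track = []
    for a in accels:
        v += a * dt   # velocity from acceleration
        p += v * dt   # position from velocity
        track.append(p)
    return track


def spoof_alerts(gps_fixes: list[float], inertial_track: list[float],
                 threshold: float) -> list[bool]:
    """Flag samples where the GPS fix and the inertial position disagree
    by more than `threshold` metres, a hint the GPS signal may be spoofed."""
    return [abs(g, ) > threshold if False else abs(g - i) > threshold
            for g, i in zip(gps_fixes, inertial_track)]


# Steady 10 m/s cruise: the inertial track is 10, 20, 30 m.
inertial = dead_reckon(0.0, 10.0, [0.0, 0.0, 0.0], 1.0)
# The third GPS fix jumps far off the track, as a spoofed signal might.
alerts = spoof_alerts([10.0, 20.0, 300.0], inertial, threshold=50.0)
```

In this toy scenario only the third sample raises an alert, since the spoofed GPS fix diverges from the inertially derived position by far more than the threshold.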
Red Flag from Agencies
The European Union Aviation Safety Agency (EASA) and the International Air Transport Association (IATA) jointly hosted a workshop on incidents in which satellite navigation systems were spoofed or jammed, and concluded that these pose a considerable safety challenge. IATA and EASA have also taken measures to share information about GPS tampering so that crews and pilots can recognize when it is occurring. EASA had further warned about an upsurge in reports of GPS spoofing and jamming incidents in the Baltic Sea area, around the Black Sea, and in regions near Russia and Finland in 2022 and 2023. According to industry officials, deploying new technologies on civil aircraft can take several years, and with GPS spoofing incidents increasing, there is no time to dawdle. Experts have noted critical navigation failures on airplanes, with several recent reports of alarming attacks that altered planes' in-flight GPS. In their assessment, GPS spoofing could affect commercial airlines and cause further disarray: pilots may be diverted from their flight route and even stray into a no-fly or otherwise unauthorized zone, putting the aircraft at risk.
OpsGroup, a global group of pilots and technicians, first raised awareness of the issue, after which the Federal Aviation Administration (FAA) issued a warning about the flight-safety risk these attacks pose to civil aviation operations. In addition, India's civil aviation regulator, the Directorate General of Civil Aviation (DGCA), issued an advisory circular on spoofing threats to planes' GPS signals when flying over parts of the Middle East. The DGCA advisory further notes that the aviation industry is grappling with uncertainty given the emerging dangers and reports of GNSS jamming and spoofing.
Conclusion
The aviation industry continues to grapple with GPS spoofing and remains largely unprepared to combat it, though it should explore attainable technologies for prevention. As international conflicts grow more convoluted, technological countermeasures can be pricey, intricate, and not always effective, depending on the type of spoofing used.
As GPS interference attacks become more sophisticated, specialized countermeasures must be continually updated. Several measures can help avert GPS spoofing:
- Education and training: increasing awareness among pilots, air traffic controllers, and other aviation experts.
- Receiver technology: developing and deploying more advanced GPS receiver technology.
- Monitoring and reporting: installing robust monitoring systems.
- Cooperation and information sharing: collaboration among stakeholders such as government bodies and aviation organisations, including data and information sharing.
- Regulatory measures: regulations and guidelines issued by regulatory and government bodies.
References
- https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/false-gps-signal-surge-makes-life-hard-for-pilots/articleshow/108363076.cms?from=mdr
- https://nypost.com/2023/11/20/lifestyle/hackers-are-taking-over-planes-gps-experts-are-lost-on-how-to-fix-it/
- https://www.timesnownews.com/india/planes-losing-gps-signal-over-middle-east-dgca-flags-spoofing-threat-article-105475388
- https://www.firstpost.com/world/gps-spoofing-deceptive-gps-lead-over-20-planes-astray-in-iran-13190902.html
- https://www.forbes.com/sites/erictegler/2024/01/31/gps-spoofing-is-now-affecting-airplanes-in-parts-of-europe/?sh=48fbe725c550
- https://www.insurancejournal.com/news/international/2024/01/30/758635.htm
- https://airwaysmag.com/gps-spoofing-commercial-aviation/
- https://www.wsj.com/articles/aviation-industry-to-tackle-gps-security-concerns-c11a917f
- https://www.deccanherald.com/world/explained-what-is-gps-spoofing-that-has-misguided-around-20-planes-near-iran-iraq-border-and-how-dangerous-is-this-2708342

Introduction
In January 2026, the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness came into effect in South Korea, establishing one of the first national AI laws in the world. The bill, enacted by the National Assembly of Korea in December 2024 and in force from January 22, 2026, aims to balance the rapid advancement of technology with clear safeguards against risk, alongside transparency, accountability, and responsible AI use. It places Seoul, alongside the European Union, at the forefront of developing legal systems for artificial intelligence and signals South Korea's long-term ambition to become a global AI power.
What the AI Basic Act Covers
The AI Basic Act merges 19 separate AI bills into a single piece of legislation covering the full lifecycle of AI: research and development, deployment, and utilisation. Its coverage is broad, applying to any AI system that affects the Korean market or users inside the country, irrespective of where it was created. The law does not apply to national defence and security applications.
The law defines key concepts like artificial intelligence, generative AI, and high-impact AI and establishes the principles of ethical AI, safety, user rights, industry support, and national policy coordination. It also offers a legal foundation for the activities of the government to promote AI innovation without jeopardising the common good.
Fundamentally, the AI Basic Act is designed to establish a culture of trust between businesses and the government/citizens. It does not prohibit AI technologies and does not excessively limit innovation. Instead, it creates the framework of responsible development and economic growth.
Guardrails for Safety and Accountability
One of the defining features of the AI Basic Act is its risk-based approach. Rather than treating all AI systems alike, it distinguishes ordinary AI systems from high-impact ones: those applied in sectors where a wrong or unsafe decision can seriously affect people's safety, rights, or critical infrastructure. Examples include healthcare, transportation, financial services, education, and public services.
Operators of high-impact AI must put in place risk management plans, human controls, and monitoring systems. In critical decision-making situations, human oversight must remain available at all times: machines may assist, but cannot override human judgment where safety or other human rights are at stake.
The law enables the regulators to perform on-site checks, demand documentation, and conduct compliance investigations. Fines for breaches may go up to 30 million Korean won (approximately 21,000 US dollars). It has a one-year period of transition that is based on guidance but not enforcement, thus allowing companies time to implement compliance measures before imposing fines.
These requirements enhance accountability by defining who is responsible for safety outcomes. South Korea's law embeds that accountability in the ecosystem itself, rather than relying on industry self-governance alone.
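The risk-based distinction described in this section can be pictured as a simple routing rule: systems in designated sectors attract the heavier obligations. The snippet below is purely an illustrative sketch using the example sectors and duties mentioned above; the Act's actual legal tests for the high-impact classification are more nuanced and are set out in the statute and its subordinate rules.

```python
# Sectors the Act treats as examples of high-impact AI (illustrative list only).
HIGH_IMPACT_SECTORS = {
    "healthcare", "transportation", "financial services",
    "education", "public services",
}

# Extra duties for high-impact operators, as summarised in the article.
HIGH_IMPACT_OBLIGATIONS = [
    "risk management plan",
    "human oversight",
    "monitoring systems",
]


def obligations_for(sector: str) -> list[str]:
    """Return the extra duties an operator in `sector` would face (toy model)."""
    if sector.strip().lower() in HIGH_IMPACT_SECTORS:
        return list(HIGH_IMPACT_OBLIGATIONS)
    return []  # ordinary AI systems carry only the baseline duties
```

A healthcare deployment would thus be routed to the full set of high-impact duties, while, say, a game recommendation engine would not.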
Transparency and Labelling Requirements
Transparency is a cornerstone of the AI Basic Act. The legislation requires that users be notified when an AI system is operating, particularly where AI outputs could be confused with human-created material. For example, AI-generated text, images, video, or audio that may be difficult to distinguish from the real thing must carry obvious labels or watermarks so that users can understand the source of the content.
The labelling requirement is meant to combat misinformation, deceptive practices, and unintended influence on public perception. It reflects international concern over AI-generated content, such as deepfakes, manipulated media, and misleading online advertisements, which South Korea has already addressed separately in policy alongside discussions of data governance.
Transparency also extends to the decision-making of AI systems. Developers and operators should be able to explain how high-impact systems reach their conclusions, so that those affected by automated decisions can seek meaningful explanations. Although specific explainability criteria are still being developed, the law establishes the principle that AI cannot operate behind the scenes where crucial decisions are made.
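As a toy illustration of the labelling duty described above: the Act leaves the exact label format to subordinate rules, so the wording and helper below are purely hypothetical, showing only the general idea of a visible, non-duplicating disclosure.

```python
AI_DISCLOSURE = "[AI-generated]"  # hypothetical label text, not the statutory wording


def label_ai_output(text: str) -> str:
    """Prepend a visible disclosure label to AI-generated text, idempotently."""
    if text.startswith(AI_DISCLOSURE):
        return text  # already labelled; avoid stacking duplicate labels
    return f"{AI_DISCLOSURE} {text}"
```

Running the helper twice leaves a single label in place, which matters for content pipelines that may process the same output more than once.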
Data Privacy and User Protection
South Korea's AI governance complements its existing data protection law, the Personal Information Protection Act (PIPA), which is broadly regarded as comparable to major international data protection regimes such as the GDPR. The AI Basic Act clarifies how data may be gathered, processed, and used within AI systems consistent with privacy rights, particularly in high-impact areas.
The law does not supersede personal data protection rules, but it sets conditions on how AI developers must handle data used in training, testing, and operating AI systems. Operators are required to document their data workflows and demonstrate how they safeguard users' privacy, including through transparency and consent mechanisms where necessary. This helps ensure that the information feeding AI systems is governed by definite norms, making it harder to sidestep privacy requirements in the name of innovation.
Accountability and Governance Infrastructure
The AI Basic Act establishes a national policy framework for AI governance. At the top sits the National Artificial Intelligence Strategy Committee, chaired by the President, which proposes overall AI policy and aligns it with national objectives. It is supported by specialised organisations dealing with safety, risk assessment, and research, and by a policy centre that analyses AI's effects on society and assists industry adoption.
This institutional structure provides strategic guidance as well as operational oversight. By embedding AI governance in public administration rather than leaving it to market forces alone, South Korea aims to ensure that ethical and societal concerns are addressed across sectors and agencies.
Promoting Innovation and Industrial Support
Although the AI Basic Act does not shy away from regulation, it is not a law of restrictions alone. It also provides a legal basis for research and development, human capital, and the growth of the AI industry, with special consideration for startups and small and medium-sized businesses. The legislation promotes AI clusters, long-term funding programmes, and policies to bring foreign talent into the Korean AI ecosystem.
This two-pronged approach of compliance and support reflects Korea's broader ambition to become one of the leading AI powers in the world, alongside the US and China. The government has emphasised that clear and predictable rules will encourage trust, attract investment, and sustain innovation rather than stifle it.
What This Means Globally
The AI Basic Act of South Korea is notable not only for its contents but also for its timing. It is among the first thorough AI laws to come into force anywhere, ahead of the phased regulatory rollouts in other parts of the globe, such as the European Union. Its system combines a principle-based framework, transparency requirements, accountability rules, and industrial support, offering a contrasting model to both purely prescriptive risk regulation and lax self-regulation elsewhere.
Critics, including industry groups and civil society organisations, have suggested that some protections could be made more explicit, particularly for those harmed by AI systems, and that the high-impact categories could be defined more clearly. Nonetheless, the framework sets a benchmark that many nations will watch closely as they establish their own AI regimes.
Conclusion
The AI Basic Act puts South Korea at the forefront of national AI regulation, with well-developed guardrails enforcing transparency, ethical control, accountability, and data protection while also fostering innovation. It recognises that AI can deliver economic and social benefits but also real risks, particularly when systems are opaque, autonomous, or widely deployed. By writing human oversight, labelling requirements, risk management planning, and governance infrastructure into law, South Korea has taken a holistic approach to responsible AI governance that other countries may emulate in the years to come.
Sources
- https://www.theguardian.com/world/2026/jan/29/south-korea-world-first-ai-regulation-laws
- https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/10/artificial-intelligence-and-the-labour-market-in-korea_af668423/68ab1a5a-en.pdf
- https://asianintelligence.ai/south-korea
- https://aibasicact.kr/
- https://aibusinessweekly.net/p/south-korea-ai-basic-act-takes-effect-jan22-2026
- https://asiadaily.org/news/12112/

Executive Summary:
A video is being shared on social media showing an aircraft engulfed in massive flames on an airport runway. The video is being linked to the UAE, with the claim that a UAE airport was completely destroyed in recent drone and missile attacks by Iran. Research by CyberPeace found the viral claim to be false. Our research revealed that the viral video is not real, but AI-generated.
Claim:
On social media platform Facebook, a user shared the viral video on March 3, 2026, and wrote, “Amid the Iran-US-Israel conflict in the Middle East, operations at several major airports, including Dubai International Airport, have been temporarily suspended, causing thousands of flight cancellations and delays. Due to multiple missile and drone attacks from Iran, the United Arab Emirates (UAE) had shut its airspace, and limited structural damage at Dubai Airport was also confirmed, with reports of four staff members being injured. Later, considering the security situation, a limited number of flights were resumed, but full operations are still delayed due to ongoing safety concerns. This tension has significantly impacted regional aviation, travel, and global flight routes.”

Fact Check:
To verify the viral video, we searched relevant keywords on Google but did not find any credible media report confirming the claim. We did, however, find a video report on the YouTube channel of CNN-News18 mentioning explosions near Dubai Airport after a suspected Iranian drone strike. The visuals in that report are completely different from those in the viral clip.

Upon closely examining the viral video, we noticed several inconsistencies, raising suspicion that it might be AI-generated. We then analyzed the video using the AI detection tool Sightengine. The results indicated that the video is 71 percent likely to be AI-generated.

Conclusion:
Our research found that the viral video is not real, but AI-generated.