#FactCheck - AI-Generated Image Falsely Linked to Mira–Bhayandar Bridge
Executive Summary
Mumbai’s Mira–Bhayandar bridge has recently been in the news due to its unusual design. In this context, a photograph showing a bus seemingly stuck on the bridge is going viral on social media. Some users are sharing the image with the claim that it is from the Sonpur subdivision in Bihar. However, research by CyberPeace has found that the viral image is not real. The bridge shown in the image is indeed the Mira–Bhayandar bridge, which is under discussion because its design causes it to narrow suddenly from four lanes to two. That said, the bridge is not yet operational, and the viral image showing a bus stuck on it was created using Artificial Intelligence (AI).
Claim
An Instagram user shared the viral image on January 29, 2026, with the caption: “Are Indian taxpayers happy to see that this is funded by their money?” The link, archive link, and screenshot of the post can be seen below.

Fact Check:
To verify the claim, we first conducted a Google Lens reverse image search. This led us to a post shared by X (formerly Twitter) user Manoj Arora on January 29. While the bridge structure in that image matches the viral photo, no bus is visible in the original post. This raised suspicion that the viral image had been digitally manipulated.

We then ran the viral image through the AI detection tool Hive Moderation, which flagged it as over 99% likely to be AI-generated.
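AI-detection tools such as Hive Moderation return classifier confidence scores for an uploaded image, and the fact-check decision reduces to comparing that score against a threshold. The sketch below is illustrative only: the response shape and function name are assumptions for this example, not the actual Hive Moderation API schema.

```python
# Illustrative sketch: interpreting an AI-detection confidence score.
# The dict shape below is an assumed, simplified response format, NOT
# the real Hive Moderation API schema.

def is_likely_ai_generated(scores: dict, threshold: float = 0.99) -> bool:
    """Return True when the 'ai_generated' confidence meets the threshold."""
    return scores.get("ai_generated", 0.0) >= threshold

# Hypothetical scores resembling the >99% result reported for the viral image.
sample_scores = {"ai_generated": 0.997, "not_ai_generated": 0.003}
print(is_likely_ai_generated(sample_scores))  # True
```

In practice the threshold is a judgment call; detection tools output probabilities, not certainties, which is why reverse image search was used as a corroborating step.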

Conclusion
CyberPeace’s research confirms that while the Mira–Bhayandar bridge is real and has been in the news due to its design, the viral image showing a bus stuck on it was created using AI tools. Therefore, the image circulating on social media is misleading.

Introduction
In January 2026, the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness came into effect in South Korea, establishing one of the world’s first national AI laws. The bill, passed by the National Assembly of Korea in December 2024 and in force from January 22, 2026, aims to balance rapid technological advancement with clear safeguards against risk, alongside transparency, accountability, and responsible AI use. It places South Korea, alongside the European Union, at the forefront of building legal systems for artificial intelligence and signals the country’s long-term ambition to become a global AI power.
What the AI Basic Act Covers
The AI Basic Act consolidates 19 separate AI bills into a single piece of legislation covering the full AI lifecycle, including research and development, deployment, and utilisation. Its scope is broad: it applies to any AI system that affects the Korean market or users inside the country, irrespective of where it was developed. The law does not apply to national defence and security applications.
The law defines key concepts such as artificial intelligence, generative AI, and high-impact AI, and establishes principles of ethical AI, safety, user rights, industry support, and national policy coordination. It also provides a legal foundation for government activities that promote AI innovation without jeopardising the public good.
Fundamentally, the AI Basic Act is designed to build trust between businesses, the government, and citizens. It does not prohibit AI technologies or excessively limit innovation; instead, it creates a framework for responsible development and economic growth.
Guardrails for Safety and Accountability
One of the defining features of the AI Basic Act is its risk-based approach. Rather than treating all AI systems alike, it distinguishes ordinary AI systems from high-impact ones: systems used in sectors where a wrong or unsafe decision can significantly affect people’s safety, rights, or critical infrastructure. Examples include healthcare, transportation, financial services, education, and public services.
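The risk-based distinction above can be sketched as a simple classification step. This is an illustrative sketch only: the sector list and the labels are assumptions drawn from the examples in this article, not the statute’s actual high-impact criteria, which are more detailed.

```python
# Illustrative sketch of the Act's risk-based approach. The sector set and
# labels are assumptions for this example, not the statutory definitions.

HIGH_IMPACT_SECTORS = {
    "healthcare",
    "transportation",
    "financial_services",
    "education",
    "public_services",
}

def classify_ai_system(sector: str) -> str:
    """Label a system 'high-impact' if it operates in a listed sector."""
    return "high-impact" if sector in HIGH_IMPACT_SECTORS else "ordinary"

print(classify_ai_system("healthcare"))  # high-impact
print(classify_ai_system("gaming"))      # ordinary
```

The point of the sketch is that the classification drives obligations: a "high-impact" result would trigger the risk-management, oversight, and documentation duties described below, while "ordinary" systems face lighter requirements.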
Operators of high-impact AI must implement risk-management plans, human oversight, and monitoring systems. In critical decision-making situations, human control must remain available at all times: machines can assist, but cannot override human judgment where human safety or other human rights are at stake.
The law empowers regulators to perform on-site checks, demand documentation, and conduct compliance investigations. Fines for breaches may reach 30 million Korean won (approximately 21,000 US dollars). A one-year transition period focuses on guidance rather than enforcement, giving companies time to implement compliance measures before fines are imposed.
These requirements strengthen accountability by defining who is responsible for safety outcomes, embedding oversight in the regulatory ecosystem rather than relying on industry self-governance alone.
Transparency and Labelling Requirements
Transparency is a cornerstone of the AI Basic Act. The legislation requires that users be notified when an AI system is operating, particularly when AI-generated outputs could be mistaken for human-created material. For example, AI-generated text, images, video, or audio that is difficult to distinguish from authentic content must carry clear labels or watermarks so users can understand the source of the content.
The labelling requirement is intended to combat misinformation, deceptive practices, and unintended influence on public perception. It reflects international concern about AI-generated content, such as deepfakes, manipulated media, and misleading online advertisements, issues South Korea has already addressed separately in policy alongside discussions of data governance.
Transparency also extends to AI decision-making. Developers and operators must be able to explain how high-impact systems reach their conclusions, so that those affected by automated decisions can obtain meaningful explanations. Although specific explainability criteria are still being developed, the law establishes the principle that AI cannot operate behind the scenes where crucial decisions are being made.
Data Privacy and User Protection
South Korea’s AI governance complements its existing data protection law, the Personal Information Protection Act (PIPA), which is widely regarded as comparable to major international data protection regimes such as the GDPR. The AI Basic Act clarifies how data may be gathered, processed, and used within AI systems consistent with privacy rights, particularly in high-impact areas.
The law does not supersede personal data protections; instead, it sets conditions on how AI developers must handle data used for training, testing, and operating AI systems. Operators must document their data workflows and demonstrate how they protect user privacy, including through transparency and consent mechanisms where necessary. This helps ensure that data used in AI is governed by clear norms, making it harder to sidestep privacy requirements in the name of innovation.
Accountability and Governance Infrastructure
The AI Basic Act establishes a national policy framework for AI governance. At the top sits the National Artificial Intelligence Strategy Committee, chaired by the President, which proposes overall AI policy and aligns it with national objectives. It is supported by specialised bodies responsible for safety, risk assessment, and research, and by a policy centre that analyses AI’s effects on society and assists industry adoption.
This institutional structure provides both strategic guidance and operational control. By embedding AI governance in public administration rather than leaving it to market forces, South Korea aims to make ethical and societal concerns part of every sector and agency.
Promoting Innovation and Industrial Support
Although the AI Basic Act takes regulation seriously, it is not a law of restrictions. It also provides a legal basis for research and development, human capital, and the growth of the AI industry, with special consideration for startups and small and medium-sized businesses. The legislation promotes AI clusters, long-term funding programmes, and policies to attract foreign talent to the Korean AI ecosystem.
This two-track approach of compliance and support reflects Korea’s broader ambition to become one of the world’s leading AI powers, alongside the US and China. The government has emphasised that clear, predictable rules will build trust, attract investment, and sustain innovation rather than stifle it.
What This Means Globally
South Korea’s AI Basic Act is notable not only for its contents but also for its timing. It is among the first comprehensive AI laws to come into force anywhere in the world, ahead of the phased regulatory rollouts elsewhere, such as in the European Union. Its framework combines principle-based rules, transparency requirements, accountability provisions, and industrial support, offering a contrasting model to both purely prescriptive risk regulation and lax self-regulation elsewhere.
Critics, including industry groups and civil society organisations, have suggested that some protections could be more explicit, particularly remedies for those harmed by AI systems and the criteria for high-impact categories. Nonetheless, the framework sets a benchmark that many nations will watch closely as they establish their own AI regimes.
Conclusion
The AI Basic Act puts South Korea at the forefront of national AI regulation, with well-developed guardrails enforcing transparency, ethical control, accountability, and data protection while also fostering innovation. It recognises that AI can deliver economic and social benefits but also poses real risks, particularly when systems are opaque, autonomous, or widely deployed. By writing human oversight, labelling requirements, risk-management planning, and governance infrastructure into law, South Korea has taken a holistic approach to responsible AI governance that other countries may emulate in the years to come.
Sources
- https://www.theguardian.com/world/2026/jan/29/south-korea-world-first-ai-regulation-laws
- https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/10/artificial-intelligence-and-the-labour-market-in-korea_af668423/68ab1a5a-en.pdf
- https://asianintelligence.ai/south-korea
- https://aibasicact.kr/
- https://aibusinessweekly.net/p/south-korea-ai-basic-act-takes-effect-jan22-2026
- https://asiadaily.org/news/12112/

Overview of the Advisory
On 18 November 2025, the Ministry of Information and Broadcasting (I&B) issued an advisory addressed to all private satellite television channels in India. The advisory is a critical institutional intervention concerning the broadcast of sensitive content related to a recent security incident: the blast at the Red Fort on November 10th, 2025. It came after the Ministry observed that some news channels had been broadcasting content about persons allegedly involved in the Red Fort blast, justifying their acts of violence, as well as information and video on explosive material. Such broadcasting at a critical moment may inadvertently encourage or incite violence, disrupt public order, and pose risks to national security.
Key Instructions under the Advisory
The advisory provides guidelines to TV channels to ensure strict compliance with the Programme and Advertising Code under the Cable Television Networks (Regulation) Act, 1995. Channels are advised to exercise the highest possible discretion and sensitivity when reporting on issues involving alleged perpetrators of violence, especially matters involving the justification of acts of violence or instructional material on making explosives. The fundamental requirement is strict adherence to the Programme and Advertising Code as stipulated in the Cable Television Network Rules. In particular, broadcasters should not air programming that:
- Contains anything obscene, defamatory, deliberately false, or suggestive innuendos and half-truths.
- Is likely to encourage or incite violence, contains anything against the maintenance of law and order, or promotes an anti-national attitude.
- Contains anything that affects the integrity of the Nation.
- Could aid, abet, or promote unlawful activities.
Responsible Reporting Framework
The advisory does not constitute outright censorship; rather, it establishes a self-regulatory framework that relies on the discretion and sensitivity of TV channels, asking them to distinguish legitimate news broadcasting from content that crosses the threshold from information dissemination to incitement.
Why This Advisory is Important in a Digital Age
In modern media systems, the line between traditional broadcast journalism and digital virality has eroded. Television content is no longer confined to scheduled programmes or cable distribution channels. A single news segment, especially one of a dramatic or contentious nature, can be clipped, re-edited, and repackaged on social media networks within minutes of airing, often without context, editorial discretion, or timing indicators.
Sensitive content thus gains a multiplier effect. A short news item featuring a suspect justifying violence or describing explosives can be viewed by millions on YouTube, WhatsApp, Twitter/X, and Facebook, spreading organically and being amplified algorithmically. Studies have shown that misinformation and sensational reporting circulate much faster than factual corrections, a pattern observed during recent conflicts and crises in India and elsewhere.
Vulnerabilities of Information Ecosystems
The advisory was created in a distinct information environment characterised by:
- Rapid viral mechanics: Content spreads faster than it can be verified.
- Algorithm-driven amplification: Platform mechanisms boost emotionally charged content.
- Coordinated amplification networks: Organised groups work to make posts and videos viral and set a narrative for the general public.
- Deepfake and synthetic media risks: Original broadcasts can be manipulated and reposted with false attribution.
Interconnection with Cybersecurity and National Security
Unverified or sensationalised reporting of security incidents creates specific vulnerabilities:
- Trust Erosion: Public trust breaks down when broadcasters air unverified claims or emotional accounts as fact. The damage extends to security agencies, law enforcement, and government institutions themselves. Distrust of official information creates information gaps that are filled by rumours, conspiracy theories, and hostile narratives.
- Cognitive Fragmentation: Misinformation produces multiple versions of the truth. The narratives citizens receive vary with the media sources they follow. This fragmentation complicates organising a collective societal response to an actual security threat, because populations can be mobilised around misguided stories rather than accurate information.
- Radicalisation Pipeline: People seeking ideological justification for violent action may be exposed to media-derived material carefully distorted to present terrorism as a legitimate political or religious stance.
How Social Instability Is Exploited in Cyber Operations and Influence Campaigns
Misinformation creates exploitable vulnerability in three phases:
- First, conflicting unverified accounts fragment the information environment: populations are presented with competing versions of events by different media sources.
- Second, exposure to false information that is later corrected erodes institutional trust in media and security agencies, creating an information vacuum.
- Third, in such a distrustful and confused environment, populations become susceptible to organised manipulation by malicious actors.
Sensationalised broadcasting hands adversaries content assets, narrative frameworks, and information gaps that they can use to drive destabilisation campaigns. Responsible broadcasting directly counters these exploitation mechanisms.
Media Literacy and Audience Responsibility
Structural Information Vulnerabilities
A major part of the Indian population is structurally disadvantaged in information access:
- Language barriers: Fact-checking infrastructure remains heavily centralised in English and Hindi, while vernacular-language misinformation goes viral in Tamil, Telugu, Marathi, Punjabi, and other languages.
- Digital literacy gaps: An estimated 40 million people in India have received digital literacy training, yet more than 900 million Indians access digital content with widely varying ability to evaluate it critically.
- Rural–urban divide: Rural and less affluent citizens face greater difficulty accessing verification tools and media literacy resources.
- Algorithmic capture: Social media optimises for engagement over accuracy, actively promoting emotionally inflammatory or divisive content to users based on their engagement history.
Conclusion
The Ministry of Information and Broadcasting’s advisory acknowledges that media accountability is part of state security in the information era. It articulates principles of responsible reporting without interfering in editorial autonomy, a balance that all stakeholders must uphold. Implementing the advisory requires concerted effort by broadcasters, platforms, civil society, government, and educational institutions: information integrity cannot be managed by a single actor. Without media literacy resources, citizens cannot evaluate information responsibly; without open and rapid communication with media stakeholders, government agencies cannot counter misinformation.
The path forward is collaborative governance: institutional arrangements in which media self-regulation, technological safeguards, user empowerment, and policy frameworks work together rather than compete. Successful implementation will determine whether India can maintain open, free media while preserving the information integrity needed for national security, democratic governance, and social stability in an era of high-speed information flows, algorithmic amplification, and information warfare.
References
https://mib.gov.in/sites/default/files/2025-11/advisory-18.11.2025.pdf

Executive Summary:
A viral photo of a photographer breaking down in tears is not connected to the Ram Mandir opening. Social media users are sharing a collage pairing images of the recently consecrated Lord Ram idol at the Ayodhya Ram Mandir with a purported shot of a photographer crying at the sight of the deity. A Facebook post sharing the collage says, "Even the cameraman couldn't stop his emotions." The CyberPeace Research team found that the moment actually occurred during the 2019 AFC Asian Cup: during the match between Iraq and Qatar, an Iraqi photographer began crying after Iraq lost and was knocked out of the competition.
Claims:
The photographer in the widely shared images broke down in tears at seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration. The collage was also shared by many users on other social media platforms such as X, Reddit, and Facebook. A Facebook user shared it with a caption that reads,




Fact Check:
The CyberPeace Research team ran a reverse image search on the photographer's picture, which led to several memes using the image, and from there to a Pinterest post captioned, “An Iraqi photographer as his team is knocked out of the Asian Cup of Nations”.

Taking a cue from this, we ran keyword searches to find the actual news behind the image. We found the official Asian Cup X (formerly Twitter) handle, where the image was shared five years earlier, on 24 January 2019. The post reads, “Passionate. Emotional moment for an Iraqi photographer during the Round of 16 clash against ! #AsianCup2019”

This confirmed the origin of the image. Notably, while investigating, we also found several other posts misusing the same photographer's image with different captions, all spreading misinformation like this one.
Conclusion:
The recent viral image of the photographer claimed to be associated with the Ram Mandir opening is misleading. It is a five-year-old image of an Iraqi photographer crying during the AFC Asian Cup football competition, not the recent Ram Mandir opening. Netizens are advised not to believe or share such misinformation posts on social media.
- Claim: A person in the widely shared images broke down in tears at seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration.
- Claimed on: Facebook, X, Reddit
- Fact Check: Fake