#FactCheck - Viral image of a bridge claimed to be in Mumbai is actually located in Qingdao, China
Executive Summary:
A photograph of a bridge, allegedly located in Mumbai, India, circulated on social media. Our investigation, including reverse image searches, examination of similar videos, and comparison with reputable news sources and Google Images, found that the bridge in the viral photo is actually the Qingdao Jiaozhou Bay Bridge in Qingdao, China. Multiple pieces of evidence, including matching architectural features and corroborating videos, confirm that the bridge is not in Mumbai. No credible reports or sources indicate that a similar bridge exists in Mumbai.

Claims:
Social media users claim that the bridge shown in a viral image is in Mumbai.



Fact Check:
On receiving the image, we ran a reverse image search to find leads and related information. We found an image published by the Mirror news outlet showing the same upper pylons and foundation pillars, in the same white colour as in the viral image.

The bridge is the Jiaozhou Bay Bridge in China, which connects the eastern port city of Qingdao to the offshore island of Huangdao.
Taking a cue from this, we then searched for the bridge to find other related images and videos. We found a YouTube video uploaded by a channel named xuxiaopang that shows similar structures, including the pillars and road design.

The reverse image search also surfaced another news article about the same bridge in China, which closely matches the bridge seen in the viral image.
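For readers curious how such comparisons can be supported programmatically, below is a minimal Python sketch that uses perceptual hashing (via the `imagehash` library) to check whether a viral photo is a near-duplicate of a candidate reference image. The file names and threshold are illustrative assumptions, and this is not the exact workflow used in this fact check, which relied on reverse image search and manual visual comparison.

```python
from PIL import Image
import imagehash

# Perceptual hashes are robust to resizing and mild compression, so
# near-identical shots of the same structure produce close hashes.
viral_hash = imagehash.phash(Image.open("viral_bridge.jpg"))            # hypothetical file
reference_hash = imagehash.phash(Image.open("jiaozhou_reference.jpg"))  # hypothetical file

# Hamming distance between the two 64-bit hashes; smaller means more similar.
distance = viral_hash - reference_hash
print(f"Hash distance: {distance}")

# The cutoff below is a judgment call, not a fixed rule.
if distance <= 10:
    print("Likely near-duplicates of the same scene.")
else:
    print("No near-duplicate match; manual comparison still needed.")
```

Note that perceptual hashing only catches near-duplicates; photos of the same bridge from different angles still require visual comparison of features such as the pylons and road design.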

Given the lack of evidence or credible sources for the opening of a similar bridge in Mumbai, and after a thorough investigation, we concluded that the claim made in the viral image is misleading and false. The bridge is located in China, not in Mumbai.
Conclusion:
In conclusion, our fact check found that the claim accompanying the viral image of a bridge allegedly in Mumbai, India is false. The bridge in the picture is the Qingdao Jiaozhou Bay Bridge, located in Qingdao, China. Multiple lines of evidence, including reverse image searches, videos, and reliable news outlets, confirm this. No evidence suggests that any such bridge exists in Mumbai. Therefore, the claim is false: the actual bridge is in China, not in Mumbai.
- Claim: The bridge seen in the popular social media posts is in Mumbai.
- Claimed on: X (formerly known as Twitter), Facebook
- Fact Check: Fake & Misleading
Disclaimer:
The information is based on claims made by threat actors and does not imply confirmation of the breach by CyberPeace. CyberPeace includes this detail solely to provide factual transparency and does not condone any unlawful activities. This information is shared only for research purposes and to spread awareness. CyberPeace encourages individuals and organizations to adopt proactive cybersecurity measures to protect against potential threats.
🚨 Data Breach Alert ⚠️:
Recently, the Research Wing of CyberPeace and Autobot Infosec came across a claim on a threat actor's dark web website alleging a data breach involving 637k+ records from Federal Bank. According to the threat actor's claim, the data allegedly includes sensitive details such as:
- 🧑 Customer Name
- 🆔 Customer ID
- 🏠 Customer Address
- 🎂 Date of Birth
- 🔢 Age
- 🚻 Gender
- 📞 Mobile Number
- 🪪 PAN Number
- 🚘 Driving License Number
- 🛂 Passport Number
- 🔑 UID Number
- 🗳️ Voter ID Information
The alleged data was initially discovered on a dark web website, where the threat actors allegedly claimed to be offering the breached information for sale. Following their announcement of the breach, a portion of the data was reportedly published on December 27, 2024. A few days later, the full dataset was allegedly released on the same forum.
About the Threat Actor Group:
Bashe, a ransomware group that emerged in 2024, is claimed to have evolved from the LockBit ransomware group and previously operated under the names APT73 and Eraleig. The group employs data encryption combined with extortion tactics, threatening to release sensitive information if ransom demands are unmet. Its operations primarily target critical industries, including technology, healthcare, and finance, demonstrating a strategic focus on high-value sectors.

Breakdown of the Alleged Post by the Threat Actor:
- Target: Allegedly involves customer data of Federal Bank.
- Data Volume: Claimed breach includes 637,894 records.
- Data Fields: The threat actor claims the data contains sensitive information, including Customer Name, Customer ID, Date of Birth, PAN Number, Age, Gender, Father's Name, Spouse's Name, Driving License Number, Passport Number, UID Number, Voter ID, District, Zip Code, Home Address, Mailing Address, State, etc.
Analysis:
The analysis of the alleged data breach highlights the states purportedly most impacted, along with the affected age groups, gender distribution, and other key patterns associated with the compromised data. This evaluation aims to provide a clearer understanding of the claimed breach's scope and its potential demographic and geographic impact.
Top States Impacted:
As per the alleged breached data, Tamil Nadu has the highest number of affected customers, accounting for a significant 34.49% of the total breach. Karnataka follows with 26.89%, indicating a substantial number of affected individuals in that state. In contrast, states such as Uttar Pradesh, Haryana, Delhi, and Rajasthan report minimal impact, each with less than 1% of affected customers. Gujarat records 3.70% of the breach, a sharp drop from the leading states, highlighting a significant disparity in the extent of the breach across regions.

Impacted Age Range Statistics:
The alleged data breach has predominantly impacted customers in the 31-40 years age group, which constitutes the largest segment at 35.80% of the affected individuals. Following this, the 21-30 years age group also shows significant impact, comprising 27.72% of those affected. The 41-50 years age group accounts for 20.55% of the impacted population, while individuals aged 50 and above represent 12.68%. In contrast, the 0-20 years age group is the least affected, with only 3.24% of customers falling into this category.

Gender Wise Statistics:
The alleged data breach has predominantly impacted male customers, who constitute the majority at 74.05% of the affected individuals. Female customers account for 23.18%, while a smaller segment, categorized as "Others," constitutes 2.77%.
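For transparency, percentage breakdowns of this kind can be reproduced from any tabular dataset with a few lines of analysis code. The sketch below uses pandas with hypothetical file and column names (state, age, gender); it illustrates the method only and does not reproduce or reference the actual alleged dump.

```python
import pandas as pd

# Hypothetical file and columns; the real alleged dump is not reproduced here.
df = pd.read_csv("alleged_records.csv")  # columns: state, age, gender, ...

# Share of affected customers per state, as a percentage.
state_pct = df["state"].value_counts(normalize=True).mul(100).round(2)

# Bucket ages into the ranges used in this report.
bins = [0, 20, 30, 40, 50, 150]
labels = ["0-20", "21-30", "31-40", "41-50", "Above 50"]
age_pct = (
    pd.cut(df["age"], bins=bins, labels=labels)
    .value_counts(normalize=True)
    .mul(100)
    .round(2)
)

# Gender distribution as a percentage.
gender_pct = df["gender"].value_counts(normalize=True).mul(100).round(2)

print(state_pct.head(), age_pct, gender_pct, sep="\n\n")
```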

The alleged dataset released by the threat actors indicates that a significant portion of customers' personal identification data was compromised, including sensitive information such as driving licenses, passport numbers, UID numbers, voter IDs, and PAN numbers.
Significance of the Allegations:
Though the claims have not been independently verified at our end, they underscore the rising risks of cyberattacks and data breaches, especially in the financial and banking sectors. If true, the exposure of such sensitive information could lead to financial fraud, identity theft, and severe reputational damage for individuals and organizations alike.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
- Monitor Your Accounts: Keep a close eye on financial and email accounts for any suspicious activity.
- Update Passwords: Change your passwords immediately and enable Multi-Factor Authentication (MFA) wherever possible.
- Beware of Phishing Attacks: Threat actors may exploit the leaked data to craft targeted phishing scams. Do not click on unsolicited links or share sensitive details over email or phone.
- For Organizations: Strengthen data protection mechanisms, regularly audit security infrastructure, and respond swiftly to emerging threats.
- Report: For more assistance or to report cyber incidents, visit https://cybercrime.gov.in or contact our helpline team at helpline@cyberpeace.net.
We advise affected parties and the broader public to stay alert and take necessary precautions. CyberPeace remains committed to raising awareness about cybersecurity threats and advocating for better protection mechanisms. We urge all stakeholders to investigate the claims and, if the breach is confirmed, ensure appropriate steps are taken to protect the impacted data. Our Research Wing is actively monitoring the situation, and we aim to collaborate with stakeholders and relevant agencies to mitigate the impact.
Stay Vigilant! Stay CyberPeaceful.
Introduction
India's National Commission for Protection of Child Rights (NCPCR) is set to approach the Ministry of Electronics and Information Technology (MeitY) to recommend mandating a KYC-based system for verifying children's age under the Digital Personal Data Protection (DPDP) Act. NCPCR took this decision in a closed-door meeting with social media entities held on August 13, where it emphasised proposing a KYC-based age verification mechanism. As background, Section 9 of the Digital Personal Data Protection Act, 2023 defines a child as someone below the age of 18 and mandates that a child's age be verified and parental consent obtained before their personal data is processed.
Requirement of Verifiable Consent Under Section 9 of DPDP Act
Regarding the processing of children's personal data, Section 9 of the DPDP Act, 2023 provides that, for children below 18 years of age, consent from a parent or legal guardian is required. The Data Fiduciary shall, before processing any personal data of a child or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or lawful guardian. Additionally, behavioural monitoring and targeted advertising directed at children are prohibited.
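To make these obligations concrete, here is a minimal Python sketch of how a Data Fiduciary's processing gate might encode the Section 9 checks. The function and field names are hypothetical illustrations; the Act does not prescribe any particular technical design, and a real system would need an actual verification mechanism behind the consent flag.

```python
from dataclasses import dataclass

CHILD_AGE_LIMIT = 18  # Section 9, DPDP Act 2023: a child is a person below 18

@dataclass
class DataPrincipal:
    age: int                     # assumed to be already verified
    has_guardian_consent: bool   # verifiable consent of parent/lawful guardian

def may_process(principal: DataPrincipal, purpose: str) -> bool:
    """Gate personal-data processing according to the Section 9 obligations."""
    if principal.age < CHILD_AGE_LIMIT:
        # Behavioural monitoring and targeted advertising directed at
        # children are prohibited outright; consent cannot override this.
        if purpose in {"behavioural_monitoring", "targeted_advertising"}:
            return False
        # All other processing requires verifiable guardian consent.
        return principal.has_guardian_consent
    return True  # adults: the Act's other provisions still apply

# Example: a 15-year-old with guardian consent.
child = DataPrincipal(age=15, has_guardian_consent=True)
print(may_process(child, "account_creation"))      # True
print(may_process(child, "targeted_advertising"))  # False
```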
Ongoing debate on Method to obtain Verifiable Consent
Section 9 of the DPDP Act gives parents or lawful guardians more control over their children's data and privacy, and it empowers them to make decisions about how to manage their children's online activities/permissions. However, obtaining such verifiable consent from the parent or legal guardian presents a quandary. It was expected that the upcoming 'DPDP rules,' which have yet to be notified by the Central Government, would shed light on the procedure of obtaining such verifiable consent from a parent or lawful guardian.
However, in a meeting held on 18th July 2024 between MeitY and social media companies to discuss the upcoming Digital Personal Data Protection Rules (DPDP Rules), MeitY stated that it may not intend to prescribe a ‘specific mechanism’ for Data Fiduciaries to verify parental consent for minors using digital services. MeitY instead emphasised the obligation placed on Data Fiduciaries under Section 8(4) of the DPDP Act to implement “appropriate technical and organisational measures” to ensure effective observance of the provisions of the Act.
In a recent update, MeitY held a review meeting on DPDP rules, where they focused on a method for determining children's ages. It was reported that the ministry is making a few more revisions before releasing the guidelines for public input.
CyberPeace Policy Outlook
CyberPeace, in its policy recommendations paper published last month (available here), also advised obtaining verifiable parental consent through methods such as government-issued ID, integrating parental consent at ‘entry points’ like app stores, obtaining consent through consent forms, drawing on foreign laws such as the California privacy law and COPPA, and developing child-friendly SIMs for enhanced child privacy.
CyberPeace in its policy paper also emphasised that, when deciding on the method to obtain verifiable consent, platforms must ensure that age verification is carried out without compromising user privacy. Balancing user privacy is a question of both technological capability and ethical consideration.
The DPDP Act is a brand-new framework for protecting digital personal data; it puts forth certain obligations on Data Fiduciaries and provides certain rights to Data Principals. The upcoming ‘DPDP Rules’, which are expected to be notified soon, will define the detailed procedure for implementing the provisions of the Act. MeitY is refining the DPDP Rules before releasing them for public consultation. NCPCR's approach is aimed at ensuring child safety in this digital era. We hope that MeitY comes up with a sound mechanism for obtaining verifiable consent from parents/lawful guardians after giving due consideration to the recommendations put forth by various stakeholders, expert organisations and concerned authorities such as NCPCR.
References
- https://www.moneycontrol.com/technology/dpdp-rules-ncpcr-to-recommend-meity-to-bring-in-kyc-based-age-verification-for-children-article-12801563.html
- https://pune.news/government/ncpcr-pushes-for-kyc-based-age-verification-in-digital-data-protection-a-new-era-for-child-safety-215989/#:~:text=During%20this%20meeting%2C%20NCPCR%20issued,consent%20before%20processing%20their%20data
- https://www.hindustantimes.com/india-news/ncpcr-likely-to-seek-clause-for-parents-consent-under-data-protection-rules-101724180521788.html
- https://www.drishtiias.com/daily-updates/daily-news-analysis/dpdp-act-2023-and-the-isssue-of-parental-consent

The World Economic Forum reported that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, at 53% (September 2023). Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking. It is spurring an explosion of web content that mimics factual articles but instead disseminates false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface Copilot were inaccurate one-third of the time when queried about election data. This points to the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it is yet to achieve true acceptance and fulfil its potential in a positive manner, because there is widespread cynicism about the technology, and rightly so. General public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used in many industries; the most prominent recent example is in fintech, such as the UK's Financial Conduct Authority sandbox. These models are known to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
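As an illustration of what "evaluating effectiveness" inside a sandbox could look like in practice, the sketch below scores a hypothetical misinformation classifier against a human-labelled test set using standard precision and recall metrics from scikit-learn. The labels and predictions are placeholder data; a real sandbox trial would add privacy controls, audit trails, and regulator oversight on top of such measurements.

```python
from sklearn.metrics import classification_report

# Hypothetical sandbox trial: human-labelled ground truth vs. the
# candidate detector's predictions (1 = misinformation, 0 = benign).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Precision answers "how often is a flag correct?"; recall answers
# "how much misinformation does the tool actually catch?". Regulators
# would weigh both before approving wider deployment.
print(classification_report(y_true, y_pred, target_names=["benign", "misinformation"]))
```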
These sandboxes can help balance the need for innovation in AI with the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible, adaptive framework capable of evolving with technological advancements while fostering transparency between AI developers and regulators, leading to more informed policymaking and greater public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to pilot solutions that can help regulate the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and about the role of regulatory sandboxes, helping to manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions