#FactCheck - AI-Generated Image Falsely Linked to Mira–Bhayandar Bridge
Executive Summary
Mumbai’s Mira–Bhayandar bridge has recently been in the news due to its unusual design. In this context, a photograph showing a bus seemingly stuck on the bridge is going viral on social media. Some users are also sharing the image while claiming that it is from the Sonpur subdivision in Bihar. However, research by CyberPeace has found that the viral image is not real. The bridge shown in the image is indeed the Mira–Bhayandar bridge, which is under discussion because its design causes it to suddenly narrow from four lanes to two. That said, the bridge is not yet operational, and the viral image showing a bus stuck on it was created using Artificial Intelligence (AI).
Claim
An Instagram user shared the viral image on January 29, 2026, with the caption: “Are Indian taxpayers happy to see that this is funded by their money?” The link, archive link, and screenshot of the post can be seen below.

Fact Check:
To verify the claim, we first conducted a Google Lens reverse image search. This led us to a post shared by X (formerly Twitter) user Manoj Arora on January 29. While the bridge structure in that image matches the viral photo, no bus is visible in the original post. This raised suspicion that the viral image had been digitally manipulated.

We then ran the viral image through the AI detection tool Hive Moderation, which flagged it as over 99% likely to be AI-generated.

Conclusion
CyberPeace’s research confirms that while the Mira–Bhayandar bridge is real and has been in the news due to its design, the viral image showing a bus stuck on the bridge was created using AI tools. Therefore, the image circulating on social media is misleading.

Introduction
China is on the verge of unveiling a new policy that will address how Artificial Intelligence (AI) influences employment. On January 27, 2026, the Ministry of Human Resources and Social Security (MOHRSS) announced it would publish a paper on the contribution of AI to the labour and employment markets. The policy will include provisions to help impacted industries, expand assistance to young workers and graduates, and introduce interdisciplinary training programmes to equip individuals for jobs in an AI-enabled economy. The authorities have stressed that AI does not kill jobs but changes them, and that education will be needed to help employees adjust to these changes.
This announcement reflects a more proactive policy on AI-driven changes in labour, showing that China intends to sustain economic modernisation through AI while preserving social stability. It also mirrors broader international concerns about the pace of automation and the need to rethink labour and training policy.
AI and the Changing Nature of Work
AI is transforming the content and nature of work across industries. AI systems enhance productivity in functions such as data processing, logistics, and customer service, even as they alter the tasks carried out by humans. Existing studies indicate that although AI can automate routine activities, it may also generate new occupations that require complex thinking, AI oversight, and people-centred skills such as empathy, creativity, and problem-solving.
This is the key nuance in China’s policy framing. Authorities point out that AI does not always result in mass unemployment; instead, it transforms jobs and requires workers to adapt to new task profiles. This perspective is in line with recent reports from global research organisations, which characterise AI’s effects as transformational rather than necessarily destructive. For example, the World Economic Forum’s Future of Jobs Report 2023 observes that technological change will create jobs that did not exist a decade ago, and that retraining and upskilling will be instrumental in accessing those opportunities.
Key Components of China’s Policy Response
China’s forthcoming policy is expected to focus on three main areas that address both current workforce needs and future readiness.
Support for Key Industries
The policy will offer targeted assistance to sectors where AI adoption is accelerating. Industries such as advanced manufacturing, high-tech services, and online logistics will receive specialised support to help companies use AI to complement human labour rather than simply replace it. By channelling resources to these growth areas, the Chinese government aims to balance industrial upgrading with employment.
Assistance for Youth and Graduates
Young people and recent graduates are entering a labour market that is changing rapidly. The policy aims to expand support services for this group through career counselling, internships, and training programmes aligned with changing employer demands. According to a study by the McKinsey Global Institute, young workers around the world can face disproportionate disruption if training opportunities are scarce, making early-career support imperative.
Interdisciplinary Talent Development
The Chinese strategy emphasises interdisciplinary training that blends domain knowledge with AI and digital literacy. This reflects a recognition that hybrid skills will be required in the future. The Organisation for Economic Co-operation and Development (OECD) suggests that workers who can navigate both the technical and non-technical elements of work will be better positioned in the AI age.
These components show that China’s strategy is not simply to protect existing jobs but to help workers transition to roles that leverage AI’s strengths.
Economy, Stability and Strategic Modernisation
The policy is an attempt to manage technological transition as part of wider economic planning. It signals that the government regards AI as a structural change that can be anticipated and shaped by policy, rather than as an unpredictable external shock.
This contrasts with labour-market responses in some other countries, which have tended to be reactive, emerging only after job losses had already materialised. China’s initiative suggests anticipating change rather than merely reacting to it.
Global Comparisons and Shared Challenges
Governments worldwide are experimenting with ways to adapt to AI’s effects on work. The European Union is considering individual learning accounts and portable training benefits, which would help workers access reskilling opportunities throughout their careers. In the US, public-private partnerships are working to align workforce development with technological adoption.
China’s strategy shares some of these components but stands out for its integration with national planning processes. By connecting workforce policy to broader innovation and economic goals, China intends AI adoption to serve the common good rather than deepen division.
Meanwhile, balancing labour supply with technological demand poses its own challenge for countries with ageing populations and shrinking workforces. In China, with its large labour force and ongoing demographic shifts, the timing and design of policy are particularly significant.
Practical Challenges and Risks
The success of China’s emerging policy will depend on effective implementation. Several practical issues will require careful attention:
Ensuring Equitable Access to Training
China’s labour force is diverse, spanning urban technology hubs and rural areas. Ensuring that upskilling opportunities reach workers across this spectrum will be paramount to preventing regional inequalities from worsening. Global research on reskilling shows that rural and low-income groups often lack access to training even when programmes are available.
Aligning Training with Labour Demand
Upskilling programmes must be tied to market demand. Disconnected training risks producing skills that are obsolete or inapplicable in real work settings. Experience in emerging economies indicates that involving employers in training design improves placement outcomes for learners.
Private Sector Participation
Private companies will be essential in translating the policy into employment outcomes. Incentives for firms to invest in worker training, internships, and apprenticeships will help workers transition smoothly into AI-augmented jobs.
A Model for AI Workforce Policy
China’s policy can serve as a model for other countries seeking to balance technological advancement with labour-market security. It acknowledges that AI’s effect on employment is not only a technical or economic problem but also a social challenge. By foregrounding training, support, and coordinated action, China aims to create a future in which people are prepared for change rather than displaced by it.
This strategy aligns with the recommendations of international organisations such as the World Bank and the OECD, which emphasise lifelong learning, labour-market flexibility, and proactive investment in human capital as pillars of future labour policy.
Conclusion
Artificial intelligence will continue to reshape work around the world. China’s forthcoming policy, which emphasises support, training and strategic integration of AI into labour markets, reflects a proactive and holistic view of technological transition. Other countries could benefit from studying this approach, especially in terms of linking workforce development with innovation goals.
By anticipating disruption and investing in people as well as technology, policymakers can help ensure that AI becomes a driver of shared economic opportunity rather than a source of exclusion. The balance between innovation and employment will shape not only economic outcomes but also social cohesion in the years ahead.
Executive Summary:
A claim circulating widely on social media holds that a 3D model of Chanakya, supposedly made by "Magadha DS University", matches MS Dhoni. However, fact-checking reveals that it is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution named Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, people are noticing a striking similarity to the Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image. It led us to the portfolio of a freelance character artist named Ankur Khatri, where we found the viral image titled “MS Dhoni likeness study”. We also found other character models in his portfolio.



Subsequently, we searched for the university named in the claim, Magadha DS University, but found no institution by that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. We searched the internet for any such model made by Magadh University but found nothing. We then analysed the freelance character artist’s profile and found that he has a dedicated Instagram channel, where he posted a detailed video of the creative process behind the MS Dhoni character model.

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a likeness of cricketer MS Dhoni, created by artist Ankur Khatri and not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is False and Misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. A similarly named institution, Magadh University, is located in Bodh Gaya, Bihar, but we found no evidence of its involvement in the model’s creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. The familiar prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With Google alone handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithm bias, refers to biased results caused by human biases that skew the original training data or the AI algorithm itself, distorting outputs and potentially producing harmful outcomes. It can take the form of algorithmic bias, data bias, or interpretation bias, and it can emerge from user history, geographical data, and even broader societal biases embedded in training data.
A common consequence of AI bias is the exclusion of certain groups of people from opportunities. In healthcare, underrepresenting data from women or minority groups can skew predictive AI algorithms. In hiring, while AI helps streamline resume screening to identify ideal candidates, a biased dataset or other bias in the input data can cause the screening itself to produce biased outcomes.
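As a minimal, purely illustrative sketch of how a skewed dataset distorts a model (all numbers are invented and do not come from any real system): a naive threshold fitted on data dominated by one group ends up flagging nearly every member of the under-represented group, regardless of individual variation.

```python
# Toy illustration of data bias: the majority group dominates the
# statistic the model learns, so the minority group is systematically
# misjudged. All values below are invented for demonstration.
import statistics

# Synthetic "risk score" readings: group A is heavily over-represented.
group_a = [52, 55, 53, 57, 54, 56, 58, 51, 55, 54]   # 10 samples
group_b = [70, 72, 74]                               # only 3 samples

# A naive "model": flag anything above the pooled mean as high risk.
pooled = group_a + group_b
threshold = statistics.mean(pooled)

def flag(score):
    """Return True if the score exceeds the learned threshold."""
    return score > threshold

# Because group A dominates the pool, the threshold sits near A's range:
# no one in group A is flagged, while every member of group B is.
flagged_a = sum(flag(x) for x in group_a)
flagged_b = sum(flag(x) for x in group_b)

print(f"threshold={threshold:.1f}")
print(f"group A flagged: {flagged_a}/{len(group_a)}")   # 0/10
print(f"group B flagged: {flagged_b}/{len(group_b)}")   # 3/3
```

The same mechanism scales up: any statistic learned mostly from one population will fit that population and misrepresent the rest.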
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
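As a hypothetical sketch of the filter-bubble mechanism (this is not Google's actual algorithm, and the items and topics are invented), a ranker that boosts topics the user has clicked before will, over time, push everything else down, independent of factual accuracy:

```python
# Toy personalised ranker: results matching the user's click history
# rise to the top, regardless of how accurate they are.
from collections import Counter

def rank(results, click_history):
    """Order results so that topics the user clicked before come first."""
    prefs = Counter(click_history)  # topic -> number of past clicks
    return sorted(results, key=lambda r: prefs[r["topic"]], reverse=True)

results = [
    {"title": "Vaccine study replicated", "topic": "science"},
    {"title": "Vaccine myths spread",     "topic": "conspiracy"},
    {"title": "New telescope images",     "topic": "science"},
]

# A user who has only ever clicked conspiracy content sees it ranked first.
history = ["conspiracy", "conspiracy", "conspiracy"]
top = rank(results, history)[0]
print(top["title"])  # the conspiracy item now outranks the science items
```

Each click feeds back into `click_history`, so the bubble reinforces itself: the more a topic is shown and clicked, the higher it ranks next time.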
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness globally. For example, the EU’s Artificial Intelligence Act aims to establish a regulatory framework that categorises AI systems based on risk and enforces strict standards for transparency, accountability, and fairness, especially for high-risk AI applications, which may include search engines. India proposed the Digital India Act in 2023, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias#:~:text=In%20healthcare%2C%20underrepresenting%20data%20of,can%20skew%20predictive%20AI%20algorithms