#FactCheck - Viral image of a bridge claimed to be in Mumbai is actually located in Qingdao, China
Executive Summary:
The photograph of a bridge allegedly in Mumbai, India, circulated on social media, has been found to be falsely attributed. Through reverse image searches, examination of similar videos, and comparison with reputable news sources and Google Images, the bridge in the viral photo was identified as the Qingdao Jiaozhou Bay Bridge, located in Qingdao, China. Multiple pieces of evidence, including matching architectural features and corroborating videos, confirm that the bridge is not in Mumbai. No credible reports or sources indicate that a similar bridge exists in Mumbai.

Claims:
Social media users claim that a viral image shows a bridge in Mumbai.



Fact Check:
Once the image was received, we ran a reverse image search to find leads or related information. We found an image published by the Mirror news media outlet; while not conclusive on its own, it shows the same upper pylons and white foundation pillars visible in the viral image.

The bridge is the Jiaozhou Bay Bridge, located in China, which connects the country's eastern port city of Qingdao to an offshore island named Huangdao.
Taking a cue from this, we then searched for the bridge to find other related images or videos. We found a YouTube video uploaded by a channel named xuxiaopang that shows similar structures, including the pillars and road design.

The reverse image search also surfaced another news article about the same bridge in China, which closely resembles the one in the viral image.

Given the lack of evidence or credible sources for the opening of a similar bridge in Mumbai, and after a thorough investigation, we concluded that the claim made in the viral image is misleading and false. The bridge is located in China, not in Mumbai.
Conclusion:
In conclusion, the fact check found that the viral image of a bridge allegedly in Mumbai, India, is falsely attributed. The bridge in the picture, claimed to be in Mumbai, is in fact the Qingdao Jiaozhou Bay Bridge, located in Qingdao, China. Several lines of evidence, including reverse image searches, videos, and reliable news outlets, confirm this. No evidence suggests that a comparable bridge exists in Mumbai. Therefore, the claim is false: the actual bridge is in China, not in Mumbai.
- Claim: The bridge seen in the popular social media posts is in Mumbai.
- Claimed on: X (formerly known as Twitter), Facebook
- Fact Check: Fake & Misleading
Introduction
In the multifaceted world of international trade and finance, cross-border transactions constitute the heart of economic relationships that span the globe. The threads that intertwine to form the fabric of global commerce are ceaselessly dynamic and exhibit an intricate pattern of complexity, especially when it comes to the regulated movement of capital. It is a domain where economies connect, where businesses engage in commerce, and where technology and regulation intersect at a critical juncture. The RBI's new guidelines for cross-border payment aggregators will play a critical role in the regulation of capital, the fortification of financial integrity, and the transparency of cross-border payments. The key highlights of the regulation include strict pre-authorisation for non-bank entities, mandated dedicated accounts for import and export PA-CBs, and a transaction ceiling of ₹25,00,000.
The Vigilance of RBI
The Reserve Bank of India (RBI), ever vigilant in its shepherding role over the nation's financial stability and integrity, has taken decisive strides to dispel the haze that once clouded this critical sector. With the issuance of a revelatory circular dated October 31, 2023, the RBI has unveiled a groundbreaking framework that redefines the terrain for these pivotal financial entities, aptly christened as Payment Aggregators – Cross Border (PA-CB). In deploying this comprehensive array of regulations, the RBI demonstrates a robust commitment to harmonizing and synchronizing the oversight of payments within the country's financial fabric, extending its meticulous regulatory weave from domestic Payment Aggregators (PAs) to the PA-CBs, a sector previously undistinguished in formal oversight.
The prescriptive measures announced by the RBI are nothing short of a regulatory beacon that cuts through the fog of uncertainty, illuminating a clear path forward for entities dedicated to facilitating cross-border payment transactions pertaining to the import and export of permissible goods and services in India through online modes. Inclusiveness is a hallmark of the RBI’s directive, encompassing a diverse cadre of financial actors, ranging from Authorized Dealer (AD) banks and conventional Payment Aggregators (PAs), to the emergent breed of PA-CBs actively engaged in processing these critical international payment transactions.
Key Aspects of Regulation
One of the most striking aspects of this new regulatory regime is the RBI's insistence on pre-authorisation. All non-bank entities providing PA-CB services are required to apply to the apex bank for authorisation by April 30, 2024. This is far from a perfunctory gesture; it represents a profound departure from the bygone era when these entities functioned under a patchwork of provisional guidelines and ad-hoc circulars. Indeed, with this resolute move, the RBI signals its intention to embrace these entities within its direct regulatory ambit, an acknowledgement of the shifting tides and progressive intricacies characteristic of cross-border payments.
The tapestry of new rules is complex, setting forth an array of prerequisites for entities aspiring for authorization. For instance, non-bank PA-CBs are obliged to register with the Financial Intelligence Unit-India (FIU-IND) as a preliminary step before commencing the application process. Moreover, the financial benchmarks set are notably rigorous. Non-banks must boast a minimum net worth of ₹15 crores at the time of the application—a figure that escalates to a robust ₹25 crores by the fiscal deadline of March 31, 2026.
Way Forward
As if these requirements weren't indicative enough of the RBI's penchant for detail and precision, the guidelines become yet more granular when addressing specific types of PA-CBs. Import-only PA-CBs are required to maintain an Import Collection Account (ICA) with an AD Category-I scheduled commercial bank, while export-only PA-CBs are instructed to maintain an Export Collection Account (ECA), which can be maintained in Indian Rupees (INR) or any permissible foreign currency. The nuance here is palpable: payments for import transactions must be received in a meticulously managed escrow account of the PA before being funneled into the ICA for smooth settlement with overseas merchants.
Conversely, export-only PA-CBs' proceeds from international sales must be swiftly credited to the relevant currency ECA. This meticulous accounting ensures that the flow of funds is both transparent and traceable, adhering to the utmost standards of financial probity.
Yet, perhaps the most emphatic of the RBI's pronouncements is the establishment of a transaction ceiling. PA-CBs have their per-transaction limit capped at ₹25,00,000 for each unit of goods or services exchanged. This calculated move is transparent in its objective to mitigate risk—a crucial aspect when one considers the potential implications of these transactions on the country’s fiscal health and the integrity of its financial systems.
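The numeric requirements described above — the stepped net-worth benchmarks and the per-transaction ceiling — lend themselves to a compact illustration. The sketch below encodes those figures as simple checks; the function names and data shapes are hypothetical and not part of any official API, but the thresholds and the March 31, 2026 deadline are taken directly from the guidelines discussed here.

```python
# Illustrative sketch of the PA-CB numeric requirements from the RBI circular.
# Function names are hypothetical; thresholds mirror the figures cited above.
from datetime import date

PER_TRANSACTION_CAP_INR = 2_500_000       # Rs. 25,00,000 per unit of goods/services
NET_WORTH_AT_APPLICATION = 150_000_000    # Rs. 15 crore minimum at application time
NET_WORTH_BY_FY_DEADLINE = 250_000_000    # Rs. 25 crore required by March 31, 2026
FY_DEADLINE = date(2026, 3, 31)

def transaction_within_cap(amount_inr: int) -> bool:
    """True if a single cross-border transaction respects the per-unit ceiling."""
    return amount_inr <= PER_TRANSACTION_CAP_INR

def meets_net_worth(net_worth_inr: int, as_of: date) -> bool:
    """Apply the stepped net-worth requirement for non-bank PA-CBs."""
    required = (NET_WORTH_BY_FY_DEADLINE if as_of >= FY_DEADLINE
                else NET_WORTH_AT_APPLICATION)
    return net_worth_inr >= required

print(transaction_within_cap(2_400_000))               # within the Rs. 25 lakh cap
print(meets_net_worth(200_000_000, date(2025, 1, 1)))  # Rs. 20 crore, before deadline
print(meets_net_worth(200_000_000, date(2026, 4, 1)))  # after deadline, Rs. 25 crore needed
```

Note how the stepped requirement falls out of a single date comparison: a net worth of ₹20 crores suffices at application time but fails once the fiscal deadline has passed.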
It is no exaggeration to declare that with these guidelines, the RBI is effectuating a seismic shift in the regulation of cross-border payment transactions. There's a fundamental transformation taking place—a metamorphosis—from a loosely defined existence of PA-CBs to one of distinct clarity, under the direct and unswerving supervisory gaze of the regulator. The compliance burden, indeed, has become heavier, yet the return is a compass that points decisively towards secure harbours.
As we embark upon the fresh horizons that these rules bring into view, it is imperative to acknowledge that the RBI's regulatory innovations represent far more than a mere codification of dos and don'ts. They embody a visionary stride towards safeguarding and fortifying the architecture of international payments, a critical component of India's burgeoning presence on the world economic stage.
Conclusion
The journey ahead, as we navigate these newly charted waters with the RBI's guidelines as our steadfast North Star, will no doubt be replete with challenges, adaptations and learning curves for the array of operational entities. But it is with confidence that we can say the path is set; the map is clear. The complex labyrinth of cross-border financial transactions is now demystified, and the RBI's clarion call beckons us towards a future marked by regulation, security, and above all else, reliability in the cosmopolitan tapestry of global trade. The RBI's guidelines provide a comprehensive framework for standardising cross-border financial transactions in India. This decision is a monumental step towards maintaining cyber peace.
References:
- https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12561&Mode=0
- https://www2.deloitte.com/in/en/pages/tax/articles/tax-alert-Regulation-of-payment-aggregator-cross-border-pa-cb.html
- https://www.jsalaw.com/newsletters-and-updates/rbis-new-guidelines-to-govern-payment-aggregators-in-cross-border-transactions/

Introduction
A photo circulating on social media, ostensibly showing a phalanx of modified tractors, is being misrepresented as part of the 'Delhi Chalo' farmers' protest. In the recent swirl of misinformation surrounding the protest, the image has been making the rounds on social media platforms, falsely tethered to the ongoing demonstrations. It was accompanied by a headline suggesting a mechanical metamorphosis to resist police barricades and was allegedly published by a news agency. However, beneath the surface of this viral phenomenon lies a more complex and fabricated reality.
The Movement
The 'Delhi Chalo' movement, a clarion call that resonated with thousands of farmers from the fertile plains of Punjab, the verdant fields of Haryana, and the sprawling expanses of Uttar Pradesh, has been a testament to the agrarian community's demand for assured crop prices and legal guarantees for the Minimum Support Price (MSP). The protest, which has seen the fortification of borders and the chaos at the Punjab-Haryana border on February 13, 2024, has become a crucible for the farmers' unyielding spirit.
Yet, amidst this backdrop of civil demonstration and discourse, a nefarious narrative of misinformation has taken root. The viral image, which has been shared with the fervour of wildfire, was accompanied by a screenshot of an article allegedly published by the news agency. This article, dated February 11, 2024, quoted an anonymous official who claimed that intelligence agencies had alerted the police to the protesters' plans to outfit tractors with hydraulic tools. The implication was clear: these machines had been transformed into battering rams against the bulwark of law enforcement.
The Pursuit of Truth
However, the India TV Fact Check team, in their relentless pursuit of truth, unearthed that the viral photo of these so-called modified tractors is nothing but a chimerical creation, a figment of artificial intelligence. Visual discrepancies betrayed its AI-generated nature.
This is not the first time that misinformation has loomed over the farmers' protest. Previous instances, including a viral video of a modified tractor, have been debunked by the same fact-checking team. These efforts are a bulwark against the tide of false narratives that seek to muddy the waters of public understanding.
The claim that the photo depicted modified tractors intended for use in the ‘Delhi Chalo’ farmers' protest rally in Delhi on February 13, 2024, was a mirage.
The Fact Check
OpIndia, in their article, clarified that the photo used was a representative image created by AI and not a real photograph. To further scrutinize this viral photo, the HIVE AI detector tool was employed, indicating a 99.4% likelihood of the image being AI-generated. Thus, the claim made in the post was misleading.
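The HIVE result above is, at its core, a probability score converted into a verdict. As a minimal sketch of that last step (the 0.90 cutoff and the function name are our own assumptions for illustration; HIVE's internal policy is not public here):

```python
# Illustrative sketch: mapping an AI-detector's likelihood score to a verdict.
# The threshold is a hypothetical cutoff, not HIVE's actual decision rule.
AI_GENERATED_THRESHOLD = 0.90

def classify_image(ai_probability: float) -> str:
    """Map a detector's AI-likelihood score to a human-readable verdict."""
    if not 0.0 <= ai_probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if ai_probability >= AI_GENERATED_THRESHOLD:
        return "likely AI-generated"
    return "no strong evidence of AI generation"

print(classify_image(0.994))  # the 99.4% score reported for the viral photo
```

Any such threshold trades false positives against false negatives, which is why detector scores are best treated as one piece of evidence alongside visual inspection.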
The viral photo claiming that farmers had modified their tractors to avoid tear gas shells and remove barricades put up by the police during the rally was a digital illusion. The internet has become a fertile ground for the rapid spread of misinformation, reaching millions in an instant. Social media, with its complex algorithms, amplifies this spread, as any interaction, even those intended to debunk false information, inadvertently increases its reach. This phenomenon is exacerbated by 'echo chambers,' where users are exposed to a homogenous stream of content that reinforces their pre-existing beliefs, making it difficult to encounter and consider alternative perspectives.
Conclusion
The viral image depicting modified tractors for the ‘Delhi Chalo’ farmers' protest rally was a digital fabrication, a testament to the power of AI in creating convincing yet false narratives. As we navigate the labyrinth of information in the digital era, it is imperative to remain vigilant, to question the veracity of what we see and hear, and to rely on the diligent work of fact-checkers in discerning the truth. The mirage of modified machines serves as a stark reminder of the potency of misinformation and the importance of critical thinking in the age of artificial intelligence.
References
- https://www.indiatvnews.com/fact-check/fact-check-ai-generated-tractor-photo-misrepresented-delhi-chalo-farmers-protest-narrative-msp-police-barricades-punjab-haryana-uttar-pradesh-2024-02-15-917010
- https://factly.in/this-viral-image-depicting-modified-tractors-for-the-delhi-chalo-farmers-protest-rally-is-created-using-ai/

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. The aspect of promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilised to influence electoral outcomes. However, despite the indignation, the scandal resulted in meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. For scale, the market size of AI in India alone is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. By being transparent, people are guaranteed to be aware that they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
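The consent principle running through these dos and don'ts can be made concrete: before a chatbot stores or reuses a message for advertising or profiling, it checks an explicit opt-in flag, and opting out discards what was collected. A minimal sketch, with all class and method names being hypothetical illustrations rather than any real platform's API:

```python
# Minimal sketch of consent-gated data collection for a chatbot.
# All names here are hypothetical illustrations of the opt-in/opt-out
# principle, not any real platform's API.
class ChatSession:
    def __init__(self) -> None:
        self.ad_profiling_consent = False   # off by default: explicit opt-in required
        self._profile_log: list[str] = []

    def opt_in(self) -> None:
        self.ad_profiling_consent = True

    def opt_out(self) -> None:
        self.ad_profiling_consent = False
        self._profile_log.clear()           # opting out also discards collected data

    def handle_message(self, text: str) -> None:
        # The message is always answered; it is only *stored* for profiling
        # when the user has explicitly consented.
        if self.ad_profiling_consent:
            self._profile_log.append(text)

session = ChatSession()
session.handle_message("what gift should I buy?")
print(len(session._profile_log))  # nothing stored without consent
session.opt_in()
session.handle_message("what gift should I buy?")
print(len(session._profile_log))  # stored only after opt-in
```

The design choice worth noting is the default: consent is off until the user acts, which is the opt-in model the guidelines above call for, rather than an opt-out model where collection starts immediately.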
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn't have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india