#FactCheck - Viral Photo of Dilapidated Bridge Misattributed to Kerala, Originally from Bangladesh
Executive Summary:
A viral photo on social media claims to show a ruined bridge in Kerala, India. However, our fact check shows that the bridge is actually in Amtali, Barguna district, Bangladesh. A reverse image search of the picture led to a Bengali news article detailing the bridge's critical condition. The bridge was built between 2002 and 2006 over Jugia Khal in Arpangashia Union. It has never been repaired, suffers recurrent accidents, and is at risk of collapse, which would disrupt local connectivity. The social media claims are therefore false and misleading.

Claims:
Social media users shared a photo claiming that it shows a ruined bridge in Kerala, India.


Fact Check:
On receiving the posts, we ran a reverse image search, which led to a Bengali news website named Manavjamin, where the headline reads, “19 dangerous bridges in Amtali, lakhs of people in fear”. The picture on this website matches the viral image. On reading the whole article, we found that the bridge is located in Bangladesh's Amtali sub-district of Barguna district.

Taking a cue from this, we then searched for the bridge in that region and found a matching bridge at the same location in Amtali, Bangladesh.
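Such a match can also be checked programmatically. Below is a minimal sketch in Python, assuming both images have been saved locally (the file names are placeholders) and that the Pillow and imagehash libraries are installed; perceptual hashing flags near-duplicate images even after resizing or recompression.

```python
# A minimal sketch, assuming both images were downloaded locally
# (file names here are placeholders, not the actual files).
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_post.jpg"))
article = imagehash.phash(Image.open("news_article_photo.jpg"))

# Subtracting two hashes gives the Hamming distance; for 64-bit
# pHash values, a distance of roughly 0-10 indicates near-duplicates.
distance = viral - article
print(f"Hash distance: {distance}")
print("Likely the same image" if distance <= 10 else "Probably different images")
```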
According to the article, the 40-meter bridge over Jugia Khal in Arpangashia Union, Amtali, was built between 2002 and 2006 and has never been repaired. It is in critical condition, causing frequent accidents and risking collapse. If the bridge collapses, it will cut off communication between multiple villages and the upazila town. Residents have made only temporary repairs.
Hence, the claims made by social media users are false and misleading.
Conclusion:
In conclusion, the viral photo claiming to show a ruined bridge in Kerala is actually from Amtali, Barguna district, Bangladesh. The bridge is in a critical state, with frequent accidents and the risk of collapse threatening local connectivity. Therefore, the claims made by social media users are false and misleading.
- Claim: A viral image shows a ruined bridge in Kerala, India.
- Claimed on: Facebook
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In the era of digitalisation, social media has become an essential part of our lives, with people spending considerable time documenting every moment on these platforms. Social media networks such as WhatsApp, Facebook, and YouTube have emerged as significant sources of information. However, the proliferation of misinformation is alarming, since misinformation can have grave consequences for individuals, organisations, and society as a whole. Misinformation spreads rapidly via social media, amplifying its impact on large audiences. Bad actors can exploit these platforms for their own benefit or agenda, using tactics such as clickbait headlines, emotionally charged language, and algorithmic manipulation to amplify false information.
Impact
The impact of misinformation on our lives is devastating, affecting individuals, communities, and society as a whole. False or misleading health information can have serious consequences: believing in unproven remedies or in misinformation about vaccines can lead to serious illness, disability, or even death. Misinformation about a financial scheme or investment can drive poor financial decisions, resulting in bankruptcy and the loss of long-term savings.
In a democratic nation, misinformation can distort the formation of political opinion, and misinformation spread on social media during elections can affect voter behaviour, damage trust, and cause political instability.
Mitigation Strategies
Minimising or stopping the spread of misinformation requires a multi-faceted approach. Key strategies include promoting media literacy and critical thinking, verifying information before sharing, holding social media platforms accountable, regulating misinformation, supporting critical research, and fostering healthy means of communication to build a resilient society.
To break the cycle of misinformation and move towards a better future, we must create concrete plans to combat the spread of false information. This will require coordinated action from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour. The more deeply we examine the nuances of this problem, the clearer it becomes that battling false information demands a thorough, multifaceted strategy.
Encouraging consumers to develop media literacy and critical-thinking skills is essential to preventing the spread of false information. Education equips people to distinguish reliable sources from false information, and giving individuals the skills to assess information critically enables them to make informed choices about the content they share and consume. Initiatives that improve media literacy should be embedded in school curricula and promoted through public awareness campaigns.
Ways to Stop Misinformation
As we have seen, misinformation can have serious implications. Combating it requires a multifaceted approach; here are some strategies:
- Promote Media Literacy with Critical Thinking: Educate individuals on how to critically evaluate information, fact-check claims, and recognise common tactics used to spread misinformation. Users must apply critical thinking before forming an opinion or perspective and before sharing content.
- Verify Information: Encourage people to verify information before sharing it, especially if it seems sensational or controversial, and to consume news from reputable sources that follow ethical journalistic standards.
- Accountability: Advocate for openness and responsibility from social media networks in the fight against misinformation. Encourage platforms to put procedures in place to detect and remove fraudulent content while boosting credible sources.
- Regulate Misinformation: Given the current situation, it is important to advocate for policies and regulations that address the spread of misinformation while safeguarding freedom of expression, and to demand transparency in online communication by identifying the source of information and disclosing any conflicts of interest.
- Support Critical Research: Invest in research on the sources, impacts, and remedies of misinformation. Support collaborative initiatives by social scientists, psychologists, journalists, and technologists to create evidence-based techniques for countering misinformation.
Conclusion
To prevent the cycle of misinformation and move towards responsible use of the Internet, we must create strategies to combat the spread of false information. This will require coordinated actions from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.

In the rich history of humanity, the advent of artificial intelligence (AI) has woven in a new, delicate strand. This promising technological advancement has the potential either to strengthen the nest of our society or to unravel it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilised to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in the re-election of Viktor Orbán in Hungary. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
The Root of the Problem
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The stakes are rising elsewhere too: the market size of AI in India is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
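To make the mechanism concrete, here is a purely hypothetical toy sketch, in Python, of the kind of interest profiling a conversation log could feed. Every keyword list and category here is an illustrative assumption, not a description of any real platform's system; production ad pipelines are far more sophisticated and opaque.

```python
# Purely hypothetical toy example: scanning chat messages for
# ad-relevant keywords. Categories and keywords are invented
# for illustration only.
AD_KEYWORDS = {
    "travel": ["flight", "hotel", "vacation"],
    "fitness": ["gym", "protein", "marathon"],
    "finance": ["loan", "invest", "credit card"],
}

def profile_interests(messages: list[str]) -> dict[str, int]:
    """Count keyword hits per advertising category across chat messages."""
    counts = {category: 0 for category in AD_KEYWORDS}
    for message in messages:
        text = message.lower()
        for category, words in AD_KEYWORDS.items():
            counts[category] += sum(word in text for word in words)
    return counts

# A short chat fragment already yields a usable interest profile.
chat = ["Any good hotel deals for a vacation?", "I also need a new gym plan."]
print(profile_interests(chat))  # {'travel': 2, 'fitness': 1, 'finance': 0}
```

Even this crude matching turns a few casual messages into an advertising profile; the concern raised above is what happens when the chatbot itself is optimised to elicit such messages.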
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candour and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Such transparency ensures that people know they are interacting with an automated system.
Second, obtaining user consent is crucial. Before collecting user data for any purpose, including advertising or political profiling, platforms should seek users' informed consent. Easy ways to opt in and opt out give consumers control over their data.
Furthermore, ethical use is essential. An ethics code for chatbot interactions should forbid manipulation, the dissemination of false information, and attempts to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
Independent audits are also needed to preserve transparency and accountability. Users can feel more confident knowing that chatbot behaviour and data collection practices are regularly audited by impartial third parties for compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Unlawful data collection is another hazard to watch for. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should never impersonate real people or political figures, as this can fuel manipulation and false information.
Impartiality is essential. Bots should not advocate for, or take part in, political activities that favour one political party over another; fairness and equity are crucial in every interaction.
Finally, invasive advertising techniques should be avoided. Chatbots should not display political advertisements or messaging without explicit user consent, ensuring that their advertising practices comply with legal norms.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI, covering safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Risk Management
The Manufacturing Profile prioritises and informs cybersecurity activities based on a company's risk management processes. By supporting periodic risk assessments and validating business drivers, it helps manufacturers select areas of focus for security activities that reflect their desired outcomes. Managing cybersecurity risk requires a thorough grasp of the business drivers and security considerations specific to the manufacturing system and its environment. Because every organisation faces different risks and uses ICS and IT differently, implementations of the Profile will vary.
Companies are already adopting industry guidelines and cybersecurity standards, which the Manufacturing Profile is intended to supplement, not replace. Manufacturers can identify the operations critical to key supply chains and prioritise expenditures to maximise the impact of each dollar spent. The Profile's primary objective is to reduce and manage cybersecurity risk more effectively. Neither the Cybersecurity Framework nor the Profile is a one-size-fits-all method for managing security risk in critical infrastructure.
Manufacturers will always face unique risks, given their distinct threats, vulnerabilities, and risk tolerances. Consequently, the ways in which companies implement security safeguards will also vary.
Key Cybersecurity Functions: Identify, Protect, Detect, Respond, and Recover
- Identify
Develop the organisational understanding needed to manage cybersecurity risk to systems, assets, data, and capabilities. The activities in the Identify Function are foundational for effective use of the Framework. A clear understanding of the business context, the resources that support critical functions, and the related cybersecurity risks lets an organisation focus its efforts in line with its risk management strategy and business needs. Outcome categories within this Function include Asset Management, Business Environment, Governance, Risk Assessment, and Risk Management Strategy.
- Protect
Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services. The activities in the Protect Function support the ability to limit or contain the impact of a potential cybersecurity event. Outcome categories within this Function include Access Control, Awareness and Training, Data Security, Information Protection Processes and Procedures, Maintenance, and Protective Technology.
- Detect
Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event. The activities in the Detect Function enable the timely discovery of cybersecurity events. Outcome categories within this Function include Anomalies and Events, Security Continuous Monitoring, and Detection Processes.
- Respond
Develop and implement the appropriate activities to take action regarding a detected cybersecurity event. The activities in the Respond Function support the ability to contain the impact of a potential cybersecurity incident. Outcome categories within this Function include Response Planning, Communications, Analysis, Mitigation, and Improvements.
- Recover
Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired by a cybersecurity incident. The activities in the Recover Function support a timely return to normal operations, reducing the impact of a cybersecurity incident. Outcome categories within this Function include Recovery Planning, Improvements, and Communications. A minimal sketch of how a profile could track coverage of these Functions appears after this list.
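The sketch below is an illustrative, non-authoritative toy in Python: it encodes the five Functions and the example categories named above as a simple mapping, then reports which categories a hypothetical profile has not yet addressed. The category names paraphrase the Framework's outcome categories; a real profile would use the official NIST subcategory identifiers and organisation-specific target states.

```python
# Illustrative sketch only: Function-to-category mapping paraphrasing
# the descriptions above, not the official NIST subcategory IDs.
CSF_FUNCTIONS = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy"],
    "Protect": ["Access Control", "Awareness and Training", "Data Security",
                "Information Protection Processes", "Maintenance",
                "Protective Technology"],
    "Detect": ["Anomalies and Events", "Security Continuous Monitoring",
               "Detection Processes"],
    "Respond": ["Response Planning", "Communications", "Analysis",
                "Mitigation", "Improvements"],
    "Recover": ["Recovery Planning", "Improvements", "Communications"],
}

def profile_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """For each Function, list the categories not yet covered."""
    return {function: [c for c in categories if c not in implemented]
            for function, categories in CSF_FUNCTIONS.items()}

# Example: a hypothetical manufacturer that has addressed three categories.
covered = {"Access Control", "Anomalies and Events", "Risk Assessment"}
for function, missing in profile_gaps(covered).items():
    print(f"{function}: {len(missing)} categories still to address")
```

Gap lists like this are one simple way to turn the Profile's priorities into a concrete work queue for periodic risk assessments.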
Conclusion
Viewed through the lens of risk management, the Manufacturing Profile offers manufacturers a strategic method for dealing with the ever-changing cybersecurity threat landscape. By recognising specific business drivers and aligning with corporate goals, it guides the prioritisation of safeguarding activities. The Profile complements established industry guidelines and cybersecurity standards while accounting for differences in vulnerabilities and organisational nuances among manufacturers. It highlights the significance of a customised strategy, acknowledging that every business has unique risks and weaknesses.
The Framework's fundamental Functions, Identify, Protect, Detect, Respond, and Recover, serve as a thorough roadmap, guaranteeing a proactive and flexible approach to cybersecurity. The Profile's ultimate goal is to increase the efficacy of risk management techniques, in the understanding that cybersecurity is a constantly shifting and evolving challenge for the manufacturing sector.
References
- https://csrc.nist.gov/news/2020/cybersecurity-framework-v1-1-manufacturing-profile
- https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8183r1.pdf
- https://mysecuritymarketplace.com/reports/cybersecurity-framework-version-1-1-manufacturing-profile/