#FactCheck - Viral Photo of Dilapidated Bridge Misattributed to Kerala, Originally from Bangladesh
Executive Summary:
A viral photo on social media claims to show a ruined bridge in Kerala, India. However, a fact check shows that the bridge is actually in Amtali, Barguna district, Bangladesh. A reverse image search of the picture led to a Bengali news article detailing the bridge's critical condition. The bridge was built between 2002 and 2006 over Jugia Khal in Arpangashia Union. It has never been repaired, experiences recurrent accidents, and is at risk of collapse, which would disrupt local connectivity. Thus, the social media claims are false and misleading.

Claims:
Social media users shared a photo claiming it shows a ruined bridge in Kerala, India.


Fact Check:
On receiving the posts, we reverse-searched the image, which led us to a Bengali news website named Manavjamin with the headline “19 dangerous bridges in Amtali, lakhs of people in fear”. The picture on this website matched the viral image. On reading the whole article, we found that the bridge is located in Bangladesh's Amtali sub-district of Barguna district.

Taking a cue from this, we then searched for the bridge in that region. We found a similar bridge at the same location in Amtali, Bangladesh.
According to the article, the 40-meter bridge over Jugia Khal in Arpangashia Union, Amtali, was built between 2002 and 2006 and has never been repaired. It is in critical condition, causing frequent accidents and risking collapse. If the bridge collapses, it will disrupt communication between multiple villages and the upazila town. Residents have made temporary repairs.
Hence, the claims made by social media users are fake and misleading.
Conclusion:
In conclusion, the viral photo claiming to show a ruined bridge in Kerala is actually from Amtali, Barguna district, Bangladesh. The bridge is in a critical state, with frequent accidents and the risk of collapse threatening local connectivity. Therefore, the claims made by social media users are false and misleading.
- Claim: A viral image shows a ruined bridge in Kerala, India.
- Claimed on: Facebook
- Fact Check: Fake & Misleading

Introduction
The Ministry of Electronics and Information Technology (MeitY) recently issued the “Email Policy of Government of India, 2024.” It is an updated email policy for central government employees, requiring the exclusive use of official government emails managed by the National Informatics Centre (NIC) for public duties. The policy replaces 2015 guidelines and prohibits government employees, contractors, and consultants from using their official email addresses on social media or other websites unless authorised for official functions. The policy aims to reinforce cybersecurity measures and protocols, maintain secure communications, and ensure compliance across departments. It is not legally binding, but its gazette notification ensures compliance and maintains cyber resilience in communications. The updated policy is also aligned with the newly enacted Digital Personal Data Protection Act, 2023.
Brief Highlights of Email Policy of Government of India, 2024
- The Email Policy of the Government of India, 2024 is divided into three parts, namely Part I: Introduction, Part II: Terms of Use, and Part III: Functions, Duties and Responsibilities, with an annexe defining the meaning of certain organisation types in relation to the policy.
- The policy directs users not to use their NICeMail address for registering on any social media or other websites or mobile applications, save for the performance of official duties or with due authorisation from the competent authority.
- Under the new policy, “core use organisations” (central government departments and other government-controlled entities that do not provide goods or services on commercial terms) and their users shall use only NICeMail for official purposes.
- However, where a Core Use Organisation has an office or establishment outside India, it may, with due approval, use alternative email services hosted outside India to ensure the availability of local communication channels under exigent circumstances.
- Core Use Organisations that operate their own independent email servers, including those dealing with national security, may continue to do so provided the servers are hosted in India. They should also consider migrating their email services to NICeMail Services for security and uniform policy enforcement.
- The policy also requires departments that currently use @gov.in or @nic.in addresses to migrate to @departmentname.gov.in mail domains so that information sanctity and integrity are maintained when officials are transferred from one department or ministry to another, and so that the ministry or department does not lose access to official communication. For this, the department or ministry in question must register its domain name with NIC. For instance, MeitY has registered the mail domain @meity.gov.in. The policy gives government departments a six-month period to complete this migration.
- The policy also makes a distinction between (1) organisation-linked email addresses and (2) service-linked email addresses. The policy for “organisation-linked email addresses” is laid down in paragraphs 5.3.2(a) and 5.4 to 5.6.3, and the policy for “service-linked email addresses” in paragraphs 5.3.2(b) and 5.7 to 5.7.2 of the official policy document.
- Further, the new policy includes specific directives on separating the email addresses of regular government employees from those of contractors or consultants to improve operational clarity.
CyberPeace Policy Outlook
The revised Email Policy of the Government of India reflects the government’s proactive response to evolving cybersecurity challenges and aims to maintain cyber resilience across government departments' email communications. The policy represents a significant step towards securing inter-government and intra-government communications. We, as a cybersecurity expert organisation, emphasise the importance of protecting sensitive data against cyber threats, particularly in a world increasingly targeted by sophisticated phishing and malware attacks, and we advocate for safe and secure online communication and information exchange. Email communications hold sensitive information and therefore require robust policies and mechanisms to safeguard them, ensuring that sensitive data is shielded through regulated and secure email usage backed by technical capabilities for safe use. The proactive step taken by MeitY is commendable and aligned with securing governmental communication channels.
References:
- https://www.meity.gov.in/writereaddata/files/Email-policy-30-10-2024.pdf (Official document for the Email Policy of Government of India, 2024)
- https://www.hindustantimes.com/india-news/dont-use-govt-email-ids-for-social-media-central-govt-policy-for-employees-101730312997936.html#:~:text=Government%20employees%20must%20not%20use,email%20policy%20issued%20on%20Wednesday
- https://bwpeople.in/article/new-email-policy-issued-for-central-govt-employees-to-strengthen-cybersecurity-measures-537805
- https://www.thehindu.com/news/national/centre-notifies-email-policy-for-ministries-central-departments/article68815537.ece
Introduction
The fast-paced development of technology and the wider use of social media platforms have led to the rapid dissemination of misinformation, characterised by wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box that makes it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making it harder to control its spread within vast, interconnected networks. Algorithms judge content on engagement metrics: the more users interact with an item, the more prominently it is served, so algorithms and search engines surface the items a user is most likely to enjoy. This process was originally designed to cut through the clutter and provide users with the best information. However, because of the viral nature of information and user interactions, it sometimes results in the unknowing spread of misinformation.
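The engagement-first ranking described above can be sketched as a toy scoring function. This is purely illustrative: the `Post` fields and the weights are invented assumptions, and real platform rankers use far richer signals.

```python
# Toy engagement-based ranker: scores posts purely on interaction counts,
# so attention-grabbing misinformation that draws reactions rises to the top.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int


def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares count most because they propagate content.
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0


def rank_feed(posts: list[Post]) -> list[Post]:
    # Note: nothing in this ranking checks accuracy -- only engagement.
    return sorted(posts, key=engagement_score, reverse=True)


posts = [
    Post("Calm, factual correction", likes=40, shares=2, comments=5),
    Post("Outrage-bait false claim", likes=30, shares=50, comments=60),
]
feed = rank_feed(posts)
```

In this sketch the false claim outranks the factual correction despite having fewer likes, because shares and comments dominate the score; that is the structural bias the article describes.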
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximise user engagement, can inadvertently promote misinformation because emotionally charged content tends to generate the most interaction, creating echo chambers and filter bubbles. These algorithms prioritise content based on user behaviour, which leads to the promotion of emotionally charged misinformation. They also prioritise content with viral potential, which can spread false or misleading content faster than corrections or factual content.
Additionally, platforms amplify popular content, spreading it faster by presenting it to more users. Fact-checking efforts struggle to keep pace: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish between real people and organised networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process leads to "echo chambers", where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed into a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences, making platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
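The feedback loop can be made concrete with a minimal simulation, assuming a deliberately simplified model: the platform allocates feed space proportionally to recorded engagement, and the user only ever engages with one topic. The topic names and update rule are invented for illustration.

```python
# Toy personalisation feedback loop: each engagement shifts the recorded
# preference toward one topic, so that topic claims an ever larger share
# of the feed -- a crude model of how echo chambers form.
from collections import Counter


def feed_shares(prefs: Counter) -> dict[str, float]:
    # The platform allocates feed space proportionally to recorded interest.
    total = sum(prefs.values())
    return {topic: count / total for topic, count in prefs.items()}


def simulate(leaning: str, topics: list[str], rounds: int) -> dict[str, float]:
    prefs = Counter({topic: 1 for topic in topics})  # start with uniform interest
    for _ in range(rounds):
        # The user engages only with content matching their leaning;
        # the platform logs it and weights the next feed accordingly.
        prefs[leaning] += 1
    return feed_shares(prefs)


shares = simulate("conspiracy", ["news", "sports", "conspiracy"], rounds=27)
```

After 27 rounds the favoured topic holds over 90% of the feed while the other topics shrink toward zero, even though all three started with equal weight; no step in the loop ever evaluates whether the favoured content is true.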
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it, such as by inspecting messages or URLs for false information, can be computationally challenging and inefficient. The extensive amount of content shared daily means that misinformation propagates far quicker than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders work together to create a challenging environment in which misinformation thrives. This highlights the importance of countering misinformation through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps towards regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023 explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)

Introduction
Entrusted with the responsibility of leading the Global Education 2030 Agenda through Sustainable Development Goal 4, UNESCO's Institute for Lifelong Learning, in collaboration with the Media and Information Literacy and Digital Competencies Unit, has recently launched a Media and Information Literacy Course for Adult Educators. The course aligns with the Pact for the Future, adopted at the United Nations Summit of the Future in September 2024, which asks member countries for increased efforts towards media and information literacy. The course is free for adult educators to access and is available until 31st May 2025.
The Course
According to a report by Statista, 67.5% of the global population uses the internet. Regardless of users' age and background, there is a general lack of understanding of how to spot misinformation and targeted hate, and how to navigate online environments securely and efficiently. Since misinformation (largely spread online) is enabled by this lack of awareness, digital literacy becomes increasingly important. The course is designed keeping in mind that many active adult educators have not yet had an opportunity to hone their media and information skills through formal education. Self-paced and totalling 10 hours, the course covers basics such as the concepts of misinformation and disinformation, artificial intelligence, and combating hate speech, and offers a certificate on completion.
CyberPeace Recommendations
As this course is free of cost, can be completed remotely, and covers the basics of digital literacy, all who are eligible are encouraged to take it up to familiarise themselves with these topics. However, awareness of the course's availability, and of who can avail of this opportunity, can be improved so that a larger number of people can benefit.
CyberPeace Recommendations To Enhance Positive Impact
- Further Collaboration: As this course is open to adult educators, widening its scope could be considered through active engagement with independent organisations and even individual internet users who are willing to learn.
- Engagement with Educational Institutions: After launching a course, an interactive outreach programme connecting with relevant stakeholders can prove beneficial. Since the course requires each adult educator to sign up individually, partnering with universities, institutes, and similar bodies is encouraged. In the Indian context, active involvement with training institutes such as DIET (District Institute of Education and Training), SCERT (State Council of Educational Research and Training), NCERT (National Council of Educational Research and Training), and Open Universities could be initiated, facilitating greater awareness and more participation.
- Engagement through NGOs: NGOs focused on digital literacy that have a tie-up with UNESCO can aid in implementing the course and encouraging awareness. A localised-language option could also be considered for inclusion.
Conclusion
Though a long process, tackling misinformation through education deals with the issue at its source. A strong foundation in awareness and media literacy is imperative in an age of fake news, misinformation, and sensitive data being peddled online. UNESCO's course launch garners attention because it comes from an international platform, is free of cost, understands the gravity of the situation, and calls for action in the field of education, encouraging others to do the same.
References
- https://www.uil.unesco.org/en/articles/media-and-information-literacy-course-adult-educators-launched
- https://www.unesco.org/en/articles/celebrating-global-media-and-information-literacy-week-2024
- https://www.unesco.org/en/node/559#:~:text=UNESCO%20believes%20that%20education%20is,must%20be%20matched%20by%20quality.