# FactCheck: Mosque fire in India? False, it's from Indonesia
Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, fuelling communal tensions and misinformation. However, a detailed fact-check revealed that the footage actually comes from Indonesia. The spread of such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
Our investigation revealed that the video was originally posted on 8 December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incident in India. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.
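For readers curious about the mechanics, reverse image search and frame matching often rely on perceptual hashing, which stays stable under re-encoding and resizing. Below is a minimal sketch in Python using the Pillow and imagehash libraries; the file names are illustrative, not the actual footage.

```python
# Minimal sketch: check whether a frame from the viral clip matches a
# frame from the suspected original footage via perceptual hashing.
# File names are illustrative; requires Pillow and imagehash.
from PIL import Image
import imagehash

viral_frame = Image.open("viral_clip_frame.jpg")
source_frame = Image.open("luwuk_market_fire_frame.jpg")

# pHash is robust to re-encoding, resizing, and light cropping,
# which is why it helps trace re-uploaded footage.
distance = imagehash.phash(viral_frame) - imagehash.phash(source_frame)
print(f"Hamming distance: {distance}")

# A small distance (a common heuristic threshold is around 10)
# suggests both frames come from the same underlying footage.
if distance <= 10:
    print("Frames very likely share the same source video.")
```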

Conclusion: The viral claim that a mosque was set on fire in India is false. The video is actually from Indonesia and has been intentionally misrepresented to circulate false information. This incident underscores the need to verify information before sharing it. Misinformation spreads quickly and can cause real harm. By taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
Misinformation spreads differently in different host environments: localised cultural narratives and practices shape how individuals respond to it in a particular place and within a particular group. In the digital age, an overload of time-sensitive information creates noise that makes informed decisions harder. There are also cases where customary beliefs, biases, and cultural narratives are presented in ways that are untrue. These instances often involve misinformation related to health and superstition, historical distortion, and natural disasters and myths. Such narratives, when shared on social media, can lead to widespread misconceptions and even harmful behaviour. They may include misinformation that goes against scientific consensus or that contradicts simple, objectively verifiable facts. In such ambiguous situations, people are more likely to fall back on familiar patterns when judging whether information is right or wrong. This is where cultural narratives and cognitive biases come into play.
Misinformation and Cultural Narratives
Cultural narratives include deep-seated cultural beliefs, folklore, and national myths. These narratives can be used to manipulate public opinion, as political and social groups often leverage them to advance their agendas. Low digital literacy, the growing volume of information online, and social media algorithms optimised for engagement all aid this process. The consequences can even prove fatal.
During COVID-19, false claims targeting certain groups as virus spreaders fuelled stigmatisation and eroded trust. Similarly, vaccine misinformation rooted in cultural fears spurred hesitancy and outbreaks. Beyond health, manipulated narratives about episodes of history are spread depending on the sentiments of the audience. These instances exploit emotional and cultural sensitivities, emphasising the urgent need for media literacy and awareness to counter their harmful effects.
CyberPeace Recommendations
Since cultural narratives can lead netizens to spread misinformation on social media knowingly or unknowingly, they should adopt preventive measures that build resilience against any biased misinformation they encounter. Social media platforms must also develop strategies to counter such misinformation.
- Digital and Information Literacy: Netizens should build digital and information literacy to navigate the information overload on social media platforms.
- The Role of Media: Media outlets can play an active role by strictly providing fact-based information rather than feeding narratives to garner eyeballs. Social media platforms likewise need to be careful when designing algorithms that optimise for constant engagement.
- Community Fact-Checking: Because localised, time-sensitive information prevails in such cases, authorities on the ground should debunk dubious information immediately.
- Scientifically Correct Information: Starting early and addressing myths and biases through factual and scientifically correct information is also encouraged.
Conclusion
Cultural narratives are an ingrained part of society, and they shape how misinformation spreads and what we end up believing. Acknowledging this process and taking countermeasures will allow us to intervene against the spread of misinformation that is specifically aided by cultural narratives. Efforts to raise awareness and educate the public to seek sound information, perform verification checks, and consult official channels are of the utmost importance.
References
- https://www.icf.com/insights/cybersecurity/developing-effective-responses-to-fake-new
- https://www.dw.com/en/india-fake-news-problem-fueled-by-digital-illiteracy/a-56746776
- https://www.apa.org/topics/journalism-facts/how-why-misinformation-spreads
Introduction
Google is set to change how it stores and manages users' "Location History" in Google Maps, reducing the data retention period and making it impossible for the company itself to access the data. This change will significantly impact "geofence warrants," a controversial legal tool used by authorities to force Google to hand over information about all users within a given location during a specific timeframe. The decision is a significant win for privacy advocates and criminal defense attorneys who have long decried these warrants.
The company aims to protect people's privacy by removing the repository of location data dating back months or years. Geofence warrants, which provide police with sensitive data on individuals, are considered dangerous and could turn innocent people into suspects.
Understanding Geofence Warrants
Geofence warrants, also known as reverse-location warrants, are used by law enforcement agencies to obtain location data stored by tech companies within a specified geographical area and timeframe in order to identify devices near a crime scene. In contrast to conventional warrants, which allow law enforcement agencies to obtain the data of one individual (usually the suspect), geofence warrants let authorities obtain data on every individual in a specific location and subsequently track and trace any device that may be linked to the crime scene. Their use by law enforcement has grown rapidly, making them a major point of contention.
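To make the mechanics concrete, the sketch below shows the kind of query a geofence warrant effectively compels: selecting every device whose stored location falls inside a bounding box during a time window. The data model and field names are hypothetical, purely for illustration.

```python
# Hypothetical illustration of the query a geofence warrant compels:
# every device inside a geographic bounding box during a time window.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def geofence_query(records: list[LocationRecord],
                   lat_min: float, lat_max: float,
                   lon_min: float, lon_max: float,
                   start: datetime, end: datetime) -> set[str]:
    """Return IDs of all devices seen inside the box during the window."""
    return {
        r.device_id
        for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and start <= r.timestamp <= end
    }
```

Note that the query is indiscriminate by construction: it returns every device in the area, not just a named suspect, which is precisely the property privacy advocates object to.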
Privacy Concerns of Geofence Warrants
While geofence warrants help law enforcement agencies identify potential suspects, they have sparked controversy for their invasive character. Civil rights activists and various technology companies have raised concerns over the impact of these warrants on the rights of data principals. Geofence warrants mark a rise in state surveillance and police harassment: not only is every data principal in the vicinity of a crime scene classified as a potential suspect, but companies are also compelled to submit identifying personal data on every device in the marked geographic area.
From Surveillance to Safeguards
Geofence warrants have become a contentious tool for law enforcement worldwide, raising concerns over privacy and civil liberties, especially in sensitive situations like protests and healthcare. Google's plan to let users store their location data on their own devices could effectively end the practice.
Google is changing how it handles Location History data, storing it on-device instead of on its servers, and reducing the default data retention period. Google Maps' product director, Marlo McGriff, stated that the company will automatically encrypt data backed up to the cloud so that no one else can read it. Google confirmed that once these changes are implemented it will no longer be able to respond to new geofence warrants, as it will not have access to the relevant data. The changes were designed to put an end to dragnet searches of location data.
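Google has not published implementation details, but the property described, backups the company itself cannot read, corresponds to client-side encryption, where the key never leaves the device. Here is a minimal conceptual sketch using Python's cryptography package; it is an analogue of the idea, not Google's actual code.

```python
# Conceptual sketch of client-side encryption: the key is generated
# and kept on the device, so the server stores only ciphertext it
# cannot decrypt. Illustrative only; not Google's implementation.
from cryptography.fernet import Fernet

# Created on-device and never uploaded.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

location_history = b'{"lat": 28.61, "lon": 77.21, "ts": "2024-01-01T10:00:00Z"}'

# Only this opaque blob is sent to the cloud backup; without
# device_key, neither the server nor a warrant served on it
# can recover the plaintext.
encrypted_blob = cipher.encrypt(location_history)

# The device alone can restore its own history.
assert cipher.decrypt(encrypted_blob) == location_history
```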
Conclusion
Google's decision to change the storage and access policies for users' location history in Google Maps marks a pivotal step in the ongoing debate over law enforcement's use of geofence warrants. The move safeguards individual privacy by significantly restricting the data retention period and limiting Google's ability to comply with geofence warrants. It is welcomed by privacy advocates and legal professionals who have expressed concerns over the intrusive nature of these warrants, which can turn innocent individuals into suspects based merely on their proximity to a crime scene. As technology companies take steps to enhance user privacy, the evolving landscape calls for a balance between law enforcement needs and the protection of individual rights in an era of increasing digital surveillance.
References:
- https://telecom.economictimes.indiatimes.com/news/internet/google-to-end-geofence-warrant-requests-for-users-location-data/106081499
- https://www.forbes.com/sites/cyrusfarivar/2023/12/14/google-just-killed-geofence-warrants-police-location-data/?sh=313da3c32c86
- https://timesofindia.indiatimes.com/gadgets-news/explained-how-google-maps-is-preventing-authorities-from-accessing-users-location-history-data/articleshow/106086639.cms

Introduction
The advent of AI-driven deepfake technology has made it easy to create explicit counterfeit images and videos, and their use for sextortion has increased alarmingly.
What Are AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images; the underlying algorithms have become sophisticated enough to produce seamless, realistic manipulations. The accessibility of AI tools and resources has also grown, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The Proliferation of Content Sharing on Social Media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What Are Law Enforcement Agencies Doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate them.
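As a rough illustration of what such a detection tool looks like structurally, the sketch below scores a single video frame as real or synthetic using a binary image classifier. It assumes PyTorch and torchvision are installed; the untrained ResNet backbone is a placeholder, since production detectors load weights trained on large labelled corpora and aggregate scores across many frames.

```python
# Structural skeleton of a frame-level deepfake detector: a binary
# classifier that scores a frame as real vs. synthetic. The backbone
# here is untrained and purely illustrative; real systems load
# trained weights and aggregate per-frame scores over a whole video.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None)          # placeholder backbone
model.fc = nn.Linear(model.fc.in_features, 1)  # single "fake" logit
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_probability(frame: Image.Image) -> float:
    """Score one frame; higher means more likely AI-generated."""
    x = preprocess(frame).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()
```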
Collaboration with social media platforms is also needed. Platforms and technology companies can strengthen and enforce community guidelines and policies against disseminating AI-generated explicit content, and foster cooperation in developing robust content-moderation systems and reporting mechanisms.
There is also a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
- Prevention and Awareness: Raising awareness about AI sextortion helps individuals recognise the risks and take precautions.
- Early Detection and Reporting: Advanced detection tools can identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
- Legal Frameworks and Regulations: Stronger legal frameworks can criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
- Technological Solutions: Tools and algorithms that detect and remove AI-generated explicit content make it harder for perpetrators to carry out their schemes.
- International Cooperation: Collaboration among law enforcement agencies, governments, and technology companies strengthens the global response to AI sextortion.
- Support for Victims: Comprehensive support services, including counselling and legal assistance, help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
- Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
- Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims' friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
- Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, increasing the humiliation, harassment, and psychological trauma suffered by victims. Wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
- Targeting of Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of this issue. Teenagers are especially vulnerable to AI sextortion because of their heavy use of social media for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
- Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly difficult to distinguish between real and manipulated videos or images.
- Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as "revenge porn") and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks that hold offenders accountable, awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.