#FactCheck - Mosque fire in India? False, it's from Indonesia
Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, contributing to growing communal tensions and misinformation. However, a detailed fact-check has revealed that the footage actually comes from Indonesia. The spread of such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.
Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.
Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incidents. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.
Conclusion: The viral claim that a mosque was set on fire in India is not true. The video is actually from Indonesia and has been intentionally misrepresented to circulate false information. This incident underscores the need to verify information before spreading it. Misinformation can spread quickly and cause harm. By taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
The growth of online interaction and the popularity of social media platforms have created a breeding ground for the generation and spread of misinformation. Misinformation propagates more easily and quickly on online social media platforms than through traditional news sources such as newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems make it possible to gather, combine, analyse and store massive volumes of data indefinitely. Constant monitoring of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a large amount of misinformation spread on major platforms such as X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, causing widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns so that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising either.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top spots, with Instagram third and TikTok and X fourth and fifth respectively. Social media platforms provide users with instant connectivity, allowing them to share information quickly without the permission of a gatekeeper such as an editor, as is the case with traditional media channels.
Between the elections held in more than 100 countries in 2024, the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip, the volume of information generated online, both true and false, has been immense. Identifying accurate information amid real-time misinformation is challenging, and traditional content moderation techniques may not be sufficient to curb it. A dedicated, real-time misinformation surveillance system backed by AI, combined with human oversight and designed to respect the privacy of users' data, could prove to be an effective mechanism for countering misinformation on larger platforms. Concerns regarding data privacy must be addressed before such technologies are deployed on platforms with large user bases.
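To make the idea concrete, the sketch below is a minimal, illustrative example only, not any platform's actual system: it shows how an AI classifier could score posts and queue high-scoring ones for human fact-checkers rather than removing them automatically. The training data, labels, and threshold are all hypothetical.

```python
# Minimal sketch of AI-assisted misinformation triage with human oversight.
# Assumes a small, hypothetical labelled dataset; a real system would train on
# large corpora of fact-checked content and log every decision for audit.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = previously fact-checked as false, 0 = reliable.
posts = [
    "mosque set on fire in india, share before it is deleted",
    "breaking: power plant destroyed, media is hiding the truth",
    "official advisory issued by the health ministry today",
    "election commission publishes verified results on its website",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def triage(post: str, threshold: float = 0.7) -> str:
    """Queue high-scoring posts for human fact-checkers instead of
    removing them automatically; leave low-scoring posts untouched."""
    score = model.predict_proba([post])[0][1]  # probability of the 'false' class
    return "queue-for-human-review" if score >= threshold else "no-action"

print(triage("shocking video: mosque burnt down, forward to everyone"))
```

Keeping the final decision with human reviewers, as in the routing above, is one way to pair the speed of automated detection with the judgement that borderline cases require.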
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternative perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only the data that is necessary and adopting a consent-based approach protects user privacy and enhances transparency and trust. It also guards users against the stifling of dissent and against profiling for targeted ads (see the data-minimisation sketch after this list).
- An independent oversight body can be created to monitor surveillance activities, ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
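As an illustration of the data-minimisation and appeal points above, the sketch below uses hypothetical field names (it is not any platform's real schema): it records only what an audit or an appeal would need and drops the profile, location, and device data that the flagging decision does not require.

```python
# Minimal sketch of a data-minimised, appealable flag record.
# All field names are hypothetical and chosen for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FlagRecord:
    content_id: str                 # reference to the post, not a copy of it
    reason: str                     # which guideline the content allegedly breaches
    model_version: str              # supports third-party audits and reproducibility
    flagged_at: str
    appealed: bool = False          # set when the user contests the flag
    appeal_outcome: Optional[str] = None

def minimise(raw_post: dict, reason: str, model_version: str) -> FlagRecord:
    """Keep only what oversight and appeals require; drop profile data,
    location, and device identifiers that the decision does not need."""
    return FlagRecord(
        content_id=raw_post["id"],
        reason=reason,
        model_version=model_version,
        flagged_at=datetime.now(timezone.utc).isoformat(),
    )

record = minimise(
    {"id": "post-123", "text": "...", "user_location": "...", "device_id": "..."},
    reason="misleading-media",
    model_version="clf-2024-11",
)
print(record)
```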
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online, but it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The experiences of the EU’s Digital Services Act and Singapore’s POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible, and ethical online ecosystem. Balancing ethics, privacy, and policy-driven AI solutions for real-time misinformation monitoring is the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL
Introduction
Personalised advertisements rely on the collection of users’ data. Although this allows for a more tailored user experience, one cannot ignore the method through which it is achieved. According to a report by the Indian Express on 13th November 2024, Meta has introduced a less personalised ad option on Facebook and Instagram for its users in the European Union (EU), after its previous ad offering was found to be incompatible with the EU’s Digital Markets Act (DMA).
Relevant Legislation
In October 2023, Meta introduced a “Pay or Consent” model for its users in the EU. It gave users two options: pay a monthly subscription fee for an ad-free version of Facebook and Instagram, or consent to personalised ads based on their data. This consent model was introduced in an attempt to comply with the EU’s DMA. However, EU regulators found it incompatible with the mandate, holding that users should not only have the option to consent to ads but should also have access to a less personalised but equivalent alternative. It is this decision that pushed Meta to offer less personalised ads to users in the EU. The less personalised ad option claims to rely on limited data, showing ads based only on the context of what is being viewed during a Facebook or Instagram session and requiring a minimal set of data points such as location, age, gender, and the user’s engagement with ads. However, the ads shown under this option are also less skippable.
The EU’s Digital Markets Act came into force on 1 November 2022. Its purpose is to make digital markets fairer; to that end, it identifies “Gatekeepers” (large platforms providing core platform services such as messenger services, search engines, and app stores) and sets out a list of do’s and don’ts for them. One obligation, applicable to the case mentioned above, is that a gatekeeper must obtain the user’s effective consent if it wants to target advertisements by tracking the user’s activity outside the gatekeeper’s core platform services.
The Indian Context
Although no such issues have been raised in India yet, it is worth noting that in the Indian context, the Digital Personal Data Protection (DPDP) Act, 2023 governs the regulation of personal data. It lays down rules for Data Fiduciaries (those who, alone or in conjunction with others, determine the purpose and means of processing personal data), Data Principals (the individuals to whom the personal data relates), Consent Managers, and even the processing of children’s data.
CyberPeace Recommendations:
At the user level, the collection of personal data can be limited by taking the following steps:
- Review Privacy Settings- Reviewing privacy settings for one’s online accounts and devices is a healthy practice to avoid giving unnecessary information to third-party applications.
- Private Browsing- Browsing through private mode or incognito is encouraged, as it prevents websites from tracking your activity and personal data.
- Using Ad-blockers- Some websites offer an option to block or decline ads when the user first visits the page; availing of this helps prevent spam advertisements from those websites.
- Using a VPN- A Virtual Private Network hides the user’s IP address and encrypts their data, preventing third-party actors from tracking their online activities.
- Other steps include clearing cookies and cache data and using the location-sharing feature with care.
Conclusion
Meta’s compliance with the EU’s DMA signals that social media platforms cannot simply work their way around the rules. Balancing the services provided with respect for user privacy is of the utmost importance. The EU has set a precedent for a system that respects this balance, one that other countries can draw on as they deal with similar issues and set standards of their own.
References
- https://indianexpress.com/article/technology/tech-news-technology/meta-less-personalised-ads-eu-regulatory-demands-9667266/
- https://rainmaker.co.in/blog/view/the-price-of-personalization-how-targeted-advertising-breaches-data-privacy-and-challenges-the-gdprs-shield
- https://www.infosecurity-magazine.com/magazine-features/fines-data-protection-violations/
- https://www.forbes.com/councils/forbestechcouncil/2023/09/01/the-landscape-of-personalized-advertising-efficiency-versus-privacy/
- https://iapp.org/news/a/pay-or-consent-personalized-ads-the-rules-and-whats-next
- https://economictimes.indiatimes.com/news/how-to/how-to-safeguard-privacy-in-the-era-of-personalised-ads/articleshow/102748711.cms?from=mdr
- https://www.business-standard.com/technology/tech-news/facebook-instagram-users-in-europe-can-opt-for-less-personalised-ads-124111201558_1.html
- https://digital-markets-act.ec.europa.eu/about-dma_en
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring a video that inaccurately asserts that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, our investigation shows that the footage actually originates from an earlier incident in Saudi Arabia. The episode underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.
Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.
Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the video circulating online does not show an attack on the Ashkelon power plant in Israel. Instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon; their activities are largely confined to Yemen and Saudi Arabia.
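For readers who want to reproduce the keyframe step, the sketch below assumes the clip has been saved locally under the hypothetical name viral_clip.mp4 and that the opencv-python package is installed. It extracts roughly one frame per second; each saved frame can then be uploaded to a reverse image search tool such as Google Lens.

```python
# Minimal sketch: extract keyframes from a saved copy of a viral clip so
# they can be run through a reverse image search (e.g. Google Lens).
import cv2

video = cv2.VideoCapture("viral_clip.mp4")   # hypothetical local filename
fps = video.get(cv2.CAP_PROP_FPS) or 30      # fall back if FPS metadata is missing
frame_index, saved = 0, 0

while True:
    ok, frame = video.read()
    if not ok:                               # end of the clip (or read error)
        break
    if frame_index % int(fps) == 0:          # roughly one frame per second
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} keyframes for reverse image search")
```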
This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of exercising caution with unverified media and of relying on trusted fact-checking sources before passing viral posts along.
- Claim: The video shows massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (formerly known as Twitter)
- Fact Check: False and Misleading