# FactCheck - Mosque fire in India? False, it's from Indonesia
Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, stoking communal tensions and spreading misinformation. However, a detailed fact-check has revealed that the footage actually comes from Indonesia. Such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incidents. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.
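Reverse image search tools typically work by computing a compact perceptual fingerprint of an image and matching it against an index of previously seen images. The following is a minimal sketch of that idea using an average hash, written with only the Python standard library; it assumes images are already decoded to grayscale pixel grids, and real services use far more robust fingerprints than this.

```python
# Minimal sketch of the perceptual-hashing idea behind reverse image search.
# Assumes images are already decoded to 2D grayscale pixel grids (0-255);
# real reverse-search services use far more robust fingerprints than this.

def average_hash(pixels, size=8):
    """Downscale to size x size by block averaging, then threshold by the mean."""
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            total = count = 0
            for y in range(by * h // size, (by + 1) * h // size):
                for x in range(bx * w // size, (bx + 1) * w // size):
                    total += pixels[y][x]
                    count += 1
            blocks.append(total / count)
    mean = sum(blocks) / len(blocks)
    return [1 if b >= mean else 0 for b in blocks]

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same source image."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy example: an image and a slightly brightened copy hash almost identically,
# which is how a re-uploaded or re-encoded video frame can still be traced.
img = [[(x * y) % 256 for x in range(32)] for y in range(32)]
copy = [[min(255, p + 10) for p in row] for row in img]
print(hamming(average_hash(img), average_hash(copy)))  # small distance
```

This is why re-uploads of the Luwuk market fire footage can be traced back to the original Indonesian post even after cropping or recompression: the fingerprint changes only slightly.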

Conclusion: The viral claim that a mosque was set on fire in India is false. The video is actually from Indonesia and has been intentionally misrepresented to circulate false information. This incident underscores the need to verify information before sharing it. Misinformation spreads quickly and causes harm; by taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In a world perpetually in motion, the currents of the information superhighway surge ceaselessly, molding perceptions, shaping realities, and often blurring the lines that tether truth to its moorings. At the heart of this relentless churn lies a conundrum that has become all too familiar, in which veracity is obscured by the shadow-play of misinformation. Emblematic of this dilemma is the narrative of Virat Kohli, a name that has become synonymous not only with cricketing brilliance but with the complexities of a modern era where digital echo chambers amplify half-truths and outright fabrications with alarming efficacy.
It is within this intricate fabric of the digital realm that the saga of Virat Kohli, a titan of cricket whose arsenal of strokes and strategic acumen has captivated audiences worldwide, takes on a dimension that transcends the sport. Speculative murmurs have been converted into roaring waves of misinformation, crafting a narrative that, while devoid of truth, assumes a disconcerting life of its own. This digital osmosis, the transmutation from a quiet inkling to a deafening chorus of credibility, exemplifies the troubling dynamic that has come to define our interactions with news media in the 21st century.
Fact check: Viral Misinformation
A post claiming that Virat Kohli's mother is suffering from liver issues has gone viral on social media. The claim surfaced after Kohli withdrew from the India-England Test series citing 'personal reasons'. Vikas Kohli, Virat Kohli's brother, clarified on Instagram that the viral news about their mother is false and that she is doing well. He said he had noticed the fake news and requested the media not to spread such reports without proper verification.
Fake Health Crisis
As this virulent strain of rumour regarding the health of Saroj Kohli, Virat Kohli’s mother, began to swell into the digital domain, it brought to the forefront a critical examination of the checks and balances within our networks of communication. Saroj, whose resilience and nurturing presence had been an anchor in the athlete's storied journey, undeservedly became the nucleus of a fictitious tale of despair, giving us pause to reflect on the ethical boundaries of storytelling in the world of clicks and views.
Vikas Kohli—the elder brother of Virat Kohli—took to social media, the very platform from which the falsehood originated, to stand as the bulwark against the spread of this groundless narrative.
The Consequences
The consequences of such falsehoods and their rapid dissemination are manifold, affecting individuals and communities in profound ways. The motivations behind the proliferation of deceitful stories are as labyrinthine as the networks they traverse - from manipulation and economic incentives to the pursuit of sheer sensationalism or cynical entertainment, each strand intertwines to form an intricate web wherein truth struggles to assert itself.
Conclusion
In the ceaseless expanses of the digital cosmos, where one can easily drift into the void of falsities, let the narrative of Virat Kohli stand as a sentinel, a reminder of our duty to navigate these waters with vigilance and to preserve the sanctity of truth. Amidst the vast ocean of content that laps in our consciousness, it is precisely this unwavering dedication to facts that will act as our compass, enabling us to discern the credible beacons from the deceptive mirages and ultimately ensuring that our discourse remains moored in the bedrock of reality.
References
- https://www.thequint.com/news/webqoof/virat-kohli-mother-sick-liver-disease-fact-check
- https://indianexpress.com/article/sports/cricket/ind-vs-eng-virat-kohlis-brother-dismisses-fake-news-circulating-about-their-mother-9136144/
- https://www.outlookindia.com/amp/story/sports/ind-vs-eng-virat-kohlis-brother-slams-fake-news-on-their-mothers-health
Introduction
Artificial Intelligence (AI)-driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare. AI has become a critical component of technology-driven warfare and has simultaneously impacted many other spheres. Nations often prioritise defence for significant investment, supporting its growth and modernisation, and AI has become a prime area of investment and development in the pursuit of technological superiority. India's focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is "autonomy": the ability to perform their functions when direction or input from a human actor is absent. AI is not a prerequisite for the functioning of AWS but, when incorporated, can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Ethical concerns have been raised as the most prominent issue by many states, international organisations, civil society groups and even many distinguished figures.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes in simulating human emotions like compassion, empathy or altruism: the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system, which would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a 'human-in-the-loop' system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack pre-programmed categories of targets within the area.
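The 'human-on-the-loop' supervision model described above can be illustrated with a toy sketch. Everything here is invented for illustration (class and method names, the review window parameter) and is not drawn from any real system: the controller autonomously proposes engagements, but each one is held open for a review window during which an operator veto cancels it.

```python
# Illustrative toy model of "human-on-the-loop" supervision. All names and
# parameters are invented for this sketch and not based on any real system.
# The controller proposes engagements on its own, but each stays pending for
# a review window in which a human operator can veto it.

class OnTheLoopController:
    def __init__(self, review_window_s=5.0):
        self.review_window_s = review_window_s  # time the operator has to intervene
        self.pending = {}                       # target_id -> time proposed

    def propose(self, target_id, now):
        """System autonomously selects a target; the engagement is held, not executed."""
        self.pending[target_id] = now

    def veto(self, target_id):
        """Operator override: cancel a pending engagement. Returns True if it existed."""
        return self.pending.pop(target_id, None) is not None

    def release(self, now):
        """Engagements whose review window elapsed without a veto are allowed to proceed."""
        due = [t for t, ts in self.pending.items()
               if now - ts >= self.review_window_s]
        for t in due:
            del self.pending[t]
        return due

ctrl = OnTheLoopController(review_window_s=5.0)
ctrl.propose("T1", now=0.0)
ctrl.propose("T2", now=1.0)
ctrl.veto("T1")               # operator intervenes within the window
print(ctrl.release(now=6.0))  # only the unvetoed engagement proceeds
```

A 'human-in-the-loop' variant would invert the default: nothing proceeds until the operator explicitly approves each target, rather than proceeding unless the operator objects. That inversion of the default is the crux of the accountability debate.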
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue because of the ethical, legal, and security concerns it raises. Several efforts are under way at the international level to regulate such weapons. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international law, such as the Geneva Conventions, offers legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement: some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law, and setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/

Introduction
The Indian Cyber Crime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. The MHA approved the scheme for the establishment of I4C in October 2018, and it was inaugurated by Home Minister Amit Shah in January 2020. I4C is envisaged to act as the nodal point for curbing cybercrime in the country. Recently, on 13th March 2024, the Centre designated I4C as an agency of the MHA to perform functions under the Information Technology Act, 2000, and to notify instances of unlawful cyber activities.
The gazetted notification dated 13th March 2024 read as follows:
“In exercise of the powers conferred by clause (b) of sub-section (3) of section 79 of the Information Technology Act 2000, Central Government being the appropriate government hereby designate the Indian Cybercrime Coordination Centre (I4C), to be the agency of the Ministry of Home Affairs to perform the functions under clause (b) of sub-section (3) of section 79 of Information Technology Act, 2000 and to notify the instances of information, data or communication link residing in or connected to a computer resource controlled by the intermediary being used to commit the unlawful act.”
Impact
Now, the Indian Cyber Crime Coordination Centre (I4C) is empowered to issue takedown notices under Section 79(3)(b) of the IT Act, 2000. Any information, data or communication link residing in or connected to a computer resource controlled by an intermediary and being used to commit unlawful acts can be notified by I4C to that intermediary. If an intermediary fails to expeditiously remove or disable access to such material after being notified, it loses its eligibility for protection under Section 79 of the IT Act, 2000.
Safe Harbour Provision
Section 79 of the IT Act serves as a safe harbour provision for intermediaries. It states that "an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him". However, this legal immunity is not available if the intermediary "fails to expeditiously" take down a post or remove particular content after the government or its agencies flag that the information is being used to commit something unlawful. Furthermore, intermediaries are obliged to perform due diligence on their platforms, comply with the applicable rules and regulations, and maintain and promote a safe digital environment on their platforms.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the government has also mandated that a 'significant social media intermediary' must appoint a Chief Compliance Officer (CCO), a Resident Grievance Officer (RGO) and a Nodal Contact Person, and must publish a periodic compliance report every month detailing the complaints received and the action taken on them.
I4C's Role in Safeguarding Cyberspace
The Indian Cyber Crime Coordination Centre (I4C) is actively pursuing initiatives to combat emerging threats in cyberspace. I4C is a crucial extension of the Ministry of Home Affairs, Government of India, working extensively to combat cybercrime and ensure the overall safety of netizens. The 'National Cyber Crime Reporting Portal', equipped with the 24x7 helpline number 1930, is one of the key components of I4C.
Components Of The I4C
- National Cyber Crime Threat Analytics Unit
- National Cyber Crime Reporting Portal
- National Cyber Crime Training Centre
- Cyber Crime Ecosystem Management Unit
- National Cyber Crime Research and Innovation Centre
- National Cyber Crime Forensic Laboratory Ecosystem
- Platform for Joint Cyber Crime Investigation Team
Conclusion
I4C, through its initiatives and collaborative efforts, plays a pivotal role in safeguarding cyberspace and ensuring the safety of netizens, reinforcing India's commitment to combating cybercrime and promoting a secure digital environment. The recent designation of I4C as the agency to notify instances of unlawful activities in cyberspace is a significant step towards countering cybercrime and promoting an ethical and safe digital environment for netizens.
References
- https://www.deccanherald.com/india/centre-designates-i4c-as-agency-of-mha-to-notify-unlawful-activities-in-cyber-world-2936976
- https://www.business-standard.com/india-news/home-ministry-authorises-i4c-to-issue-takedown-notices-under-it-act-124031500844_1.html
- https://www.hindustantimes.com/india-news/it-ministry-empowers-i4c-to-notify-instances-of-cybercrime-101710443217873.html
- https://i4c.mha.gov.in/about.aspx#:~:text=Objectives%20of%20I4C,identifying%20Cybercrime%20trends%20and%20patterns