#FactCheck - False Claim about Video of Sadhu Lying in Fire at Mahakumbh 2025
Executive Summary:
Recently, our team came across a widely viewed video on social media that appears to show a saint lying in a fire, with captions claiming it is part of a ritual during the ongoing Mahakumbh 2025. After thorough research, we found these claims to be false. The video is unrelated to Mahakumbh 2025 and comes from a different context and location; it is an example of old content being recirculated and presented as if it were relevant to the current event.

Claim:
A video has gone viral on social media, claiming to show a saint lying in fire during Mahakumbh 2025 and suggesting that the act is part of the traditional rituals associated with the ongoing festival. The claim falsely implies that such an act is a standard part of the sacred ceremonies held during the Mahakumbh event.

Fact Check:
Upon receiving the post, we conducted a reverse image search of key frames extracted from the video and traced it to an old article. Further research revealed that the original footage dates back to 2009, when 80-year-old Ramababu Swamiji lay down on a burning fire for the benefit of society. The video is not recent; it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals clearly shows that it is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such activities are not part of the Mahakumbh rituals. Reputable sources were also consulted to cross-verify this information, effectively debunking the claim and underscoring the importance of verifying facts before believing them.
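To illustrate the first step of this workflow, the sketch below shows one way key frames can be pulled from a video so they can be run through a reverse image search engine. It is a minimal Python example using OpenCV; the file name and the sampling interval are assumptions for illustration, not details of the original investigation.

```python
# Minimal sketch (assumed file name and interval): sample frames from a video at
# a fixed interval and save them as images that can be uploaded to a reverse
# image search engine such as Google Lens or TinEye.
import cv2

def extract_key_frames(video_path: str, interval_seconds: float = 2.0) -> list[str]:
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS metadata is missing
    step = max(1, int(fps * interval_seconds))       # number of frames between samples
    saved_paths = []
    frame_index = 0
    while True:
        success, frame = capture.read()
        if not success:                              # end of video or read error
            break
        if frame_index % step == 0:
            out_path = f"frame_{frame_index:06d}.jpg"
            cv2.imwrite(out_path, frame)             # save the sampled frame to disk
            saved_paths.append(out_path)
        frame_index += 1
    capture.release()
    return saved_paths

if __name__ == "__main__":
    print(extract_key_frames("viral_clip.mp4"))      # hypothetical file name
```

Each saved frame can then be uploaded to a reverse image search service, which is typically how older publications of the same footage are surfaced.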


For added clarity, the YouTube video attached below further addresses the claim and serves as a reminder to verify such content before accepting it as true.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
In a recent advisory, the Indian Computer Emergency Response Team (CERT-In) has issued a high-severity warning about older software versions across Apple devices. The high-severity rating stems from multiple vulnerabilities reported in Apple products that could allow an attacker to disclose sensitive information and execute arbitrary code on the targeted system. The warning is a timely reminder of the need to keep software up to date to prevent cyber threats. Users should update their software to the latest versions and follow good cyber hygiene practices.
Devices Affected:
The CERT-In advisory highlights significant risks associated with outdated software on the following Apple devices:
- iPhones and iPads: iOS and iPadOS versions prior to 18 and prior to the 17.7 release.
- Mac Computers: macOS builds prior to 14.7 (20G71) for Sonoma, prior to 13.7 (20H34) for Ventura, and earlier Sequoia releases.
- Apple Watches: watchOS versions prior to 11
- Apple TVs: tvOS versions prior to 18
- Safari Browsers: versions prior to 18
- Xcode: versions prior to 16
- visionOS: versions prior to 2
Details of the Vulnerabilities:
The vulnerabilities discovered in these Apple products could potentially allow attackers to perform the following malicious activities:
- Access sensitive information: Attackers could access sensitive information stored elsewhere on the compromised device.
- Execute arbitrary code: Malicious code, delivered for instance through a compromised web page, could run on the targeted system and, in the worst case, give the intruder full administrator privileges on the device.
- Bypass security restrictions: Protections designed to safeguard the device and the information on it could be bypassed, leaving the system open to further compromise.
- Cause denial-of-service (DoS) attacks: The vulnerabilities could be used to make the targeted device or service unavailable to legitimate users.
- Perform spoofing attacks: Attackers could create fake entities, users, or accounts to gain access to important information or carry out other unauthorised activities.
- Elevate privileges: The weaknesses might be exploited to grant the attacker a higher level of privileges on the targeted system.
- Engage in cross-site scripting (XSS) attacks: Some of the flaws make associated web applications and sites prone to XSS, allowing hostile scripts to be injected into web page code.
Vulnerabilities:
CVE-2023-42824
- The attack vector could allow a local attacker to elevate their privileges and potentially execute arbitrary code.
Affected System
- Apple's iOS and iPadOS software
CVE-2023-42916
- An out-of-bounds read that was addressed through improved input validation (a conceptual sketch of this type of check appears after this list).
Affected System
- Safari, iOS, iPadOS, macOS, and Apple Watch Series 4 and later devices running watchOS versions prior to 10.2
CVE-2023-42917
- Can lead to arbitrary code execution, and there have been reports of it being exploited against earlier versions of iOS.
Affected System
- Apple's Safari browser, iOS, iPadOS, and macOS Sonoma systems
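As a conceptual illustration of the out-of-bounds read behind CVE-2023-42916 and of what "improved input validation" means in practice, the hedged Python sketch below validates a requested offset and length before reading from a buffer. It is a teaching example only and does not reflect Apple's actual code.

```python
# Conceptual illustration only, not Apple's implementation: an out-of-bounds
# read occurs when attacker-supplied offsets or lengths are used to read past
# the end of a buffer; validating the requested range first is the general
# shape of an "improved input validation" fix.

def read_slice(buffer: bytes, offset: int, length: int) -> bytes:
    # Reject negative values and ranges that extend beyond the buffer.
    if offset < 0 or length < 0 or offset + length > len(buffer):
        raise ValueError("requested range is outside the buffer")
    return buffer[offset:offset + length]

data = b"example payload"
print(read_slice(data, 0, 7))       # within bounds: b'example'
try:
    read_slice(data, 10, 100)       # rejected instead of being processed further
except ValueError as err:
    print("blocked:", err)
```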
Recommended Actions for Users:
To mitigate these risks, users should take immediate action:
- Update Software: Ensure all your devices are running the most current version of their operating systems. Regular updates include important security patches that fix identified weaknesses or flaws (a simple version-check sketch follows this list).
- Monitor Device Activity: Stay vigilant for anything that does not seem right, such as signs that your devices have been accessed by someone other than you.
- Use Strong Authentication: Always use strong, distinct passwords and enable two-factor authentication.
- Protect the System: Install antivirus and firewall software and keep them up to date.
- Be Cautious: Avoid downloading applications or clicking links from unknown sources.
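As a rough aid to the "Update Software" recommendation above, the minimal sketch below checks whether the installed macOS version is older than an advisory threshold. The 14.7 threshold is taken from the affected-versions list earlier in this post; the rest of the snippet is an illustrative assumption, not an official CERT-In or Apple tool.

```python
# Minimal sketch, not an official utility: compare the installed macOS version
# against a minimum taken from the advisory's affected-versions list above.
import platform

def version_tuple(version: str) -> tuple[int, ...]:
    # "14.6.1" -> (14, 6, 1), so versions compare numerically rather than as strings.
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def is_outdated(installed: str, minimum_safe: str) -> bool:
    return version_tuple(installed) < version_tuple(minimum_safe)

if __name__ == "__main__":
    installed, _, _ = platform.mac_ver()     # returns an empty string on non-macOS systems
    if installed:
        # 14.7 is the Sonoma threshold listed in the advisory section above.
        verdict = "outdated - update now" if is_outdated(installed, "14.7") else "up to date"
        print(f"macOS {installed}: {verdict}")
    else:
        print("Not macOS; check Settings > General > Software Update on the device instead.")
```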
Conclusion:
The advisory from CERT-In clearly demonstrates the fundamental need to keep the software on all Apple devices up to date. Consumers should act right away to patch their devices and apply security best practices such as multi-factor authentication and regular system scanning. The advisory comes at a time when Apple has just released new products in India, such as the iPhone 16 series. As consumers embrace new technologies, it is important for them to observe the relevant security precautions. Maintaining good cyber hygiene is critical for protection against emerging threats.
Reference:
- https://www.cert-in.org.in/s2cMainServlet?pageid=PUBVLNOTES02&VLCODE=CIAD-2023-0043
- https://www.cve.org/CVERecord?id=CVE-2023-42916
- https://www.cve.org/CVERecord?id=CVE-2023-42917
- https://www.bizzbuzz.news/technology/gadjets/cert-in-issues-advisory-on-vulnerabilities-affecting-iphones-ipads-and-macs-1337253
- https://www.wionews.com/videos/india-warns-apple-users-of-high-severity-security-risks-in-older-software-761396

Executive Summary:
The rise in cybercrime targeting vulnerable individuals, particularly students and their families, has reached alarming levels. Impersonation scams, where fraudsters pose as Law Enforcement Officers, have become increasingly sophisticated, exploiting fear, urgency, and social stigma. This report delves into recent incidents of ransom scams involving fake CBI officers, highlighting the execution methods, psychological impact on victims, and preventive measures. The goal is to raise public awareness and equip individuals with the knowledge needed to protect themselves from such fraudulent activities.
Introduction:
Cybercriminals are evolving their tactics, with impersonation and social engineering at the forefront. Scams involving fake law enforcement officers have become rampant, preying on the fear of legal repercussions and the desire to protect loved ones. This report examines incidents where scammers impersonated CBI officers to extort money from families of students, emphasizing the urgent need for awareness, verification, and preventive measures.
Case Study:
This case study explains how scammers impersonate law enforcement officers to extort money from students' families.
Targets receive calls from scammers posing as CBI officers. The families of students are most often targeted, with fraudsters using sophisticated impersonation and emotional manipulation tactics. In our case study, the targets received calls from unknown international numbers falsely claiming that the students, along with their friends, were involved in a fabricated rape case. The calls are placed during school or college hours, a time when it is particularly difficult and chaotic for parents to reach their children, adding to the panic and sense of urgency. The scammers then manipulate the parents by stating that, owing to the students' clean records, they have not been formally arrested but will face severe legal consequences unless a sum of money is paid immediately.
Although in these specific cases the parents did not pay, many parents in our country fall victim to such scams, paying large sums out of fear and desperation to protect their children's futures. Exploiting the fear of legal repercussions, social stigma, and potential damage to the students' reputations, the scammers used high-pressure tactics to force compliance.
These incidents may result in significant financial losses, emotional trauma, and a profound loss of trust in communication channels and authorities. This underscores the urgent need for awareness, verification of authority, and prompt reporting of such scams to prevent further victimisation.
Modus Operandi:
- Caller ID Spoofing: The scammers used unknown numbers and spoofing techniques to mimic a legitimate law enforcement authority.
- Fear Induction: The fraudster played on the family's fear of social stigma, manipulating them into compliance through emotional blackmail.
Analysis:
Our research found that the unknown international numbers used in these scams are not real but are puppet numbers often used for prank calls and fraudulent activities. This incident also raises concerns about data breaches, as the scammers accurately recited students' details, including names and their parents' information, adding a layer of credibility and increasing the pressure on the victims. These incidents result in significant financial losses, emotional trauma, and a profound loss of trust in communication channels and authorities.
Impact on Victims:
- Financial and Psychological Losses: The family may face substantial financial losses, coupled with emotional and psychological distress.
- Loss of Trust in Authorities: Such scams undermine trust in official communication and law enforcement channels.
- Exploitation of Fear and Urgency: Scammers prey on emotions such as fear, urgency, and social stigma to manipulate victims.
- Sophisticated Impersonation Techniques: Using caller ID spoofing, Virtual/Temporary numbers and impersonation of Law Enforcement Officers adds credibility to the scam.
- Lack of Verification: Victims often do not verify the caller's identity, leading to successful scams.
- Significant Psychological Impact: Beyond financial losses, these scams cause lasting emotional trauma and distrust in institutions.
Recommendations:
- Cross-Verification: Always cross-verify with official sources before acting on such claims. Always contact official numbers listed on trusted Government websites to verify any claims made by callers posing as law enforcement.
- Promote Awareness: Educational institutions should conduct regular awareness programs to help students and families recognize and respond to scams.
- Encourage Prompt Reporting: Reporting such incidents to authorities can help track scammers and prevent future cases. Encourage victims to report incidents promptly to local authorities and cybercrime units.
- Enhance Public Awareness: Continuous public awareness campaigns are essential to educate people about the risks and signs of impersonation scams.
- Educational Outreach: Schools and colleges should include Cybersecurity awareness as part of their curriculum, focusing on identifying and responding to scams.
- Parental Guidance and Support: Parents should be encouraged to discuss online safety and scam tactics with their children regularly, fostering a vigilant mindset.
Conclusion:
The rise of impersonation scams targeting students and their families is a growing concern that demands immediate attention. By raising awareness, encouraging verification of claims, and promoting proactive reporting, we can protect vulnerable individuals from falling victim to these manipulative and harmful tactics. It is high time for the authorities, educational institutions, and the public to collaborate in combating these scams and safeguarding our communities. Strengthening data protection measures and enhancing public education on the importance of verifying claims can significantly reduce the impact of these fraudulent schemes and prevent further victimisation.
Introduction
The fast-paced development of technology and the wider use of social media platforms have led to the rapid dissemination of misinformation, characterised by wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making it harder to control its spread within vast, interconnected networks. Algorithms judge content primarily on engagement metrics, because engagement is what they are built to maximise; hence, algorithms and search engines surface the items you are most likely to enjoy. This process was originally created to cut through the clutter and provide the most relevant information, but it can unknowingly amplify misinformation because of the viral nature of such content and of user interactions.
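As a simplified illustration of this engagement-first ranking, the toy Python sketch below scores posts purely on predicted interactions. The posts, numbers, and weights are made-up assumptions, but they show how an emotionally charged, misleading item can rise above accurate content when accuracy plays no part in the score.

```python
# Toy example with invented data, not any real platform's algorithm: ranking
# purely by engagement lets a misleading but sensational post top the feed.
posts = [
    {"id": 1, "accurate": True,  "likes": 120, "shares": 15,  "comments": 30},
    {"id": 2, "accurate": False, "likes": 900, "shares": 400, "comments": 250},  # outrage bait
    {"id": 3, "accurate": True,  "likes": 260, "shares": 40,  "comments": 55},
]

def engagement_score(post: dict) -> int:
    # Shares and comments are weighted more heavily because they spread content further.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    label = "accurate" if post["accurate"] else "misleading"
    print(f"post {post['id']}: {label}, score {engagement_score(post)}")
```

Nothing in the score penalises inaccuracy, which is exactly the gap that fact-checking and downranking interventions try to close.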
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximise user engagement, can inadvertently promote misinformation because emotionally charged content triggers strong reactions, creating echo chambers and filter bubbles. These algorithms prioritise content based on user behaviour, which leads to the promotion of emotionally charged misinformation. They also prioritise content with the potential to go viral, which can allow false or misleading content to spread faster than corrections or factual content.
Popular content is further amplified by platforms, which spread it to more users and accelerate its reach. Fact-checking efforts struggle to keep up: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish between real people and organised networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and spreads erroneous information through their networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with the user's previous behaviour and preferences. Sometimes this process leads to "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it harder for users to discern credible information from misinformation. Algorithms also feed into a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
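The feedback loop described above can be sketched in a few lines of Python: each time the recommender serves the topic a user already prefers, engagement with that topic nudges future recommendations further toward it. The starting split and reinforcement rate are arbitrary assumptions chosen only to make the effect visible.

```python
# Minimal feedback-loop sketch with assumed numbers: a feed that keeps serving
# the currently preferred topic gradually crowds out everything else.
preferences = {"topic_a": 0.5, "topic_b": 0.5}    # initial interest split

def recommend(prefs: dict) -> str:
    # Engagement-maximising choice: always serve the currently dominant topic.
    return max(prefs, key=prefs.get)

def update(prefs: dict, shown: str, reinforcement: float = 0.05) -> None:
    # Engagement with the shown topic shifts future recommendations toward it.
    prefs[shown] += reinforcement
    total = sum(prefs.values())
    for topic in prefs:                            # renormalise so the shares sum to 1
        prefs[topic] /= total

for _ in range(50):
    update(preferences, recommend(preferences))

print(preferences)   # topic_a ends up dominating the feed: an echo chamber in miniature
```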
Moreover, social networks and their sheer size and complexity today exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it—such as by inspecting messages or URLs for false information—can be computationally challenging and inefficient. The extensive amount of content that is shared daily means that misinformation can be propagated far quicker than it can get fact-checked or debunked.
Understanding how algorithms influence user behaviour is important to tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders all work together to create a challenging environment where misinformation thrives. Hence, highlighting the importance of countering misinformation through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps that can be taken to curb misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Looking ahead, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but also for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)