#FactCheck - False Claim about Video of Sadhu Lying in Fire at Mahakumbh 2025
Executive Summary:
Recently, our team came across a video on social media that appears to show a saint lying in a fire during Mahakumbh 2025. The video has been widely viewed and is accompanied by captions claiming it depicts a ritual from the ongoing Mahakumbh 2025. After thorough research, we found these claims to be false. The video is unrelated to Mahakumbh 2025 and comes from a different context and location. This is an example of old content being recirculated and misattributed to a current event.

Claim:
A video has gone viral on social media, claiming to show a saint lying in fire during Mahakumbh 2025 and suggesting that the act is part of the traditional rituals associated with the ongoing festival. The claim falsely implies that such acts are a standard part of the sacred ceremonies held during the event.

Fact Check:
Upon receiving the post, we conducted a reverse image search of key frames extracted from the video and traced it to an old article. Further research revealed that the original footage dates from 2009, when Ramababu Swamiji, then aged 80, lay down on a burning fire for the benefit of society. The video is not recent; it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals clearly shows that the video is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such activities are not part of the Mahakumbh rituals. Reputable sources were also consulted to cross-verify this information, effectively debunking the claim and underscoring the importance of verifying facts before believing them.
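The key-frame sampling step behind this kind of reverse image search can be sketched as follows. This is a minimal, illustrative example, not the team's actual tooling; the function name and the two-second sampling interval are our own assumptions. It computes which frame indices of a clip to extract so they can be fed to a reverse image search engine.

```python
def key_frame_indices(duration_s: float, fps: float, every_s: float = 2.0) -> list:
    """Return the frame indices to extract: one frame every `every_s` seconds.

    The extracted frames can then be uploaded to a reverse image
    search engine to trace a video back to its original source.
    """
    step = max(1, round(fps * every_s))   # frames between consecutive samples
    total_frames = int(duration_s * fps)  # total frames in the clip
    return list(range(0, total_frames, step))

# A 10-second clip at 25 fps, sampled every 2 seconds, yields 5 key frames.
print(key_frame_indices(10, 25))  # [0, 50, 100, 150, 200]
```

Sampling at a fixed interval is a simple heuristic; real tools often also pick frames at scene changes, since those tend to be the most search-friendly images.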


For more clarity, the YouTube video attached below further addresses the claim and reminds us to verify whether such content is genuine before believing it.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (formerly Twitter)
- Fact Check: False and Misleading

The race for global leadership in AI is in full force. As China and the US emerge as the world’s ‘AI superpowers’, questions around AI governance, ethics, regulation, and safety come to the fore. Some are calling this an ‘AI arms race.’ Most applications of these AI systems are large language models for commercial use or military applications. Countries like Germany, Japan, France, Singapore, and India are now participating in this race rather than remaining mere spectators.
The Government of India’s Ministry of Electronics and Information Technology (MeitY) has launched the IndiaAI Mission, an umbrella program for the use and development of AI technology. This MeitY initiative lays the groundwork for supporting an array of AI goals for the country. The government has allocated INR 10,300 crore for this endeavour. This mission includes pivotal initiatives like the IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI FutureSkills, IndiaAI Startup Financing, and Safe & Trusted AI.
There are several challenges and opportunities that India will have to navigate and capitalize on to become a significant player in the global AI race. The various components of India’s ‘AI Stack’ will have to work well in tandem to create a robust ecosystem that yields globally competitive results. The IndiaAI mission focuses on building large language models in vernacular languages and developing compute infrastructure. There must be more focus on developing good datasets and research as well.
Resource Allocation and Infrastructure Development
The government is focusing on building the elementary foundation for AI competitiveness. This includes the procurement of AI chips and compute capacity, about 10,000 graphics processing units (GPUs), to support India’s start-ups, researchers, and academics. These GPUs have been strategically distributed, with 70% being high-end newer models and the remaining 30% comprising lower-end older-generation models. This mix supports a broad ecosystem, from cutting-edge research to more routine applications. A major player in this initiative is Yotta Data Services, which holds the largest share of 9,216 GPUs, including 8,192 Nvidia H100s. Other significant contributors include Amazon AWS's managed service providers, Jio Platforms, and CtrlS Datacenters.
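As a rough illustration of the allocation described above, the 70/30 split applied to the approximately 10,000 GPUs works out as follows (the variable names are ours, and the total is the approximate figure cited, not an official breakdown):

```python
TOTAL_GPUS = 10_000          # approximate compute capacity cited for the mission
HIGH_END_SHARE = 0.70        # share of newer, high-end models

high_end = int(TOTAL_GPUS * HIGH_END_SHARE)  # newer-generation GPUs
lower_end = TOTAL_GPUS - high_end            # older-generation GPUs

print(high_end, lower_end)  # 7000 3000
```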
Policy Implications: Charting a Course for Tech Sovereignty and Self-reliance
With this government initiative, there is a concerted effort to develop indigenous AI models and reduce technological dependence on foreign players. There is a push to develop local Large Language Models and domain-specific foundational models, creating AI solutions that are truly Indian in nature and application. Much of the world’s advanced chip manufacturing takes place in Taiwan, which faces a looming threat from China. India’s focus on chip procurement and GPUs speaks to a larger agenda of self-reliance and sovereignty, keeping the geopolitical calculus in mind. This focus is important; however, it must not come at the cost of developing technological ‘know-how’ and research.
Developing AI capabilities at home also has national security implications. When it comes to defence systems, control over AI infrastructure and data becomes extremely important. The IndiaAI Mission will focus on safe and trusted AI, including developing frameworks that fit the Indian context. It has to be ensured that AI applications align with India's security interests and can be confidently deployed in sensitive defence applications.
The big problem to solve here is the ‘data problem.’ There must be a focus on developing strategies to mitigate the data constraints that disadvantage the Indian AI ecosystem. Some data problems are unique to India, such as generating data in local languages; others appear in every AI ecosystem’s development lifecycle, namely generating publicly available and licensed data. India must strengthen its ‘Digital Public Infrastructure’ and data commons across sectors and domains.
India has proposed setting up the India Data Management Office to serve as India’s data regulator as part of its draft National Data Governance Framework Policy. The MeitY IndiaAI expert working group report also talked about operationalizing the India Datasets Platform and suggested the establishment of data management units within each ministry.
Economic Impact: Growth and Innovation
The government’s focus on technology and industry has far-reaching economic implications. There is a push to develop the AI startup ecosystem in the country. The IndiaAI mission heavily focuses on inviting ideas and projects under its ambit. The investments will strengthen the IndiaAI startup financing system, making it easier for nascent AI businesses to obtain capital and accelerate their development from product to market. Funding provisions for industry-led AI initiatives that promote social impact and stimulate innovation and entrepreneurship are also included in the plan. The government press release states, "The overarching aim of this financial outlay is to ensure a structured implementation of the IndiaAI Mission through a public-private partnership model aimed at nurturing India’s AI innovation ecosystem.”
The government also wants to establish India as a hub for sustainable AI innovation and attract top AI talent from across the globe. One crucial aspect that needs work here is fostering talent and skill development. India has a unique advantage: top-tier talent in STEM fields. Yet it suffers from a severe talent gap that needs to be addressed on a priority basis. Even though India is making strides in nurturing AI talent, out-migration of tech talent is still a reality. As the global AI economy transitions from the hardware-manufacturing “goods” side to service delivery, India will need to be ready to deploy its talent. Several structural and policy interfaces, like the New Education Policy and industry-academia partnership frameworks, allow India to capitalize on this opportunity.
India’s talent strategy must be robust and long-term, focusing heavily on multi-stakeholder engagement. The government has a pivotal role here by creating industry-academia interfaces and enabling tech hubs and innovation parks.
India's Position in the Global AI Race
India’s foreign policy and geopolitical stance have long favoured global cooperation, and this must not change when it comes to AI. Even though this has been dubbed the “AI arms race,” India should encourage worldwide collaboration on AI R&D with other countries in order to strengthen its own capabilities. India must prioritise greater open-source AI development and work with the US, Europe, Australia, Japan, and other friendly countries to prevent the unethical use of AI and to contribute to a global consensus on the boundaries of AI development.
The IndiaAI Mission will have far-reaching implications for India’s diplomatic and economic relations. The unique proposition that India comes with is its ethos of inclusivity, ethics, regulation, and safety from the get-go. We should keep up the efforts to create a powerful voice for the Global South in AI. The IndiaAI Mission marks a pivotal moment in India's technological journey. Its success could not only elevate India's status as a tech leader but also serve as a model for other nations looking to harness the power of AI for national development and global competitiveness. In conclusion, the IndiaAI Mission seeks to strengthen India's position as a global leader in AI, promote technological independence, guarantee the ethical and responsible application of AI, and democratise the advantages of AI at all societal levels.
References
- Ashwini Vaishnaw to launch IndiaAI portal, 10 firms to provide 14,000 GPUs. (2025, February 17). Business Standard. Retrieved February 25, 2025, from https://www.business-standard.com/industry/news/indiaai-compute-portal-ashwini-vaishnaw-gpu-artificial-intelligence-jio-125021700245_1.html
- Global IndiaAI Summit 2024 being organized with a commitment to advance responsible development, deployment and adoption of AI in the country. (n.d.). https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2029841
- India to Launch AI Compute Portal, 10 Firms to Supply 14,000 GPUs. (2025, February 17). APAC News Network. https://apacnewsnetwork.com/2025/02/india-to-launch-ai-compute-portal-10-firms-to-supply-14000-gpus/
- INDIAai | Pillars. (n.d.). IndiaAI. https://indiaai.gov.in/
- IndiaAI Innovation Challenge 2024 | Software Technology Park of India | Ministry of Electronics & Information Technology Government of India. (n.d.). http://stpi.in/en/events/indiaai-innovation-challenge-2024
- IndiaAI Mission To Deploy 14,000 GPUs For Compute Capacity, Starts Subsidy Plan. (2025, February 17). BW Businessworld. Retrieved February 25, 2025, from https://www.businessworld.in/article/indiaai-mission-to-deploy-14000-gpus-for-compute-capacity-starts-subsidy-plan-548253
- India’s interesting AI initiatives in 2024: AI landscape in India. (n.d.). IndiaAI. https://indiaai.gov.in/article/india-s-interesting-ai-initiatives-in-2024-ai-landscape-in-india
- Mehra, P. (2025, February 17). Yotta joins India AI Mission to provide advanced GPU, AI cloud services. Techcircle. https://www.techcircle.in/2025/02/17/yotta-joins-india-ai-mission-to-provide-advanced-gpu-ai-cloud-services/
- IndiaAI 2023: Expert Group Report – First Edition. (n.d.). IndiaAI. https://indiaai.gov.in/news/indiaai-2023-expert-group-report-first-edition
- Satish, R., Mahindru, T., World Economic Forum, Microsoft, Butterfield, K. F., Sarkar, A., Roy, A., Kumar, R., Sethi, A., Ravindran, B., Marchant, G., Google, Havens, J., Srichandra (IEEE), Vatsa, M., Goenka, S., Anandan, P., Panicker, R., Srivatsa, R., . . . Kumar, R. (2021). Approach Document for India. In World Economic Forum Centre for the Fourth Industrial Revolution, Approach Document for India [Report]. https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
- Stratton, J. (2023, August 10). Those who solve the data dilemma will win the A.I. revolution. Fortune. https://fortune.com/2023/08/10/workday-data-ai-revolution/
- Suri, A. (n.d.). The missing pieces in India’s AI puzzle: talent, data, and R&D. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2025/02/the-missing-pieces-in-indias-ai-puzzle-talent-data-and-randd?lang=en
- The AI arms race. (2024, February 13). Financial Times. https://www.ft.com/content/21eb5996-89a3-11e8-bf9e-8771d5404543
Introduction
MeitY’s Indian Computer Emergency Response Team (CERT-In), in collaboration with SISA, a global leader in forensics-driven cybersecurity, launched the ‘Certified Security Professional for Artificial Intelligence’ (CSPAI) program on 23rd September. This initiative is the first ANAB-accredited AI security certification of its kind. The CSPAI also complements global AI governance efforts: international frameworks like the OECD AI Principles and the European Union's AI Act, which aim to regulate AI technologies to ensure fairness, transparency, and accountability in AI systems, serve as reference points for this initiative.
About the Initiative
The Certified Security Professional for Artificial Intelligence (CSPAI) is the world’s first ANAB-accredited certification program focused on cybersecurity for AI. The collaboration between CERT-In and SISA plays a pivotal role in shaping AI security policies. Such public-private partnerships bridge the gap between government regulatory needs and the technological expertise of private players, creating comprehensive and enforceable AI security policies. The CSPAI has been specifically designed to integrate AI and GenAI into business applications while aligning security measures with the unique challenges that AI systems pose. The program emphasises the strategic application of Generative AI and Large Language Models in future AI deployments and highlights the significant advantages of integrating LLMs into business applications.
The program is tailored for security professionals to understand the do’s and don’ts of AI integration into business applications, with a comprehensive focus on sustainable practices for securing AI-based applications. This is achieved through comprehensive risk identification and assessment frameworks recommended by ISO and NIST. The program also emphasises continuous assessment and conformance to AI laws across various nations, ensuring that AI applications adhere to standards for trustworthy and ethical AI practices.
Aim of the Initiative
As AI becomes an intrinsic part of business operations, a growing need for AI security expertise across industries is visible. With this in focus, the accreditation program has been created to equip professionals with the knowledge and tools to secure AI systems. The CSPAI program aims to build a safer digital future while creating an environment that fosters innovation and responsibility in the evolving cybersecurity landscape, focusing on Generative AI (GenAI) and Large Language Models (LLMs).
Conclusion
This public-private partnership between CERT-In and SISA, which led to the creation of the Certified Security Professional for Artificial Intelligence (CSPAI), represents a groundbreaking initiative towards AI and its responsible usage. CSPAI can be seen as an initiative addressing the growing demand for cybersecurity expertise in AI technologies. As AI becomes more embedded in business operations, the program aims to equip security professionals with the knowledge to assess, manage, and mitigate risks associated with AI applications. By aligning with frameworks from ISO and NIST and ensuring adherence to AI laws globally, CSPAI aims to promote trustworthy and ethical AI usage. The approach is a significant step towards creating a safer digital ecosystem while fostering responsible AI innovation. This certification will significantly impact the healthcare, finance, and defence sectors, where AI is rapidly becoming indispensable. By ensuring that AI applications meet security and ethical standards in these sectors, CSPAI can help build public trust and encourage broader AI adoption.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=2057868
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://timesofindia.indiatimes.com/business/india-business/cert-in-and-sisa-launch-ai-security-certification-program-to-integrate-ai-into-business-applications/articleshow/113622067.cms

Introduction
The Australian Parliament has passed the world’s first legislation banning social media for children under 16, citing risks to children’s mental and physical well-being and the need to contain misogynistic influence on them. The debate surrounding the legislation is raging, as it is the first proposal of its kind and would set a precedent for how other countries assess their laws regarding children, social media platforms, and their priorities.
The Legislation
Currently trialling an age-verification system (such as biometrics or government identification), the legislation mandates a complete ban on children under 16 using social media. Further, the law provides no exemptions of any kind, be it for pre-existing accounts or parental consent. With federal elections approaching, the law seeks to address parental concerns about protecting children from threats lurking on social media platforms. Every step in this regard is being observed with keen interest.
The Australian Prime Minister, Anthony Albanese, emphasised that the onus of taking responsible steps toward preventing access falls on the social media platforms, absolving parents and their children of the same. Social media platforms like TikTok, X, and Meta Platforms’ Facebook and Instagram all come under the purview of this legislation.
CyberPeace Overview
The issue of a complete age-based ban raises a few concerns:
- It is challenging to enforce digitally, as children might find ways to circumvent such restrictions. An example is the Cinderella Law, formally known as the Shutdown Law, which the Government of South Korea implemented in 2011 to reduce online gaming and promote healthy sleeping habits among children. The law prohibited children under the age of 16 from accessing online gaming between 12 A.M. and 6 A.M. However, a few drawbacks rendered it less effective over time: children used the login IDs of adults, switched to VPNs, or moved to offline gaming. In addition, parents felt the government was infringing on the right to privacy, and the restrictions applied only to online PC games, not mobile phones. Consequently, the law lost relevance and was repealed in 2021.
- The concept of age verification inherently requires collecting more personal data and inadvertently opens up concerns regarding individual privacy.
- A ban is likely to reduce the pressure on tech and social media companies to develop and improve features that would make their services a safe, child-friendly environment.
Conclusion
Social media platforms can opt for an approach that focuses on creating a safe online environment for children as they continue to deliberate on restrictions. An example of an impactful yet balanced step towards protecting children on social media while respecting privacy is the U.K.'s Age-Appropriate Design Code (UK AADC), prepared by the ICO (Information Commissioner's Office), the U.K. data protection regulator, under the U.K.'s implementation of the European Union's General Data Protection Regulation (GDPR). It follows a safety-by-design approach for children. As we move towards a future that is predominantly online, we must continue to strive to create a safe space for children and address issues in innovative ways.
References
- https://indianexpress.com/article/technology/social/australia-proposes-ban-on-social-media-for-children-under-16-9657544/
- https://www.thehindu.com/opinion/op-ed/should-children-be-barred-from-social-media/article68661342.ece
- https://forumias.com/blog/debates-on-whether-children-should-be-banned-from-social-media/
- https://timesofindia.indiatimes.com/education/news/why-banning-kids-from-social-media-wont-solve-the-youth-mental-health-crisis/articleshow/113328111.cms
- https://iapp.org/news/a/childrens-privacy-laws-and-freedom-of-expression-lessons-from-the-uk-age-appropriate-design-code
- https://www.techinasia.com/s-koreas-cinderella-law-finally-growing-up-teens-may-soon-be-able-to-play-online-after-midnight-again
- https://wp.towson.edu/iajournal/2021/12/13/video-gaming-addiction-a-case-study-of-china-and-south-korea/
- https://www.dailysabah.com/world/asia-pacific/australia-passes-worlds-1st-total-social-media-ban-for-children