# Fact Check: Old image of Hindu priest with Donald Trump at the White House goes viral as recent
Executive Summary:
Our team recently came across a post on X (formerly Twitter) in which a photo of a Hindu priest performing a Vedic prayer in Washington was widely shared with misleading captions presenting it as taken after the recent elections. After investigating, we found that it actually shows a ritual performed by a Hindu priest at a White House event in May 2020 to pray for an end to the COVID-19 pandemic. Always verify claims before sharing.

Claim:
An image circulating after Donald Trump’s win in the US election is claimed to show Pujari Harish Brahmbhatt performing a prayer at the White House recently.

Fact Check:
Our analysis found that the image comes from an old post uploaded in May 2020. A reverse image search traced it to the National Day of Prayer Service held in the Rose Garden of the White House, where a Hindu priest recited the sacred Vedic Shanti Path, or peace prayer, alongside other religious leaders, praying for the health, safety and well-being of everyone affected by the coronavirus pandemic and for an end to COVID-19.
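Reverse image search engines work by reducing each image to a compact fingerprint and comparing fingerprints rather than raw pixels. As a purely illustrative sketch of that idea (real services such as Google Images or TinEye use far more sophisticated matching), here is a minimal average-hash comparison in Python, operating on a small, hypothetical downscaled grayscale grid:

```python
def average_hash(pixels):
    """Average hash (aHash): mark each cell 1 if brighter than the mean.

    `pixels` is a 2D list of grayscale values, standing in for an image
    already downscaled to a tiny grid (real systems use e.g. 8x8).
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Illustrative data: a slightly recompressed copy still hashes identically,
# while an unrelated image lands far away.
original = [[10, 200], [30, 220]]
recompressed = [[12, 198], [28, 225]]
unrelated = [[200, 10], [220, 30]]

print(hamming_distance(average_hash(original), average_hash(recompressed)))  # 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))     # 4
```

Because the hash tolerates small pixel changes, a re-uploaded or recompressed copy of an old photo can still be matched back to its 2020 original, which is exactly what makes reverse image search useful for dating viral images.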

Conclusion:
The viral claim that the image shows a Hindu priest performing a Vedic prayer at the White House after Donald Trump’s recent election win isn’t true. The photo is actually from a 2020 event and is being shared with misleading context.
Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
- Claim: Hindu priest held a Vedic prayer at the White House under Trump
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Governments worldwide are enacting cybersecurity laws to enhance resilience and secure cyberspace against growing threats like data breaches, cyber espionage, and state-sponsored attacks. As part of this trend, the EU Council has been adopting new laws and regulations under its EU Cybersecurity Package, a framework to enhance cybersecurity capacities across the EU in order to protect critical infrastructure, businesses, and citizens. Recently, the Council adopted the Cyber Solidarity Act, which aims to improve coordination among EU member states for increased cyber resilience. Since EU regulations play a significant role in shaping the global regulatory environment, such developments are worth watching closely.
Overview of the Cyber Solidarity Act
The Act sets up a European Cybersecurity Alert System consisting of Cross-Border Cyber Hubs across Europe. These hubs collect intelligence and act on cyber threats by leveraging emerging technologies such as Artificial Intelligence (AI) and advanced data analytics, and share warnings on cyber threats with other cyber data centres across the EU's national borders. This is expected to help authorities respond to cyber threats and incidents more quickly and effectively.
Further, it provides for the creation of a new Cybersecurity Emergency Mechanism to enhance incident response in the EU. This includes testing for vulnerabilities in critical sectors like transport, energy, healthcare and finance, and creating a reserve of private providers that can offer mutual technical assistance in response to incident response requests from EU member states, or from associated third countries of the Digital Europe Programme, in case of a large-scale incident.
Finally, it also provides for the establishment of a European Cybersecurity Incident Review Mechanism to monitor the impact of the measures under this law.
Key Themes
- Greater Integration: The success of this Act depends on the quality of cooperation and interoperability between various governmental stakeholders across defence, diplomacy, etc. with regard to data formats, taxonomy, data handling and data analytics tools. For example, Cross-Border Cyber Hubs are mandated to take the interoperability guidelines set by the European Union Agency for Cybersecurity (ENISA) as a starting point for information-sharing principles with each other.
- Public-Private Collaboration: The Act provides a framework to govern relationships between stakeholders such as the public sector, the private sector, academia, civil society and the media, recognising that public-private collaboration is crucial for strengthening the EU's cyber resilience. In this regard, National Cyber Hubs are proposed to carry out the strengthening of information sharing between public and private entities.
- Centralized Regulation: The Act aims to strengthen cyber solidarity across the EU by outlining dedicated infrastructure for improved coordination and intelligence-sharing regarding cyber events among member states. The tools, infrastructure and services are to be procured through equal matching contributions from each selected member state and the European Cybersecurity Competence Centre, a body tasked with funding cybersecurity projects in the EU.
- Setting a Global Standard: The rationale behind strengthening cybersecurity in the EU is not only to protect EU citizens from cyber threats to their fundamental rights, but also to set world-class standards for the cybersecurity of essential and critical services, standards that several other countries look to when shaping their own regulations.
Conclusion
In the current digital landscape, governments, businesses, critical sectors and people are increasingly interconnected through information and network connection systems and are using emerging technologies like AI, exposing them to multidimensional vulnerabilities in cyberspace. The EU in this regard continues to be a leader in setting standards for the safety of participants in the digital arena through regulations regarding cybersecurity. The Cyber Solidarity Act’s design including cross-border cooperation, public-private collaboration, and proactive incident-monitoring and response sets a precedent for a unified approach to cybersecurity. As the EU’s Cybersecurity Package continues to evolve, it will play a crucial role in ensuring a secure and resilient digital future for all.
Sources
- https://www.consilium.europa.eu/en/press/press-releases/2024/12/02/cybersecurity-package-council-adopts-new-laws-to-strengthen-cybersecurity-capacities-in-the-eu/
- https://data.consilium.europa.eu/doc/document/PE-94-2024-INIT/en/pdf
- https://digital-strategy.ec.europa.eu/en/policies/cybersecurity-strategy
- https://www.weforum.org/stories/2024/10/cybersecurity-regulation-changes-nis2-eu-2024/

Introduction
In today’s hyper-connected world, information spreads faster than ever before. But while much attention is focused on public platforms like Facebook and Twitter, a different challenge lurks in the shadows: misinformation circulating on encrypted and closed-network platforms such as WhatsApp and Telegram. Unlike open platforms where harmful content can be flagged in public, private groups operate behind a digital curtain. Here, falsehoods often spread unchecked, gaining legitimacy because they are shared by trusted contacts. This makes encrypted platforms a double-edged sword: they are essential for privacy and free expression, yet uniquely vulnerable to misuse.
As Prime Minister Narendra Modi rightly reminded,
“Think 10 times before forwarding anything,” warning that even a “single fake news has the capability to snowball into a matter of national concern.”
The Moderation Challenge with End-to-End Encryption
Encrypted messaging platforms were built to protect personal communication. Yet, the same end-to-end encryption that shields users’ privacy also creates a blind spot for moderation. Authorities, researchers, and even the platforms themselves cannot view content circulating in private groups, making fact-checking nearly impossible.
Trust within closed groups makes the problem worse. When a message comes from family, friends, or community leaders, people tend to believe it without questioning and quickly pass it along. Features like large group chats, broadcast lists, and “forward to many” options further speed up its spread. Unlike open networks, there is no public scrutiny, no visible counter-narrative, and no opportunity for timely correction.
During the COVID-19 pandemic, false claims about vaccines spread widely through WhatsApp groups, undermining public health campaigns. Even more alarming, WhatsApp rumors about child kidnappers and cow meat in India triggered mob lynchings, leading to the tragic loss of life.
Encrypted platforms, therefore, represent a unique challenge: they are designed to protect privacy, but, unintentionally, they also protect the spread of dangerous misinformation.
Approaches to Curbing Misinformation on End-to-End Platforms
- Regulatory: Governments worldwide are exploring ways to access encrypted data on messaging platforms, creating tensions between the right to user privacy and crime prevention. Approaches like traceability requirements on WhatsApp, data-sharing mandates for platforms in serious cases, and stronger obligations to act against harmful viral content are also being considered.
- Technological Interventions: Platforms like WhatsApp have introduced features such as “forwarded many times” labels and limits on mass forwarding. These tools can be expanded further by introducing AI-driven link-checking and warnings for suspicious content.
- Community-Based Interventions: Ultimately, no regulation or technology can succeed without public awareness. People need to be inoculated against misinformation through pre-bunking efforts and digital literacy campaigns, and taught to use fact-checking websites and tools.
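The AI-driven link-checking mentioned among the technological interventions can be illustrated with a few simple heuristics. The sketch below is a hypothetical example, not any platform's actual rules; the shortener list and patterns are assumptions chosen for illustration:

```python
import re
from urllib.parse import urlparse

# Illustrative list of common URL shorteners that hide the real destination.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

def link_warnings(url):
    """Return a list of human-readable warnings for a suspicious-looking URL.

    A real system would combine many more signals (reputation feeds,
    machine-learned classifiers); these checks only sketch the idea.
    """
    host = urlparse(url).hostname or ""
    warnings = []
    if host in SHORTENERS:
        warnings.append("shortened URL hides the real destination")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode domain may imitate a familiar site")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        warnings.append("raw IP address instead of a domain name")
    if host.count(".") >= 4:
        warnings.append("unusually deep subdomain chain")
    return warnings

print(link_warnings("https://bit.ly/3xYz"))           # flags the shortener
print(link_warnings("https://www.bbc.com/news"))      # [] - nothing suspicious
print(link_warnings("http://192.168.0.1/login"))      # flags the raw IP host
```

A messaging client could surface such warnings next to a forwarded link before the user opens it, without ever decrypting or reading the message content itself.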
Best Practices for Netizens
Experts recommend simple yet powerful habits that every user can adopt to protect themselves and others. By adopting these, ordinary users can become the first line of defence against misinformation in their own communities:
- Cross-Check Before Forwarding: Verify claims from trusted platforms & official sources.
- Beware of Sensational Content: Headlines that sound too shocking or dramatic probably need checking. Consult multiple sources for a piece of news; if only one platform or channel is carrying a sensational story, it is likely to be clickbait or outright false.
- Stick to Trusted News Sources: Verify news through national newspapers and expert commentary. Remember, not everything on the internet/television is true.
- Look Out for Manipulated Media: Now, with AI-generated deepfakes, it becomes more difficult to tell the difference between original and manipulated media. Check for edited images, cropped videos, or voice messages without source information. Always cross-verify any media received.
- Report Harmful Content: Report misinformation to the platform it is being circulated on and PIB’s Fact Check Unit.
Conclusion
In closed, unmonitored groups, platforms like WhatsApp and Telegram often become safe havens where people trust and forward messages from friends and family without question. Once misinformation takes root, it becomes extremely difficult to challenge or correct, and over time, such actions can snowball into serious social, economic and national concerns.
Preventing this is a matter of shared responsibility. Governments can frame balanced regulations, but individuals must also take initiative: pause, think, and verify before sharing. Ultimately, the right to privacy must be upheld, but with reasonable safeguards to ensure it is not misused at the cost of societal trust and safety.
References
- India WhatsApp ‘child kidnap’ rumours claim two more victims (BBC)
- The people trying to fight fake news in India (BBC)
- Press Information Bureau – PIB Fact Check
- Brookings Institution – Encryption and Misinformation Report (2021)
- Curtis, T. L., Touzel, M. P., Garneau, W., Gruaz, M., Pinder, M., Wang, L. W., Krishna, S., Cohen, L., Godbout, J.-F., Rabbany, R., & Pelrine, K. (2024). Veracity: An Open-Source AI Fact-Checking System. arXiv.
- NDTV – PM Modi cautions against fake news (2022)
- Times of India – Govt may insist on WhatsApp traceability (2019)
- Medianama – Telegram refused to share ISIS channel data (2019)

Introduction
Misinformation is rampant all over the world and is impacting people at large. In 2023, UNESCO commissioned a survey, conducted by IPSOS, on the impact of fake news. The survey covered 16 countries due to hold national elections in 2024, with a total of 2.5 billion voters, and found that 85% of people are apprehensive about the repercussions of online disinformation or misinformation, showing how pressing the need for effective regulation has become. In light of these worries, UNESCO has introduced a plan to regulate social media platforms, which have become major sources of misinformation and hate speech online. This action plan is supported by the worldwide opinion survey, highlighting the urgent need for strong action. It outlines the fundamental principles that must be respected and concrete measures to be implemented by all stakeholders involved, i.e., governments, regulators, civil society and the platforms themselves.
The Key Areas in Focus of the Action Plan
The action plan focuses on protecting freedom of expression, access to information and other human rights in digital platform governance. It works on the basic premise that the impact on human rights should be the compass for all decision-making, at every stage and by every stakeholder. It calls for groups of independent regulators to work in close coordination as part of a wider network, to prevent digital companies from taking advantage of disparities between national regulations, and for content moderation to be feasible and effective at the required scale, in all regions and all languages.
The algorithms of these online platforms, particularly social media platforms, are too often geared towards maximizing engagement rather than the reliability of information. Platforms are therefore required to take more initiative to educate and train users to be critical thinkers rather than passive consumers. Regulators and platforms are also in a position to take strong measures during particularly sensitive periods, ranging from elections to crises, when information overload is at its worst.
Key Principles of the Action Plan
- Human Rights Due Diligence: Platforms are required to assess their impact on human rights, including gender and cultural dimensions, and to implement risk mitigation measures. This would ensure that the platforms are responsible for educating users about their rights.
- Adherence to International Human Rights Standards: Platforms must align their design, content moderation, and curation with international human rights standards. This includes ensuring non-discrimination, supporting cultural diversity, and protecting human moderators.
- Transparency and Openness: Platforms are expected to operate transparently, with clear, understandable, and auditable policies. This includes being open about the tools and algorithms used for content moderation and the results they produce.
- User Access to Information: Platforms should provide accessible information that enables users to make informed decisions.
- Accountability: Platforms must be accountable to their stakeholders which would include the users and the public, which would ensure that redressal for content-related decisions is not compromised. This accountability extends to the implementation of their terms of service and content policies.
Enabling Environment for the application of the UNESCO Plan
The UNESCO action plan to counter misinformation seeks to create an environment where freedom of expression and access to information flourish, while ensuring safety and security for digital platform users and non-users alike. This endeavour calls for collective action: societies as a whole must work together, and all relevant stakeholders, from vulnerable groups to journalists and artists, must be enabled to exercise the right to expression.
Conclusion
The UNESCO Action Plan is a response to the dilemma created by information overload, in which the distinction between information and misinformation has become so clouded. The IPSOS survey has revealed the urgency of addressing these challenges for users who fear the repercussions of misinformation.
The UNESCO action plan provides a comprehensive framework that emphasises the protection of human rights, particularly freedom of expression, while also emphasizing the importance of transparency, accountability, and education in the governance of digital platforms as a priority. By advocating for independent regulators and encouraging platforms to align with international human rights standards, UNESCO is setting the stage for a more responsible and ethical digital ecosystem.
The recommendations include integrating regulators through collaboration and promoting global cooperation to harmonize regulations, expanding digital literacy campaigns to educate users about misinformation risks and online rights, ensuring inclusive access to diverse content in multiple languages and contexts, and monitoring and refining technological advancements and regulatory strategies as challenges evolve, ultimately promoting a trustworthy online information landscape.
Reference
- https://www.unesco.org/en/articles/online-disinformation-unesco-unveils-action-plan-regulate-social-media-platforms
- https://www.unesco.org/sites/default/files/medias/fichiers/2023/11/unesco_ipsos_survey.pdf
- https://dig.watch/updates/unesco-sets-out-strategy-to-tackle-misinformation-after-ipsos-survey