#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to show a poster with the words "I Told Modi" has gone viral, improperly connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip is being presented as a mockery of Operation Sindoor, the counterterrorism operation India launched in response to the attack. The episode underscores how crucial it is to verify information before sharing it online, as such manipulated content spreads misleading propaganda and draws attention away from real events.
Claim:
A widely shared viral video shows a man changing a poster that says "Tell Modi" to one that says "I Told Modi". The video is claimed to reference Operation Sindoor, which India launched in reaction to the Pahalgam terrorist attack of April 22, 2025, in which militants linked to The Resistance Front (TRF) killed 26 civilians.


Fact check:
On further research, we found the original post on Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Using Hive Moderation to detect AI manipulation in the video, we determined that it has been modified with AI-generated content and presents false or misleading information that does not reflect real events.
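For readers curious about how such automated checks work in practice, the snippet below is a minimal, illustrative sketch of submitting a single extracted video frame to an AI-content-detection HTTP API. The endpoint URL, request fields, and response keys shown here are assumptions for illustration only and are not Hive Moderation's documented API.

```python
# Illustrative only: the endpoint, headers, and response fields below are
# hypothetical placeholders, not Hive Moderation's documented API.
import requests

API_URL = "https://example-detector.invalid/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                 # hypothetical credential

def check_frame_for_ai_content(frame_path: str) -> dict:
    """Upload one extracted video frame and return the detector's verdict."""
    with open(frame_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    response.raise_for_status()
    # Hypothetical response schema, e.g. {"ai_generated_score": 0.97}
    return response.json()

if __name__ == "__main__":
    verdict = check_frame_for_ai_content("frame_0001.jpg")
    score = verdict.get("ai_generated_score", 0)
    print("Likely AI-generated" if score > 0.5 else "No strong AI signal")
```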

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster was part of any real public display or demonstration is untrue. The video is manipulated footage from a Marvel film, and the text has been digitally altered to deceive viewers. The content has been identified as false information and should not be shared.
- Claim: A viral video shows a man changing a "Tell Modi" poster to one reading "I Told Modi", in reference to Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The much-awaited draft DPDP Rules were finally published in the official Gazette on 3rd January 2025 for consultation. The draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) invite objections and suggestions from stakeholders, which can be submitted on MyGov (https://mygov.in) by 18th February 2025.
DPDP Rules at a Glance
- Processing of Children's Data: The draft rules state that ‘A Data Fiduciary shall adopt appropriate technical and organisational measures to ensure that verifiable consent of the parent is obtained before the processing of any personal data of a child’. This entails that children below 18 will need parental consent to create social media accounts.
- The identity and age of the parent can be verified through reliable details of identity and age already available with the Data Fiduciary, through voluntarily provided identity proof, or through a virtual token mapped to such proof. Data Fiduciaries are also required to exercise due diligence to check that the individual identifying themselves as the parent is an adult who is identifiable, if required, in connection with compliance with any law in force in India. Additionally, the government will extend exemptions from these specific provisions on the processing of children's data to educational institutions and child welfare organisations.
- Processing of Personal Data Outside India: The draft rules specify that the transfer of personal data outside India, whether it is processed within the country or outside in connection with offering goods or services to individuals in India, is permitted only if the Data Fiduciary complies with the conditions prescribed by the Central Government through general or specific orders.
- Intimation of Personal Data Breach: On becoming aware of a personal data breach, the Data Fiduciary must promptly notify the affected Data Principals in a clear and concise manner through their user account or registered communication method. This notification should include a description of the breach (nature, extent, timing, and location), potential consequences for the Data Principal, measures taken or planned to mitigate risks, recommended safety actions for the Data Principal, and contact information of a representative who can address queries. Additionally, the Data Fiduciary must inform the Board without delay, providing details of the breach, its likely impact, and initial findings. Within 72 hours (or a longer period allowed by the Board on request), the Data Fiduciary must submit updated information, including the facts and circumstances of the breach, mitigation measures, findings about the cause, steps to prevent recurrence, and a report on the notifications given to affected Data Principals. (An illustrative sketch of such a notification record follows this list.)
- Data Protection Board: The draft rules propose establishing the Data Protection Board, which will function as a digital office, enabling remote hearings, and will hold powers to investigate breaches, impose penalties, and perform related regulatory functions.
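To make the breach-notification requirements above more concrete, here is a minimal sketch of how a Data Fiduciary might structure a breach-intimation record covering the details the draft rules list (nature, extent, timing, location, likely impact, mitigation, contact point, and the 72-hour follow-up to the Board). The field names are assumptions chosen for illustration, not an official schema.

```python
# Illustrative breach-intimation record; field names mirror the details listed
# in the draft DPDP Rules but are not an official or mandated schema.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BreachIntimation:
    description: str          # nature, extent, timing, and location of the breach
    likely_consequences: str  # potential impact on the Data Principal
    mitigation_measures: str  # steps taken or planned to reduce risk
    recommended_actions: str  # safety measures suggested to the Data Principal
    contact_person: str       # representative who can answer queries
    detected_at: datetime     # when the Data Fiduciary became aware of the breach

    def board_update_deadline(self) -> datetime:
        """Updated details must reach the Board within 72 hours of becoming
        aware, unless the Board allows a longer period on request."""
        return self.detected_at + timedelta(hours=72)

# Example usage
notice = BreachIntimation(
    description="Unauthorised access to the email-address table of the app database",
    likely_consequences="Possible phishing attempts against affected users",
    mitigation_measures="Credentials rotated; access tokens revoked",
    recommended_actions="Reset your password and watch for suspicious emails",
    contact_person="privacy-officer@example.com",
    detected_at=datetime(2025, 1, 3, 10, 0),
)
print("Board update due by:", notice.board_update_deadline())
```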
Journey of Digital Personal Data Protection Act, 2023
The foundation for a single-statute legislation on data protection was laid down in 2017 in the famous ‘Puttaswamy judgment,’ which is also widely recognised as the Aadhaar judgment. In this case, ‘privacy’ was recognised as intrinsic to the right to life and personal liberty guaranteed by Article 21 of the Constitution of India, thus making the ‘Right to Privacy’ a fundamental right. In the landmark Puttaswamy ruling, the apex court of India stressed the need for a comprehensive data protection law.
Six years on and several draft bills later, the Union Cabinet approved the Digital Personal Data Protection Bill (DPDP) on 5th July 2023. The bill was tabled in the Lok Sabha on 3rd August 2023, passed by the Lok Sabha on 7th August and by the Rajya Sabha on 9th August, and received the President's assent on 11th August 2023, with which India finally enacted the ‘Digital Personal Data Protection Act, 2023’. This is a significant development that has the potential to bring about major improvements in online privacy and in how platforms handle digital personal data.
The Digital Personal Data Protection Act, 2023, is a newly enacted legislation designed to protect individuals' digital personal data. It aims to ensure compliance by Data Fiduciaries and imposes specific obligations on both Data Principals and Data Fiduciaries. The Act promotes consent-based data collection practices and establishes the Data Protection Board to oversee compliance and address grievances. Additionally, it includes provisions for penalties of up to ₹250 crore in the event of a data breach. However, despite the DPDP Act being passed by Parliament in 2023, it has not yet taken effect since its rules and regulations are still not finalised.
Conclusion
It is heartening to see that the Ministry of Electronics and Information Technology (MeitY) has finally released the draft of the much-awaited DPDP Rules for consultation with stakeholders. While the draft has several positive aspects, there are still gaps and multiple aspects of the rules that require attention. The public consultation, including inputs from tech platforms, is likely to generate critical feedback on several provisions. One key area of interest will be the requirement of verifiable parental consent, which is likely to attract recommendations for a balanced approach that maintains children's safety while keeping the consent-verification mechanism workable. The provisions permitting government access to personal data on grounds of national security are also expected to face scrutiny. The proposed rules will be taken up for finalisation after the consultation process closes on 18th February 2025. The move towards establishing a robust data protection law in India signals a significant step toward enhancing trust and accountability in the digital ecosystem. However, its success will hinge on effective implementation, clear compliance mechanisms, and the adaptability of stakeholders to this evolving regulatory landscape.

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly being provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) processes to simulate conversations between the platform and a user. Thus, AI chatbots can provide mental health support from the comfort of the home, at any time of the day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots can present technical and ethical challenges as well.
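As a rough illustration of the mechanics described above, the sketch below wraps an off-the-shelf text-generation model in a simple chat loop. It is a minimal demonstration of how such conversational interfaces are built, assuming the Hugging Face transformers library and a small general-purpose model; it is not a mental health tool and makes no clinical claims.

```python
# Minimal illustrative chat loop around a general-purpose text-generation model.
# This sketches the underlying mechanics only; it is not a mental health product.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small demo model

def reply(history: str, user_message: str) -> str:
    """Generate a continuation of the conversation and return only the new text."""
    prompt = f"{history}\nUser: {user_message}\nAssistant:"
    output = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    return output[len(prompt):].strip()

history = ""
for message in ["I have been feeling anxious lately.", "What can I do about it?"]:
    answer = reply(history, message)
    history += f"\nUser: {message}\nAssistant: {answer}"
    print("User:", message)
    print("Bot:", answer)
```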
Background
According to the WHO’s World Mental Health Report of 2022, 1 in 8 people globally is estimated to be living with some form of mental health disorder. The need for mental health services worldwide is high, but the supply of a care ecosystem is inadequate in terms of both availability and quality. In India, it is estimated that there are only 0.75 psychiatrists per 100,000 people, and only 30% of mental health patients get help. Considering the slow thawing of social stigma around mental health, especially among younger demographics, and the fact that support services remain largely confined to urban Indian centres, demand in the telehealth market is only projected to grow. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and shortcomings in AI-driven support can exacerbate their distress for some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
Way Forward
- Addressing Therapeutic Misconception: A lack of understanding of the purpose and capabilities of such chatbots, in terms of the care expectations and treatments they can offer, can jeopardize user health. Platforms providing such services should be mandated to display disclaimers about the limitations of the therapeutic relationship between the platform and its users, in a manner that is easy to understand.
- Improved Algorithm Design: The training data for these models must be regularly updated and audited to enhance accuracy, incorporate contextual socio-cultural factors into profile analysis, and draw on feedback loops from users and mental health professionals.
- Human Oversight: Models of therapy in which AI chatbots supplement treatment instead of replacing human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required, as sketched below.
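As a simple illustration of the escalation point in the final bullet, here is a minimal sketch of a keyword-based risk screen that routes a conversation to a human counsellor instead of letting the chatbot respond. The keyword list and handler functions are assumptions for illustration; real systems would rely on far more robust classifiers and clinical review.

```python
# Illustrative escalation hook: route high-risk messages to a human counsellor
# instead of an automated reply. Keywords and handlers are placeholder assumptions.
RISK_KEYWORDS = {"suicide", "self-harm", "hurt myself", "end my life"}

def needs_human(message: str) -> bool:
    """Very crude risk screen; production systems would use trained classifiers."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

def generate_chatbot_reply(message: str) -> str:
    # Placeholder for the platform's usual model-generated response.
    return "Thanks for sharing. Could you tell me more about how you are feeling?"

def handle_message(message: str) -> str:
    if needs_human(message):
        # Hand the session to a human counsellor and surface crisis resources.
        return "Connecting you with a trained counsellor right away."
    return "Automated reply: " + generate_chatbot_reply(message)

print(handle_message("I feel like I might hurt myself"))
print(handle_message("Work has been stressful lately"))
```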
Conclusion
It is important to recognize that, so far, there is no substitute for professional mental health services. Chatbots can help users gain awareness of their mental health condition, play an educational role by nudging them in the right direction, and provide assistance to both the practitioner and the client or patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing, and arguably already critical, global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767#:~:text=Indian%20mental%20health%20market%20is,access%20to%20better%20quality%20services.
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full

Introduction
As our reliance on digital communication technologies increases, so do the risks associated with them. The propagation of false information is a significant concern. According to the World Economic Forum's Global Risks Report 2024, India ranks highest for the risk of misinformation and disinformation. Indian Vice President Shri Jagdeep Dhankhar emphasized the importance of transparency and accountability in the digital information age while addressing Indian Information Service officer trainees at the Vice President's Enclave on 18th June 2024. He highlighted the issue of widespread misinformation and the need to regulate it, stating, “Information is power, information is too dangerous a power, information is that power which has to be regulated.”
VP calls for regulation of the Information Landscape
The Vice President of India, Shri Dhankhar, called on young Indian Information Service officers to act swiftly to neutralize misinformation on social media. He emphasized the importance of protecting individuals and institutions from fake narratives set afloat online, and called on the officers to act as information warriors, safeguarding the privacy and reputation of affected individuals and institutions.
The VP also highlighted India's vibrant democracy and the need for trust in the government. He called for the neutralization of motivated narratives set by global media and stressed that India must not allow others to calibrate its narrative. He also emphasized the need to promote India's development story globally, highlighting its rich cultural heritage and diversity, and expressed the need to regulate information, saying, “Unregulated information & fake news can create a disaster of un-imaginable proportion.”
MeitY Advisory dated 1st March 2024
As regards the issue of misinformation, the recently issued advisory by the Ministry of Electronics and Information Technology (MeitY) specifies that all users should be well informed about the consequences of dealing with unlawful information on online platforms, including disabling of access, removal of non-compliant information, suspension or termination of the user's access or usage rights to their account, and punishment under applicable law. The advisory requires that users be clearly informed, through terms of service and user agreements, about the consequences of engaging with unlawful information on the platform. Measures to combat deepfakes and misinformation are also discussed: the advisory calls for identifying synthetically created content across various formats and advises platforms to employ labels, unique identifiers, or metadata to ensure transparency. Furthermore, it mandates the disclosure of software details and the tracing of the first originator of such synthetically created content.
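As an illustration of the labelling and unique-identifier measures the advisory describes, the sketch below attaches a simple provenance label and a content hash to a generated media file via a sidecar metadata record. The field names and the sidecar-file approach are assumptions for illustration, not a format mandated by the advisory.

```python
# Illustrative provenance labelling for synthetically generated media:
# compute a unique identifier (content hash) and write a sidecar metadata record.
# Field names and the sidecar-file approach are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def label_synthetic_media(media_path: str, generator_name: str) -> Path:
    data = Path(media_path).read_bytes()
    metadata = {
        "synthetic": True,                                   # explicit AI-generated label
        "unique_identifier": hashlib.sha256(data).hexdigest(),  # content hash
        "generator": generator_name,                          # software used to create it
    }
    sidecar = Path(str(media_path) + ".provenance.json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

# Example: label a generated image before it is published on a platform.
# print(label_synthetic_media("generated_image.png", "example-image-model-v1"))
```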
Conclusion
The battle against the growing incidence of misinformation and disinformation will not be easily won: developing a robust regulatory framework to counter online misinformation is essential. Alongside the regulatory framework, the government should encourage digital literacy campaigns, promote prebunking and debunking strategies, and collaborate with relevant stakeholders such as cybersecurity experts, fact-checking entities, researchers, and policy analysts to combat misinformation on the Internet. Vice President Jagdeep Dhankhar's statement underscores the need to regulate information to prevent the spread of fake news and misinformation.
References:
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2026304
- https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf