#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to show a poster with the words "I Told Modi" has gone viral, improperly connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip is presented as a mockery of Operation Sindoor, the counterterrorism operation India launched in response to the attack. By spreading misleading propaganda and drawing attention away from real events, this misinformation underscores how crucial it is to verify content before sharing it online.
Claim:
A man can be seen changing a poster that says "Tell Modi" to one that says "I Told Modi" in a widely shared viral video. This video allegedly makes reference to Operation Sindoor in India, which was started in reaction to the Pahalgam terrorist attack on April 22, 2025, in which militants connected to The Resistance Front (TRF) killed 26 civilians.


Fact check:
Upon further research, we found the original post from Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Using Hive Moderation to detect AI manipulation in the video, we determined that it has been modified with AI-generated content, presenting false or misleading information that does not reflect real events.

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster is part of a public demonstration is untrue. The video is manipulated footage from a Marvel film, with the text digitally altered to deceive viewers. The content has been identified as false information and should be disregarded.
- Claim: A viral video shows a poster being changed from ‘Tell Modi’ to ‘I Told Modi’ in reference to Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals can rapidly amplify the spread of an infection across a huge population. The ability of a handful of accounts to impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from chasing social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation about different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation is sent by a small percentage of a given user base. The researchers collected 10 months of data comprising 2,397,388 tweets flagged as having low credibility, sent by 448,103 users on Twitter (now X), along with details on who was sending them. They found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] The study concluded that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an effect the researchers attribute to these superspreaders.
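The kind of concentration the study reports can be illustrated with a small sketch. The helper below (the function name and toy data are hypothetical, not from the study) computes the share of flagged tweets attributable to the k most active accounts:

```python
from collections import Counter

def top_account_share(tweet_authors, k):
    """Fraction of all tweets posted by the k most active accounts.

    tweet_authors: iterable of account IDs, one entry per tweet.
    """
    counts = Counter(tweet_authors)
    total = sum(counts.values())
    # Sum the tweet counts of the k most prolific accounts.
    top_k = sum(n for _, n in counts.most_common(k))
    return top_k / total

# Hypothetical toy sample: account "a" dominates a 10-tweet flagged set.
tweets = ["a"] * 6 + ["b"] * 2 + ["c", "d"]
print(round(top_account_share(tweets, 1), 2))  # 0.6
print(round(top_account_share(tweets, 2), 2))  # 0.8
```

Applied to a real dataset of flagged posts, the same calculation would reproduce figures like "10 accounts produced a third of the tweets."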
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, as any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "disinformation dozen," a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large amount of anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also cases where users unknowingly spread misinformation by forwarding content that does not come from the original source but is propagated by amplifiers, using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify such messages on their own pages, which is where the content takes off. Such users are not necessarily the ones creating or deliberately popularising the misinformation, but because of their broad reach they are the ones who expose the most people to it. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their credibility and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can latch onto trending topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can also contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There have been reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
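As an illustration only, the sketch below scores accounts on a few of the behavioural signals described above (posting rate, account age, share of near-duplicate posts). The thresholds and weights are invented for demonstration and do not represent any platform's real detection model:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float    # average posting rate
    age_days: int           # account age
    duplicate_ratio: float  # share of posts that are near-duplicates (0..1)

def bot_score(acc: Account) -> float:
    """Crude 0..1 heuristic: high-volume, young, repetitive accounts score higher.

    Thresholds are illustrative, not empirically calibrated.
    """
    score = 0.0
    if acc.posts_per_day > 50:       # sustained superhuman posting rate
        score += 0.4
    if acc.age_days < 30:            # newly created account
        score += 0.3
    if acc.duplicate_ratio > 0.5:    # mostly copy-pasted content
        score += 0.3
    return score

suspicious = Account(posts_per_day=120, age_days=7, duplicate_ratio=0.8)
print(round(bot_score(suspicious), 2))  # 1.0
```

Real bot-detection systems combine many more signals (network structure, timing patterns, coordinated behaviour) and learned models rather than fixed thresholds; this sketch only shows the shape of the feature-scoring idea.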
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media, as it might attempt to provoke strong reactions or mould public opinion. Netizens must question the credibility of information, verify its sources, and develop the cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting prebunking and debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check and debunk any misinformation they encounter. They can rely on reputable fact-checking experts/entities who regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and working on detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, report suspect content, and implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
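The "recognise repetitive posting of misinformation" recommendation above can be sketched as a simple flagging rule. This assumes a platform can already label individual posts as misinformation; the account IDs, thresholds, and helper name are hypothetical:

```python
def flag_repeat_spreaders(posts, min_posts=20, flagged_rate=0.5):
    """Flag accounts whose share of flagged posts exceeds a threshold.

    posts: list of (account_id, was_flagged: bool) pairs.
    min_posts guards against flagging accounts with too little history.
    Both thresholds are illustrative placeholders, not platform policy.
    """
    totals, flagged = {}, {}
    for account, was_flagged in posts:
        totals[account] = totals.get(account, 0) + 1
        flagged[account] = flagged.get(account, 0) + int(was_flagged)
    return sorted(
        acc for acc, n in totals.items()
        if n >= min_posts and flagged[acc] / n >= flagged_rate
    )

# Toy history: "x" posts mostly flagged content, "y" rarely does,
# and "z" has too few posts to judge.
posts = ([("x", True)] * 20 + [("x", False)] * 5 +
         [("y", True)] * 2 + [("y", False)] * 23 +
         [("z", True)] * 5)
print(flag_repeat_spreaders(posts))  # ['x']
```

A production system would of course layer human review on top of any such automated flag before blocking an account.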
References:
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201 [1]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [2]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [3]
- https://counterhate.com/research/the-disinformation-dozen/ [4]
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html

Introduction
As digital platforms rapidly become repositories of information related to health, YouTube has emerged as a trusted source people look to for answers. To counter rampant health misinformation online, the platform launched YouTube Health, a program aiming to make “high-quality health information available to all” by collaborating with health experts and content creators. While this is an effort in the right direction, the program needs to be tailored to the specificities of the Indian context if it aims to transform healthcare communication in the long run.
The Indian Digital Health Context
India’s growing internet penetration and lack of accessible healthcare infrastructure, especially in rural areas, encourage a reliance on digital platforms for health information. However, these platforms, especially social media, are rife with misinformation. Compounded by varying literacy levels, access disparities, and a lack of digital awareness, health misinformation can lead to serious negative health outcomes. The report ‘Health Misinformation Vectors in India’ by DataLEADS suggests a growing reluctance surrounding conventional medicine, with people looking for affordable and accessible natural remedies instead, a shift that social media helps facilitate. Media-sharing platforms such as WhatsApp, YouTube, and Facebook host a large chunk of health misinformation. The report identifies cancer, reproductive health, vaccines, and lifestyle diseases as four key areas susceptible to misinformation in India.
YouTube’s Efforts in Promoting Credible Health Content
YouTube Health aims to provide evidence-based health information with “digestible, compelling, and emotionally supportive health videos,” from leading experts to everyone irrespective of who they are or where they live. So far, it executes this vision through:
- Content Curation: The platform has health source information panels and content shelves highlighting videos regarding 140+ medical conditions from authority sources like All India Institute of Medical Sciences (AIIMS), National Institute of Mental Health and Neurosciences (NIMHANS), Max Healthcare etc., whenever users search for health-related topics.
- Localization Strategies: The platform offers multilingual health content in regional languages such as Hindi, Tamil, Telugu, Marathi, Kannada, Malayalam, Punjabi, and Bengali, apart from English. This is to help health information reach viewers across most of the country.
- Verification of professionals: Healthcare professionals and organisations can apply to YouTube’s health feature for their videos to be authenticated as an authority health source on the platform and for their videos to show up on the ‘Health Sources’ shelf.
Challenges
- Limited Reach: India has a diverse linguistic ecosystem. While health information is made available in nine languages, this is not enough to reach everyone in the country, and efforts to reach more people in vernacular languages need to be ramped up. Further, while health content on YouTube drew around 50 billion views in 2023, it is difficult to measure the on-ground outcomes of those views.
- Lack of Digital Literacy: Misinformation on digital platforms cannot be entirely curtailed owing to the way algorithms are designed to enhance user engagement. However, uploading authoritative health information as a solution may not be enough, if users lack awareness about misinformation and the need to critically evaluate and trust only credible sources. In India, this critical awareness remains largely underdeveloped.
Conclusion
Considering that India has over 450 million YouTube users, by far the highest number in any country in the world, the platform has recognized that it can play a transformative role in the country’s digital health ecosystem. To accomplish its mission “to combat the societal threat of medical misinformation,” YouTube will have to continue to take proactive measures. There is scope for strengthening collaborations with Indian public health agencies and trusted public figures, national and regional, to provide credible health information to all. The approach will have to be tailored to India’s vast linguistic diversity by encouraging capacity-building for vernacular creators to produce credible content. Finally, multiple stakeholders will need to come together to promote digital literacy through education campaigns about identifying trustworthy sources.
Sources
- https://indianexpress.com/article/technology/tech-news-technology/youtube-health-dr-garth-graham-interview-9746673/
- https://economictimes.indiatimes.com/news/india/cancer-misinformation-extremely-prevalent-in-india-trust-in-science-medicine-crucial-report/articleshow/115931783.cms?from=mdr
- https://health.youtube/our-mission/
- https://health.youtube/features-application/
- https://backlinko.com/youtube-users
Introduction
Google is set to change its storage and access of users' "Location History" in Google Maps, reducing the data retention period and making it impossible for the company to access it. This change will significantly impact "geofence warrants," a controversial legal tool used by authorities to force Google to hand over information about all users within a given location during a specific timeframe. This decision is a significant win for privacy advocates and criminal defense attorneys who have long decried these warrants.
The company aims to protect people's privacy by removing the repository of location data dating back months or years. Geofence warrants, which provide police with sensitive data on individuals, are considered dangerous and could turn innocent people into suspects.
Understanding Geofence Warrants
Geofence warrants, also known as reverse-location warrants, are used by law enforcement agencies to obtain locational data stored by tech companies within a specified geographical area and timeframe in order to identify devices near a crime scene. In contrast to conventional warrants, which allow law enforcement agencies to obtain the data of one individual (usually the suspect), geofence warrants enable authorities to obtain data on all individuals in a specific location and subsequently track and trace any device that may be linked to the crime. Their increasing use to extract location data from tech companies has made them a major point of contention.
Privacy Concerns of Geofence Warrants
While geofence warrants allow law enforcement agencies to identify potential suspects, they have sparked controversy for their invasive characteristics. Civil rights activists and various technology companies have raised concerns over the impact of these warrants on the rights of data principals. It is noted that geofence warrants mark a rise in state surveillance and police harassment: not only is any data principal in the vicinity of the crime scene classified as a potential suspect, but companies are also compelled to submit identifying personal data on every device/phone in a marked geographic space.
From Surveillance to Safeguards
Geofence warrants have become a contentious tool for law enforcement worldwide, raising concerns over privacy and civil liberties, especially in sensitive situations like protests and healthcare. By allowing users to store their location data on their own devices, Google could effectively end the use of geofence warrants against its users.
Google is changing its handling of Location History data, storing it on-device instead of on its servers, and the default data retention period will be reduced. Google Maps' product director, Marlo McGriff, stated that for cloud backups the company will automatically encrypt the backed-up data so that no one else can read it. Google confirmed that once these changes are implemented it will no longer be able to respond to new geofence warrants, as it will not have access to the relevant data; the changes were designed to put an end to dragnet searches of location data.
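The shortened retention window amounts to a simple on-device pruning step, which can be sketched as follows. The 90-day window and the data layout are assumptions for illustration, not Google's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical shortened retention window for on-device history.
RETENTION = timedelta(days=90)

def prune_history(points, now=None):
    """Keep only location points newer than the retention window.

    points: list of (timestamp, place) pairs stored on-device.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [(ts, place) for ts, place in points if ts >= cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
history = [
    (datetime(2024, 5, 20, tzinfo=timezone.utc), "cafe"),   # 12 days old: kept
    (datetime(2024, 1, 1, tzinfo=timezone.utc), "office"),  # ~5 months old: pruned
]
print([place for _, place in prune_history(history, now=now)])  # ['cafe']
```

Because the pruning (and any encryption of backups) happens client-side, the server never holds a long-lived archive that a warrant could compel it to search.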
Conclusion
Google's decision to change storage and access policies for users' location history in Google Maps marks a pivotal step in the ongoing narrative of law enforcement's misuse of geofence warrants. This move aims to safeguard individual privacy by significantly restricting the data retention period and limiting Google's ability to comply with geofence warrants. This change is welcomed by privacy advocates and legal professionals who express concerns over the intrusive nature of these warrants, which may potentially turn innocent individuals into suspects based on their proximity to a crime scene. As technology companies take steps to enhance user privacy, the evolving landscape calls for a balance between law enforcement needs and protecting individual rights in an era of increasing digital surveillance.
References:
- https://telecom.economictimes.indiatimes.com/news/internet/google-to-end-geofence-warrant-requests-for-users-location-data/106081499
- https://www.forbes.com/sites/cyrusfarivar/2023/12/14/google-just-killed-geofence-warrants-police-location-data/?sh=313da3c32c86
- https://timesofindia.indiatimes.com/gadgets-news/explained-how-google-maps-is-preventing-authorities-from-accessing-users-location-history-data/articleshow/106086639.cms