#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to feature a poster with the words "I Told Modi" has gone viral, improperly connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip is presented as a mockery of Operation Sindoor, the counterterrorism operation India initiated in response to the attack. By spreading misleading propaganda and drawing attention away from real events, this misinformation underscores how crucial it is to verify content before sharing it online.
Claim:
A man can be seen changing a poster that says "Tell Modi" to one that says "I Told Modi" in a widely shared viral video. This video allegedly makes reference to Operation Sindoor in India, which was started in reaction to the Pahalgam terrorist attack on April 22, 2025, in which militants connected to The Resistance Front (TRF) killed 26 civilians.


Fact check:
Upon further research, we found the original post from Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Using Hive Moderation to detect AI manipulation, we determined that the video has been modified with AI-generated content, presenting false or misleading information that does not reflect real events.
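Detectors of this kind typically return per-class confidence scores for a clip, and the editorial decision reduces to comparing the "AI-generated" score against a chosen cutoff. The response shape, field names, and 0.9 threshold below are illustrative assumptions for this sketch, not Hive Moderation's documented API:

```python
# Toy illustration of score-threshold flagging. The score dictionary's
# keys and the 0.9 cutoff are assumptions, not Hive's actual schema.

def is_likely_ai_generated(scores: dict, threshold: float = 0.9) -> bool:
    """Return True when the detector's 'ai_generated' confidence
    meets or exceeds the chosen threshold."""
    return scores.get("ai_generated", 0.0) >= threshold

# Hypothetical detector output for a manipulated clip:
sample_scores = {"ai_generated": 0.97, "not_ai_generated": 0.03}
print(is_likely_ai_generated(sample_scores))  # True at the 0.9 cutoff
```

In practice the threshold trades false positives against false negatives, which is why fact-checkers pair automated scores with manual verification against the original source.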

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster was part of a public demonstration is untrue. The video is manipulated footage from a Marvel film, with text digitally altered to deceive viewers. The content has been identified as false information and should be disregarded.
- Claim: A viral video shows a man changing a "Tell Modi" poster to one reading "I Told Modi", in reference to Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies can create manipulative audio and video content, propagate political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, giving way to the exploitation of content that already exists on the internet. A prime example is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track, let alone verify, what is true or false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly damaging consequences. Being literate in the traditional sense does not automatically confer the ability to parse the nuances of social media content, its authenticity, and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. The most common examples of misinformation relate to elections, public health, and communal issues. What connects these topics is that they evoke strong emotions, so such content can go viral very quickly and influence social behaviour, to the extent that it may lead to social unrest, political instability, and even violence. These developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI has entered the realm of autonomous content production and language creation, which links it directly to the problem of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with platforms and content, and has highlighted the need to rethink traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, combined with the massive volume of data produced, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of free speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds further layers to an already complex issue, as AI-generated misinformation can be tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies that cater to this multilingual population are therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework specific in its application to AI-generated content. It should include stricter penalties for the origination and dissemination of fake content, proportionate to its consequences, and establish clear, concise guidelines for social media platforms to ensure proactive detection and removal of AI-generated misinformation.
- Investing in tools that are driven by AI for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content.
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs that empower individuals by training them to evaluate online content. Educational programs in schools and communities can teach critical thinking and media literacy skills, enabling individuals to better discern real content from fake.
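One simple building block of the real-time flagging tools described above is matching incoming posts against claims a fact-checking team has already debunked. Production systems rely on ML classifiers and perceptual hashing; the text normalisation and the sample debunked claim below are illustrative assumptions:

```python
# Minimal sketch: fingerprint posts and match them against a store of
# already fact-checked claims. The normalisation rules and the sample
# debunked claim are assumptions for illustration only.
import hashlib
import re

def normalise(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so minor
    rewordings of a claim still hash to the same fingerprint."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def fingerprint(text: str) -> str:
    return hashlib.sha256(normalise(text).encode()).hexdigest()

# Hypothetical store of claims already debunked by fact-checkers.
debunked = {fingerprint("I Told Modi poster shown at a public demonstration")}

def flag(post: str) -> bool:
    """True when the post matches a known-debunked claim."""
    return fingerprint(post) in debunked

print(flag("'I Told Modi' poster shown at a public demonstration!"))  # True
print(flag("Undersea cable cut near Jeddah slows traffic"))           # False
```

Exact-match fingerprinting only catches near-verbatim reshares; real deployments add fuzzy matching and model-based classification to handle paraphrased or translated variants.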
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks it poses scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that focus not only on regulation and technological innovation but also on public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, and digital defence frameworks, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Introduction
The most recent cable outages in the Red Sea slowed traffic across the Middle East and South Asia; India, Pakistan, and several parts of the UAE, including the Etisalat and Du networks, experienced comparable disruptions. The episode serves as a reminder that the physical backbone of the internet is both routine and extremely important. When systems like SMW4 and IMEWE malfunction close to Jeddah, cloud platforms reroute traffic, e-commerce stalls, financial transactions stutter, and governments confront the fragility of something they long believed to be seamless. The incident has raised concerns over the susceptibility of undersea information highways, particularly given the ongoing conflict in the Red Sea region, where Yemen's Houthi rebels have been waging a campaign against commercial shipping in retaliation for the Israel-Hamas war in Gaza. The effects are felt immediately. These failures, which compelled key providers to reroute flows, reignited the debate over whether global connectivity is genuinely robust or merely operating on borrowed time.
What looks like a "technical glitch" is often a geopolitical signal. Accidents in contested waters are rarely simply accidents, and the inability to quickly assign blame highlights how brittle this ostensibly flawless digital world is.
The Paradox of Essential yet Exposed Infrastructure
This is not an isolated accident. Undersea cables, which carry more than 97% of all internet traffic worldwide, connect continents at the speed of light and support the cloud infrastructures that contemporary societies rely on, are the backbone of the digital economy, as NATO's Cooperative Cyber Defence Centre of Excellence has cautioned. In a sense, they are our unseen electrical grid; without them, connectivity breaks down. Yet despite their significance, they remain incredibly fragile. Anchors and fishing gear frequently damage cables, which are no thicker than a garden hose, and they break more than a hundred times a year on average. Most faults can be swiftly fixed or routed around, but when several cuts happen in strategic areas, as in the 2022 Tonga eruption or the current Red Sea crisis, nations and economies risk being isolated for days.
The geopolitical risks are far more urgent. Subsea cables traverse disputed waters, land in hostile regimes, and cross oceans without regard for political boundaries. This makes them appealing targets for espionage, where state actors can covertly tap or alter flows, as well as sabotage, where service is cut to deny access. NATO specialists have likened deliberate cable strikes to the destruction of bridges or highways: choke the arteries and you choke the economy. Ironically, the most susceptible locations are not far below the surface but where cables come ashore. These landing sites, which handle billions of dollars' worth of trade, can have less security than a conventional bank branch.
The New Theatre of Geopolitics
Legal frameworks exist, but they are patchwork. Intentional damage is illegal under the UN Convention on the Law of the Sea and previous agreements, but attribution is still infamously challenging. Covert sabotage and intelligence operations are examples of legal grey areas in hybrid warfare scenarios. Even during times of peace, national governments that rely on their continuous operation but find it difficult to extend sovereignty into international waters, private telecom consortia, and content giants like Google and Amazon that now finance their own cables share the burden of protection.
Cables convey influence in addition to data. Whoever can secure them, tap them, or cut them during a conflict holds strategic leverage.
India at the Crossroads of Digital Geopolitics
India’s reliance on underwater cables presents both advantages and disadvantages. India presents a classic single-point-of-failure danger, with more than 95% of its international data traffic being routed through a 6-km coastal stretch close to Versova, Mumbai. Red Sea disruptions have previously demonstrated how swiftly chokepoints located far from India’s coast may impede its digital arteries, placing a burden on government functions, defence communications, and financial flows. However, this same vulnerability also makes India a crucial player in the global discussion around digital sovereignty. It is not only an infrastructure exercise; it is also a strategic and constitutional necessity to be able to diversify landing places, expedite clearances, and develop indigenous repair capability.
India’s geographic location also presents opportunities. Its position along East-West cable routes makes it an ideal site for robust connectivity as the Indo-Pacific becomes the defining region of twenty-first-century geopolitics. By investing in distributed cable architecture and strengthening partnerships through initiatives like the Quad and IPEF, India can shift from being a passive recipient of connectivity to a shaper of its governance. Its aspirations for global influence must, however, be balanced against its domestic regulatory lethargy. In doing so, India can secure not only bandwidth but sovereignty itself, converting subsea cables from hidden liabilities into tools of economic might and geopolitical leverage.
CyberPeace Insights
If cables are considered essential infrastructure, then their safety demands the same level of attention that we give to ports, airports, and electrical grids. Stronger landing station defences, route redundancy, and genuine public-private collaboration are now a necessity rather than an option.
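The value of route redundancy can be made concrete with a toy graph of cable segments: a region stays reachable after a single link is cut only if at least one backup path exists. The topology below is invented for illustration, loosely inspired by the Red Sea and around-Africa routes:

```python
# Toy model of route redundancy over an invented cable topology.
# Each edge is a submarine cable segment; we check whether two regions
# remain connected when one segment is cut.
from collections import deque

# Hypothetical links, not real cable systems.
links = {
    ("Marseille", "Jeddah"), ("Jeddah", "Mumbai"),        # Red Sea route
    ("Marseille", "Cape Town"), ("Cape Town", "Mumbai"),  # around-Africa route
}

def connected(a: str, b: str, edges: set) -> bool:
    """Breadth-first search over undirected cable links."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nxt in adj.get(node, ()):  # enqueue unvisited neighbours
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Cutting the Jeddah-Mumbai segment: traffic can still reroute via Cape Town.
cut = links - {("Jeddah", "Mumbai")}
print(connected("Marseille", "Mumbai", cut))  # True, a backup path exists
```

The same check applied to a single-route topology returns False after one cut, which is the chokepoint risk the Versova landing stretch illustrates for India.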
The Red Sea incident is a call to action rather than a singular disruption. As reliance on the cloud grows and 5G spreads, the robustness of undersea cables will determine whether the internet remains a sustainable resource or becomes a brittle luxury, vulnerable to the next outage.
References
- https://forumias.com/blog/answered-assess-the-strategic-significance-of-undersea-cable-networks-for-indias-digital-economy-and-national-security-discuss-the-vulnerabilities-of-this-infrastructure-and-suggest-measures-to-e/
- https://www.reuters.com/world/middle-east/red-sea-cable-cuts-disrupt-internet-across-asia-middle-east-2025-09-07/
- https://pulse.internetsociety.org/blog/what-can-we-learn-from-africas-multiple-submarine-cable-outages
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images had appeared on the dark web. The UK’s National Crime Agency records 800 arrests a month for online threats to children and estimates that 840,000 adults are potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, to be included in the Crime and Policing Bill when it comes before Parliament in the coming weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report