#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to show a poster with the words "I Told Modi" has gone viral, falsely connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The clip, altered from Marvel Studios footage, is presented as a mockery of Operation Sindoor, the counterterrorism operation India launched in response to the attack. By spreading misleading propaganda and drawing attention away from real events, this piece of misinformation underscores how crucial it is to verify content before sharing it online.
Claim:
A widely shared viral video shows a man changing a poster that says "Tell Modi" to one that says "I Told Modi". The video is presented as a reference to Operation Sindoor, India's response to the Pahalgam terrorist attack of April 22, 2025, in which militants linked to The Resistance Front (TRF) killed 26 civilians.


Fact check:
On further research, we found the original post from Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

Running the video through Hive Moderation, a tool that detects AI manipulation, we determined that it has been modified with AI-generated content and presents false or misleading information that does not reflect real events.
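For readers who want to automate this kind of check, the sketch below shows the general shape of submitting a media file to an AI-content detection service over HTTP. The endpoint URL, field names, and response format here are placeholders, not Hive Moderation's actual API; consult the provider's official documentation for the real values.

```python
import requests

# PLACEHOLDER endpoint and response shape -- not Hive Moderation's real API.
# This only illustrates the general workflow of submitting media for
# AI-generated-content detection.
API_URL = "https://api.example-moderation.com/v1/detect"
API_KEY = "YOUR_API_KEY"

def check_ai_generated(video_path: str) -> float:
    """Submit a video file and return the reported AI-generated score (0-1)."""
    with open(video_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=120,
        )
    response.raise_for_status()
    # Assumed response shape for illustration: {"ai_generated_score": 0.97}
    return response.json()["ai_generated_score"]

if __name__ == "__main__":
    score = check_ai_generated("viral_clip.mp4")
    print(f"AI-generated likelihood: {score:.0%}")
```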

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster is part of a real public display is untrue. The video is manipulated footage from a Marvel film, with the text digitally altered to deceive viewers. The content has been identified as false information and should be disregarded.
- Claim: A viral video shows a 'Tell Modi' poster being changed to one reading 'I Told Modi' in a real public display.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
A viral video circulating on social media inaccurately suggests that it shows Israel moving nuclear weapons in preparation for an assault on Iran. Detailed research establishes that it instead shows a SpaceX Starship rocket (Starship 36) being towed for a pre-planned test in Texas, USA. The footage provides no evidence of an Israeli operation or a nuclear missile.

Claim:
Multiple social media posts shared a video clip of what appeared to be a large, missile-like object being towed by a very large vehicle, claiming that it showed Israel preparing for a nuclear attack on Iran.
The caption of the video said: "Israel is going to launch a nuclear attack on Iran! #Israel". The viral post received heavy engagement, helping to spread misinformation and unfounded fear about the escalating conflict in the Middle East.

Fact check:
A reverse image search using key frames from the viral footage led us to a Facebook post dated June 16, 2025.
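For those who want to reproduce this step, the sketch below extracts frames at fixed intervals from a video using OpenCV; the saved images can then be uploaded to a reverse image search engine. The sampling interval and file names are arbitrary choices for illustration.

```python
import cv2  # pip install opencv-python

def extract_key_frames(video_path: str, every_n_seconds: int = 5) -> list[str]:
    """Save one frame every `every_n_seconds` seconds; return the file paths."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unreadable
    step = int(fps * every_n_seconds)
    saved, frame_index = [], 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        if frame_index % step == 0:
            path = f"frame_{frame_index:06d}.jpg"
            cv2.imwrite(path, frame)
            saved.append(path)
        frame_index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    # Upload the resulting JPEGs to a reverse image search engine manually.
    print(extract_key_frames("viral_clip.mp4"))
```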

We also found a YouTube livestream from NASASpaceflight dated June 15, 2025. Both sources clearly identify the object as SpaceX Starship 36, which was being towed at SpaceX's Texas facility ahead of a static fire test, part of the preparation for the vehicle's tenth test flight. The video shows no military ordnance, no military personnel, and no markings connecting it to Israel or Iran.
Several SPACE.com articles lend further support to this conclusion, reporting on the Starship's explosion during testing shortly thereafter.



Moreover, no reputable media outlet or defence agency reported any Israeli nuclear mobilisation. The resemblance between a large rocket and a missile likely caused the confusion, but the footage's context and upload location have no relation to the State of Israel or Iran.

Conclusion:
The viral claim that the video shows Israel preparing to launch a nuclear attack on Iran is false and misleading. In fact, the footage was shot in Texas and shows the civilian transport of SpaceX's Starship 36. This case highlights how easily unrelated videos can be repurposed to create panic and spread misinformation. Before sharing claims like this, verify them using trusted websites and tools.
- Claim: A viral video shows Israel preparing a nuclear attack on Iran
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
MeitY’s Indian Computer Emergency Response Team (CERT-In), in collaboration with SISA, a global forensics-driven cybersecurity company, launched the ‘Certified Security Professional for Artificial Intelligence’ (CSPAI) program on 23rd September. The initiative is the first ANAB-accredited AI security certification of its kind. CSPAI also complements global AI governance efforts: it takes its cue from international frameworks such as the OECD AI Principles and the European Union's AI Act, which aim to regulate AI technologies and ensure fairness, transparency, and accountability in AI systems.
About the Initiative
The Certified Security Professional for Artificial Intelligence (CSPAI) is the world’s first ANAB-accredited certification program that focuses on Cyber Security for AI. The collaboration between CERT-In and SISA plays a pivotal role in shaping AI security policies. Such partnerships between the public and private players bridge the gap between government regulatory needs and the technological expertise of private players, creating comprehensive and enforceable AI security policies. The CSPAI has been specifically designed to integrate AI and GenAI into business applications while aligning security measures to meet the unique challenges that AI systems pose. The program emphasises the strategic application of Generative AI and Large Language Models in future AI deployments. It also highlights the significant advantages of integrating LLMs into business applications.
The program is tailored for security professionals to understand the do’s and don’ts of AI integration into business applications, with a comprehensive focus on sustainable practices for securing AI-based applications. This is achieved through comprehensive risk identification and assessment frameworks recommended by ISO and NIST. The program also emphasises continuous assessment and conformance to AI laws across various nations, ensuring that AI applications adhere to standards for trustworthy and ethical AI practices.
Aim of the Initiative
As AI becomes an intrinsic part of business operations, the need for AI security expertise across industries is growing. With this in focus, the accreditation program has been created to equip professionals with the knowledge and tools to secure AI systems. CSPAI aims to contribute to a safer digital future while fostering innovation and responsibility in an evolving cybersecurity landscape centred on Generative AI (GenAI) and Large Language Models (LLMs).
Conclusion
This public-private partnership between CERT-In and SISA, which led to the creation of the Certified Security Professional for Artificial Intelligence (CSPAI), represents a groundbreaking initiative towards responsible AI usage. CSPAI addresses the growing demand for cybersecurity expertise in AI technologies. As AI becomes more embedded in business operations, the program aims to equip security professionals with the knowledge to assess, manage, and mitigate risks associated with AI applications. By aligning with frameworks from ISO and NIST and ensuring adherence to AI laws globally, CSPAI promotes trustworthy and ethical AI usage. The approach is a significant step towards creating a safer digital ecosystem while fostering responsible AI innovation. This certification will have particular impact in the healthcare, finance, and defence sectors, where AI is rapidly becoming indispensable. By ensuring that AI applications in these sectors meet security and ethical standards, CSPAI can help build public trust and encourage broader AI adoption.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=2057868
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://timesofindia.indiatimes.com/business/india-business/cert-in-and-sisa-launch-ai-security-certification-program-to-integrate-ai-into-business-applications/articleshow/113622067.cms

Introduction
In today’s digital world, data is everything: the more data a company holds, the more influence it has in the market, which is why companies constantly look for ways to use data to improve their business. At the same time, they have to protect people’s privacy, and striking a balance between the two is tricky. Imagine baking a cake where you need every ingredient to make it taste great, while also making sure no one can tell what’s in it. That is roughly the situation companies face with data. Here, ‘pseudonymisation’ emerges as a critical technical and legal mechanism that offers a middle ground between data anonymisation and unrestricted data processing.
Legal Framework and Regulatory Landscape
Pseudonymisation, as defined by the General Data Protection Regulation (GDPR) in Article 4(5), refers to “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”. This technique represents a paradigm shift in data protection strategy, enabling organisations to preserve data utility while significantly reducing privacy risks. The growing importance of this balance is evident in the proliferation of data protection laws worldwide, from GDPR in Europe to India’s Digital Personal Data Protection Act (DPDP) of 2023.
The legal treatment of pseudonymisation varies across jurisdictions, but a convergent approach is emerging that recognises its value as a data protection safeguard while maintaining that pseudonymised data remains personal data. Article 25(1) of the GDPR recognises it as “an appropriate technical and organisational measure” and emphasises its role in reducing risks to data subjects: it protects personal data by reducing the risk of identifying individuals during processing. The European Data Protection Board’s (EDPB) 2025 Guidelines on Pseudonymisation provide detailed guidance emphasising the importance of defining the “pseudonymisation domain”, which specifies who is prevented from attributing data to specific individuals and ensures that technical and organisational measures are in place to block unauthorised linkage of pseudonymised data to the original data subjects. In India, while the DPDP Act does not explicitly define pseudonymisation, legal scholars argue that such data would still fall under the definition of personal data, as it remains potentially identifiable. Section 2(t) of the Act defines personal data broadly as “any data about an individual who is identifiable by or in relation to such data”, suggesting that pseudonymised information, being reversible, would continue to attract data protection obligations.
Further, the DPDP Act, 2023 also embeds the principles of data minimisation and purpose limitation. Section 8(4) says that a “Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the Rules made under it.” Pseudonymisation fits here as a recognised technical safeguard, meaning companies can use it as one method in their compliance toolkit under Section 8(4) of the DPDP Act. However, its use should be assessed on a case-by-case basis, since encryption is also considered one of the strongest methods for protecting personal data. The suitability of pseudonymisation depends on the nature of the processing activity, the type of data involved, and the level of risk to be mitigated. In practice, organisations may combine pseudonymisation with other safeguards to strengthen overall compliance and security.
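As an illustration of what such a technical safeguard can look like in practice, the minimal sketch below pseudonymises identifiers with a keyed hash (HMAC-SHA256), keeping the secret key separate from the pseudonymised dataset in the spirit of GDPR Article 4(5). This is one common approach, not a method prescribed by either the GDPR or the DPDP Act, and the record values are invented for illustration.

```python
import hmac
import hashlib

# The key plays the role of the "additional information" in GDPR Art. 4(5):
# it must be stored separately from the pseudonymised data (e.g. in a key
# vault) under its own technical and organisational controls.
SECRET_KEY = b"load-this-from-a-key-vault-not-source-code"

def pseudonymise(identifier: str) -> str:
    """Map a direct identifier to a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Asha Rao", "email": "asha@example.com", "purchase": "book"}

pseudonymised_record = {
    "user_id": pseudonymise(record["email"]),  # same email -> same pseudonym
    "purchase": record["purchase"],            # non-identifying fields kept
}
print(pseudonymised_record)
```

Because the same input always yields the same pseudonym, records belonging to one person can still be linked and analysed, preserving data utility, while attributing them back to that person requires access to the separately held key.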
The European Court of Justice’s recent jurisprudence has introduced nuanced considerations about when pseudonymised data might not constitute personal data for certain entities. In cases where only the original controller possesses the means to re-identify individuals, third parties processing such data may not be subject to the full scope of data protection obligations, provided they cannot reasonably identify the data subjects. The “means reasonably likely” assessment represents a significant development in understanding the boundaries of data protection law.
Corporate Implementation Strategies
Companies find that pseudonymisation is not just about following rules; it also brings real benefits. By using this technique, businesses can keep their data more secure and reduce the damage in the event of a breach. Customers feel more confident knowing that their information is protected, which builds trust. Additionally, companies can use the data for research and other important purposes without compromising user privacy.
Key Benefits of Pseudonymisation:
- Enhanced Privacy Protection: It replaces personal details such as names or IDs with artificial values or codes, reducing the risk of accidental privacy breaches.
- Preserved Data Utility: Unlike completely anonymous data, pseudonymised data keeps its usefulness by maintaining important patterns and relationships within datasets.
- Facilitated Data Sharing: Pseudonymised data is easier to share with partners or researchers because it protects privacy while remaining useful.
However, pseudonymisation is not easy to implement. Companies have to deal with tricky technical issues, such as choosing the right methods (for example, encryption or tokenisation, as sketched below) and managing security keys safely. They also have to implement strong policies to prevent anyone from re-identifying the data subjects. This can get expensive and complicated, especially at scale, and it often requires expert help and regular upkeep.
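The sketch below illustrates the tokenisation approach mentioned above: identifiers are replaced with random tokens, and the token-to-identifier mapping is held separately so that only its custodian can reverse the process. This is a toy in-memory illustration; a real deployment would keep the vault in hardened, access-controlled storage.

```python
import secrets

class TokenVault:
    """Toy token vault. In production, this mapping would live in a
    separate, access-controlled store, never alongside the shared data."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenise(self, value: str) -> str:
        # Reuse an existing token so joins across datasets still work.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = secrets.token_hex(16)  # random; carries no information itself
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenise(self, token: str) -> str:
        # Re-identification is possible only for the vault's custodian.
        return self._token_to_value[token]

vault = TokenVault()
t = vault.tokenise("asha@example.com")
print(t)                    # the token can be shared with processors
print(vault.detokenise(t))  # only the vault holder can reverse it
```

Unlike the keyed-hash approach, tokenisation is deliberately reversible by the key holder, which is why safe custody of the mapping (or key) dominates the operational cost.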
Balancing Privacy Rights and Data Utility
The primary challenge in pseudonymisation is striking the right balance between protecting individuals' privacy and preserving the utility of the data. To get this right, companies need to consider several factors, such as the purpose of the processing, the sophistication of potential attackers, and the type of data involved.
Conclusion
Pseudonymisation offers a practical middle ground between full anonymisation and unrestricted data processing, enabling organisations to harness the value of data while protecting individual privacy. Legally, it is recognised as a safeguard but still treated as personal data, requiring compliance under frameworks like the GDPR and India's DPDP Act. For companies, it is not only a matter of regulatory adherence; it also builds trust and enhances data security. However, its effectiveness depends on robust technical methods, governance, and vigilance. Striking the right balance between privacy and data utility is crucial for sustainable, ethical, and innovation-driven data practices.
References:
- https://gdpr-info.eu/art-4-gdpr/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://gdpr-info.eu/art-25-gdpr/
- https://www.edpb.europa.eu/system/files/2025-01/edpb_guidelines_202501_pseudonymisation_en.pdf
- https://curia.europa.eu/juris/document/document.jsf?text=&docid=303863&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=16466915