#FactCheck - Edited Video Falsely Attributed to Arnab Goswami; Claim of Remarks Against Prime Minister Modi Is Misleading
A video has been going viral on social media in recent days in which Republic TV’s Editor-in-Chief and anchor Arnab Goswami can allegedly be heard using objectionable language against Prime Minister Narendra Modi. While sharing the video, users are claiming that Arnab Goswami publicly made controversial remarks about the Prime Minister.
An investigation by CyberPeace Foundation found this claim to be completely false. Our probe revealed that the viral video is edited and is being circulated on social media with a misleading narrative. In the original video, Arnab Goswami was not making any personal statement; rather, he was referring to an old statement made by Congress leader Rahul Gandhi.
Viral Claim
An Instagram user posted this video on 5 January 2026. In the video, a voice resembling Arnab Goswami is heard saying, “Ye jo Narendra Modi hain, ye chhe mahine baad ghar se nahi nikal paayenge aur Hindustan ke log inhein danda maarenge.”
(Translation: “This Narendra Modi will not be able to step out of his house after six months, and the people of India will beat him with sticks.”)
The post link and its archive link can be seen below:
- Instagram link: https://www.instagram.com/reel/DTHrO7bk7Rf/?igsh=MThzbzBlcm82eWN0ZA%3D%3D
- Archive link: https://archive.ph/oaYsf

Fact Check
To verify the viral claim, we first examined the video using Google Lens search. During this process, we found a video published on 18 July 2024 on the official YouTube channel of Republic Bharat. The investigation revealed that this video is the longer (extended) version of the viral clip.
After carefully watching the full video, it became clear that Arnab Goswami was not making the statement himself. Instead, he was referring to a remark made by Congress leader Rahul Gandhi during the 2020 Delhi Assembly elections against Prime Minister Narendra Modi. This confirms that the viral video was clipped and presented out of context.
The related video link can be seen below: https://www.youtube.com/shorts/KlQV25-3l8s

In the next step of the investigation, to verify whether Rahul Gandhi had indeed made such a statement, we conducted a customized keyword search on Google. During this, we found a video published on 6 February 2020 on the official YouTube channel of India Today.
In this video, recorded during a public event ahead of the 2020 Delhi Assembly elections, Rahul Gandhi is seen sharply attacking Prime Minister Narendra Modi, stating that if the Prime Minister fails to resolve the issue of unemployment in the country, the youth would beat him with sticks.
The video link is given below: https://www.youtube.com/watch?v=t5qCSA5nG9Y

Conclusion
The CyberPeace Foundation’s investigation found this claim to be completely false. The viral video is edited and is being shared in a misleading context. In the original video, Arnab Goswami was referring to an old statement made by Rahul Gandhi, which was selectively clipped and presented in a way that falsely suggests Arnab Goswami himself made objectionable remarks against Prime Minister Narendra Modi.

Overview:
Following the global outage of CrowdStrike’s services on July 19, 2024, cybercriminals began launching phishing attacks and distributing malware. These campaigns primarily target CrowdStrike customers, exploiting the confusion to harvest information through fake support sites. Analysis carried out by the Research Wing of CyberPeace and Autobot Infosec has identified several phishing links and malicious campaigns.
The Exploitation:
Cyber adversaries have registered domains mimicking CrowdStrike’s brand and opened fake accounts on social media platforms. These fraudulent platforms are used to trick users into surrendering personal and sensitive details, which are then exploited in further fraudulent activity.
Phishing Campaign Links:
- crowdstrike-helpdesk[.]com
- crowdstrikebluescreen[.]com
- crowdstrike-bsod[.]com
- crowdstrikedown[.]site
- crowdstrike0day[.]com
- crowdstrikedoomsday[.]com
- crowdstrikefix[.]com
- crashstrike[.]com
- crowdstriketoken[.]com
- fix-crowdstrike-bsod[.]com
- bsodsm8r[.]xamzgjedu[.]com
- crowdstrikebsodfix[.]blob[.]core[.]windows[.]net
- crowdstrikecommuication[.]app
- fix-crowdstrike-apocalypse[.]com
- supportportal-crowdstrike-com[.]translate[.]goog
- crowdstrike-cloudtrail-storage-bb-126d5e[.]s3[.]us-west-1[.]amazonaws[.]com
- crowdstrikeoutage[.]info
- clownstrike[.]co[.]uk
- crowdstrikebsod[.]com
- whatiscrowdstrike[.]com
- clownstrike[.]co
- microsoftcrowdstrike[.]com
- crowdfalcon-immed-update[.]com
- crowdstuck[.]org
- failstrike[.]com
- winsstrike[.]com
- crowdpass[.]com
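The naming pattern in the list above (the brand name combined with words like “fix”, “helpdesk”, or “bsod”, or slight misspellings of it) lends itself to simple automated triage. Below is a minimal sketch, not production tooling, of flagging lookalike domains with Python’s standard-library `difflib`; the 0.6 similarity threshold and the comparison of only the first label are illustrative assumptions, and real tooling would also consult the public-suffix list.

```python
import difflib

LEGIT_BRAND = "crowdstrike"

def looks_like_typosquat(domain: str, brand: str = LEGIT_BRAND,
                         threshold: float = 0.6) -> bool:
    """Flag a domain whose first label embeds or closely resembles the brand.

    Defanged notation ("[.]") is normalised first. The threshold is an
    illustrative choice, not a vetted detection parameter.
    """
    label = domain.replace("[.]", ".").split(".")[0].lower()
    if brand in label:  # direct embedding, e.g. crowdstrike-helpdesk
        return True
    # Fuzzy match catches near-misses such as crashstrike or crowdstuck.
    ratio = difflib.SequenceMatcher(None, label, brand).ratio()
    return ratio >= threshold

# Example triage over a few entries from the list above plus a benign domain.
suspicious = [d for d in [
    "crowdstrike-helpdesk[.]com",
    "crashstrike[.]com",
    "example[.]org",
] if looks_like_typosquat(d)]
```

Similarity scoring of this kind is only a first filter; confirmed phishing domains should still be verified and reported through proper channels.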
In one case, a PDF file carrying CrowdStrike branding is being circulated with a ‘Download The Updater’ link that points to a ZIP archive. The archive contains a malicious executable, a clear sign that attackers are exploiting the situation by disguising malware as an update.




In another case, a malicious Microsoft Word document is being shared that claims to explain how to deal with the CrowdStrike BSOD bug. The document carries a hidden risk: when users follow the instructions and enable the embedded macro, it downloads information-stealing malware from a remote host. This stealer is poorly detected by most security software. It also exfiltrates the stolen data to the same remote host on a different port, which likely serves as the command-and-control (C2) server for the campaign.
- Name New_Recovery_Tool_to_help_with_CrowdStrike_issue_impacting_Windows[.]docm
- MD5 dd2100dfa067caae416b885637adc4ef
- SHA-1 499f8881f4927e7b4a1a0448f62c60741ea6d44b
- SHA-256 803727ccdf441e49096f3fd48107a5fe55c56c080f46773cd649c9e55ec1be61
- URLS http://172.104.160[.]126:8099/payload2.txt, http://172.104.160[.]126:5000/Uploadss
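One practical use of the hashes above is a local indicator-of-compromise check: compute a downloaded file’s SHA-256 and compare it against the known-bad value. A minimal sketch using Python’s standard library follows; the function names are our own, not from any vendor tool.

```python
import hashlib

# SHA-256 of the malicious .docm reported above (from the IOC list).
KNOWN_BAD_SHA256 = {
    "803727ccdf441e49096f3fd48107a5fe55c56c080f46773cd649c9e55ec1be61",
}

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Hash the file in chunks so large downloads need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malicious(path: str) -> bool:
    """True if the file's SHA-256 appears in the known-bad set."""
    return sha256_of(path) in KNOWN_BAD_SHA256
```

A hash match confirms a known sample, but a non-match proves nothing: attackers can trivially recompile or repack, so hash checks complement rather than replace behavioural detection.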


Recent Outage Impact:
On July 19, 2024, CrowdStrike suffered a global outage caused by an update to its Falcon Sensor security software. The outage affected many government organisations and companies across industries such as finance, media, and telecommunications, with users reporting blue screens of death and system failures. CrowdStrike acknowledged the problem and worked to remediate it.
Preventive Measures:
- Organise regular awareness sessions to educate employees about phishing techniques and how to avoid phishing emails, links, and websites.
- Require multi-factor authentication (MFA) for logins to sensitive accounts and systems to strengthen security.
- Keep all security applications, including antivirus and anti-malware, up to date to aid in detecting phishing attempts.
- Put monitoring in place, such as alerts on unusual account activity or login patterns, to enable early detection of phishing attempts.
- Encourage employees and users to report any suspected phishing attempt to the IT department immediately.
Conclusion:
The recent CrowdStrike outage is a textbook example of how cybercriminals exploit a crisis and users’ confusion and anxiety. Individuals and organisations can protect themselves from these threats and maintain the confidentiality of their information by staying cautious and adhering to proper standards. For current information on the BSOD problem and detailed instructions on its resolution, visit CrowdStrike’s support centre. Handle reported problems with caution and make regular backups to minimise the impact of such incidents.
References:
- https://app.any.run/tasks/2c0ffc87-4059-4d6f-8306-1258cf33aa54/
- https://app.any.run/tasks/48e18e33-2007-49a8-aa60-d04c21e8fa11
- https://www.virustotal.com/gui/file/19001dd441e50233d7f0addb4fcd405a70ac3d5e310ff20b331d6f1a29c634f0/relations
- https://www.virustotal.com/gui/file/803727ccdf441e49096f3fd48107a5fe55c56c080f46773cd649c9e55ec1be61/detection
- https://www.joesandbox.com/analysis/1478411#iocs

Introduction
Generative Artificial Intelligence (GenAI) is changing the employee workday: its use is no longer limited to writing emails or debugging code but now extends to analysing contracts, generating reports, and much more. AI tools have become commonplace in everyday work, but the speed at which companies have adopted them has created a new kind of risk. Unlike threats from an outside attacker, Shadow AI originates inside an organisation, with legitimate employees using unapproved AI tools to work more efficiently and productively. In many cases, the employee is unaware of the security, data privacy, and compliance risks involved in using such tools for their job duties.
What Is Shadow AI?
Shadow AI refers to employees using AI tools at work (chatbots, browser extensions, or other software) that the company has not provided, without the employer’s knowledge or permission. Examples of shadow AI include:
- Using personal ChatGPT or other chatbot accounts to complete tasks at the office
- Uploading business-related documents to online AI technologies for analysis or summarisation.
- Copying proprietary source code into an online AI model for debugging
- Installing browser extensions and add-ons that are not approved by IT or Security personnel.
How Shadow AI Is Harmful
1. Uncontrolled Data Exposure
When employees input information into unapproved AI tools, that information moves outside the company’s controls. This can include employee and third-party personal information, private company information (such as source code or contracts), and internal strategy. Once data is entered, the company loses all ability to monitor how it is stored, processed, or retained: a data-leak situation exists without any malicious cyberattack. The biggest risk is not malice but the loss of control and governance over sensitive data.
2. Regulatory and Legal Non-Compliance
Data protection laws like GDPR, India’s Digital Personal Data Protection (DPDP) Act, HIPAA, and other relevant sectoral laws require businesses to process data in accordance with the law, to minimise the amount of data they use, and to be accountable for their actions. Shadow AI often results in the unlawful use of personal data due to a lack of a legal basis for the processing, unauthorised cross-border data transfers, and not having appropriate contractual protections in place with their AI service providers. Regulators do not see the convenience of employees as an excuse for not complying with the law, and therefore, the organisation is ultimately responsible for any violations that occur.
3. Loss of Intellectual Property
Employees frequently use AI tools to speed up tasks involving proprietary information—debugging code, reviewing contracts, or summarising internal research. When done using unapproved AI platforms, this can expose trade secrets and intellectual property, eroding competitive advantage and creating long-term business risk.
Real-Life Example: Samsung’s ChatGPT Data Leak
In 2023, a case study exemplifying the Shadow AI risk occurred when Samsung Electronics placed a temporary ban on employee access to ChatGPT and other AI tools after reports from engineers revealed they were using ChatGPT to create debugging processes for internal source code and to summarise meeting notes. Consequently, confidential source code related to semiconductors was inadvertently uploaded onto a public AI platform. While there were no known incursions into the company’s system due to this incident, Samsung faced a significant challenge: once sensitive information is input into a public AI tool, it exists on external servers that are outside of the company’s purview or control.
As a result of this incident, Samsung restricted employee use of ChatGPT on corporate devices, issued a series of internal communications prohibiting the sharing of corporate data with public AI tools, and increased the urgency of their discussions regarding the adoption of secure, enterprise-level AI (artificial intelligence) solutions.
What Organisations Are Doing Today
Many organisations respond to Shadow AI risk by:
- Blocking access at the network level
- Circulating warning emails or policies
While these actions may reduce immediate exposure, they fail to address the root cause: employees still need AI to perform their jobs efficiently. As a result, bans often push AI usage underground, increasing Shadow AI rather than eliminating it.
Why Blocking AI Does Not Work—Governance Does
History has shown that prohibition does not work; attempts to block cloud storage, instant messaging, and collaboration tools followed the same pattern. When employers block AI, employees turn to personal devices and accounts, leaving employers with no real-time visibility into how these technologies are used and creating friction with security and compliance teams. Prohibition will not stop AI adoption; it only makes that adoption less safe and less accountable. The challenge for effective organisations is therefore to move past denial and develop governance-first AI strategies aimed at controlling data usage, protection, and security rather than merely restricting access to a list of specific tools.
Shadow AI: A Silent Legal Liability Under the GDPR
Shadow AI is not merely a problem for the IT department; it is a failure of governance, compliance, and law. By using unapproved AI tools, an organisation may process personal data without a lawful basis (Article 6 of the General Data Protection Regulation (GDPR)), repurpose data beyond its original intent in breach of purpose limitation (Article 5(1)(b)), and routinely exceed necessity in breach of data minimisation (Article 5(1)(c)). Such tools may also involve international data transfers without authorisation, in breach of Chapter V, and violate Article 32 because no enforceable safeguards are in place. Most significantly, the failure to demonstrate oversight, logging, and control under Articles 5(2) and 24 constitutes a failure of accountability. From a regulatory perspective, Shadow AI is therefore neither accidental nor defensible.
The Right Solution: Secure and Governed AI Adoption
1. Provide Approved AI Tools
Employers should supply business-approved AI tools that help workers stay productive while maintaining strong protections: storing data separately, not using employees’ data to train models, and defining how long data is kept and the rules for deleting it. When employees have verified, secure AI options that fit their work processes, they rely far less on Shadow AI.
2. Enforce Zero-Trust Data Access
AI governance should follow zero-trust principles: grant data access on a least-privilege basis, continuously verify user identity and context, and establish context-aware controls that monitor and track all user activity. This becomes especially important as agent-like AI systems grow increasingly autonomous and operate at machine speed, where even small configuration errors can result in rapid, large-scale data exposure.
3. Apply DLP and Audit Logging
Robust data loss prevention (DLP) measures are essential to protect sensitive data leaving the organisation. A comprehensive audit log should record which user or machine accessed the data, and when and how it was accessed. Combined with other controls, these measures create accountability, support regulatory compliance, and aid in detecting and responding to incidents.
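As a rough illustration of the DLP-plus-audit-logging idea, the sketch below scans text for a few obvious sensitive patterns before it would be forwarded to an external AI service, and records every scan in an append-only log. The pattern set, function names, and log format are illustrative assumptions, not any product’s API; real DLP engines use far richer detectors.

```python
import re
from datetime import datetime, timezone

# Illustrative detectors only: an email address, an AWS-style access key
# ID, and a PEM private-key header.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

AUDIT_LOG = []  # stand-in for an append-only audit store

def scan_before_submit(user: str, text: str) -> list:
    """Return the sensitive-data categories found; log every scan for audit."""
    hits = [name for name, pat in PATTERNS.items() if pat.search(text)]
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "blocked": bool(hits),
        "categories": hits,
    })
    return hits

# Usage: a gateway would block the request if anything matched.
findings = scan_before_submit(
    "alice", "Contact bob@example.com, key AKIAABCDEFGHIJKLMNOP")
```

The design point is that the log entry is written whether or not anything matched, so the audit trail demonstrates oversight even for clean submissions.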
4. Maintain Visibility Across AI, Cloud, and SaaS
Security teams need unified visibility across AI tools, personal cloud applications, and SaaS platforms. Risks move across systems, and controls must follow the data wherever it flows.
Conclusion
This new threat exposes an organisation to the risk of data loss through leaks, regulatory fines, liability for the loss of intellectual property, and reputational damage, all of which can occur without any intent to cause harm. The way forward is not to block AI, but to adopt a clear framework built on governance, visibility, and secure enablement. This approach allows organisations to use AI with confidence, while ensuring trust, accountability, and effective oversight to protect data and support AI in reaching its full transformative potential. AI use is encouraged, but it must be done responsibly, ethically, and securely.
References
- https://bronson.ai/resources/shadow-ai/
- https://www.varonis.com/blog/shadow-ai
- https://www.waymakeros.com/learn/gdpr-hipaa-shadow-ai-compliance-nightmare
- https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
- https://www.usatoday.com/story/special/contributor-content/2025/05/23/shadow-ai-the-hidden-risk-in-todays-workplace/83822081007

Introduction
Children today are growing up amidst technology, and the internet has become an important part of their lives. It offers a wealth of recreational and educational options, but it also presents largely unseen difficulties, particularly in the form of deepfakes and misinformation. AI can perform complex tasks quickly; however, misuse of AI technologies has led to a rise in cybercrime. These growing cyber threats can negatively affect children’s wellbeing and safety online.
India's Digital Environment
India has one of the world’s fastest-growing internet user bases, with more young netizens coming online every day. The internet has become an inseparable part of their everyday lives, whether for social media or online courses. But the speed at which the digital world is evolving has raised many privacy and safety concerns, increasing the chance of exposure to potentially harmful content.
Misinformation: The Rising Concern
Today, the internet is awash with misinformation, and youngsters are especially vulnerable to its adverse effects. Given India’s linguistic and cultural diversity, the spread of misinformation can have a vast negative impact on society. In particular, misinformation in education can mislead young minds and hinder their cognitive development.
To address this issue, parents, academia, government, industry, and civil society must work together to promote digital literacy initiatives that teach children to critically analyse online material and navigate the digital realm with confidence.
DeepFakes: The Deceptive Mirage:
Deepfakes, digitally altered videos or images created with artificial intelligence, pose a significant online threat. The possible ramifications of deepfake technology are especially concerning in India, given the high level of public reliance on media. Deepfakes can have far-reaching repercussions, from distorting political narratives to disseminating misleading information.
Addressing the deepfake problem demands a multifaceted strategy. Media literacy programs should be integrated into the educational curriculum to assist youngsters in distinguishing between legitimate and distorted content. Furthermore, strict laws as well as technology developments are required to detect and limit the negative impact of deepfakes.
Safeguarding Children in Cyberspace
● Parental Guidance and Open Communication: Open communication and parental guidance are essential for protecting children’s internet safety. Families need frank discussions about appropriate internet use and its possible consequences. Parents should actively participate in their children’s online activities and understand the platforms and material their children consume.
● Educational Initiatives: Comprehensive programs for digital literacy must be implemented in educational settings. Critical thinking abilities, internet etiquette, and knowledge of the risks associated with deepfakes and misinformation should all be included in these programs. Fostering a secure online environment requires giving young netizens the tools they need to question and examine digital content.
● Policies and Rules: Acknowledging the risks posed by the misuse of advanced technologies such as AI and deepfakes, the Indian government is working towards dedicated legislation to tackle the harms caused by bad actors misusing deepfake technology. It has recently issued an advisory directing social media intermediaries to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms are legally obliged to prevent the spread of misinformation and to exercise due diligence, making reasonable efforts to identify misinformation and deepfakes. Legal frameworks must be equipped to handle the challenges posed by AI, and accountability in AI is a complex issue that requires comprehensive legal reform. In light of the many reported cases of deepfake content being misused and spread on social media, strong laws must be adopted and enforced to address these challenges. Working with technology companies to deploy advanced content-detection tools, and ensuring that law enforcement acts swiftly against those who misuse the technology, will deter cyber crooks.
● Digital parenting: It is important for parents to keep up with the latest trends and digital technologies. Digital parenting includes understanding privacy settings, monitoring online activity, and using parental control tools to create a safe online environment for children.
Conclusion
As India continues its digital journey, protecting children in cyberspace has become a shared responsibility. By promoting digital literacy, encouraging open communication, and enforcing strong laws, we can create a safer online environment for younger generations. Knowledge, understanding, and active efforts to combat misinformation and deepfakes are the keys to safety in the online age. Social media intermediaries and platforms must ensure compliance with the IT Rules, 2021, the IT Act, 2000, and the newly enacted Digital Personal Data Protection Act, 2023. Establishing a safe online space for children is the shared responsibility of the government, parents and teachers, users, and organisations.