#FactCheck - Viral Video Distorts Rahul Gandhi’s Speech to Push False Religious Claim
Executive Summary
A video of the Leader of the Opposition in the Lok Sabha and Congress MP Rahul Gandhi is being widely shared on social media. In the clip, Gandhi is seen saying that he does not know what “G Gram G” is. Several users are sharing the video with the claim that Rahul Gandhi insulted Lord Ram. However, CyberPeace research found that the claim is misleading. Rahul Gandhi was not referring to Lord Ram in the video. Instead, he was speaking about a newly introduced law titled Viksit Bharat–G RAM G (VB–G RAM G), which has been brought in to replace the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA). The viral clip has been shared with a false narrative.
Claim
On January 22, 2026, an Instagram user, apnisarkar2024, shared the video claiming, “Rahul Gandhi once again insulted Shri Ram.” (Link and archive link below.)
- https://www.instagram.com/reel/DTzeiy0k3l5
- https://perma.cc/J3A3-NGBM?type=standard

Research
As part of our research, we first closely examined the viral video. In the clip, Rahul Gandhi is heard saying: “I don’t know what Gram G is. I don’t even know the name of this new law… what is G Gram G…” At no point in the video does he mention Lord Ram or make any comment related to religion. To verify the context, we extracted keyframes from the viral clip and ran them through a Google Lens search. This led us to a longer version of the same speech, uploaded on the official YouTube channel of the Indian National Congress on January 22, 2026. The viral segment appears after the 39:50 mark.
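This keyframe step is straightforward to reproduce. Below is a minimal sketch of sampling frames from a video using OpenCV; the library choice, file name, and sampling interval are our assumptions for illustration, not the exact tooling used. The saved frames can then be run through Google Lens or any reverse-image search.

```python
# Minimal sketch: sample frames from a clip for reverse-image search.
# Requires OpenCV (pip install opencv-python); "viral_clip.mp4" is a placeholder path.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> None:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0    # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))  # grab one frame every N seconds
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                             # end of video (or read error)
            break
        if index % step == 0:
            cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()

extract_keyframes("viral_clip.mp4")
```

Each saved JPEG can then be uploaded to a reverse-image search to locate longer or original versions of the footage.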
The video is from the National MGNREGA Convention held in New Delhi, where Rahul Gandhi criticised the central government over the replacement of MGNREGA with the new VB–G RAM G law. During his speech, he expressed his opposition to the new legislation and stated that he was unfamiliar with its details. Throughout the address, he did not mention or refer to Lord Ram in any manner.

Conclusion
Rahul Gandhi’s remarks in the viral video were related to the newly introduced VB–G RAM G law and were part of his criticism of the central government’s policy decisions. He did not insult Lord Ram. The video is being shared on social media with a misleading and false claim.

Introduction
The Indian healthcare sector has been transforming remarkably, driven largely by emerging technologies such as AI and the Internet of Things (IoT). The rapid adoption of AI and IoT in healthcare delivery, along with telemedicine, digital health solutions, and Electronic Medical Records (EMR), has enhanced hospital efficiency and driven growth. Integrating AI and IoT devices into healthcare can improve patient care, health record management, and telemedicine, reshaping the medical landscape as we know it. However, their implementation must be safe, with robust security and ethical safeguards in place.
The Transformative Power of AI and IoT in Revolutionising Healthcare
IoT healthcare devices such as smartwatches, wearable patches, and ingestible sensors capture physiological parameters in real time, including heart rate, blood pressure, and glucose levels. These readings can be forwarded automatically from the wearables to healthcare providers and Electronic Health Record (EHR) systems. Real-time patient health data enables doctors to monitor progress and intervene when needed.
The sheer volume of data generated by IoT healthcare devices opens avenues for applying AI. AI and ML algorithms can analyse patient data for patterns that provide diagnostic clues and predict adverse events before they occur. Combining AI and IoT enables proactive, personalised medicine tailored to specific patient profiles. This amalgamation can bridge the gap between healthcare accessibility and quality: especially in rural and underserved areas, it can enable timely and effective medical consultations, significantly improving healthcare outcomes. Moreover, the integration of AI-powered chatbots and virtual health assistants is enhancing patient engagement by providing instant medical advice and appointment scheduling.
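To make the pattern-detection idea above concrete, here is a minimal, hypothetical sketch of flagging anomalous heart-rate readings from a wearable stream using a simple z-score test; the threshold and sample data are invented for illustration and are not clinical guidance.

```python
# Minimal sketch: flag anomalous heart-rate readings from a wearable stream.
# The z-score threshold and sample data below are hypothetical, not clinical guidance.
from statistics import mean, stdev

def flag_anomalies(readings_bpm: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of readings more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(readings_bpm), stdev(readings_bpm)
    return [i for i, r in enumerate(readings_bpm)
            if sigma > 0 and abs(r - mu) / sigma > z_threshold]

stream = [72, 70, 74, 71, 73, 69, 140, 72, 71]  # the 140 bpm spike is the anomaly
print(flag_anomalies(stream))                   # -> [6]
```

In a real deployment, such a check would run server-side against a rolling per-patient baseline, with flagged readings routed to a clinician rather than acted on automatically.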
CyberPeace Takeaway: Challenges and the Way Forward
Some of the main challenges associated with integrating AI and IoT in healthcare are cybersecurity and data privacy concerns, lack of interoperability, and skill gaps in implementation. Addressing these requires enhanced measures and specific policies, such as:
- Promoting collaborations among governments, regulators, industry, and academia to foster a healthcare innovation ecosystem, for example through public-private partnerships and funding opportunities that drive collaborative advancements in the sector, alongside capacity-building programmes to upskill professionals.
- Infrastructural development, including startup support for scalable AI and IoT solutions and healthcare-specific cybersecurity enhancements to protect sensitive data. According to a 2024 report by Check Point Software Technologies, Indian healthcare organisations experienced an average of 6,935 cyberattacks per week, compared with 1,821 attacks per organisation globally.
Conclusion
A Deloitte survey highlights that hospitals spend, on average, 8–10% of their IT budget on cybersecurity, including hiring professionals and acquiring tools to minimise cyberattacks. This spending is likely to increase to 12–15% over the next two years as hospitals move towards proactive cybersecurity measures.
Government policy frameworks and initiatives also play a role. The Indian government drives innovation in AI and IoT in healthcare through initiatives under the National Digital Health Mission (NDHM), the National Health Policy, and the Digital India initiative.
Though challenges around data privacy and cybersecurity persist, strong policies, public-private collaborations, capacity-building initiatives, and the evolving startup ecosystem can carry AI and IoT’s potential forward through the thoughtful merging of innovative health technologies, delivery models, and analytics. If the integration complexities are creatively tackled, these technologies could profoundly improve patient outcomes while bending the healthcare cost curve.
References
- https://www.ndtv.com/business-news/indian-healthcare-sector-faced-6-935-cyberattacks-per-week-in-last-6-months-report-5989240
- https://www.businesstoday.in/technology/news/story/meity-nasscom-coe-collaborates-with-start-ups-to-enhance-healthcare-with-ai-iot-458739-2024-12-27
- https://www2.deloitte.com/content/dam/Deloitte/in/Documents/risk/in-ra-deloitte-dsci-hospital-report-noexp.pdf
- https://medium.com/@shibilahammad/the-transformative-potential-of-iot-and-ai-in-healthcare-78a8c7b4eca1

Digital vulnerabilities such as cyber-attacks and data breaches proliferate rapidly in today’s hyper-connected world. They can compromise sensitive data, including personal information, financial data, and intellectual property, and can threaten businesses of all sizes across all sectors. It has therefore become important to inform all stakeholders about any breach or attack so that they can be well-prepared for its consequences.
Non-reporting can result in heavy fines in many parts of the world. Data breaches caused by malicious acts are crimes and need proper investigation, and organisations that fail to report such events may face significant penalties, including major financial setbacks and legal complications. Understanding why transparency is vital, and the regulatory framework that governs data breaches, is the first step.
The Current Indian Regulatory Framework on Data Breach Disclosure
A data breach, essentially, is the unauthorised processing or accidental disclosure of personal data, which may occur through its acquisition, sharing, use, alteration, destruction, or loss of access. Such incidents can compromise the affected data’s confidentiality, integrity, or availability. In India, the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 are the primary laws that tackle cybercrimes like data breaches.
- Under the DPDP Act, neither materiality thresholds nor express timelines have been prescribed for the reporting requirement. Data Fiduciaries are required to report incidents of personal data breach, regardless of their sensitivity or impact on the Data Principal.
- The IT (Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013 and the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, along with the Cyber Security Directions issued in 2022 under section 70B(6) of the IT Act, 2000 (relating to information security practices, procedures, prevention, response, and reporting of cyber incidents for a Safe & Trusted Internet), impose mandatory notification requirements on service providers, intermediaries, data centres, and corporate entities upon the occurrence of certain cybersecurity incidents.
- These laws and regulations obligate companies to report breaches and incidents to regulators such as CERT-In and the Data Protection Board.
The Consequences of Non-Disclosure
Non-disclosure of a data breach has manifold consequences:
- Legal and financial penalties are the immediate consequence of an unreported data breach in India. The DPDP Act prescribes penalties of up to Rs 250 crore, alongside civil suits and regulatory scrutiny. Non-compliance can also attract action from CERT-In, compounding the reputational damage.
- In the long term, failure to disclose data breaches can erode customer trust, as people are less likely to engage with a brand they deem unreliable. Investor confidence may waver over concerns about governance and security, leading to stock price drops or reduced funding opportunities. Brand reputation can be significantly tarnished, and companies may struggle to retain and attract customers and employees, affecting long-term profitability and growth.
- Companies such as BigBasket and Jio (in 2020) and Haldiram (in 2022) have suffered data breaches in recent years. Poor transparency and delayed disclosures led to significant reputational damage, legal scrutiny, and regulatory action for these companies.
Measures for Improvement: Building Corporate Reputation via Transparency
Transparency is critical when disclosing data breaches. Prioritising stakeholders’ data privacy enhances trust in and loyalty towards a company, and transparency mitigates backlash by demonstrating the company’s willingness to cooperate with authorities. A far-sighted approach instils confidence in all stakeholders by showcasing a company’s resilience and commitment to governance. These measures can be further strengthened by:
- Offering actionable steps for companies to establish robust data breach policies, including regular audits, prompt notifications, and clear communication strategies.
- Highlighting the importance of cooperation with regulatory bodies and how to ensure compliance with the DPDP Act and other relevant laws.
- Sharing best public communications practices post-breach to manage reputational and legal risks.
Conclusion
Maintaining transparency when a data breach happens is more than a legal obligation; it is a sound strategy for protecting corporate reputation. Companies can mitigate the potential legal, financial, and reputational risks by proactively informing stakeholders and cooperating with regulatory bodies. In an era where digital vulnerabilities are ever-present, clear communication and compliance with data protection laws such as the DPDP Act build trust, enhance corporate governance, and secure long-term business success. Proactive measures, including audits, breach policies, and effective public communication, are critical in reinforcing resilience and fostering stakeholder confidence in the face of cyber threats.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.cert-in.org.in/PDF/CERT-In_Directions_70B_28.04.2022.pdf
- https://chawdamrunal.medium.com/the-dark-side-of-covering-up-data-breaches-why-transparency-is-crucial-fe9ed10aac27
- https://www.dlapiperdataprotection.com/index.html?t=breach-notification&c=IN

Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to “create compelling online scams” over the festive period. Meanwhile, 31% believe it will be harder to determine whether messages from merchants or delivery services are genuine, and 57% believe phishing emails and texts will become more credible. The study, conducted in September 2023 across the United States, Australia, India, the United Kingdom, France, Germany, and Japan, yielded 7,100 responses. Worries about AI may lead some people to cut back on their online shopping; 19% of those surveyed said they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
Cybercriminals plan to use powerful artificial intelligence capabilities to manipulate social media in 2024. Because these tools make it possible to create realistic images, videos, and audio, social platforms become goldmines for scammers. Expect cybercriminals to exploit the identities of influencers and other public figures.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the darker turn cyberbullying might take in 2024 with the use of deepfake technology. This cutting-edge technique is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims’ privacy, identity, and well-being.
In addition to spreading false information, cyberbullies can alter public photographs and re-share manipulated versions, exacerbating the suffering inflicted on children and their families. The report warns that deepfake technology will likely push online harassment in this darker direction: the growing severity of these deceptive images and words can cause serious, long-lasting harm to children and their families, impairing their identity, privacy, and overall happiness.
The Evolution of GenAI Fraud in 2023
There is no shortage of persistent scams and fake emails. People in general are now fairly adept at recognising those in wide circulation, but users should be far more cautious as scams become more precise, for instance through AI-generated audio that sounds like a loved one’s distress call, or messages built on information highly personal to the target. The rise of generative AI adds a new wrinkle, as attackers can use these systems to refine their attacks by:
- Writing messages more skilfully to deceive consumers into sending sensitive information, clicking on a link, or uploading a file.
- Recreating emails and business websites as realistically as possible, so as not to arouse suspicion in their targets.
- Cloning people’s faces and voices to produce deepfaked audio or images that are undetectable to the target audience, a problem with the potential to greatly amplify schemes like CEO fraud.
- Holding conversations and responding to victims efficiently, which generative AIs can now do.
- Conducting psychological manipulation campaigns more quickly, at lower cost, and with greater sophistication, making them harder to detect. Generative AI tools already on the market can write texts, clone voices, generate images, and program websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is making cybercriminals increasingly dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the worldwide buzz surrounding the 2024 Olympic Games will make it an ideal time for scams. Con artists will take advantage of customers’ excitement by targeting fans eager to purchase tickets, arrange travel, obtain exclusive content, and take part in giveaways. During this high-profile event, vigilance is essential to protect one’s personal records and financial data.
Development of McAfee’s Own Bot to Help Users Screen Potential Scams and Authenticate Messages
McAfee is developing precisely this kind of technology. It is critical to emphasise that solving the issue is a continuous process: bad actors also manipulate AI, and one trick scammers can pull off is to use the ruses consumers fall for as training data for more advanced algorithms. Con artists can thus deploy these tools, test them on large user bases, and improve them over time.
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven online scams targeting them around the holidays. Social networking poses a growing threat to users’ privacy, and in 2024 attackers hope to exploit AI capabilities, using deepfake technology to exacerbate harassment. Generative AI advances complex fraud by mimicking voices and faces for intricate schemes. Charity fraud is expected to surge, with both social and financial consequences, and the 2024 Olympic Games could serve as a haven for scammers. The creation of McAfee’s screening bot underscores the ongoing struggle against evolving AI threats and the need for continuous adaptation and greater user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/