#FactCheck - Viral Video Claiming to Show Kashmir Avalanche Is AI-Generated
Executive Summary
A video is being shared on social media claiming to show an avalanche in Kashmir, with captions alleging that the incident occurred on February 6. Several users sharing the video are also urging people to avoid unnecessary travel to hilly regions. CyberPeace’s research found that the video being shared as footage of a Kashmir avalanche is not real footage; it is AI-generated.
Claim
The video is circulating widely on social media platforms, particularly Instagram, with users claiming it shows an avalanche in Kashmir on February 6. The archived version of the post can be accessed here. Similar posts were also found online. (Links and archived links provided)

Fact Check:
To verify the claim, we searched relevant keywords on Google. During this process, we found a video posted on the official Instagram account of the BBC. The BBC post reported that an avalanche occurred near a resort in Sonamarg, Kashmir, on January 27. However, the BBC post does not contain the viral video that is being shared on social media, indicating that the circulating clip is unrelated to the real incident.

A close examination of the viral video revealed several inconsistencies. For instance, during the alleged avalanche, people present at the site are not seen panicking, running for cover, or moving toward safer locations. Additionally, the movement and flow of the falling snow appear unnatural. Such visual anomalies are commonly observed in videos generated using artificial intelligence. As part of the research, the video was analyzed using the AI detection tool Hive Moderation. The tool indicated a 99.9% probability that the video was AI-generated.

Conclusion
Based on the evidence gathered during our research, it is clear that the video being shared as footage of a Kashmir avalanche is not genuine. The clip is AI-generated and misleading. The viral claim is therefore false.
Introduction
The digital communication landscape in India is set to change significantly as the Department of Telecommunications prepares to implement new rules for messaging apps that operate using SIM cards. This step is part of the government’s effort to tackle cybercrime at its roots by enforcing stricter verification and reducing the number of communication platforms that can be misused. One clear change users will notice is that WhatsApp Web sessions will now be automatically logged out every six hours, disrupting the previously uninterrupted use across multiple devices. Although this may appear to be a simple inconvenience, the measure is part of a broader plan to address the growing problem of cyber fraud. Cybercriminals exploit messaging apps like WhatsApp without keeping the registered SIM in the device, making fraud difficult to trace, and these rules aim to address such challenges at the root.
The Incident: What Has Changed?
The new regulations will make it mandatory for messaging platforms to create a direct link between user accounts and verified SIM identities, so that every account on the network can be associated with a valid and traceable mobile number. Because of this requirement, WhatsApp is expected to tighten its management of device sessions. The six-hour logout cycle for WhatsApp Web is intended to prevent long-lived, unmonitored sessions that are sometimes exploited in account takeovers, device-based breaches, and remote-access scams. This change significantly affects the user experience: WhatsApp Web, often used for communication, customer support, and coordination, will now require more frequent authentication through mobile devices. Though mobile access remains uninterrupted, desktop and browser-linked sessions will be subject to tighter security controls.
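The session-expiry mechanism described above can be sketched in a few lines. This is a minimal illustration only: the six-hour window matches the reported policy, but the class, field names, and re-authentication hook are assumptions for the example, not WhatsApp's actual implementation.

```python
import time

SESSION_TTL_SECONDS = 6 * 60 * 60  # reported six-hour expiry window

class WebSession:
    """Illustrative linked-device session record (hypothetical model)."""

    def __init__(self, account_id, created_at=None):
        self.account_id = account_id
        self.created_at = time.time() if created_at is None else created_at

    def is_expired(self, now=None):
        """True once the session has outlived its time-to-live."""
        now = time.time() if now is None else now
        return (now - self.created_at) >= SESSION_TTL_SECONDS

def require_reauth(session):
    # A server enforcing the rule would push the browser back to the
    # QR-pairing step whenever this returns True.
    return session.is_expired()
```

In this sketch, a session created at time zero is flagged for re-authentication once the elapsed time reaches 21,600 seconds.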
Why Identity-Linked Messaging Matters
India is facing a rapidly evolving cybercrime ecosystem in which messaging applications play a central role. Scammers often rely on fake, unverified, or illegally obtained SIM cards to create temporary accounts that can be used for various illegal activities, such as sending phishing messages, impersonating government officials, and deceiving victims through call centres set up for scams.
The new rules take into consideration the following main issues:
- Anonymity of accounts makes large-scale fraud possible: Criminals operate bulk scams using hundreds of SIM-linked accounts.
- Freedom to drop identities: Illegal SIMs are discarded after fraud, making it difficult for the police to trace the criminals.
- Long-lived multi-device vulnerabilities: Unauthorised access to long-lived WhatsApp Web sessions is seen as a main enabler of OTP theft, account hijacking, and on-device social engineering.
The government wants to disrupt these foundations by enforcing stricter traceability.
A Sector Under Strain: Misuse of Messaging Platforms
Messaging apps have become central to India's digital life, from personal communication to enterprise operations. This very ubiquity has made them an easy target for cybercriminals.
Frequently observed scams include:
- WhatsApp groups used for job and loan scams
- False communication from banks, government departments, and payment applications
- Sextortion and blackmail through unverified accounts
- Remote-access fraud, with attackers monitoring WhatsApp Web sessions
- Coordinated spread of false information and distribution of deepfake videos
The use of AI-generated personas and "SIM farms" has made securing these systems even harder. Without strict linking of users to authenticated SIM credentials, the platforms risk becoming ungovernable channels for cybercrime.
Government and Regulatory Response
The Department of Telecommunications is initiating a process of stricter compliance measures and cooperating with the Ministry of Home Affairs, along with the Indian Cyber Crime Coordination Centre. The main points of the directions include the following:
- Identity verification linked to a SIM is mandatory for the creation of messaging accounts
- Periodic device re-authentication on platforms, starting with WhatsApp Web
- Coordination with telecom operators to detect suspicious login patterns
- Protocols for the sharing of data with law enforcement in the course of cybercrime investigations
- Compliance checks of digital platforms to verify adherence to national safety guidelines
This coordinated effort reflects the understanding that the security of communication platforms is the responsibility of both the regulators and the service providers.
The Bigger Picture: Strengthening India’s Digital Trust
The fresh regulations are in step with a worldwide trend of governments demanding greater accountability from messaging platforms. Similar discussions are underway in the EU, the UK, and parts of Southeast Asia.
For India, it is imperative to enhance identity management because:
- The nation has the world's largest base of messaging users
- Cybercrime is increasing at a rate quicker than that of traditional crime
- Digital government services rely on communications that are secure
- Identity integrity is the basis for trust in online transactions and digital payments
The six-hour logout policy for WhatsApp Web is a small action, but it signals a bigger shift towards proactive regulation rather than merely reactive policing.
What Needs to Happen Next?
The implementation of SIM-linked regulations must involve several subsequent measures to make them effective.
- Strengthening Digital Literacy: It is necessary to educate users about the benefits of frequent logouts and security improvements.
- Ensuring Privacy Protections: The DPDP Act should act as a strong barrier against the misuse of personal data collected through identity-linked messaging.
- Collaboration with Platforms: Messaging services should work with regulators to implement secure authentication without compromising usability or safety checks.
- Monitoring SIM fraud at the source: Enforcement against illicit SIM provisioning targets the supply of fraudulent accounts, rather than merely forcing criminals to change their methods.
- Continuous Review and Feedback: Policymaking needs to keep pace with real-life difficulties and new inventions in technology.
Conclusion
India's decision to impose SIM-linkage regulations on messaging apps is a major step towards preventing cybercrime at its source. Although immediate effects, like the six-hour logout requirement for WhatsApp Web, may inconvenience users, they serve a bigger goal: developing a more secure and trustworthy digital communication environment.
Securing the communication that links millions of people is vital as India becomes more and more digital. Through a combination of regulatory measures, technological protection, and user education, the country is headed toward a time when criminals in the cyber world will find it very difficult to operate and where consumers will be able to interact online with much more confidence and safety.
References
- https://thehackernews.com/2025/12/india-orders-messaging-apps-to-work.html
- https://indianexpress.com/article/explained/explained-sci-tech/whatsapp-web-automatic-log-out-six-hourse-reason-10394142/
- https://www.ndtv.com/india-news/explained-how-will-new-sim-binding-rule-affect-whatsapp-signal-telegram-9728710
- https://www.hindustantimes.com/india-news/no-whatsapp-without-active-sim-centre-issues-new-rules-dot-sim-binding-prevent-cyber-crimes-101764495810135.html

AI and other technologies are advancing rapidly. This has enabled the rapid spread of information, and with it, misinformation. LLMs have their advantages, but they also come with drawbacks, such as confident but inaccurate responses caused by limitations in their training data. Evidence-driven retrieval systems aim to address this issue by incorporating factual information during response generation, preventing hallucination and producing accurate responses.
What is Retrieval-Augmented Response Generation?
Evidence-driven Retrieval-Augmented Generation (RAG) is an AI framework that improves the accuracy and reliability of large language models (LLMs) by grounding them in external knowledge bases. RAG systems combine the generative power of LLMs with a dynamic information retrieval mechanism. Standard AI models rely solely on pre-trained knowledge and pattern recognition to generate text; RAG instead pulls credible, up-to-date information from external sources during response generation, combining large-scale data with reliable sources to combat misinformation. It typically follows this pattern:
- Query Identification: When misinformation is detected or a query is raised.
- Evidence Retrieval: The AI searches databases for relevant, credible evidence to support or refute the claim.
- Response Generation: Using the evidence, the system generates a fact-based response that addresses the claim.
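The three steps above can be sketched as a toy pipeline. Everything below is illustrative: the mini knowledge base is invented for the example, and real RAG systems replace the keyword-overlap scorer with dense vector retrieval and the response template with an LLM.

```python
# Toy evidence-driven RAG pipeline: keyword-overlap retrieval plus a
# template-based "generator". Real systems use embeddings and an LLM.

KNOWLEDGE_BASE = [  # hypothetical evidence documents
    "The WHO states that vaccines are rigorously tested for safety.",
    "NASA confirms the Earth is an oblate spheroid, not flat.",
    "The avalanche near Sonamarg, Kashmir occurred on January 27.",
]

def retrieve(query, corpus, top_k=1):
    """Step 2 (Evidence Retrieval): rank documents by word overlap."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate_response(claim):
    """Steps 1 and 3: take a detected claim, ground the reply in evidence."""
    evidence = retrieve(claim, KNOWLEDGE_BASE)
    return f"Claim: {claim}\nEvidence: {evidence[0]}"

print(generate_response("When did the Kashmir avalanche near Sonamarg occur?"))
```

The design point is the grounding step: the generator only ever phrases its answer around retrieved evidence, rather than free-associating from model weights.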
How is Evidence-Driven RAG the key to Fighting Misinformation?
- RAG systems can integrate the latest data, providing information on recent scientific discoveries.
- The retrieval mechanism allows RAG systems to pull specific, relevant information for each query, tailoring the response to a particular user’s needs.
- RAG systems can provide sources for their information, enhancing accountability and allowing users to verify claims.
- For queries requiring specific or specialised knowledge, RAG systems can excel where traditional models might struggle.
- By accessing a diverse range of up-to-date sources, RAG systems may offer more balanced viewpoints, unlike traditional LLMs.
Policy Implications and the Role of Regulation
With its potential to enhance content accuracy, RAG also intersects with important regulatory considerations. India has one of the largest internet user bases globally, and the challenges of managing misinformation are particularly pronounced.
- Indian regulators, such as MeitY, play a key role in guiding technology regulation. Similar to the EU's Digital Services Act, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate platforms to publish compliance reports detailing actions against misinformation. Integrating RAG systems can help ensure accurate, legally accountable content moderation.
- Collaboration among companies, policymakers, and academia is crucial for RAG adaptation, addressing local languages and cultural nuances while safeguarding free expression.
- Ethical considerations are vital to prevent social unrest, requiring transparency in RAG operations, including evidence retrieval and content classification. This balance can create a safer online environment while curbing misinformation.
Challenges and Limitations of RAG
While RAG holds significant promise, it has its challenges and limitations.
- Ensuring that RAG systems retrieve evidence only from trusted and credible sources is a key challenge.
- For RAG to be effective, users must trust the system. Sceptics of content moderation may show resistance to accepting the system’s responses.
- Generating a response too quickly may compromise the quality of the evidence, while taking too long can allow misinformation to spread unchecked.
Conclusion
Evidence-driven retrieval systems, such as Retrieval-Augmented Generation, represent a pivotal advancement in the ongoing battle against misinformation. By integrating real-time data and credible sources into AI-generated responses, RAG enhances the reliability and transparency of online content moderation. It addresses the limitations of traditional AI models and aligns with regulatory frameworks aimed at maintaining digital accountability, as seen in India and globally. However, the successful deployment of RAG requires overcoming challenges related to source credibility, user trust, and response efficiency. Collaboration between technology providers, policymakers, and academic experts can help navigate these challenges and create a safer and more accurate online environment. As digital landscapes evolve, RAG systems offer a promising path forward, ensuring that technological progress is matched by a commitment to truth and informed discourse.
References
- https://experts.illinois.edu/en/publications/evidence-driven-retrieval-augmented-response-generation-for-onlin
- https://research.ibm.com/blog/retrieval-augmented-generation-RAG
- https://medium.com/@mpuig/rag-systems-vs-traditional-language-models-a-new-era-of-ai-powered-information-retrieval-887ec31c15a0
- https://www.researchgate.net/publication/383701402_Web_Retrieval_Agents_for_Evidence-Based_Misinformation_Detection

Executive Summary:
A video clip circulating on social media allegedly shows the Hon’ble President of India, Smt. Droupadi Murmu, the TV anchor Anjana Om Kashyap, and the Hon’ble Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, promoting a medicine for diabetes. A thorough investigation by the CyberPeace Research Team found the claim to be untrue. The video was digitally edited, with original footage of these prominent figures altered to falsely suggest their endorsement of the medication. Specific discrepancies in lip movements and in the context of the clips indicated AI manipulation, and the individuals featured were actually discussing unrelated topics in the original footage. The claim that the video shows them endorsing a diabetes drug is therefore debunked: the video is an AI creation and does not reflect any genuine promotion, a conclusion also supported by AI voice detection tools.

Claims:
A video making the rounds on social media purports to show the Hon'ble President of India, Smt. Droupadi Murmu, TV anchor Anjana Om Kashyap, and the Hon'ble Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, endorsing a diabetes medicine.

Fact Check:
Upon receiving the post, we watched the video carefully and found discrepancies between the lip movements and the words we hear. The voice of the Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, also sounds suspicious, which clearly indicates signs of fabrication. In the video, the Hon'ble President of India, Smt. Droupadi Murmu, appears to endorse a medicine that cured her diabetes. We then divided the video into keyframes and reverse-searched one of the frames, which led us to a video uploaded by Aaj Tak on their official YouTube channel.

In the Aaj Tak video, which closely resembles the viral clip, the courtesy credit reads Sansad TV. Taking a cue from this, we ran keyword searches and found another video uploaded by the YouTube channel Sansad TV. That video contains no mention of any diabetes medicine; it is actually footage of the swearing-in ceremony of the Hon’ble President of India, Smt. Droupadi Murmu.

In the second part of the video, a man introduced as Dr. Abhinash Mishra allegedly invented the medicine that cures diabetes. We reverse-searched an image of this person and landed on a CNBC news website, where the same face was identified as Dr. Atul Gawande, a professor at the Harvard School of Public Health. We watched that video and found no sign of him endorsing or discussing any diabetes medicine.

We also extracted the audio from the viral video and analyzed it using the AI audio detection tool Eleven Labs, which indicated a 98% probability that the audio was created using an AI voice generation tool.

Hence, the claim made in the viral video is false and misleading. The video is digitally edited from different clips, and the audio is generated using an AI voice creation tool to mislead netizens. It is worth noting that we have previously debunked similar voice-altered news carrying bogus claims.
Conclusion:
In conclusion, the viral video claiming that the Hon'ble President of India, Smt. Droupadi Murmu, and the Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, promoted a diabetes medicine is false. The investigation found that the video is digitally edited from different clips: the clip of the Hon'ble President is taken from the oath-taking ceremony of the 15th President of India, and the man presented as Dr. Abhinash Mishra, whose footage appeared on CNBC, is actually Dr. Atul Gawande, a professor at the Harvard School of Public Health. Online users must be careful when receiving such posts and should verify them before sharing.
Claim: A video is being circulated on social media claiming to show distinguished individuals promoting a particular medicine for diabetes treatment.
Claimed on: Facebook
Fact Check: Fake & Misleading