#FactCheck - Viral Video Claiming Snowfall Near Ambience Mall in Gurugram Is Misleading
A video circulating widely on social media claims to show snowfall near Ambience Mall in Gurugram, Haryana. The clip is being shared with assertions that Gurugram witnessed snowfall for the first time in its history amid a severe cold wave in January 2026. However, research by the CyberPeace Foundation has found the claim to be misleading. Our verification reveals that the viral video is not recent and has been available online since March 2023.
The Claim
On 14 January 2026, a Facebook user shared the video with the caption, “Something truly unbelievable happened today — Gurgaon witnessed snowfall for the first time in its history!” Through this post, the user implied that the visuals showed snowfall near Ambience Mall during the ongoing cold wave. Link and screenshot:
- https://www.instagram.com/reel/DTfS9X9DyBo/?utm_source=ig_embed&ig_rid=239ddaf7-ec53-4b1d-8f3b-a5e39540b3ee
- https://archive.ph/JVjHf

Fact Check:
To verify the claim, we conducted a detailed search using relevant keywords but found no credible media reports or official statements to support it. Although Gurugram’s temperature dropped to 0.6 degrees Celsius amid an IMD-issued cold wave warning, there is no evidence to suggest that the city experienced snowfall or hail. As of January 16, 2026, weather records and official sources confirm that no such weather event occurred in Gurugram.
A reverse image search of keyframes extracted from the viral clip traced the same footage to a video uploaded on the YouTube channel Crazy Tube on March 21, 2023. This establishes that the video has been in circulation for nearly three years. In the original upload, the person filming clearly mentions that the visuals were recorded near a toll plaza, further indicating that the clip is unrelated to the recent weather conditions in Gurugram. Link and screenshot:
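Fact-checkers typically reproduce this step by sampling frames from the clip and submitting each one to a reverse image search engine. Below is a minimal sketch of that sampling logic in Python; the file name `viral_clip.mp4` is a placeholder, and the OpenCV portion assumes `opencv-python` is installed.

```python
def keyframe_indices(total_frames: int, fps: float, every_sec: float = 2.0) -> list[int]:
    """Return frame indices to sample, one roughly every `every_sec` seconds."""
    step = max(1, int(fps * every_sec))
    return list(range(0, total_frames, step))


def extract_keyframes(path: str, every_sec: float = 2.0) -> list[str]:
    """Save sampled frames as PNG files and return their filenames."""
    import cv2  # requires `pip install opencv-python`; imported lazily

    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in keyframe_indices(n, fps, every_sec):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if ok:
            # Each saved frame can be uploaded to a reverse image search service.
            name = f"frame_{i:05d}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
    cap.release()
    return saved
```

The extracted frames are then checked against image search engines; any match dated earlier than the claim (here, March 2023) disproves the video's supposed recency.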

We also came across an X (formerly Twitter) post from March 19, 2023, which featured images similar to those seen in the viral video. The post described the visuals as being from a hailstorm in Gurugram, indicating that the content predates the current weather conditions and is unrelated to the recent cold wave.

Conclusion:
The video is old: it predates January 2026 and has been online since at least March 2023. While Gurugram recorded a low of 0.6 degrees Celsius amid an IMD cold wave warning, the city had not experienced snowfall or hail as of 16 January 2026. News reports from March 2023 confirm heavy rain and hail in Delhi and adjoining areas, including parts of Gurugram, but there is no evidence of snowfall in January 2026. Hence, the claim made in the post is MISLEADING.
Introduction
Cybercrime transcends national borders and is growing at a rapid pace. According to Kaspersky, cybercrime is criminal activity that either targets or operates through a computer, a computer network or a networked device. In an era of globalisation and a digitally interconnected world, international cybercrime has increased. Cybercrime may be pursued for personal or political objectives, and some attacks aim to sabotage networks for motives other than financial gain; they may be carried out by organisations or individuals. Many cybercriminals operate without regard to national boundaries and are considered a global threat. They are often highly technically adept and employ cutting-edge strategies.
The 2023 Global Risks Report points to worsening geopolitical tensions that have increased advanced persistent threats (APTs), which are evolving and becoming ubiquitous worldwide. Christine Lagarde, the president of the European Central Bank and former head of the International Monetary Fund (IMF), cautioned in 2020 that a cyber attack could trigger a severe economic crisis. Emerging technologies and hostile actors have grown at an exceptional pace over the last few decades, and cybercrime has risen up the agenda of nation-states, businesses and global organisations, according to the World Economic Forum (WEF).
The Role of the United Nations Ad Hoc Committee
The United Nations (UN) has undertaken a major initiative to develop a new and more inclusive approach to addressing cybercrime and is presently negotiating a new convention on the subject. The convention seeks to enhance global collaboration in the fight against cybercrime. In December 2019, the UN General Assembly passed Resolution 74/247, which established an open-ended ad hoc committee (AHC) entrusted with drafting a comprehensive global convention on countering the use of information and communication technologies (ICTs) for criminal purposes.
The cybercrime treaty, if adopted by the UN General Assembly (UNGA), would be the first binding UN instrument on a cyber issue. It could become a crucial international legal framework for global collaboration on prosecuting cybercriminals and on preventing and investigating cybercrime. There have been numerous other national and international measures to counter the criminal use of ICTs, but the UN treaty is intended both to tackle cybercrime and to enhance partnership and coordination between states. The Ad Hoc Committee's negotiations with member states are expected to conclude by early 2024, with the treaty to be adopted during the UNGA in September 2024.
However, the negotiations are proving complex. Some countries endorse a treaty that criminalises both cyber-dependent offences and a broad spectrum of cyber-enabled crimes. Proposals from Russia, Belarus, China, Nicaragua and Cuba have included highly controversial recommendations. India has backed criminalising offences associated with 'cyber terrorism', and its suggestions to the UN Ad Hoc Committee are in line with its domestic regulatory strategy. The US, Japan, the UK, European Union (EU) member states and Australia, by contrast, want the treaty confined to core cyber-dependent crimes.
Nonetheless, although a new treaty could become a practical instrument in the international fight against cybercrime, it must align with the existing international bodies and frameworks that occupy similar ground. The convention would complement the Budapest Convention on Cybercrime, which was drafted in the late 1990s and signed in Budapest in 2001.
Conclusion
According to Cybersecurity Ventures, global cybercrime costs are expected to grow by 15 per cent per year over the next five years, reaching USD 10.5 trillion annually by 2025, up from USD 3 trillion in 2015. The UN cybercrime convention aims to be genuinely global in scope. Accordingly, next-generation tools built on state-of-the-art technology are needed to deal with new forms of cybercrime and cyber warfare. The rift that cybercrime has opened between nation-states is beyond calculation; it could severely damage the global economy and threaten countries' political interests. Strengthening collaboration between public and private institutions and law enforcement mechanisms is therefore essential, and an appropriately designed policy is the need of the hour.
References
- https://www.kaspersky.co.in/resource-center/threats/what-is-cybercrime
- https://www.cyberpeace.org/
- https://www.interpol.int/en/Crimes/Cybercrime
- https://www.bizzbuzz.news/bizz-talk/ransomware-attacks-on-startups-msmes-on-the-rise-in-india-cyberpeace-foundation-1261320
- https://www.financialexpress.com/business/digital-transformation-cyberpeace-foundation-receives-4-million-google-org-grant-3282515/
- https://www.chathamhouse.org/2023/08/what-un-cybercrime-treaty-and-why-does-it-matter
- https://www.weforum.org/agenda/2023/01/global-rules-crack-down-cybercrime/
- https://www.weforum.org/publications/global-risks-report-2023/
- https://www.imf.org/external/pubs/ft/fandd/2021/03/global-cyber-threat-to-financial-systems-maurer.htm
- https://www.eff.org/issues/un-cybercrime-treaty#:~:text=The%20United%20Nations%20is%20currently,of%20billions%20of%20people%20worldwide.
- https://cybersecurityventures.com/hackerpocalypse-cybercrime-report-2016/
- https://www.coe.int/en/web/cybercrime/the-budapest-convention
- https://economictimes.indiatimes.com/tech/technology/counter-use-of-technology-for-cybercrime-india-tells-un-ad-hoc-group/articleshow/92237908.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst
- https://consultation.dpmc.govt.nz/un-cybercrime-convention/principlesandobjectives/supporting_documents/Background.pdf
- https://unric.org/en/a-un-treaty-on-cybercrime-en-route/

Introduction
On 2 December 2024, the Kerala High Court banned the use of mobile phones during office hours, issuing an Official Memorandum titled 'Indulgence In Online Gaming And Watching Social Media Content During Office Hours'. The memorandum, issued by the Registrar General, prohibits mobile phone usage for personal activities such as gaming and social media during working hours. It aims to curb productivity losses, reinforce professional discipline and ensure the smooth functioning of office operations.
The memorandum reiterated its earlier notices from 2009 and 2013, where the High Court had emphasised that violations would be taken seriously. This reflects the High Court’s commitment to maintaining efficiency and professionalism in the workplace. According to the memorandum, controlling officers will monitor the staff for violations and strict actions will be taken if the rules are flouted.
Background
The circumstances that led to the Kerala HC’s decision are as follows: staff engaged in playing online games, browsing social media, watching videos or movies and even engaging in online shopping or trading during work hours, excluding the allocated lunch recess (as per the memorandum).
As mentioned earlier, this memorandum is not the first of its kind: similar directives were issued in 2009 and 2013 to address poor productivity rooted in staff behaviour. The present memorandum differs from its predecessors in that it specifically addresses the rise in mobile-based distractions such as online gaming and trading. It outlines no exceptions for senior officials with designated responsibilities, emphasising universal adherence across all levels of the workforce.
According to Cell Phones at Workplace Statistics, around 97% of workers use their smartphones during work hours, mixing personal and job-related activities, and more than 55% of managers say that cell phones are a major cause of lower employee productivity.
Therefore, it can be safely concluded that even though smartphones have become indispensable tools for communication, their misuse has wider implications for overall organisational productivity.
CyberPeace Outlook
The Kerala High Court's decision to restrict personal mobile phone usage during work hours underscores the importance of fostering a disciplined and focused workplace environment. While smartphones are vital for communication, their misuse poses significant productivity challenges. Some proactive steps that employers can take are implementing clear policies, conducting regular training sessions and promoting a culture of accountability. Balancing digital freedom and professional responsibility is the key to ensuring that technological tools serve as enablers of efficiency rather than distractions in the workplace.
References
- https://www.thehindu.com/sci-tech/technology/kerala-high-court-issues-memo-banning-staff-from-gaming-and-social-media-during-work-hours/article68963949.ece
- https://timesofindia.indiatimes.com/technology/tech-news/kerala-high-court-bans-mobile-gaming-and-social-media-for-staff-during-work-hours/articleshow/116101149.cms
- https://images.assettype.com/barandbench/2024-12-05/1hiq8ffv/Kerala_High_Court_OM.pdf
- https://www.coolest-gadgets.com/cell-phones-at-workplace-statistics/

Introduction: Why These Amendments Have Been Proposed
The proposed changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic information and deepfakes. These reforms stem from the pressing need to govern risks within the digital ecosystem rather than from routine rule-making.
The Emergence of the Digital Menace
Generative AI tools have made it possible in recent years to produce highly realistic images, videos, audio and text. Such synthetic media have been abused to portray people in situations they were never in or making statements they never made. The generative AI market is expected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400 billion by 2031. Tight regulatory controls are therefore necessary to curb the prevalence of harm in the Indian digital sphere.
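As a quick sanity check on those figures, compound annual growth can be unwound to recover the implied starting market size. A short sketch in Python; the 2025 base value below is derived from the cited endpoint, not stated in the source:

```python
def cagr_project(base: float, rate: float, years: int) -> float:
    """Project a value forward under compound annual growth."""
    return base * (1 + rate) ** years


# Unwind the cited endpoint: US$400bn in 2031 at 37.57% CAGR over the
# six years 2025-2031 implies a 2025 base of roughly US$59bn.
implied_2025_base = 400.0 / (1 + 0.3757) ** 6

# Projecting that base forward recovers the cited 2031 figure.
assert abs(cagr_project(implied_2025_base, 0.3757, 6) - 400.0) < 1e-9
```

The same `cagr_project` helper can check any growth claim of this form, given a base value, a rate and a horizon.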
The Gap in Law and Institution
The IT Rules, 2021 did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation and violation of privacy, it imposed no explicit obligations on intermediaries regarding synthetic media. This left a loophole in enforcement, particularly since AI-generated content could circumvent older moderation systems. The amendments bring India closer to international standards such as the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting those requirements to India's constitutional and digital-ecosystem needs.
II. Explanation of the Amendments
The 2025 amendments introduce five key changes to the existing IT Rules framework, addressing different areas of synthetic media regulation.
A. Definitional Clarification: Introducing 'Synthetically Generated Information'
Rule 2(1)(wa) Amendment:
The amendments provide an all-inclusive definition of 'synthetically generated information': information that is created, generated, modified or altered using a computer resource in a way that it may reasonably be perceived as genuine. This definition is intentionally broad; it is not limited to deepfakes in the strict sense but covers any synthetic media that has undergone algorithmic manipulation to appear authentic.
Expansion of Legal Scope:
Rule 2(1A) also makes clear that any reference to 'information' in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2) and Rule 4(4), should be understood to include synthetically generated information. This is a pivotal interpretative safeguard: intermediaries cannot argue that synthetic versions of illegal material fall outside the regulation merely because they are algorithmic creations rather than depictions of what actually occurred.
B. Safe Harbour Protection and Content Removal Requirements
Rule 3(1)(b) Amendment - Safe Harbour Clarification:
The amendments add a proviso to Rule 3(1)(b) clarifying that removal or disabling of access to synthetically generated information (or any information falling within the specified categories) by an intermediary, acting in good faith as part of reasonable efforts or upon receipt of a complaint, shall not be considered a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is significant because it shields intermediaries from liability when they remove synthetic content ahead of a court order or government notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable Synthetic Content Creation
The amendments establish a new due diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must:
  - be visibly displayed, or made audible, in a prominent manner on or within the synthetically generated information;
  - cover at least 10% of the surface of the visual display or, for audio content, play during the initial 10% of its duration; and
  - make it immediately identifiable that the information was synthetically created, generated, modified or altered using the computer resource of the intermediary.
- The intermediary must not enable modification, suppression or removal of such label, permanent unique metadata or identifier, by whatever name called.
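The 10% coverage threshold can be made concrete with a small calculation. The sketch below shows one plausible reading of the requirement; the function names and the example frame size are illustrative, not drawn from the rules themselves:

```python
def min_visual_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area, in pixels, to cover 10% of the visual display."""
    return 0.10 * width_px * height_px


def audio_label_window(duration_sec: float) -> tuple[float, float]:
    """Time window (start, end) for a disclosure in the initial 10% of audio playback."""
    return (0.0, 0.10 * duration_sec)


# Example: a 1920x1080 video frame needs a label of at least 207,360 px^2,
# e.g. a full-width banner 1920 px wide and about 108 px tall.
```

In practice a platform would translate these figures into overlay dimensions for video and a mandatory spoken or tonal disclosure at the start of audio clips.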
D. Significant Social Media Intermediaries: Pre-Publication Verification Obligations
The amendments introduce a three-step verification mechanism under Rule 4(1A) that Significant Social Media Intermediaries (SSMIs) must follow before content is displayed, uploaded or published on their computer resources.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically created. This places the first burden on users.
Step 2 - Technical Verification: To check whether the user's declaration is accurate, SSMIs must deploy reasonable technical measures, such as automated tools or other mechanisms. This duty is contextual, depending on the nature, format and source of the content; it does not let intermediaries off the hook, but recognises that not every type of content can be verified against the same standard.
Step 3 - Prominent Labelling: Where the synthetic origin is confirmed by user declaration or technical verification, SSMIs must display a notice or label prominently so that users see it before publication.
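The three steps above can be sketched as a simple decision pipeline. This is an illustrative model of the rule's logic, not an implementation mandated by the amendments; the detector flag stands in for whatever 'reasonable technical measures' a platform actually deploys:

```python
from dataclasses import dataclass


@dataclass
class ModerationResult:
    synthetic: bool        # outcome of declaration plus technical verification
    label_required: bool   # whether a prominent label must precede publication


def pre_publication_check(user_declared_synthetic: bool,
                          detector_flags_synthetic: bool) -> ModerationResult:
    # Step 1: user declaration; Step 2: technical verification of that declaration.
    synthetic = user_declared_synthetic or detector_flags_synthetic
    # Step 3: if either path confirms synthetic origin, a prominent label
    # is required before the content may be displayed, uploaded or published.
    return ModerationResult(synthetic=synthetic, label_required=synthetic)
```

A declaration of synthetic origin, or a positive detection, each lead to mandatory labelling; only content that clears both checks may be published unlabelled.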
The amendments strengthen accountability by providing that an intermediary will be deemed to have failed its due diligence obligations where it is established that it knowingly permitted, promoted or failed to act on synthetically generated information in contravention of these requirements. This introduces a knowledge element, so that liability turns on knowing non-compliance rather than on inadvertent lapses.
An explanation clause makes clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations and must ensure that no synthetic content is published without an adequate declaration or label. This removes any ambiguity about the intermediary's role in verifying declarations.
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability
The amendments strike a commendable balance between two extreme regulatory postures, neither prohibiting synthetic media outright nor leaving it unregulated. They recognise legitimate uses of synthetic media in entertainment, education, research and artistic expression by adopting transparency and traceability mandates that preserve innovation while ensuring accountability.
- Explicit Intermediary Liability and a Knowledge-Based (Scienter) Standard
Rule 4(1A) contains a significant deeming provision: where an intermediary knowingly permits or fails to act on synthetic content in violation of the rules, it is deemed to have failed its due diligence obligations. This closes the loophole of wilfully lax supervision, under which intermediaries could claim ignorance of violations. At the same time, the scienter standard encourages genuine investment in detection tools and moderation mechanisms, because it protects platforms with sound systems even when those tools occasionally miss violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of 'synthetically generated information' and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the ambiguity of the previous regulatory framework. Instead of forcing readers to parse conflicting case law or regulatory direction, the amendments set specific definitional limits. The deliberately broad formulation (artificially or algorithmically created, generated, modified or altered) ensures the framework cannot be evaded through semantic games over what counts as 'real' synthetic content versus a minor algorithmic alteration.
- Liability Protection that Encourages Proactive Moderation
The safe harbour clarification in the Rule 3(1)(b) amendment explicitly protects intermediaries that voluntarily remove synthetic content without a court order or government notification. This is an important incentive: it prompts platforms to adopt robust self-regulation. Without such protection, platforms might rationally adopt a passive compliance posture, removing content only under pressure from an external authority; with it, they can act early and protect users more effectively against dangerous synthetic media.
IV. Conclusion
The proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules offer a structured, transparent and accountable approach to the growing problem of synthetic media and deepfakes. They address long-standing regulatory and interpretative gaps: what counts as synthetically generated information, the scope of intermediary liability, and the mandatory labelling and metadata requirements. The safe harbour protection encourages proactive moderation, while the scienter-based liability rule prevents intermediaries from escaping responsibility when they knowingly tolerate non-compliance. Pre-publication verification by Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, bring clarity through precise definitions, promote responsible platform conduct, and align India with emerging international standards in synthetic media regulation, enhancing the authenticity, user protection and transparency of India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide