#FactCheck - Analysis Reveals AI-Generated Anomalies in Viral ‘Russia Snow Jump’ Video
Executive Summary
A dramatic video showing several people jumping from the upper floors of a building into what appears to be thick snow has been circulating on social media, with users claiming that it captures a real incident in Russia during heavy snowfall. In the footage, individuals can be seen leaping one after another from a multi-storey structure onto a snow-covered surface below, eliciting reactions ranging from amusement to concern. The claim accompanying the video suggests that it depicts a reckless real-life episode in a snow-hit region of Russia.
A thorough analysis by CyberPeace confirmed that the video is not a real-world recording but an AI-generated creation. The footage exhibits multiple signs of synthetic media, including unnatural human movements, inconsistent physics, blurred or distorted edges, and a glossy, computer-rendered appearance. In some frames, a partial watermark from an AI video generation tool is visible. Further verification using the Hive Moderation AI-detection platform indicated that 98.7% of the video is AI-generated, confirming that the clip is entirely digitally created and does not depict any actual incident in Russia.
Claim:
The video was shared on social media by an X (formerly Twitter) user ‘Report Minds’ on January 25, claiming it showed a real-life event in Russia. The post caption read: "People jumping off from a building during serious snow in Russia. This is funny, how they jumped from a storey building. Those kids shouldn't be trying this. It's dangerous." Here is the link to the post, and below is a screenshot.

Fact Check:
The Desk used the InVid tool to extract keyframes from the viral video and ran a reverse image search, which revealed multiple instances of the same video shared by other users with similar claims. Close visual examination revealed several anomalies, including unnatural human movements, blurred and distorted sections, a glossy, digitally rendered appearance, and a partially concealed logo of the AI video generation tool ‘Sora AI’ visible in certain frames. Screenshots highlighting these inconsistencies were captured during the research.
- https://x.com/DailyLoud/status/2015107152772297086?s=20
- https://x.com/75secondes/status/2015134928745164848?s=20
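Tools like InVid extract keyframes internally before the reverse image search step. As a rough illustration of the underlying idea (not InVid's actual implementation), a minimal sketch of scene-change keyframe selection by frame differencing, using synthetic frame data:

```python
# Minimal sketch of keyframe extraction via frame differencing,
# the core heuristic behind keyframe tools. Frames are represented
# here as flat lists of pixel intensities; real tools decode video.

def mean_abs_diff(a, b):
    """Average absolute pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def extract_keyframes(frames, threshold=30.0):
    """Keep the first frame, then any frame that differs enough
    from the last kept frame (a simple scene-change heuristic)."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[keyframes[-1]]) > threshold:
            keyframes.append(i)
    return keyframes  # indices of selected keyframes

# Illustrative synthetic "video": three near-identical frames,
# then an abrupt scene change.
frames = [
    [10, 10, 10, 10],
    [12, 11, 10, 13],
    [11, 10, 12, 10],
    [200, 210, 205, 198],  # abrupt change -> selected as keyframe
]
print(extract_keyframes(frames))  # -> [0, 3]
```

The selected frames are the ones a fact-checker would then feed into a reverse image search.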


The video was analyzed on Hive Moderation, an AI-detection platform, which confirmed that 98.7% of the content is AI-generated.
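Detectors of this kind typically return per-class confidence scores that are then read off as a percentage. Hive Moderation does offer an API, but the response shape below is a purely hypothetical illustration (the field names `classes`, `class`, and `score` are assumptions, not Hive's documented schema):

```python
import json

# Hypothetical JSON response from an AI-content detector; the
# field names below are illustrative assumptions, not Hive
# Moderation's actual schema.
response_text = """
{
  "classes": [
    {"class": "ai_generated", "score": 0.987},
    {"class": "not_ai_generated", "score": 0.013}
  ]
}
"""

def ai_generated_score(payload):
    """Return the detector's AI-generated probability as a percentage."""
    scores = {c["class"]: c["score"] for c in payload["classes"]}
    return round(scores.get("ai_generated", 0.0) * 100, 1)

payload = json.loads(response_text)
print(f"{ai_generated_score(payload)}% AI-generated")  # -> 98.7% AI-generated
```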

Conclusion:
The viral video showing people jumping off a building into snow, claimed to depict a real incident in Russia, is entirely AI-generated. Social media users presented the digitally created footage as if it were real, making the claim false and misleading.

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question we arrive at is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging-technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is proceeding at a rapid pace: AI is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market is expected to grow by at least 120% year-over-year. These figures are frequently cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes) in arguments for regulation. Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. The General Data Protection Regulation (GDPR) offers a precedent: its global influence on data privacy laws triggered a domino effect of privacy regulations worldwide. This precedent underscores the EU's proactive, population-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details rules under the AI Act for providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drafting of the code, a process chaired by independent experts that involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU’s General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the AI Act, the code outlines rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risk under Article 55. The AI Act is grounded in product-safety legislation and relies on harmonised standards to support compliance. These harmonised standards are sets of operational rules established by the European Standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society and trade unions are translating the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act obligates developers, deployers and users of AI to meet mandates for transparency, risk management and compliance mechanisms.
The Code of Practice for General Purpose AI
Popular GPAI applications include ChatGPT and other foundation models such as Microsoft's Copilot, Google's BERT and Meta AI's Llama, all of which are under constant development and upgradation. The 36-page draft Code of Practice for General Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to enable greater transparency on what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the Code is a welcome step, the compliance burden it places on MSMEs and startups could hinder innovation, and its voluntary nature raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation; they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/
The world of Artificial Intelligence is entering a new phase with the rise of Agentic AI, often described as the third wave of AI evolution. Unlike earlier systems that relied on static models (which learn only from the data they are fed) and reactive outputs, Agentic AI introduces intelligent agents that can make decisions, take initiative, and act autonomously in real time. These systems are designed to require minimal human oversight while actively collaborating and learning continuously. Such capabilities signal a coming shift in how Indian businesses can function: Agentic AI is capable of streamlining operations, personalising services, and driving innovation at scale.
India and Agentic AI
India is making continuous strides in the AI revolution, deliberating on governance frameworks while adapting as it builds. At Microsoft's Pinnacle 2025 summit in Hyderabad, India's pivotal role in shaping the future of Agentic AI was brought into the spotlight. With over 17 million developers on GitHub and ambitions to become the world's largest developer community by 2028, India's tech talent is gearing up to lead global AI innovation. The showcase of Microsoft's Azure AI Foundry at the summit also highlighted the country's growing influence in the AI landscape.
Indian companies are actively integrating Agentic AI into their operations to enhance efficiency and customer experience. Zomato is leveraging AI agents to optimise delivery logistics, ensuring timely and efficient service. Infosys has developed AI-driven copilots to assist developers in code generation, reducing development time, requiring fewer people to work on a particular project, and improving software quality.
As per a report by Deloitte, the Indian AI market is projected to grow to potentially $20 billion by 2028. However, this growth is accompanied by significant challenges: 92% of Indian executives identify security concerns as the primary obstacle to responsible AI usage, and regulatory uncertainties and privacy risks associated with sensitive data were also highlighted.
Challenges in Adoption
Despite the enthusiasm, several barriers hinder the widespread adoption of Agentic AI in India:
- Skills Gap: While the AI workforce is expected to grow to 1.25 million by 2027, the current growth rate of 13% is considered insufficient to meet market demand.
- Data Infrastructure: Effective AI systems require robust, structured, and accessible datasets. Many organisations lack the necessary data maturity, leading to flawed AI outputs and decision-making failures.
- Trust and Governance: Building trust in AI systems is crucial. Concerns over data privacy, ethical usage, and regulatory compliance require robust governance frameworks to ensure the adoption of AI in a responsible manner.
- Looming fear of job loss: As AI takes on more sophisticated roles, a general hesitancy over the displacement of human labour may stand in the way of adoption.
- Outsourcing: Currently, most companies prefer outsourcing or buying AI solutions rather than building them in-house, which limits their ability to adapt those solutions to evolving needs.
The Road Ahead
To fully realise the potential of Agentic AI, India must address the following challenges:
- Training the Workforce: Initiatives and workshops tailored for employees that provide AI training can prove to be helpful. Some relevant examples are Microsoft’s commitment to provide AI training to 2 million individuals by 2025 and Infosys's in-house AI training programs.
- Data Readiness: Investing in modern data infrastructure and promoting data literacy are essential to improve data quality and accessibility.
- Establishing Governance Frameworks: Developing clear regulatory guidelines and ethical standards will foster trust and facilitate responsible AI adoption. Efforts like the IndiaAI Mission, which aim to keep governance apace with evolving technology, are imperative.
Agentic AI holds untapped potential to transform India's business landscape when coupled with innovation and a focus on quality that enhances global competitiveness. By proactively addressing the existing challenges and investing in in-house development, India can realise this potential, lay the foundation for a new technological revolution, and solidify its position as a global AI leader.
References
- https://economictimes.indiatimes.com/tech/artificial-intelligence/india-facing-shortage-of-agentic-ai-professionals-amid-surge-in-demand/articleshow/120651512.cms?from=mdr
- https://economictimes.indiatimes.com/tech/artificial-intelligence/india-a-global-leader-in-agentic-ai-adoption-deloitte-report/articleshow/119906474.cms?from=mdr
- https://inc42.com/features/from-zomato-to-infosys-why-indias-biggest-companies-are-betting-on-agentic-ai/
- https://www.hindustantimes.com/india-news/agentic-ai-next-big-leap-in-workplace-automation-101742548406693.html
- https://www.deloitte.com/in/en/about/press-room/india-rides-the-agentic-ai-wave.html
- https://www.businesstoday.in/tech-today/news/story/ais-next-chapter-starts-in-india-microsoft-champions-agentic-ai-at-pinnacle-2025-474286-2025-05-01
- https://www.hindustantimes.com/opinion/calm-before-ai-storm-a-moment-to-prepare-101746110985736.html
- https://www.financialexpress.com/life/technology/why-agentic-ai-is-the-next-big-thing/3828357/

Introduction
Regulatory agencies throughout Europe have stepped up their monitoring of digital communication platforms because of the increased use of Artificial Intelligence in the digital domain. Messaging services have evolved beyond simple messaging systems: they now serve as gateways for AI services, business tools and digital marketplaces. In light of this evolution, Italy's competition authority has taken action against Meta Platforms, ordering it to cease activities on WhatsApp that are deemed to restrict the ability of other companies to sell AI-based chatbots. This action highlights concerns surrounding gatekeeping power, market foreclosure and innovation suppression. The proceeding also raises questions about how competition law applies to dominant digital platforms that leverage their own ecosystems to promote their own AI products to the detriment of competitors.
Background of the Case
In December 2025, Italy’s competition authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), ordered Meta Platforms to suspend certain contractual terms governing WhatsApp. These terms allegedly prevented or restricted the operation of third-party AI chatbots on WhatsApp’s platform.
The decision was issued as an interim measure during an ongoing antitrust investigation. According to the AGCM, the disputed terms risked excluding competing AI chatbot providers from accessing a critical digital channel, thereby distorting competition and harming consumer choice.
Why WhatsApp Matters as a Digital Gateway
WhatsApp occupies a unique position within the European digital landscape. With hundreds of millions of users throughout the European Union, it is an integral part of the communication infrastructure supporting exchanges between individual consumers and companies as well as between companies and their service providers. AI chatbot developers depend heavily on WhatsApp because it lets them connect directly with consumers in real time, which is critical to the success of their business offerings.
In the Italian regulator's view, a corporation that controls access to such a popular platform wields tremendous influence over innovation in that market, since it effectively operates as a gatekeeper between the company creating an innovative service and the consumer using it. If Meta is permitted to block competing AI chatbot developers while promoting its own AI offerings, those competitors will likely be unable to market and distribute their innovative products at sufficient scale to remain competitive.
Alleged Abuse of Dominant Position
Under EU and national competition law, companies holding a dominant market position bear a special responsibility not to distort competition. The AGCM’s concern is that Meta may have abused WhatsApp’s dominance by:
- Restricting market access for rival AI chatbot providers
- Limiting technical development by preventing interoperability
- Strengthening Meta’s own AI ecosystem at the expense of competitors
Such conduct, if proven, could amount to an abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU). Importantly, the authority emphasised that even contractual terms—rather than explicit bans—can have exclusionary effects when imposed by a dominant platform.
Meta’s Response and Infrastructure Argument
Meta has openly condemned the Italian ruling as “fundamentally flawed,” arguing that third-party AI chatbots impose a major economic burden on its infrastructure and risk the performance, safety, and user experience of WhatsApp.
Although the protection of infrastructure is a valid issue of concern, competition authorities commonly look at whether the justifications for such restrictions are appropriate and non-discriminatory. One of the principal legal issues is whether the restrictions imposed by Meta were applied in a uniform manner or whether they were selectively imposed in favour of Meta's AI services. If the restrictions are asymmetrical in application, they may be viewed as anti-competitive rather than as legitimate technical safeguards.
Link to the EU’s Digital Markets Framework
The Italian case fits into a wider EU effort to regulate the actions of large technology companies through prior (ex-ante) regulation, as embodied in the Digital Markets Act (DMA). The DMA imposes obligations on designated gatekeepers to make their core platform services available to third parties on a non-discriminatory basis, in order to maintain equitable access and interoperability.
While the Italian case was brought under national competition law, its philosophy is consistent with that of the DMA: dominant digital platforms should not use their control over core products and services to prevent other companies from innovating. Some EU national regulators are increasingly willing to take swift action through interim measures rather than wait years for final decisions.
Implications for AI Developers and Platforms
For developers of AI-based chatbots, the Italian order signals that regulators regard competitive access to messaging platforms as an important concern. For large incumbents integrating AI into their established messaging platforms, it serves as a warning that they will not be shielded from competition law.
Additionally, the overall case showcases the growing consensus amongst regulatory agencies regarding the role of competition in the development of AI. If a handful of large companies are allowed to control both the infrastructure and the AI technology being operated on top of that infrastructure, the result will likely be the development of closed ecosystems that eliminate or greatly reduce the potential for technology diversity.
Conclusion
Italy's move against Meta highlights a significant intersection between competition law and artificial intelligence. In targeting WhatsApp's restrictive terms, the Italian antitrust authority has reinforced the principle that digital gatekeepers cannot use contractual methods to block competitors' access to their platforms. As AI becomes a larger part of day-to-day digital services, regulators will likely continue to increase their scrutiny of platform behaviour. The outcome of this investigation will shape not just Meta's AI strategy, but also set a baseline for how European regulators balance innovation, competition and consumer choice in an increasingly AI-driven digital marketplace.
References
- https://www.reuters.com/sustainability/boards-policy-regulation/italy-watchdog-orders-meta-halt-whatsapp-terms-barring-rival-ai-chatbots-2025-12-24/
- https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/
- https://www.communicationstoday.co.in/italy-watchdog-orders-meta-to-halt-whatsapp-terms-barring-rival-ai-chatbots/
- https://www.techinasia.com/news/italy-watchdog-orders-meta-halt-whatsapp-terms-ai-bot