A dramatic video showing several people jumping from the upper floors of a building into what appears to be thick snow has been circulating on social media, with users claiming that it captures a real incident in Russia during heavy snowfall. In the footage, individuals can be seen leaping one after another from a multi-storey structure onto a snow-covered surface below, eliciting reactions ranging from amusement to concern. The claim accompanying the video suggests that it depicts a reckless real-life episode in a snow-hit region of Russia.
A thorough analysis by CyberPeace confirmed that the video is not a real-world recording but an AI-generated creation. The footage exhibits multiple signs of synthetic media, including unnatural human movements, inconsistent physics, blurred or distorted edges, and a glossy, computer-rendered appearance. In some frames, a partial watermark from an AI video generation tool is visible. Further verification using the Hive Moderation AI-detection platform indicated that 98.7% of the video is AI-generated, confirming that the clip is entirely digitally created and does not depict any actual incident in Russia.
Claim:
The video was shared on social media by an X (formerly Twitter) user ‘Report Minds’ on January 25, claiming it showed a real-life event in Russia. The post caption read: "People jumping off from a building during serious snow in Russia. This is funny, how they jumped from a storey building. Those kids shouldn't be trying this. It's dangerous."
The Desk used the InVid tool to extract keyframes from the viral video and conducted a reverse image search, which revealed multiple instances of the same video shared by other users with similar claims. Upon close visual examination, several anomalies were observed, including unnatural human movements, blurred and distorted sections, a glossy, digitally-rendered appearance, and a partially concealed logo of the AI video generation tool ‘Sora AI’ visible in certain frames. Screenshots highlighting these inconsistencies were captured during the research.
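Reverse image search over extracted keyframes typically relies on perceptual hashing: visually similar frames hash to nearly identical bit strings. The sketch below is a minimal average-hash (aHash) illustration of that idea in pure Python; it is not the implementation used by InVid or any search engine, and the two "frames" are hypothetical synthetic data.

```python
# Minimal average-hash (aHash) sketch: the rough idea behind matching
# near-duplicate keyframes in a reverse image search index.
# Illustration only -- not InVid's or any search engine's actual code.

def average_hash(pixels):
    """Compute a 64-bit perceptual hash from an 8x8 grayscale image,
    given as a flat list of 64 intensity values (0-255)."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means near-duplicate images."""
    return bin(h1 ^ h2).count("1")

# Two hypothetical 8x8 frames: identical gradient, slight brightness shift.
frame_a = list(range(64))
frame_b = [min(255, p + 2) for p in frame_a]

d = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(d)  # a small distance indicates the frames likely match
```

A threshold on the Hamming distance (commonly a handful of bits out of 64) decides whether two frames count as the "same" image, which is why re-uploads and minor re-encodes of a viral clip are still found by the search.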
The video was analyzed on Hive Moderation, an AI-detection platform, which confirmed that 98.7% of the content is AI-generated.
The viral video showing people jumping off a building into snow, claimed to depict a real incident in Russia, is entirely AI-generated. Social media users who shared it presented the digitally created footage as if it were real, making the claim false and misleading.
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos and images, and there has been an alarming increase in their use for sextortion.
What are AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. In addition, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.
The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law enforcement agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and foster cooperation in developing robust content moderation systems and reporting mechanisms.
Legal frameworks must also be strengthened to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers and the extortion demands made of them are a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
Google India has announced sachet loans on the Google Pay application to help small businesses in the country. According to Google India, merchants in India often need smaller loans, which is why the tech giant launched sachet loans on the GPay application. The company will provide loans to small businesses, repayable in easy instalments. To provide the loan services, Google Pay has partnered with DMI Finance. The move was announced at Google for India 2023, the flagship event at which the big tech company unveils its India-focused initiatives.
What is a Sachet Loan?
The loan system is the primary backbone of the global banking system. With the massive transition towards digital transactions and banking operations, many online platforms have emerged. With the advent of QR codes, the Unified Payments Interface (UPI) has been widely used by Indians for making small payments. Against this backdrop, sachet loans have emerged as well. Sachet loans are essentially small-ticket loans ranging from Rs 10,000 to Rs 1 lakh, with repayment tenures between 7 days and 12 months. This nano-credit addresses immediate financial needs and is designed for swift approval and disbursement. Sachet loans are one of the most sought-after loan forms in the Western world. Their ease of accessibility and easy repayment options have made them a successful form of money lending, which in turn has sparked the interest of the tech giant Google to execute similar operations in India.
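To illustrate how repayment instalments on a small-ticket loan are typically computed, the sketch below applies the standard equated monthly instalment (EMI) formula. The principal, interest rate, and tenure used here are hypothetical examples for a loan in the sachet range described above; they are not actual terms from Google Pay or DMI Finance.

```python
# Standard equated monthly instalment (EMI) formula:
#   EMI = P * r * (1 + r)^n / ((1 + r)^n - 1)
# where P = principal, r = monthly interest rate, n = number of months.
# The figures below are hypothetical, not actual Google Pay / DMI Finance terms.

def emi(principal, annual_rate_pct, months):
    r = annual_rate_pct / 100 / 12          # convert annual % to monthly rate
    factor = (1 + r) ** months
    return principal * r * factor / (factor - 1)

# A hypothetical Rs 10,000 sachet loan at 24% p.a. repaid over 6 months:
installment = emi(10_000, 24, 6)
print(round(installment, 2))  # ≈ 1785.26 per month
```

The short tenures of sachet loans (7 days to 12 months) keep each instalment small, which is what makes them attractive to merchants with irregular cash flow.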
Google Pay
Given that UPI payments are the most preferred form of online payment, Google launched GPay (originally Tez) in India in 2017, and the app now enjoys a user base of 67 million Indians. Google Pay has a 36.10% mobile application market share in India, and 26% of UPI payments have been made through Google Pay. Google Pay adoption for in-store payments in India was higher in 2023 than in early 2019, signalling growing use among consumers. The figures refer to the share of respondents who indicated they used Google Pay in the last 12 months, either for POS transactions with a mobile device in stores and restaurants or for online shopping. Eight out of 10 respondents from India indicated they had used Google Pay in a POS setting between April 2022 and March 2023, and seven out of 10 said they used Google Pay during the same period for online payments.
Pertaining to the Indian spectrum, the following aspects should be taken into consideration:
PhonePe, Google Pay and Paytm accounted for nearly 96% of all UPI transactions by value in March
PhonePe remained the top UPI app, processing 407.63 Cr transactions worth INR 7.07 Lakh Cr
While Google Pay and Paytm retained second and third positions, respectively, Amazon Pay pushed CRED to the fifth spot in terms of the number of transactions
Walmart-owned PhonePe, Google Pay and Paytm continued their dominance in India’s UPI payments space, together processing 94% of payments by volume in March 2023.
According to data from the National Payments Corporation of India (NPCI), the top three apps accounted for nearly 96% of all UPI transactions by value. This translates to about 841.91 Cr transactions worth INR 13.44 Lakh Cr between the three apps.
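The NPCI figures quoted above imply an average ticket size per transaction. A quick back-of-the-envelope check, using only the numbers stated in this article (1 Cr = 10^7, 1 Lakh Cr = 10^12):

```python
# Back-of-the-envelope check on the NPCI figures quoted above.
# Unit conversions: 1 Cr = 1e7; 1 Lakh Cr = 1e12.

top3_txn_count = 841.91e7        # 841.91 Cr transactions (top three apps)
top3_txn_value = 13.44e12        # INR 13.44 Lakh Cr total value

avg_ticket = top3_txn_value / top3_txn_count
print(round(avg_ticket))  # roughly INR 1,596 per transaction

# Same check for PhonePe alone: 407.63 Cr transactions worth INR 7.07 Lakh Cr
phonepe_avg = 7.07e12 / 407.63e7
print(round(phonepe_avg))  # roughly INR 1,734 per transaction
```

The small average ticket size (well under Rs 2,000) is consistent with UPI's role in everyday retail payments, and it is precisely this segment that sachet loans are designed to serve.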
Conclusion
The tech giant Google has been fundamental in creating and provisioning best-in-class services that are easily accessible to all netizens. Sachet loans are the newest service introduced on the platform, and the widespread reach of GPay will go a long way in providing financial services to underserved sections of the Indian population. A similar transition can also be seen in other payment portals like PayPal and Paytm, which clearly shows India's massive potential to lead the world in online banking and UPI transactions. Reported statistics suggest that around 40% of global real-time digital payment transactions take place in India. These aspects, coupled with the core initiatives of Digital India and Make in India, clearly show why India is a global destination for investment in the current era.
Images showing collapsed buildings are being widely shared on social media following a powerful earthquake in Indonesia, with users claiming they depict the aftermath of the recent 7.4-magnitude quake. However, research by the CyberPeace Research Wing found the claim to be misleading. The viral images are not from the recent earthquake but from past tremors, and were published by major international news agencies in 2018, 2021 and 2022.
The posts surfaced after a 7.4-magnitude earthquake struck off the coast of Kota Ternate in eastern Indonesia in the early hours of April 2, 2026, killing one person after a building collapse, as reported by international media.
To verify the authenticity of the images, we conducted reverse image and keyword searches on Google. The first image was found to be part of a wider photograph published by The Associated Press on January 15, 2021.
The third image was traced to Getty Images, which published it on October 2, 2018. According to its description, the image shows rubble and debris around a mosque in Palu, Central Sulawesi, following a 7.5-magnitude earthquake.
These findings confirm that the viral images are unrelated to the recent earthquake and have been taken from older incidents.
Conclusion
The viral claim is misleading. The images circulating online do not show the aftermath of the April 2026 earthquake in Indonesia. Instead, they are old visuals from previous earthquakes, reused with a false and misleading context.