#FactCheck: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana
Executive Summary:
A viral video claims to show a massive cumulonimbus cloud over Gurugram, Haryana, and Delhi NCR on 3rd September 2025. However, our research reveals the claim is misleading. A reverse image search traced the visuals to Lviv, Ukraine, dating back to August 2021. The footage matches earlier reports and was even covered by the Ukrainian news outlet 24 Kanal, which published the story under the headline “Lviv Covered by Unique Thundercloud: Amazing Video”. Thus, the viral claim linking the phenomenon to a recent event in India is false.
Claim:
A viral video circulating on social media claims to show a massive cloud formation over Gurugram, Haryana, and the Delhi NCR region on 3rd September 2025. The cloud appears to be a cumulonimbus formation, which is typically associated with heavy rainfall, thunderstorms, and severe weather conditions.

Fact Check:
After conducting a reverse image search on key frames of the viral video, we found matching visuals from videos that attribute the phenomenon to Lviv, a city in Ukraine. These videos date back to August 2021, thereby debunking the claim that the footage depicts a recent weather event over Gurugram, Haryana, or the Delhi NCR region.
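For readers curious how frame matching of this kind works under the hood, reverse image search relies on perceptual hashing: visually similar frames produce nearly identical hash bits even after re-encoding or re-uploading. The sketch below is a toy illustration in plain Python, not the actual tool used in this fact check; real workflows extract key frames with utilities such as ffmpeg or OpenCV and query engines like Google Images, and the 4x4 "frames" here are made-up sample data.

```python
# Toy illustration of perceptual hashing, the idea behind matching
# video key frames during a reverse image search. The frames below
# are fabricated sample data for demonstration only.

def average_hash(frame):
    """Simple average hash: 1 if a pixel is brighter than the
    frame's mean brightness, else 0, flattened row by row."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two 4x4 grayscale "frames": the second is the first with slight
# brightness shifts, as re-uploads and re-encodes typically produce.
frame_a = [[ 10,  20, 200, 210],
           [ 15,  25, 205, 215],
           [190, 195,  30,  40],
           [185, 200,  35,  45]]
frame_b = [[ 12,  22, 198, 212],
           [ 14,  27, 207, 213],
           [188, 196,  28,  42],
           [184, 202,  33,  47]]

h_a, h_b = average_hash(frame_a), average_hash(frame_b)
print(hamming(h_a, h_b))  # → 0 (small distance = likely the same visual)
```

A small Hamming distance between hashes flags two frames as near-duplicates even when pixel values differ, which is why a 2021 clip can be traced despite years of re-sharing and compression.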


Further research revealed that a Ukrainian news channel, 24 Kanal, had reported on the Lviv thundercloud phenomenon in August 2021. The report was published under the headline “Lviv Covered by Unique Thundercloud: Amazing Video” (original in Russian, translated into English).

Conclusion:
The viral video does not depict a recent weather event in Gurugram or Delhi NCR, but rather an old incident from Lviv, Ukraine, recorded in August 2021. Verified sources, including Ukrainian media coverage, confirm this. Hence, the circulating claim is misleading and false.
- Claim: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana.
- Claimed On: Social Media
- Fact Check: False and Misleading.
Related Blogs
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio, and images. As a result, it has become increasingly difficult to distinguish genuine content from altered or fake content, because AI-manipulated videos look highly realistic. A recent study has shown that 98% of deepfake videos contain adult content featuring young girls, women, and children, with India ranking 6th among the nations affected by misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that iteratively refine fake content: a generator is built and trained to produce the desired output, and the process is repeated many times, allowing the generator to improve the content until it looks realistic and nearly flawless. Deepfakes are created using several specific approaches, some of which are:
- Lip syncing: The most common deepfake technique. A voice recording is synchronised with the subject's mouth movements, making it appear that the person in the video actually said those words.
- Audio deepfake: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person's voice based on their vocal patterns, refining the output until the desired result is generated.
- Deepfakes have become so serious that the technology can be used by bad actors or cyber-terrorist groups to advance geopolitical agendas. In the past few years, the number of cases has roughly doubled, targeting children, women, and public figures.
- Greater risk: Deepfake cases have risen sharply in recent years; according to one survey, by the end of 2022, 96% of deepfake content targeted women and children.
- Every 60 seconds, a deepfake pornographic video is created; the process is now quicker and more affordable than ever, taking less than 25 minutes and requiring just one clean face image.
- The connection to deepfakes is that people can become targets of "revenge porn" without the publisher having sexually explicit photographs or films of the victim. They may be made using any number of random pictures or images collected from the internet to obtain the same result. This means that almost everyone who has taken a selfie or shared a photograph of oneself online faces the possibility of a deepfake being constructed in their image.
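The adversarial refinement loop behind GANs, mentioned above for voice cloning, can be illustrated with a deliberately simplified toy, not real deepfake code: the "critic" below merely compares batch statistics of real and generated samples (a real GAN uses a neural-network discriminator), and the "generator" is a two-parameter affine map rather than a deep network. The point it demonstrates is the iterative loop: the generator is nudged round after round until the critic can no longer distinguish fake samples from real ones.

```python
import random

random.seed(0)

def real_batch(n=256):
    # "Real" data: samples from a Gaussian with mean 5, std 1.
    return [random.gauss(5.0, 1.0) for _ in range(n)]

def fake_batch(a, b, n=256):
    # Generator: an affine map of noise, g(z) = a*z + b.
    return [a * random.gauss(0.0, 1.0) + b for _ in range(n)]

def critic(xs, ys):
    # Toy critic: distinguishability score based on the gap in batch
    # mean and spread (0 means statistically indistinguishable).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return abs(mx - my) + abs(sx - sy)

a, b, lr = 0.1, 0.0, 0.05
for step in range(2000):
    real, fake = real_batch(), fake_batch(a, b)
    # Nudge generator parameters toward the real batch statistics,
    # shrinking the critic's score each round (iterative refinement).
    mr, mf = sum(real) / len(real), sum(fake) / len(fake)
    sr = (sum((x - mr) ** 2 for x in real) / len(real)) ** 0.5
    sf = (sum((x - mf) ** 2 for x in fake) / len(fake)) ** 0.5
    b += lr * (mr - mf)
    a += lr * (sr - sf)

print(round(critic(real_batch(), fake_batch(a, b)), 2))  # close to 0
```

After training, the generator's output statistics match the real data, so the critic's score collapses toward zero; in an actual deepfake pipeline the same dynamic drives generated faces or voices toward indistinguishability from authentic footage.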
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have security implications at three levels: at the international level, strategic deepfakes have the potential to destroy a precarious peace; at the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern; and at the personal level, individuals can be targeted with fabricated sexually explicit content. Women suffer disproportionately from such exposure compared to men and are more frequently threatened.
Policy Consideration
With deepfake cases against women and children on the rise, policymakers need to be aware that deepfakes are also used for a variety of legitimate purposes, including artistic and satirical works. Simply banning deepfakes is therefore not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer on synthetic media. Deepfake technology is advanced, and its misuse is a crime.
What are the existing rules to combat deepfakes?
It is worth noting that both the IT Act of 2000 and the IT Rules of 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these guidelines can result in up to three years in jail and a fine of Rs 1 lakh. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be removed within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being published online. Recently, the government also issued an advisory to social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster ethical and responsible consumption of technology. This can only be achieved by creating standards for both the creators and users, educating individuals about content limits, and providing information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate information. We can reduce the negative and misleading impacts of deepfakes by collaborating and ensuring technology can be used in a better manner.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/
Introduction
In an era where digital connectivity drives employment, investment, and communication, the most potent weapon of cybercriminals is ‘gaining trust’ with their sophisticated tactics. Prayagraj has been a recent battleground in India's cybercrime landscape. Within a one-year crackdown, over 10,400 SIM cards, 612 mobile device IMEIs, and 59 bank accounts were blocked, exposing a sprawling international fraud network. These activities primarily targeted unsuspecting individuals through Telegram job postings, fake investment tips, and mobile app scams, highlighting the darker side of convenience in cyberspace. With India now experiencing a wave of scams enabled by technology, this crackdown establishes a precedent for concerted cyber policing and awareness among citizens.
Digital Deceit: How the Scams Operated
SIM cards issued using fake or stolen identities are increasingly being used by cybercriminals in Prayagraj and elsewhere. These SIMs were the first weapon in a highly organised fraud system, allowing criminals to operate anonymously while abusing messaging services like WhatsApp and Telegram. The gangs involved in these scams, some of which reports have linked to locations such as Nepal, Pakistan, China, Dubai, and Myanmar, enticed victims with high-return stock market tips, remote employment offers, and weekend job promises. Once a target was engaged, victims were gradually manipulated into sending money in the name of application fees, verification charges, or investment contributions.
API Abuse and OTP Interception
What makes these scams more alarming is their tech-savviness. According to Prayagraj's cybercrime squad, several syndicates employed API-based mobile applications to intercept OTPs (One-Time Passwords) sent to Indian numbers. These apps, cleverly disguised as genuine services or work-from-home software, harvested personal details such as bank account credentials and payment card data, allowing wrongdoers to carry out unauthorised transactions within minutes. The stolen funds were then quickly moved through several mule accounts, rendering the money trail almost untraceable.
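One way defenders and app reviewers spot such OTP-stealing apps is by flagging Android packages that request SMS permissions they have no legitimate need for. The sketch below illustrates the idea; the manifest fragment, package name, and permission shortlist are hypothetical examples, not taken from any real malicious app.

```python
import xml.etree.ElementTree as ET

# Illustrative AndroidManifest.xml fragment; the package name and
# requested permissions are made up for this example.
MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.workfromhome">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.RECEIVE_SMS"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
</manifest>"""

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# SMS-related permissions that would let an app read incoming OTPs.
SMS_PERMISSIONS = {
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_SMS",
}

def sms_permissions_requested(manifest_xml):
    """Return the SMS-related permissions a manifest requests."""
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.get(ANDROID_NS + "name")
        for elem in root.findall("uses-permission")
    }
    return sorted(requested & SMS_PERMISSIONS)

flagged = sms_permissions_requested(MANIFEST)
print(flagged)  # → ['android.permission.READ_SMS', 'android.permission.RECEIVE_SMS']
```

A "work-from-home" app with no messaging feature that nonetheless requests `READ_SMS` or `RECEIVE_SMS` is exactly the pattern described above, and is a strong signal to refuse installation.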
The Human Impact: How Citizens Were Trapped
Victims tended to come from job-hunting groups, students, or housewives seeking to earn additional income. Often, the scammers persuaded users to join Telegram channels providing free investment advice or job-referral-based schemes, creating an illusion of authenticity. Once on board, victims were sometimes even paid small commissions initially, creating a false sense of success. This tactic, known as “advance-fee confidence building,” made victims more likely to invest larger sums later, ultimately leading to complete financial loss.
Digital Arrest Threats and Bitcoin Ransom Scams
Aside from investment and job scam complaints, the cybercrime cell also recorded several "digital arrest" scams, in which victims were coerced into sending money under false accusations of involvement in criminal activities. Bitcoin extortion schemes were also reported, with perpetrators threatening to expose victims' personal information or browsing history unless they were paid in cryptocurrency.
Law Enforcement’s Cyber Shield: Local Action, Global Impact
Recognising the extent of the threat, Prayagraj authorities implemented strategic measures to strengthen local policing. Cyber units have been formed in each of the district's 43 police stations, each made up of a sub-inspector, head constable, constable, lady constable, and computer operator. This decentralised model enables real-time response, improved victim support, and quicker forensic analysis of compromised devices. The nodal officer for cyber operations said that this multi-level action is preventive rather than merely punitive, meant to break syndicates before more harm is caused.
CyberPeace Recommendations: Prevention is Power
As cybercrime grows more advanced, citizens must keep pace with it. Prayagraj's experience highlights the importance of public awareness, digital literacy, and rapid response processes. To help prevent people from falling victim to such scams, CyberPeace advises the following:
- Don't click on dubious APK links sent on WhatsApp or Telegram.
- Do not share OTPs or confidential details, even if the source appears to be familiar.
- Never download unfamiliar apps that demand access to SMS or financial information.
- Block your SIM card, payment cards, and bank accounts at once if your phone is stolen.
- Report all cyber frauds to cybercrime.gov.in or your local Cyber Cell.
- Never join investment or job groups on social sites without verification.
- Refuse video calls from unknown numbers; some scammers use them to record and blackmail victims.
Conclusion
The Prayagraj crackdown reveals both the magnitude and the adaptability of present-day cybercrime. From trans-border cartels to Telegram job scams, the cyber front is as intricate as ever. But this incident also illustrates what can be achieved when technology, law enforcement, and public awareness come together. To stay safe from cyber threats, a cyber-conscious citizenry is as important to India as an effective cyber cell. At CyberPeace, we know that defending cyberspace begins with cyber resilience, and the story of Prayagraj should encourage communities everywhere to take active digital precautions.
References
- https://www.hindustantimes.com/cities/lucknow-news/over-10k-sims-blocked-as-job-investment-frauds-rise-in-prayagraj-101753715061234.html
- https://consumer.ftc.gov/articles/how-recognize-and-avoid-phishing-scams
- https://faq.whatsapp.com/2286952358121083
- https://education.vikaspedia.in/viewcontent/education/digital-litercy/information-security/preventing-online-scams-cert-in-advisory?lgn=en
- https://cybercrime.gov.in/Accept.aspx
- https://www.linkedin.com/pulse/perils-advance-fee-fraud-protecting-yourself-from-scammers-sharma/

Executive Summary
A shocking video showing a car hanging from a highway signboard is going viral on social media. The clip allegedly shows a black Mahindra Thar stuck on an overhead direction signboard on the Delhi–Jaipur Highway (NH-48). Social media users are widely sharing the video, claiming it shows a real road accident. However, research by CyberPeace found the viral claim to be false. Our findings reveal that the circulating video is not real but AI-generated.
Claim
Social media users are sharing the clip as footage of an actual road accident. A viral post on X (formerly Twitter) claims that the incident took place on the Delhi–Jaipur Highway, showing a black Mahindra & Mahindra Thar lodged in a highway signboard.
- https://x.com/SenBaijnath/status/2024098520006029504
- https://archive.ph/cmr5e

Fact Check
On closely examining the viral video, we observed several inconsistencies commonly associated with AI-generated content. For instance, it is highly improbable for a heavy vehicle to get lodged precisely at the centre of a signboard at such a height. Despite the scale of the alleged incident, traffic on the highway below continues moving normally without any disruption. Additionally, the text visible on the right side of the signboard appears distorted and garbled, a common artifact of AI-generated imagery. To further verify the video's authenticity, we analysed it using the AI detection tool Hive Moderation, which indicated a 99.9% probability that the video was AI-generated.

Another AI image detection tool, WasitAI, also found that the visuals in the viral clip were largely AI-generated.

Conclusion
Based on our research and available evidence, it is clear that the viral video showing a Mahindra Thar hanging from a highway signboard is not real but AI-generated.