#FactCheck: Viral video of fuel tank fire at UAE's Al Hamriyah Port portrayed as Russia-Ukraine conflict
Executive Summary:
A viral video showing flames and thick smoke from large fuel tanks has been shared widely on social media. Many claimed it showed a recent Russian missile attack on a fuel depot in Ukraine. However, our research found that the video is not related to the Russia-Ukraine conflict. It actually shows a fire that happened at Al Hamriyah Port in Sharjah, United Arab Emirates, on May 31, 2025. The confusion was likely caused by a lack of context and misleading captions.

Claim:
The circulating claim suggests that Russia deliberately bombed Ukraine’s fuel reserves and that the viral video is evidence of the bombing. The posts claim the fuel depot was destroyed purposefully during military operations, implying an escalation in violence. This narrative is intended to provoke strong emotions and reinforce fears related to the war.

Fact Check:
A reverse image search of key frames from the viral video revealed that the footage is actually from Al Hamriyah Port, UAE, and is unrelated to the Russia-Ukraine conflict. Further research showed that the same visuals were also published by regional news outlets in the UAE, including Gulf News and Khaleej Times, which reported on a massive fire at Al Hamriyah Port on 31 May 2025.
As per the news reports, a fire broke out at a fuel storage facility in Al Hamriyah Port, UAE. Fortunately, no casualties were reported, and fire management services responded promptly and brought the situation under control.
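For readers who want to replicate this kind of verification, the sketch below shows one way to pull key frames out of a video with OpenCV so they can be uploaded to a reverse image search engine such as Google Lens, Yandex, or TinEye. The file name and sampling interval are illustrative assumptions, not details of our actual workflow.

```python
# Minimal sketch: extract frames from a video at a fixed interval so they can be
# uploaded manually to a reverse image search engine.
# Assumes OpenCV is installed (pip install opencv-python).
# "viral_video.mp4" and the 2-second interval are illustrative placeholders.
import cv2

def extract_key_frames(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 fps if metadata is missing
    step = int(fps * every_n_seconds)
    saved, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            out_path = f"frame_{frame_idx:06d}.jpg"
            cv2.imwrite(out_path, frame)   # save the frame for manual upload
            saved.append(out_path)
        frame_idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_key_frames("viral_video.mp4"))
```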


Conclusion:
The claim that the viral video is evidence of a Russian strike in Ukraine is incorrect and misleading. The video actually shows a fire at a commercial port in the UAE. Sharing misleading footage like this distorts reality and incites fear based on falsehoods. It is a reminder that not all viral media is what it appears to be, and viewers should verify the source and context of content before accepting or reposting it. In this instance, the original claim is false and misleading.
- Claim: Fresh attack in Ukraine! Russian military strikes again!
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
A bill requiring social media companies, providers of encrypted communications, and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) has advanced to the Senate floor, alarming privacy advocates who argue that the legislation transforms businesses into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.
Why is there a requirement for online companies to report drug activity?
The bill was prompted by the death of a Kansas teenager who died after unknowingly taking a fentanyl-laced pill he had purchased on Snapchat. The bill requires social media companies and other web communication providers to provide the DEA with users’ names and other information when the companies have “actual knowledge” that illicit drugs are being distributed on their platforms.
There is an urgent need to address this issue, as platforms like Snapchat and Instagram are among the applications netizens use most. If such apps facilitate the sale of drugs, they risk becoming major drug-selling vehicles and, effectively, drug-selling platforms.
Threat to end-to-end encryption
End-to-end encryption has long been criticised by law enforcement for creating a “lawless space” that criminals, terrorists, and other bad actors can exploit for their illicit purposes. End-to-end encryption is important for privacy, but it has also been criticised because criminals use it to conceal cyber fraud and other cybercrimes.
Cases of drug peddling on social media platforms
Getting drugs on social media can be as easy as calling an Uber. A survey found that access to illegal drugs on social media applications is “staggering” and has contributed to the rising number of fentanyl overdoses, a death toll now comparable to that from suicide, gun violence, and accidents.
According to another survey, drug dealers use slang, emoticons, QR codes, and disappearing messages to reach customers while avoiding content-monitoring measures on social networking platforms. Dealers are frequently active on numerous social media platforms at once, advertising their products on Instagram while providing their WhatsApp or Snapchat handles for queries, making it difficult for law enforcement officials to crack down on the transactions.
Social media platforms need to report such drug-selling activity to the Drug Enforcement Administration. The bill requires online companies to report drug activity occurring on their platforms, such as in the Snapchat case mentioned above. There are many other cases in which dealers sell drugs through Instagram, Snapchat, and similar apps; if one account is blocked, they simply create another. Blocking accounts alone does not stop drug trafficking on social media platforms.
Will this put the privacy of users at risk?
Reporting the cybercrime of selling drugs on social media platforms is important. Under the bill, companies would detect only activity related to drugs being sold through their platforms, which allows them to identify bad actors and cybercriminals. Detection would be limited to the particular activities on the applications where they occur. This matters because social media platforms currently lack regulations to govern them, and their convenience has made them a major vehicle for drug sales.
Conclusion
Social media companies should be required to report such activity on their platforms immediately to the Drug Enforcement Administration so that the DEA can take the necessary steps, rather than the platform merely blocking the account; blocking alone does not stop these drug markets from operating online. Proper reporting mechanisms are needed, along with broader social media regulation, given the considerable influence these platforms have on people.

Disclaimer:
This report is the collaborative outcome of insights derived from the CyberPeace Helpline’s operational statistics and the work of the CyberPeace Research Team. Covering monthly helpline case trends for May 2025, the report identifies recurring trends, operational challenges, and strategic opportunities. The objective is to foster research-driven solutions that enhance the overall efficacy of the helpline.
Executive Summary:
This report summarizes the cybercrime cases reported in May, offering insights into case types, gender distribution, resolution status, and geographic trends.
As per our analysis, Financial Fraud was the most reported issue among the various categories of cybercrime, making up 43% of cases, followed by Cyberbullying (26%) and Impersonation (14%). Less frequent but serious issues included Sexual Harassment, Sextortion, Hacking, Data Tampering, and Cyber Defamation, each accounting for 3–6% of cases, highlighting a mix of financial and behavioral threats. The gender distribution was fairly balanced, with 51% male and 49% female respondents. While both genders were affected by major crimes like financial fraud and cyberbullying, some categories, such as sexual harassment, reflected more gender-specific risks, indicating the need for gender-responsive policies and support.
Regarding case status, 60% remain under follow-up while 40% have been resolved, reflecting strong case-handling efforts by the team.
The location-wise data shows higher case concentrations in Uttar Pradesh, Andhra Pradesh, Karnataka, and West Bengal, with significant reports also from Delhi, Telangana, Maharashtra, and Odisha. Reports from the northeastern and eastern states confirm the nationwide spread of cyber incidents. In conclusion, the findings point to a growing need for enhanced cybersecurity awareness, preventive strategies, and robust digital safeguards to address the evolving cyber threat landscape across India.
Cases Received in May:
As per the given dataset, the following types of cases were reported to our team during the month of May:
- 💰 Financial Fraud – 43%
- 💬 Cyber Bullying – 26%
- 🕵️ Impersonation – 14%
- 🚫 Sexual Harassment – 6%
- 📸 Sextortion – 3%
- 💻 Hacking – 3%
- 📝 Data Tampering – 3%
- 🗣️ Cyber Defamation – 3%

The chart illustrates various cybercrime categories and their occurrence rates. Financial Fraud emerges as the most common, accounting for 43% of cases, highlighting the critical need for stronger digital financial security. This is followed by Cyber Bullying at 26%, reflecting growing concerns around online harassment, especially among youth. Impersonation ranks third with 14%, involving identity misuse for deceitful purposes. Less frequent but still serious crimes such as Sexual Harassment (6%), Sextortion, Hacking, Data Tampering, and Cyber Defamation (each 3%) also pose significant risks to users’ privacy and safety. Overall, the data underscores the need for improved cybersecurity awareness, legal safeguards, and preventive measures to address both financial and behavioral threats in the digital space.
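For readers who want to reproduce a chart like the one described above, the following minimal sketch plots the reported percentages with matplotlib. The figures come straight from the list above; the chart styling and output file name are assumptions for illustration only.

```python
# Sketch: render the May case-type distribution as a horizontal bar chart.
# Percentages are the figures reported above; styling choices are assumptions.
import matplotlib.pyplot as plt

cases = {
    "Financial Fraud": 43, "Cyber Bullying": 26, "Impersonation": 14,
    "Sexual Harassment": 6, "Sextortion": 3, "Hacking": 3,
    "Data Tampering": 3, "Cyber Defamation": 3,
}

labels = list(cases)[::-1]            # reverse so the largest category sits on top
values = [cases[k] for k in labels]

fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(labels, values)
ax.set_xlabel("Share of reported cases (%)")
ax.set_title("CyberPeace Helpline - case types, May 2025")
for y, v in enumerate(values):
    ax.text(v + 0.5, y, f"{v}%", va="center")   # annotate each bar with its percentage
plt.tight_layout()
plt.savefig("case_distribution_may.png", dpi=150)
```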
Gender-Wise Distribution:
- 👨 Male – 51%
- 👩 Female – 49%

The chart illustrates the distribution of respondents by gender. The data shows that Male participants make up 51% of the total, while Female participants account for 49%. This indicates a fairly balanced representation of both genders, with a slight majority of male respondents.
Gender-Wise Case Distribution:

- The chart presents a gender-wise distribution of various cybercrime cases, offering a comparative view of how different types of cyber incidents affect males and females.
- It highlights that both genders are significantly impacted by cybercrimes such as financial fraud and cyber bullying, indicating a widespread risk across the board.
- Certain categories, including sexual harassment, cyber defamation, and hacking, show more gender-specific patterns of victimization, pointing to differing vulnerabilities.
- The data suggests the need for gender-sensitive policies and preventive measures to effectively address the unique risks faced by males and females in the digital space.
- These insights can inform the design of tailored awareness programs, support services, and intervention strategies aimed at improving cybersecurity for all individuals.
Major Location-Wise Distribution:
The map visualization displays the location-wise distribution of reported cases across India, mapping cyber-related incidents geographically.

The map highlights the regional distribution of cybercrime cases across Indian states, with a higher concentration in Uttar Pradesh, Andhra Pradesh, Karnataka, and West Bengal. States like Delhi, Telangana, Maharashtra, and Odisha also show notable activity, indicating widespread cyber threats. Regions including Assam, Tripura, Bihar, Jharkhand, and Jammu & Kashmir further reflect the pan-India spread of such incidents. This distribution stresses the need for targeted cybersecurity awareness and stronger digital safeguards nationwide.
CyberPeace Advisory:
- Use Strong and Unique Passwords: Create complex passwords using a mix of letters, numbers, and symbols. Avoid reusing the same password across multiple platforms.
- Enable Multi-Factor Authentication (MFA): Add an extra layer of security by using a second verification step like an OTP or authentication app.
- Keep Software Updated: Regularly update your operating system, apps, and security tools to protect against known vulnerabilities.
- Install Trusted Security Software: Use reliable antivirus and anti-malware programs to detect and block threats.
- Limit Information Sharing: Be cautious about sharing personal or sensitive details, especially on social media or public platforms.
- Secure Your Network: Protect your Wi-Fi with a strong password and encryption. Avoid accessing confidential information on public networks.
- Back Up Important Data: Regularly save copies of important files in secure storage to prevent data loss in case of an attack.
- Stay Informed with Cybersecurity Training: Learn how to identify scams, phishing attempts, and other online threats through regular awareness sessions.
- Control Access to Data: Give access to sensitive information only to those who need it, based on their job roles.
- Monitor and Respond to Threats: Continuously monitor systems for unusual activity and have a clear response plan for handling security incidents.
- CyberPeace Helpline mail ID: helpline@cyberpeace.net
- CyberPeace Helpline Number: 9570000066
- Central Government Helpline: https://cybercrime.gov.in/
- Central Government Helpline Number: 1930
Conclusion
The cybercrime cases reported in May highlight a diverse and evolving threat landscape across India. Financial fraud, cyber bullying, and impersonation are the most prevalent, affecting both genders almost equally, though some crimes like sexual harassment call for targeted gender-sensitive measures. With 60% of cases still under follow-up, the team’s efforts in investigation and resolution remain strong. Geographically, cyber incidents are widespread, with higher concentrations in several key states, demonstrating that no region is immune. These findings underscore the urgent need to enhance cybersecurity awareness, strengthen preventive strategies, and build robust digital safeguards. Proactive and inclusive approaches are essential to protect individuals and communities and to address the growing challenges posed by cybercrime nationwide.

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly being provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) processes to simulate conversations between the platform and a user. Thus, AI chatbots can provide mental health support from the comfort of the home, at any time of the day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots can present technical and ethical challenges as well.
Background
According to the WHO’s World Mental Health Report of 2022, 1 in 8 people globally is estimated to be living with some form of mental health disorder. The need for mental health services worldwide is high, but the care ecosystem is inadequate in terms of both availability and quality. In India, there are an estimated 0.75 psychiatrists per 100,000 people, and only 30% of mental health patients receive help. Given the slow thawing of social stigma around mental health, especially among younger demographics, and the concentration of support services in urban Indian centres, the telehealth market is projected to keep growing. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and oversights by AI can exacerbate their distress for some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, thereby discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
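To make this last point more concrete, here is a minimal, hypothetical sketch of one safeguard a provider could adopt: encrypting a chat transcript before it is written to storage, using Python's cryptography library. The record structure, file name, and key handling shown are illustrative assumptions, not a description of any existing platform.

```python
# Sketch: encrypt a mental-health chat transcript before persisting it, so that
# raw conversation text never sits on disk in plaintext. Uses the `cryptography`
# package (pip install cryptography). Key handling here is simplified; in practice
# the key would live in a KMS/secrets manager, not alongside the data.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in production: load from a secrets manager
fernet = Fernet(key)

transcript = {                         # hypothetical record structure
    "session_id": "abc123",
    "messages": [{"role": "user", "text": "I have been feeling anxious lately."}],
}

ciphertext = fernet.encrypt(json.dumps(transcript).encode("utf-8"))
with open("session_abc123.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorised service can decrypt and restore the record:
restored = json.loads(fernet.decrypt(ciphertext).decode("utf-8"))
assert restored["session_id"] == "abc123"
```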
Way Forward
- Addressing Therapeutic Misconception: A lack of understanding of the purpose and capabilities of such chatbots, in terms of the care expectations and treatments they can offer, can jeopardize user health. Platforms providing such services should be mandated to display disclaimers about the limitations of the therapeutic relationship between the platform and its users in a manner that is easy to understand.
- Improved Algorithm Design: These models must undergo regular updates and audits of their training data to enhance accuracy, incorporate contextual socio-cultural factors into profile analysis, and use feedback loops from customers and mental health professionals.
- Human Oversight: Models of therapy in which AI chatbots are used to supplement treatment rather than replace human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required.
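To illustrate what such an escalation mechanism might look like in its simplest form, the hypothetical sketch below gates the chatbot's reply behind a basic risk check and hands the conversation to a human when high-risk language appears. A real system would rely on clinically validated classifiers and protocols; the keyword list and function names here are assumptions for illustration only.

```python
# Hypothetical sketch of a human-escalation gate in front of a chatbot reply.
# A production system would use a validated risk classifier, not a keyword list.
HIGH_RISK_PHRASES = {"hurt myself", "end my life", "suicide", "can't go on"}

def needs_human(message: str) -> bool:
    """Return True if the message contains language that should bypass the bot."""
    text = message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def generate_bot_reply(message: str) -> str:
    # Placeholder for the model-generated response.
    return "Thanks for sharing. Can you tell me more about how you're feeling?"

def handle_message(message: str) -> str:
    if needs_human(message):
        # Route to a human counsellor / crisis line instead of generating a reply.
        return ("I'm connecting you with a human counsellor right now. "
                "If you are in immediate danger, please contact local emergency services.")
    return generate_bot_reply(message)

if __name__ == "__main__":
    print(handle_message("I feel like I can't go on anymore."))
```

The point of the gate is ordering: the risk check runs before any model-generated response, so escalation cannot be skipped once high-risk language is detected.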
Conclusion
It is important to recognize that, so far, there is no substitute for professional mental health services. Chatbots can help users become more aware of their mental health condition, play an educational role, nudge them in the right direction, and assist both the practitioner and the client or patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing, and arguably already critical, global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full