# Fact Check: Old Photo Misused to Claim Israeli Helicopter Downed in Lebanon!
Executive Summary
A viral image claims to show an Israeli helicopter shot down in South Lebanon. This investigation evaluates the authenticity of the picture and concludes that it is an old photograph being circulated out of context to fit a current event.

Claims
The viral image circulating online claims to depict an Israeli helicopter recently shot down in South Lebanon during the ongoing conflict between Israel and militant groups in the region.


Fact Check:
Upon running a reverse image search, we found a 2019 post on Arab48.com featuring the exact picture that is now going viral.



The reverse image search thus led fact-checkers to the original source of the image, putting the false claim to rest.
In addition, neither major news agencies nor the Israel Defense Forces have released any report confirming that a helicopter was shot down in southern Lebanon during the current hostilities.
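Anyone can reproduce this kind of check. The snippet below is a minimal sketch of feeding a suspect image URL to a reverse image search from a script; the Google Lens endpoint and the example image URL are assumptions for illustration, and the same lookup can be done manually through images.google.com or other reverse-image-search services.

```python
# Minimal sketch: open a reverse image search for a suspect image URL.
# The Google Lens endpoint and the sample URL are illustrative assumptions;
# the same lookup can be done manually via images.google.com.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open the default browser on a reverse image search for the given URL."""
    webbrowser.open("https://lens.google.com/uploadbyurl?url=" + quote(image_url, safe=""))

if __name__ == "__main__":
    reverse_image_search("https://example.com/viral-helicopter-photo.jpg")  # hypothetical URL
```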
Conclusion
The CyberPeace Research Team has concluded that the viral image claiming to show an Israeli helicopter shot down in South Lebanon is misleading and has no connection to current events. It is an old photograph that has been widely shared out of context, fueling tensions around the conflict. Users are advised to verify claims through credible sources and to avoid spreading false narratives.
- Claim: Israeli helicopter recently shot down in South Lebanon
- Claimed On: Facebook
- Fact Check: Misleading, Original Image found by Google Reverse Image Search
Related Blogs
Introduction
Personalised advertisements rely on the collection of user data. Although this allows for a more tailored user experience, the method by which it is achieved cannot be ignored. As reported by the Indian Express on 13 November 2024, Meta has introduced a less personalised ad option on Facebook and Instagram for its users in the European Union (EU), after its previous ad offering was found to be incompatible with the EU’s Digital Markets Act (DMA).
Relevant Legislation
In October 2023, Meta introduced a “Pay or Consent” model for its users in the EU. It gave users two options: pay a monthly subscription fee for an ad-free version of Facebook and Instagram, or consent to seeing personalised ads based on their data. This consent model was part of Meta’s attempt to comply with the EU’s DMA. However, EU regulators found it incompatible with the mandate: in their view, users should not only be able to consent to personalised ads but should also have access to an equivalent, less personalised alternative. That decision pushed Meta to roll out the less personalised ad option for users in the EU. This option claims to rely on limited data, showing ads based only on the context of what is being viewed during a Facebook or Instagram session and on a minimum set of data points such as location, age, gender, and the user’s engagement with ads. However, users who choose this option will also encounter ads that are harder to skip.
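To make the contrast concrete, the sketch below models the two ad-serving modes as plain Python dictionaries. The field names and values are hypothetical, invented purely for illustration; they are not Meta’s actual ad-serving interface.

```python
# Illustrative sketch only: hypothetical targeting inputs contrasting the
# personalised model with the less-personalised, context-based option.
# Field names are invented for illustration and are not Meta's ad API.

personalised_ad_request = {
    "placement": "instagram_feed",
    "behavioural_profile": {                   # built from tracked activity over time
        "interests": ["travel", "fitness"],
        "sites_visited": ["shoes-shop.example", "hotels.example"],
        "purchase_history": ["running shoes"],
    },
    "demographics": {"age": 29, "gender": "F", "location": "Berlin"},
}

less_personalised_ad_request = {
    "placement": "instagram_feed",
    "viewing_context": "post_about_marathon_training",  # what is on screen right now
    "minimum_data_points": {                             # the limited set described above
        "age": 29,
        "gender": "F",
        "location": "Berlin",
        "ad_engagement": "clicked_a_sports_ad_before",
    },
}

# Note: the context-based request carries no cross-site behavioural profile.
```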
The EU’s Digital Markets Act came into force on 1 November 2022. Its purpose is to make digital markets fairer by identifying “gatekeepers” (companies providing core platform services such as messenger services, search engines, and app stores) and laying down a list of do’s and don’ts for them. One obligation, applicable to the case discussed above, is that a gatekeeper must obtain the user’s effective consent before targeting advertisements based on tracking the user’s activity outside the gatekeeper’s core platform services.
The Indian Context
Although no such issues have been raised in India yet, it is worth noting that in the Indian context, the Digital Personal Data Protection (DPDP) Act, 2023 governs personal data regulation. It lays down rules for Data Fiduciaries (those who, alone or together with others, determine the purpose and means of processing personal data), Data Principals (the individuals to whom the personal data relates), and Consent Managers, as well as rules for processing children’s data.
CyberPeace Recommendations:
At the user level, one can limit the collection of personal data by taking the following steps:
- Review Privacy Settings: Regularly reviewing the privacy settings of one’s online accounts and devices is a healthy practice that helps avoid handing unnecessary information to third-party applications.
- Private Browsing: Browsing in private or incognito mode is encouraged, as it limits the cookies and stored data that websites can use to track your activity.
- Use Ad-blockers: Certain websites offer an option to block or limit ads when the user first visits their page; availing of this helps prevent spam advertisements from those websites.
- Use a VPN: A Virtual Private Network hides the user’s IP address and encrypts their traffic, making it harder for third-party actors to track their online activities.
- Other steps include clearing cookies and cache data and using location-sharing features with care.
Conclusion
Meta’s compliance with the EU’s DMA signals that social media platforms cannot simply circumvent the rules. Balancing the services provided with respect for user privacy is of the utmost importance. The EU has set a precedent for a system that respects this balance, one that other countries can draw on as they deal with similar issues and set their own standards.
References
- https://indianexpress.com/article/technology/tech-news-technology/meta-less-personalised-ads-eu-regulatory-demands-9667266/
- https://rainmaker.co.in/blog/view/the-price-of-personalization-how-targeted-advertising-breaches-data-privacy-and-challenges-the-gdprs-shield
- https://www.infosecurity-magazine.com/magazine-features/fines-data-protection-violations/
- https://www.forbes.com/councils/forbestechcouncil/2023/09/01/the-landscape-of-personalized-advertising-efficiency-versus-privacy/
- https://iapp.org/news/a/pay-or-consent-personalized-ads-the-rules-and-whats-next
- https://economictimes.indiatimes.com/news/how-to/how-to-safeguard-privacy-in-the-era-of-personalised-ads/articleshow/102748711.cms?from=mdr
- https://www.business-standard.com/technology/tech-news/facebook-instagram-users-in-europe-can-opt-for-less-personalised-ads-124111201558_1.html
- https://digital-markets-act.ec.europa.eu/about-dma_en

Introduction
The misinformation crisis has evolved from an abstract risk into a clear and measurable danger to individuals, families, institutions and the information ecosystem as a whole. The recent death hoax involving veteran actor Dharmendra illustrates how a falsehood can spread and do damage before correction mechanisms have a chance to operate. In the first week of November 2025, a wave of reports from social media accounts and even some online news outlets claimed that Dharmendra had died at the age of 89. The news spread like wildfire, causing confusion, grief and emotional distress among fans across the world, until the family stepped in with a clear and conclusive denial. This case is not a one-off event; it is part of a recurring cycle in which one unverified claim follows another, driven by emotional appeal, platform virality, and accelerating online engagement.
How One Wrong Post Can Create Worry and Fear
This kind of false news spreads fast on social media because people share emotional posts without checking the source, and automated accounts often repeat the same claim, which makes it look true. Such hoaxes create fear, sadness and stress for fans, and they place sudden pressure on the family, who must deal with public worry at a time when they need calm and privacy. The message shared by Hema Malini, the actor’s wife, shows how hurtful careless misinformation can be, and it reminds everyone that even one false post can cause real emotional damage to many people.

Why This Hoax Spread So Quickly
- Sensationalism Drives Engagement: Rumours about the passing of a public figure, particularly someone so widely loved, trigger an immediate outpouring of emotion. The online public tends to take such news at face value and share it, usually without checking its authenticity, which leads to viral spread.
- Rapid Spread on Social Media: Social media networks are built for swift sharing. Long before official sources could confirm or dismiss the matter, posts, reels, and messages had ripped through the networks.
- Digital Users Not Verifying Sources: A large part of the audience depends on screenshots, forwards, and unverified posts to keep up with the news. This creates a fertile environment for hoaxes to spread.
- Weak Verification Protocols: Despite efforts to inform the public about misinformation, many news outlets still prioritise speed of reporting over accuracy, especially for attention-grabbing topics such as the health or death of famous people.
- Algorithmic Amplification Risks: Engagement is largely driven by ranking algorithms that surface the posts evoking the strongest emotions. As a result, false or sensational claims often reach audiences ahead of corrective updates, and the public is misled. In the absence of algorithmic safeguards, misinformation keeps gaining ground, as the sketch after this list illustrates.
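To make the amplification risk in the last point concrete, here is a minimal sketch, assuming a toy feed that ranks posts purely by predicted engagement. The weights and numbers are invented for illustration, but the resulting order shows how an emotionally charged hoax can outrank a sober correction.

```python
# Minimal sketch of engagement-weighted ranking. The weights and the predicted
# counts are invented; real feed-ranking systems are far more complex, but the
# effect illustrated here is the same.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reactions: int
    predicted_shares: int
    predicted_comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments count more than reactions.
    return (1.0 * post.predicted_reactions
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

feed = [
    Post("BREAKING: beloved actor passes away at 89", 9000, 4000, 2500),
    Post("Family statement: the actor is alive and recovering", 3000, 600, 400),
]

# The emotional hoax surfaces first because it is predicted to drive more engagement.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post)), post.text)
```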
Best Practices For Users:
- Make sure to verify before sharing, especially if the topic is about health or death.
- Get updates by following official accounts rather than relying on viral forwards.
- Be aware of the emotional manipulation tactics used in misleading information.
Conclusion
The rumour surrounding Dharmendra's death is yet another example of how misinformation, even when promptly corrected, can still inflict distress, erode trust and damage reputations. It also underscores the urgent need for stronger information governance, responsible digital journalism, and platform intervention mechanisms. From clicks to consequences, this incident points to a basic truth: in the digital age misinformation spreads faster than facts, and the responsibility for stopping it falls on all stakeholders: platforms, media, and users.
References
- https://timesofindia.indiatimes.com/entertainment/hindi/bollywood/news/esha-deol-and-hema-malini-dismiss-dharmendras-fake-death-news-relieved-fans-pray-for-actors-speedy-recovery-aap-jld-se-jld-apne-ghar-aye/articleshow/125242843.cms
- https://www.altnews.in/media-misreport-bollywood-actor-dharmendra-hasnt-passed-away-yet/
- https://economictimes.indiatimes.com/news/india/dharmendra-death-news-bollywoods-veeru-and-he-man-passes-away-at-89/articleshow/125238900.cms

AI-generated content occupies an ever-growing space in today's tech landscape. Generative AI has emerged as a powerful tool capable of producing hyper-realistic audio, video, and images. While advantageous, this ability also has downsides, particularly around content authenticity and manipulation.
Over the past couple of years, the impact of such content has ranged across ethical, psychological and social harms. A major concern is the creation of non-consensual explicit content, including fake nudes, in which an individual’s face is superimposed onto explicit images or videos without their consent. This is not just a violation of individual privacy; it can have enormous consequences for a person's professional and personal life. This blog examines the existing laws and whether they are equipped to deal with the challenges such content poses.
Understanding the Deepfake Technology
A deepfake is a media file (image, video, or speech), typically representing a human subject, that has been deceptively altered using deep neural networks (DNNs). Deepfake technology is used to alter a person’s identity, usually in the form of a “face swap” in which the identity of a source subject is transferred onto a destination subject: the destination’s facial expressions and head movements remain the same, but the face that appears in the video is that of the source. In videos, identities can be substituted either by replacement or by reenactment.
Such superimposition can produce realistic content, including fake nudes. Creating a deepfake is no longer a costly endeavour: it requires a Graphics Processing Unit (GPU), software that is free, open-source, and easy to download, and some graphics editing and audio-dubbing skill. Common tools for creating deepfakes include DeepFaceLab and FaceSwap, both public and open source, supported by thousands of users who actively participate in the evolution and development of this software and its models.
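For a sense of the underlying mechanics, the sketch below (assuming PyTorch is installed) shows the shared-encoder, per-identity-decoder design that classic open-source face-swap models are generally understood to build on. It is a conceptual illustration only, not code from DeepFaceLab or FaceSwap, and the layer sizes are arbitrary.

```python
# Conceptual sketch of the classic face-swap idea: a shared encoder learns a
# common facial representation, and one decoder per identity reconstructs
# faces of that identity. At "swap" time, a face of person A is encoded and
# then decoded with person B's decoder. Not DeepFaceLab or FaceSwap code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()                          # shared between both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (not shown) would reconstruct A-faces via decoder_a and B-faces via
# decoder_b. The "swap": encode a face of A and decode it with B's decoder.
face_of_a = torch.rand(1, 3, 64, 64)         # stand-in for a cropped, aligned face
swapped = decoder_b(encoder(face_of_a))
print(swapped.shape)                         # torch.Size([1, 3, 64, 64])
```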
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- Inadequate legal definitions of AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges arise from the cross-border nature of these crimes, and international cooperation mechanisms for AI-generated content are still at an early stage.
- Current consent-based and harassment laws leave a gap when it comes to AI-generated nudes.
- Proving intent and identifying perpetrators in digital crimes remains a challenge that is yet to be overcome.
Policy Responses and Global Trends
The global response to deepfakes is still developing. The UK has its Online Safety Bill, the EU has the AI Act, the US has federal laws such as the National AI Initiative Act of 2020, and India is currently developing the India AI Act as specific legislation dealing with AI and its related issues.
In India, the IT Rules, 2021 and the DPDP Act, 2023 regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. The IT Rules, with their emphasis on intermediary liability and safe harbour protections, play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms by advocating for clear and precise definitions within consent frameworks and instituting high penalties for AI-based offences, particularly those involving sexually explicit material.
- Advocate for global cooperation and collaborations by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Push for platform accountability through stricter responsibility for detecting and removing harmful AI-generated content. Platforms should introduce strong screening mechanisms to counter the huge influx of harmful content.
- Run public awareness campaigns that educate users about their rights and the resources available to them if they are targeted by such an act.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/