Fact Check: Old Photo Misused to Claim Israeli Helicopter Downed in Lebanon
Executive Summary
A viral image claims that an Israeli helicopter was shot down in South Lebanon. This investigation evaluates the authenticity of the picture, concluding that it is an old photograph taken out of context and presented as recent.

Claims
The viral image circulating online claims to depict an Israeli helicopter recently shot down in South Lebanon during the ongoing conflict between Israel and militant groups in the region.


Fact Check:
A reverse image search led us to a 2019 post on Arab48.com featuring the exact same viral picture.



Reverse image searches thus led fact-checkers to the original source of the image, putting an end to the false claim.
There are no official reports from major news agencies or the Israel Defense Forces confirming that a helicopter was shot down in southern Lebanon during the current hostilities.
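Reverse image search engines typically match pictures using perceptual hashing: visually similar images produce similar fingerprints even after resizing or re-compression, which is why an old photo re-uploaded years later can still be traced to its source. Below is a minimal, illustrative average-hash sketch on a toy 8x8 grayscale grid; it is not the algorithm of any specific search engine, and the pixel values are made up:

```python
# Average hash: a simple perceptual fingerprint used to match
# near-duplicate images (illustrative toy version, pure Python).

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (image already downscaled)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # One bit per pixel: brighter than average or not.
    return [1 if p > avg else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits; a small distance means likely the same image."""
    return sum(x != y for x, y in zip(a, b))

# Made-up "original" image and a re-encoded copy with a brightness shift.
original = [[10 * r + c for c in range(8)] for r in range(8)]
reuploaded = [[v + 2 for v in row] for row in original]

h1, h2 = average_hash(original), average_hash(reuploaded)
print(hamming(h1, h2))  # → 0: the fingerprints still match
```

Real engines index billions of such fingerprints, so a 2019 photograph re-uploaded during a later conflict still hashes to nearly the same value and is found almost instantly.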
Conclusion
The CyberPeace Research Team has concluded that the viral image claiming an Israeli helicopter was shot down in South Lebanon is misleading and has no relevance to current events. It is an old photograph that has been widely shared out of context, fuelling tensions around the conflict. Users are advised to verify claims through credible sources and not to spread false narratives.
- Claim: Israeli helicopter recently shot down in South Lebanon
- Claimed On: Facebook
- Fact Check: Misleading; original image found via Google reverse image search
Related Blogs

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers but generated by AI. More than a million people turned to the "Nano Banana" AI tool in Google Gemini, uploading their ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts regarding risks of privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
This AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system that uses machine learning to alter the pictures according to the specified description. The transformed AI portraits are then shared by users on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participating in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms noting that photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe effects on individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to generate synthetic identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputations, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may subject users to several layers of privacy threats that go far beyond the instant gratification of taking pleasing images. Harvesting of biometric data is the most critical issue since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images might be stored temporarily for processing and may be kept for longer periods if used for feedback purposes or feature development.
Unauthorised data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps had non-transparent data policies or obscured users' ability to opt out of data collection. This opens up opportunities for personal photographs to be shared with unknown entities for commercial use. Training-data exploitation involves using uploaded personal photos to improve AI models without notifying or compensating users. Although Google provides options to turn off data sharing in privacy settings, most users are unaware of them. Cross-platform data integration increases privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Inadequate informed consent remains a major problem, with users joining trends unaware of the full extent of the information they share. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
Protection of personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, users can experiment with stock images or non-identifiable pictures that satisfy the app's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit access to personal data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly regarding the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product to ensure it meets the company's privacy and security requirements. Training should also familiarise employees with deepfake technology and its risks.
3. Technical Protection Strategies
Deepfake detection software should be used. These tools, which include Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI, allow real-time detection with accuracy higher than 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it very difficult to pass off deepfake content as original.
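The blockchain-based verification idea rests on cryptographic hash chaining: each record commits to a content hash and to the previous record, so altering any entry invalidates everything after it. Below is a minimal, stdlib-only sketch of the principle; the record format is hypothetical and not taken from any production system:

```python
import hashlib
import json

def _entry_hash(content_hash, prev):
    # Deterministic serialisation so the hash is reproducible.
    payload = json.dumps({"content_hash": content_hash, "prev": prev},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def record(chain, content_hash):
    """Append a tamper-evident record committing to the previous one."""
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    chain.append({"content_hash": content_hash, "prev": prev,
                  "entry_hash": _entry_hash(content_hash, prev)})

def verify(chain):
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["entry_hash"] != _entry_hash(e["content_hash"], prev):
            return False
        prev = e["entry_hash"]
    return True

chain = []
record(chain, hashlib.sha256(b"original video bytes").hexdigest())
record(chain, hashlib.sha256(b"original photo bytes").hexdigest())
ok_before = verify(chain)              # untouched chain verifies
chain[0]["content_hash"] = "tampered"  # simulate a forged asset record
ok_after = verify(chain)               # tampering is detected
print(ok_before, ok_after)  # → True False
```

Production systems add signatures and distributed consensus on top, but the tamper-evidence property shown here is the core of why such records are hard to forge.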
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banks and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Implement digital literacy programs to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting purported AI crimes, thus offering assistance in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory frameworks should require transparent data policies in AI applications and give users rights and choices over their biometric and other data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of the present generation of artificial intelligence. While such tools offer new opportunities, they also pose grave privacy and security concerns that users, organisations, and policy-makers must address. By setting up comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/

Introduction
The year 2022 has been one of transition and change for the gaming industry. Esports and gaming have become more commercial than ever, with greater acceptance by sports authorities and higher prize pools for top players. According to research, by 2025 the industry will grow to 5 billion dollars, with around 420 million active gamers from India. India is on its way to becoming the world's largest gaming market, with revenue earned in 2021 increasing by up to 28%, or 1.2 billion dollars, and predicted to reach 2 billion dollars by 2024 as COVID-19 expanded internet access throughout the country.
After a lengthy debate, the government has finally decided to bring online gaming under the purview of the law. The President of India has changed the rules governing e-sports and requested that the Sports Ministry and the Ministry of Electronics and Information Technology (MeitY) include e-sports in multi-sport competitions. India's gaming sector has reached new heights this year, with the country winning its first bronze medal in the first esports event at this year's Commonwealth Games, and this is only the beginning.
Indian government takes on E-sports
The Indian government has given esports a huge boost by introducing it into the nation's traditional sports disciplines. Droupadi Murmu, the President of India, changed the regulations governing e-sports using the authority "conferred by clause (3) of Article 77 of the Constitution," and requested that "e-Sports be included as part of multi-sports events" from the Ministries of Electronics and Information Technology and Sports. Some crucial points clarify the government's position on e-sports:
- E-sports were added as a demonstration sport to the 2018 Asian Games in Jakarta, which meant that medals earned in the sport were not counted in the official total of medals.
- There is a greater desire for Esports to be integrated with school curricula.
- E-Sports (Electronic Sports) have been acknowledged by the Indian government as a component of multi-sport tournaments.

Why is e-sports important?
The Indian Esports Industry has worked hard to distinguish Esports from the broader category of “Gaming.” Esports is a competitive sport in which esports athletes compete in specific video game genres in a virtual, electronic environment using their physical and mental prowess, according to the industry.
According to studies, as individuals have become more screen-aware and online gaming has become part of their lives, internet gaming not only improves fine motor skills but also sharpens the mind. The industry has a vast base of users and stakeholders, and governing it has become critical; consequently, legislation is required to regulate it.
The Online Gaming (Regulation) Bill, 2022
The Online Gaming (Regulation) Bill, 2022, was recently introduced in the Lok Sabha to create an effective regulatory mechanism for the online gaming business and to prevent fraud and misuse of matters connected with or incidental to it. The Bill contains 20 sections spread across three chapters. It intends to establish an Online Gaming Commission, whose authority, mandate, and jurisdiction will be specified by the Bill, and which will license online gaming servers and may revoke or suspend such licences. The key highlights of the Bill are set out below:
- The Bill establishes a regulatory agency, the Online Gaming Commission ("OGC"), comprising five members chosen by the Central Government, including at least one expert each in the fields of law, cyber technology, and law enforcement.
- The OGC will be able to oversee the functions of online gaming websites, issue periodic or special reports on Online Gaming issues, recommend appropriate measures to control and curb illegal Online Gaming, grant, suspend, and revoke licenses for online gaming websites, and set fees for license applications and renewals.
- The Bill proposes to make operating an online gaming website without a non-transferable, non-assignable licence illegal. Anyone operating an online gaming server or website without a licence risks up to three years in prison and a fine. A licence will be valid for a six-year term.
- The licence granted under the Bill may be terminated or cancelled if the licensee violates any of the licence's requirements or any of the Bill's provisions. However, the Bill does not apply to anyone providing backend services in India, including hosting and maintenance, for any international gaming website situated outside India.
- The Bill also covers Foreign Direct Investment and technology collaboration in online gaming.

A few gaps in the Bill that could be addressed to make it stronger:
- The law does not address Know Your Customer (KYC) requirements, customer complaint procedures, advertising and marketing restrictions, user data protection, responsible gaming guidelines, and other concerns.
- The Bill makes no clear distinction regarding the money involved in games. This is a matter of concern and needs to be addressed so that the money laundering aspect can be determined.
- The distinction between "games of chance" and "games of skill" is not addressed in the Bill. Furthermore, the Bill does not specify whether its prohibitions apply only to real-money games or also to free games.
Conclusion
Despite the bill’s flaws, it has offered optimism to the burgeoning gaming sector, which desperately needs a robust regulatory and legal framework free of ambiguity, allowing players to play safely, and encouraging entrepreneurs to enter the field with safety and security. An improved regulatory framework will increase job prospects while also assisting the government. A transparent framework will also aid in the protection of the rights of actors and stakeholders.

As the Ministry of Electronics and Information Technology (MeitY) continues to invite proposals from academicians, institutions, and industry experts to develop frameworks and tools for AI-related issues through the IndiaAI Mission, it has also funded two AI projects dealing with deepfakes, as per a status report submitted on 21st November 2024. The Delhi court also ordered the nomination of members to a nine-member committee constituted by MeitY on 20th November 2024 to address deepfake issues, and asked for a report within three months.
Funded AI Projects
The two projects funded by MeitY are:
- Fake Speech Detection Using Deep Learning Framework: The project was initiated in December 2021 and focuses on detecting fake speech by creating a web interface for detection software; it also includes building a speech verification platform specifically designed for testing fake speech detection systems. It is set to conclude in December 2024.
- Design and Development of Software for Detecting Deepfake Videos and Images: This project was funded by MeitY from January 2022 to March 2024. It involved the Centre for Development of Advanced Computing (C-DAC), Kolkata and Hyderabad, which developed a prototype tool capable of detecting deepfakes. Named FakeCheck, it is designed as a desktop application and a web portal that can detect deepfakes without internet access. Reports suggest that it is currently undergoing testing and awaiting feedback.
Apart from these projects, MeitY has released an expression of interest for proposals in four other areas:
- Tools that detect AI-generated content along with traceable markers,
- Tools that develop an ethical AI framework for AI systems to be transparent and respect human values,
- An AI risk management and assessment tool that analyses threats and precarious situations of AI-specific risks in public AI use cases and;
- Tools that can assess the resilience of AI in stressful situations such as cyberattacks, natural disasters, and operational failures.
CyberPeace Outlook
Deepfakes pose significant challenges to critical sectors in India, such as healthcare and education, where manipulated content can lead to crimes like digital impersonation, misinformation, and fraud. The rapid advancement of AI, with regulation unable to keep pace, continues to fuel such threats. Recognising these risks, MeitY's IndiaAI Mission, which promotes investment and encourages educational institutions to undertake AI projects that strengthen the country's digital infrastructure, serves as a guiding light. A part of the mission focuses on developing indigenous solutions, including tools for assessment and regulation, to address AI-related threats effectively. While India is making strides in this direction, the global AI landscape is evolving rapidly, with many nations advancing regulations to mitigate AI-driven challenges. Consistent steps, including inviting proposals and funding projects, provide the much-needed impetus for the mission to be realised.
References
- https://economictimes.indiatimes.com/tech/technology/meity-dot-at-work-on-projects-for-fair-ai-development/articleshow/115777713.cms?from=mdr
- https://www.hindustantimes.com/india-news/meity-seeks-tools-to-detect-deepfakes-label-ai-generated-content-101734410291642.html
- https://www.msn.com/en-in/news/India/meity-funds-two-ai-projects-to-detect-fake-media-forms-committee-on-deepfakes/ar-AA1vMAlJ
- https://indiaai.gov.in/