# Fact Check: Old Photo Misused to Claim Israeli Helicopter Downed in Lebanon!
Executive Summary
A viral image claims that an Israeli helicopter was shot down in South Lebanon. This investigation evaluates the authenticity of the picture and concludes that it is an old photograph taken out of its original context and passed off as a recent event.

Claims
The viral image circulating online claims to depict an Israeli helicopter recently shot down in South Lebanon during the ongoing conflict between Israel and militant groups in the region.


Fact Check:
A reverse image search surfaced a 2019 post on Arab48.com featuring the exact picture now going viral.



Reverse image searches therefore traced the picture back to its original source, debunking the false claim.
Neither major news agencies nor the Israel Defense Forces have issued any report confirming that a helicopter was shot down in southern Lebanon during the current hostilities.
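For readers who want to reproduce this kind of verification, perceptual hashing offers a programmatic way to confirm that two image files contain the same picture even after resizing or recompression. The sketch below is illustrative only: it assumes the third-party Pillow and ImageHash packages, and the filenames are hypothetical placeholders for locally saved copies of the viral post and the 2019 original.

```python
# A minimal sketch of comparing a viral image against a suspected
# original using perceptual hashing. Assumes the third-party Pillow
# and ImageHash packages (pip install Pillow ImageHash); the
# filenames are placeholders for locally saved copies.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_post.jpg"))
original = imagehash.phash(Image.open("arab48_2019.jpg"))

# Hamming distance between the two 64-bit hashes: 0 means identical,
# and small values (below roughly 10) indicate the same image after
# re-encoding, cropping borders, or compression.
distance = viral - original
print(f"Hash distance: {distance}")
print("Likely the same image" if distance < 10 else "Probably different images")
```

A small hash distance is strong evidence of a match; it complements, rather than replaces, a full reverse image search, which also reveals where and when the picture first appeared.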
Conclusion
The Cyber Peace Research Team has concluded that the viral image claiming an Israeli helicopter was shot down in South Lebanon is misleading and has no connection to current events. It is an old photograph that has been widely shared in a false context, inflaming the conflict. Readers are advised to verify claims through credible sources and avoid spreading false narratives.
- Claim: Israeli helicopter recently shot down in South Lebanon
- Claimed On: Facebook
- Fact Check: Misleading; original image found via Google reverse image search
Related Blogs

Executive Summary:
In the digital world, people are increasingly becoming targets of online scams built on deception. One scheme exploiting social media in the run-up to the elections is the "BJP - Election Bonus" offer, which promises a cash prize of Rs. 5,000 or more for completing a short questionnaire. This article details the swindle, exposes its deceptive tricks, and offers recommendations on protecting yourself from such online fraud, especially during the upcoming elections.
False Claim:
The "BJP - Election Bonus" campaign boasts that by taking a few clicks of the mouse, users will get a cash prize. This scheme is nothing but a fake association with the Bharatiya Janata Party (BJP)’s Government and Prime Minister Shri Narendra Modi and therefore, it uses the images and brands of both of them to give the scheme an impression of legitimacy. The imposters are taking advantage of the public's trust for the Government and the widespread desire for remuneration to ensnare the unaware victims, specifically before the upcoming Lok Sabha elections.

The Deceptive Scheme:
- Tempting Social Media Offer: The fraud begins with an attractive link posted on social media platforms. The scammers claim the offer comes from the Bharatiya Janata Party (BJP), captioning it "The official party has prepared many gifts for their supporters." alongside an image of Prime Minister Shri Narendra Modi.
- Luring with Money: The offer promises Rs. 5,000 or more, drawing people in by appealing to their desire for financial gain, particularly during election campaigns.
- Tricking with Questions: Clicking the link takes the visitor to a page of simple questions designed to put them at ease and convince them they have been selected for a genuine government programme.
- The Open-the-Box Trap: Once the questions are answered, the final instruction is to open a box to reveal the prize, a tactic that plays on curiosity about the reward.
- Fake Reward and Spreading the Scam: Opening the box displays a message announcing Rs. 5,000. The reward is fake; its real purpose is to get victims to share the link on WhatsApp, helping the scammers reach more people.
The fraudsters invoke the party's and the Prime Minister's names to make the scheme plausible, although there is no real connection. They exploit people's desire for financial help, and the timing of the elections, to make victims susceptible to the trick.
Analytical Breakdown:
- The campaign is a cleverly crafted scheme that lures people by misusing their trust in the Government. By using the BJP's branding and the Prime Minister's photo, the fraudsters make their misleading offer look credible. Fake reviews and the promised cash reward are the two main hooks designed to draw users into the deception.
- By sharing the link over WhatsApp, users become unwitting accomplices, helping the scammers reach an ever-larger audience, especially with the elections around the corner.
- The timing of the fraud is particularly troubling, coming just before the election. Scammers exploit the political climate and the spread of unconfirmed rumours and speculation about the upcoming polls, linking their scam to the political party and its leadership to capitalise on political affiliations.
- We have cross-checked the claim and, as of now, found no established, credible source or official notification confirming any such offer from the party.
- Domain Analysis: The campaign is hosted on a third-party domain rather than the party's official website, which raises immediate doubts. WHOIS records show the domain was registered very recently, on 29 March 2024, only days before the campaign began circulating.

- Domain Name: PSURVEY[.]CYOU
- Registry Domain ID: D443702580-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-03-29T16:18:00.0Z
- Creation Date: 2024-03-29T15:59:17.0Z (Recently Created)
- Registry Expiry Date: 2025-03-29T23:59:59.0Z
- Registrant State/Province: Anhui
- Registrant Country: CN (China)
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
Note: Cybercriminals used Cloudflare technology to mask the actual IP address of the fraudulent website.
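The registration details above can be pulled programmatically. The sketch below is a minimal illustration assuming the third-party python-whois package; the domain string is the defanged one listed above, reassembled purely for demonstration.

```python
# A minimal sketch of the WHOIS and DNS checks described above.
# Assumes the third-party "python-whois" package
# (pip install python-whois).
import socket
from datetime import datetime

import whois  # provided by python-whois

DOMAIN = "psurvey.cyou"  # listed above, defanged, as PSURVEY[.]CYOU

record = whois.whois(DOMAIN)
created = record.creation_date
# python-whois may return a list when the registry reports several dates
if isinstance(created, list):
    created = created[0]

print(f"Domain:       {DOMAIN}")
print(f"Created:      {created} ({(datetime.now() - created).days} days ago)")
print(f"Registrar:    {record.registrar}")
print(f"Name servers: {record.name_servers}")

# Resolving the domain returns a Cloudflare edge IP rather than the
# origin server, which is how the operators hide the real host.
print(f"Resolves to:  {socket.gethostbyname(DOMAIN)}")
```

Running such a check on any domain behind an unsolicited offer is a quick first filter: a days-old registration behind a supposed government campaign is a strong scam signal.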
CyberPeace Advisory and Best Practices:
- Be watchful for online offers that seem too good to be true, particularly during election periods; they usually conceal dishonest schemes, so treat them with a high degree of caution.
- Cross-check the authenticity of any campaign or offer before interacting with it. Do not click suspicious links or share personal data that could be used to further the scam.
- If you come across suspicious activity, or believe you have been scammed, report it to the relevant authorities, such as the local police or the cybercrime cell. Reporting is one of the most effective tools for stopping the spread of such schemes and supports investigations.
- Educate yourself and your family about common scam tactics, including election-related ones. Encourage critical thinking and healthy scepticism toward online offers and promotions that promise easy money or rewards.
- Stay alert as you navigate the digital space, especially during elections, and always verify information before acting on it or passing it along.
- If you have doubts about a particular online offer or campaign, seek guidance from reliable sources such as cybersecurity experts or government agencies. Consulting credible sources will help you make informed decisions and guard against manipulation.
Conclusion:
The "BJP - Election Bonus" campaign is a real case study of how Internet fraud is becoming more popular day by day, particularly before the elections. Through the awareness of the tactics employed by these scammers and their abuse of the community's trust in the Government and political figures, we can equip ourselves and our communities to avert becoming the victim of such fraudulent schemes. As a team, we can collectively strive for a digital environment free of threats and breaches of security, even in times of high political tension that accompany elections.

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: the probability of hallucinations can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming, and seemingly counterintuitive, is that newer, more advanced models are producing more hallucinations, not fewer. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
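To make the "statistical patterns" point concrete, here is a toy sketch of next-token sampling. The vocabulary and probabilities are entirely invented for illustration and bear no relation to any real model's weights.

```python
# Toy illustration of next-token prediction: an LLM repeatedly samples
# the next token from a probability distribution conditioned on the
# text so far. The "model" here is a hard-coded lookup table with
# invented probabilities; real models compute these distributions with
# a neural network over a vocabulary of tens of thousands of tokens.
import random

NEXT_TOKEN_PROBS = {
    "The capital of": {"France": 0.8, "Atlantis": 0.2},
    "France": {"is": 1.0},
    "Atlantis": {"is": 1.0},
    "is": {"Paris.": 0.7, "Lyon.": 0.2, "Poseidonia.": 0.1},
}

def sample_next(context: str) -> str:
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Fluent but wrong continuations ("The capital of France is Lyon.")
# are always possible because sampling is probabilistic; this is the
# statistical root of a hallucination.
sentence = ["The capital of"]
token = sample_next(sentence[-1])
while not token.endswith("."):
    sentence.append(token)
    token = sample_next(token)
sentence.append(token)
print(" ".join(sentence))
```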
Reports on TechCrunch note that when users ask AI models for short answers, hallucinations increase by up to 30%. A study covered by eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions, and the problem is not limited to that model: similar LLMs, such as DeepSeek, are affected too. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only looks real but can also feed fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other critical moments.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. Encouragingly, developers are aware of the problem and are actively charting ways to reduce the probability of such errors. Some of these are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model's internal knowledge, RAG allows the model to "look up" information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (a minimal sketch follows this list).
- Smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is narrower and their training data better curated.
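Below is a minimal RAG sketch under stated assumptions: the corpus, query, and prompt template are invented for illustration, and scikit-learn's TF-IDF retrieval stands in for the embedding index a production system would use.

```python
# A minimal retrieval-augmented generation (RAG) sketch: retrieve the
# most relevant passage from a small trusted corpus and prepend it to
# the prompt, so the model answers from grounded text instead of its
# internal (possibly hallucinated) knowledge.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CORPUS = [
    "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "The Great Wall of China is over 13,000 miles long.",
    "Mount Everest stands at 8,849 metres above sea level.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    # TF-IDF similarity stands in for a production embedding index.
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(CORPUS)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [CORPUS[i] for i in ranked]

def build_grounded_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# The grounded prompt would then be sent to any LLM of choice.
print(build_grounded_prompt("When was the Eiffel Tower completed?"))
```

The key design choice is instructing the model to answer only from the retrieved context and to admit ignorance otherwise; that instruction, plus the retrieved passage, is what anchors the response in verifiable data.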
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made to how models are trained and deployed. For now, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction
The ongoing debate over whether AI scaling has hit a wall has been reignited by the underwhelming response to OpenAI's GPT-5. AI scaling laws, which hold that machine learning models perform better as training data, model parameters, and computational resources increase, have guided the rapid progress of Large Language Models (LLMs) so far. But many AI researchers now suggest that further improvements will require computational costs that are orders of magnitude larger, with returns that may not justify the expense. The question, then, is whether scaling remains a viable path or whether the field must explore new approaches. This is not just a tech issue but a profound innovation challenge for countries like India that are charting their own AI course.
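The diminishing-returns argument follows directly from the power-law form that empirical scaling laws typically take. The sketch below uses illustrative constants, in the spirit of published scaling-law fits rather than values from any specific paper, to show how each tenfold increase in parameters buys a smaller absolute drop in loss.

```python
# Diminishing returns under a power-law scaling law of the form
# L(N) = (N_c / N) ** alpha, where N is the parameter count and L the
# test loss. The constants are illustrative, not fitted values.
N_C = 8.8e13   # scale constant (illustrative)
ALPHA = 0.076  # power-law exponent (illustrative)

def loss(n_params: float) -> float:
    return (N_C / n_params) ** ALPHA

prev = loss(1e8)
for n in (1e9, 1e10, 1e11, 1e12):
    current = loss(n)
    # Each decade costs roughly 10x more compute but yields a smaller
    # absolute improvement in loss than the decade before it.
    print(f"{n:.0e} params: loss {current:.3f} "
          f"(gain from the last 10x: {prev - current:.3f})")
    prev = current
```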
The Scaling Wall: Gaps and Innovation Opportunities
Escalating costs, data scarcity, and diminishing gains mean that simply building larger AI models may no longer guarantee breakthroughs. In such a scenario, LLM developers will have to devise new approaches to training, for example by diversifying data types and rethinking training techniques.
This global challenge has a bearing on India’s AI ambitions. For India, where compute and data resources are relatively scarce, this scaling slowdown poses both a challenge and an opportunity. While the India AI Mission embodies smart priorities such as democratising compute resources and developing local datasets, looming scaling challenges could prove a roadblock. Realising these ambitions requires strong input from research and academia, and improved coordination between policymakers and startups. The scaling wall highlights systemic innovation gaps where sustained support is needed, not only in hardware but also in talent development, safety research, and efficient model design.
Way Forward
To truly harness AI’s transformative power, India must prioritise policy actions and ecosystem shifts that support smarter, safer, and context-rich research through the following measures:
- Driving Efficiency and Compute Innovation: Instead of relying on brute-force scaling, India should invest in research and startups working on efficient architectures, energy-conscious training methods, and compute optimisation.
- Investing in Multimodal and Diverse Data: While indigenous datasets are being developed under the India AI Mission through AI Kosha, they must be ethically sourced from speech, images, video, sensor data, and regional content, apart from text, to enable context-rich AI models truly tailored to Indian needs.
- Addressing Core Problems for Trustworthy AI: LLMs from all major players, such as OpenAI, xAI's Grok, and DeepSeek, suffer from unreliability, hallucinations, and bias, since they are built primarily by scaling up datasets and parameters, an approach with inherent limitations. India should invest in the capabilities needed to solve these issues and design more trustworthy LLMs.
- Supporting Talent Development and Training: Despite its substantial AI talent pool, India faces an impending demand-supply gap. It will need to launch national programmes and incentives to upskill engineers, researchers, and students in advanced AI skills such as model efficiency, safety, interpretability, and new training paradigms.
Conclusion
The AI scaling wall debate is a reminder that the future of LLMs will depend not on ever-larger models but on smarter, safer, and more sustainable innovation. A new generation of AI is emerging, and India can help shape its future. If its plans are implemented effectively, the country's AI Mission and startup ecosystem are well-positioned to lead this shift by focusing on localised needs, efficient technologies, and inclusive growth. How India approaches this new set of challenges and translates its ambitions into action, however, remains to be seen.
References
- https://blogs.nvidia.com/blog/ai-scaling-laws/
- https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall
- https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
- https://indiaai.gov.in/
- https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html