# Fact Check: Old Photo Misused to Claim Israeli Helicopter Downed in Lebanon!
Executive Summary
A viral image claims to show an Israeli helicopter shot down in South Lebanon. This investigation examines the authenticity of the picture and concludes that it is an old photograph shared out of context to fit a current event.

Claims
The viral image circulating online claims to depict an Israeli helicopter recently shot down in South Lebanon during the ongoing conflict between Israel and militant groups in the region.


Fact Check:
A reverse image search led us to a 2019 post on Arab48.com featuring the exact picture now circulating.



The reverse image search therefore traced the picture back to its original source, debunking the claim that it depicts a recent incident (a sketch of how such an image comparison can be automated appears below).
In addition, neither major news agencies nor the Israel Defense Forces have reported a helicopter being shot down in southern Lebanon during the current hostilities.
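For readers curious about how such a comparison can be automated, the snippet below is a minimal sketch that uses perceptual hashing (via the Python Pillow and imagehash libraries) to check whether two copies of a photograph are effectively identical. The file names are hypothetical placeholders; the verification described above was carried out with Google's reverse image search, not this script.

```python
# Minimal sketch: compare a viral image against an archived copy using
# perceptual hashing. File names below are hypothetical placeholders.
from PIL import Image
import imagehash

# Perceptual hashes are robust to re-compression and minor edits, so
# near-identical images produce hashes with a small Hamming distance.
viral_hash = imagehash.phash(Image.open("viral_post_image.jpg"))
archive_hash = imagehash.phash(Image.open("arab48_2019_image.jpg"))

distance = viral_hash - archive_hash  # Hamming distance between the hashes
print(f"Hash distance: {distance}")
if distance <= 5:  # small threshold => effectively the same picture
    print("Match: the viral photo appears to be the 2019 photograph.")
```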
Conclusion
The Cyber Peace Research Team has concluded that the viral image claiming to show an Israeli helicopter shot down in South Lebanon is misleading and has no connection to current events. It is an old photograph that has been widely reshared in a different context, fuelling tensions around the conflict. Readers are advised to verify claims through credible sources and avoid spreading false narratives.
- Claim: Israeli helicopter recently shot down in South Lebanon
- Claimed On: Facebook
- Fact Check: Misleading; the original image was found via Google reverse image search
Related Blogs

Introduction
In the digital age, net neutrality has become crucial for preserving the fairness and openness of the internet. Net neutrality means that all internet traffic is treated equally, without discrimination or preferential treatment. This principle lets users freely access and distribute content, which promotes innovation, competition, and the democratisation of knowledge. In India, net neutrality has been the subject of controversy and a legal battle to protect an open internet. In this blog post, we look at that legal battle and the efforts made to safeguard net neutrality in India.
Background on Net Neutrality in India
Net neutrality became a hot topic in India after a major telecom service provider proposed charging different fees for access to different parts of the internet. Internet users, activists, and organisations in favour of an open internet raised concerns over this. A consultation paper published by the Telecom Regulatory Authority of India (TRAI) in 2015 drew millions of comments, highlighting the significance of net neutrality for the country's internet users.
Legal Battle and Regulatory Interventions
The battle for net neutrality in India gained prominence when TRAI issued the Prohibition of Discriminatory Tariffs for Data Services Regulations in 2016. These regulations, often associated with the ban on "Free Basics", were designed to end zero-rating platforms, which exempt specific websites or services from data charges. The regulations ensured that all data on the internet would be treated uniformly, regardless of where it originated.
But the legal conflict did not end there. The telecom industry challenged TRAI's regulations, resulting in a series of legal disputes in courts around the country. At the heart of the dispute were the Telecom Regulatory Authority of India Act and the provisions that govern TRAI's power to regulate internet services.
The Indian judiciary played a significant role in protecting net neutrality. In 2018, the Telecom Disputes Settlement and Appellate Tribunal (TDSAT) upheld the TRAI regulations and ruled in favour of net neutrality, underscoring the importance of non-discriminatory internet access and setting a crucial precedent. In 2019, after several rounds of litigation, the Supreme Court of India endorsed the principles of net neutrality, declaring it a fundamental idea that must be protected. The top court's ruling strengthened the nation's legal framework for preserving a free and open internet.
Ongoing Challenges and the Way Forward
Even though India has made great strides towards upholding net neutrality, challenges persist. The rapid advancement of technology and the emergence of new services and platforms mean that net neutrality must be continuously safeguarded. Practices such as zero-rating schemes and service-specific data plans continue to raise questions about potential violations of net neutrality principles. Addressing these concerns requires proactive and vigilant regulation: TRAI is responsible for monitoring and responding to breaches of net neutrality principles, while striking a balance between promoting innovation and competition and maintaining a free and open internet.
Additionally, public awareness and education are crucial for sustaining net neutrality. Informing users of their rights and encouraging their involvement in the conversation makes the decision-making process more inclusive and democratic. Civil society organisations and advocacy groups can play an important role in educating the public about net neutrality and building support for it.
Conclusion
The legal battle for net neutrality in India has been a significant milestone in the campaign to preserve an open and neutral internet. Legislative initiatives and judicial decisions have established a robust framework for net neutrality in the country. However, given the ongoing challenges and the dynamic nature of technology, maintaining net neutrality calls for vigilant oversight and decisive action. An open and impartial internet is crucial for fostering innovation, protecting free speech, and providing equal access to information. India's efforts to uphold net neutrality should motivate other nations dealing with similar issues. All stakeholders, including policymakers, must work together to protect the principles of net neutrality and ensure that the internet remains accessible to everyone.

Introduction
The Ministry of Electronics and Information Technology (MeitY) issued an advisory on 1 March 2024 urging platforms to prevent bias, discrimination, and threats to electoral integrity when using AI, generative AI, large language models (LLMs), or other algorithms. The advisory requires that AI models deemed unreliable or under-tested in India obtain explicit government permission before deployment. While leveraging AI models, generative AI, software, or algorithms in their computer resources, intermediaries and platforms need to ensure that they prevent bias, discrimination, and threats to electoral integrity. Intermediaries are already required to follow the due diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as updated on 06.04.2023), and this advisory urges them to abide by those rules and the compliance requirements therein.
Key Highlights of the Advisories
- Intermediaries and platforms must ensure that their use of Artificial Intelligence models, LLMs, Generative AI, software, or algorithms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share unlawful content, as per Rule 3(1)(b) of the IT Rules.
- The government urges intermediaries and platforms to prevent bias or discrimination in their use of Artificial Intelligence models, LLMs, Generative AI, software, or algorithms, and to ensure these do not threaten the integrity of the electoral process.
- The government requires explicit permission before under-tested or unreliable AI models, LLMs, or algorithms are made available on the Indian internet. Such models must be deployed with proper labelling of their potential fallibility or unreliability, and users may additionally be informed through a consent popup mechanism.
- The advisory specifies that all users should be clearly informed, through terms of service and user agreements, about the consequences of dealing with unlawful information on platforms, including disabling access, removal of non-compliant information, suspension or termination of the user's account or access rights, and punishment under applicable law.
- The advisory also sets out measures to combat deepfakes and misinformation. It calls for synthetically created content to be identifiable across formats, advising platforms to employ labels, unique identifiers, or metadata to ensure transparency, and it mandates the disclosure of software details and the ability to trace the first originator of such synthetically created content. A minimal, hypothetical sketch of such labelling follows this list.
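As a purely illustrative sketch of the labelling and metadata requirement described above, the snippet below shows how a platform might attach a machine-readable provenance record to a piece of AI-generated content. The field names and structure are assumptions made for illustration; the advisory does not prescribe a specific format.

```python
# Hypothetical sketch: build a provenance/label record for AI-generated
# content. Field names are illustrative, not mandated by the advisory.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def label_synthetic_content(content: bytes, model_name: str) -> dict:
    """Return a provenance record for a piece of AI-generated content."""
    return {
        "content_id": str(uuid.uuid4()),               # unique identifier
        "sha256": hashlib.sha256(content).hexdigest(), # ties label to the content
        "ai_generated": True,                          # explicit label
        "model": model_name,                           # which model produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was synthetically generated and may be unreliable.",
    }

record = label_synthetic_content(b"<generated image bytes>", "example-model-v1")
print(json.dumps(record, indent=2))
```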
Rajeev Chandrasekhar, Union Minister of State for IT, stated that:
“Advisory is aimed at the Significant platforms, and permission seeking from Meity is only for large platforms and will not apply to startups. Advisory is aimed at untested AI platforms from deploying on the Indian Internet. Process of seeking permission , labelling & consent based disclosure to user about untested platforms is insurance policy to platforms who can otherwise be sued by consumers. Safety & Trust of India's Internet is a shared and common goal for Govt, users and Platforms.”
Conclusion
MeitY's advisory sets the stage for a more regulated AI landscape. The Indian government requires explicit permission for the deployment of under-tested or unreliable Artificial Intelligence models on the Indian internet. Alongside intermediaries, the advisory also applies to digital platforms that incorporate AI elements. The advisory is aimed at significant platforms and will not apply to startups. This move safeguards users and fosters innovation by promoting responsible AI practices, paving the way for a more secure and inclusive digital environment.
References
- https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf
- https://economictimes.indiatimes.com/tech/technology/govts-ai-advisory-will-not-apply-to-startups-mos-it-rajeev-chandrasekhar/articleshow/108197797.cms?from=mdr
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf

Introduction
Generative AI models are significant consumers of the computational resources and energy required to train and run them. While AI is being hailed as a game-changer, cracks beneath the shiny exterior raise significant concerns about its environmental impact. The development, maintenance, and disposal of AI technology all carry a large carbon footprint. Large-scale language and image generation models rely on data centers powered by electricity that often comes from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- During the training phase, generative AI consumes a great deal of power: vast computational resources, often extensive GPU clusters running for weeks or even months, draw a substantial amount of electricity. After training comes the inference phase, where the models are deployed to serve real-time requests; this can also be energy-intensive, especially given the millions of users of generative AI.
- The energy used to train and deploy AI models often comes from non-renewable sources, which adds to the carbon footprint. The data centers where generative AI computations take place are a significant source of carbon emissions if they rely on fossil fuels for their energy needs. According to research reported by MIT Technology Review, training a single AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centers will use 8% of US power by 2030, compared to 3% in 2022, as their energy demand grows by 160%. (A rough back-of-envelope estimate of training energy is sketched after this list.)
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
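To make the scale of training energy concrete, here is a rough back-of-envelope estimate. Every figure in it (GPU count, power draw, data-center overhead, training duration, grid carbon intensity) is an illustrative assumption rather than a measurement of any particular model.

```python
# Back-of-envelope estimate of training energy and emissions.
# All numbers below are illustrative assumptions, not measurements.

num_gpus = 512            # GPUs in the hypothetical training cluster
gpu_power_kw = 0.4        # average draw per GPU, in kilowatts
pue = 1.3                 # data-center overhead (cooling, networking)
training_days = 30        # length of the training run

energy_kwh = num_gpus * gpu_power_kw * pue * training_days * 24
grid_intensity = 0.4      # kg CO2 per kWh on a fossil-heavy grid
emissions_tonnes = energy_kwh * grid_intensity / 1000

print(f"Energy: {energy_kwh:,.0f} kWh, emissions: {emissions_tonnes:,.1f} t CO2")
```

Even with these modest assumptions, a single month-long run lands in the tens of tonnes of CO2, which is why the grid mix powering data centers matters so much.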
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are a few examples of companies making efforts to reduce their carbon footprint and energy consumption and, in the long run, become more environmentally friendly. Some of these efforts are listed below:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, leading to more efficient AI computations, particularly during periods of peak demand.
- Researchers at Microsoft have developed a so-called "1-bit" architecture that can make LLMs up to 10 times more energy efficient than current leading systems. The approach simplifies the models' calculations by restricting weights to a very small set of values, slashing power consumption without sacrificing performance (a toy sketch of the idea appears after this list).
- OpenAI has been working on optimizing the efficiency of its models and exploring ways to reduce the environmental impact of AI, including research into more efficient training methods and model architectures and the use of renewable energy wherever possible.
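As a toy illustration of the idea behind such low-bit architectures, the sketch below replaces full-precision weights with their signs plus a single per-tensor scale. It is a conceptual sketch only, not Microsoft's actual implementation.

```python
# Toy illustration of 1-bit weight quantization: keep only the sign of
# each weight plus one scalar scale per tensor. Conceptual sketch only.
import numpy as np

def one_bit_quantize(weights: np.ndarray):
    """Return sign (+1/-1) weights and a scale preserving average magnitude."""
    scale = np.abs(weights).mean()           # one float instead of one per weight
    signs = np.where(weights >= 0, 1.0, -1.0)
    return signs, scale

w = np.random.randn(4, 4).astype(np.float32)
signs, scale = one_bit_quantize(w)
w_reconstructed = signs * scale              # used in place of w at inference time
print("Mean absolute error:", np.abs(w - w_reconstructed).mean())
```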
Policy Recommendations
We advocate a sustainable product development process and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only be better for the environment but also contribute to the sustainable development of generative AI. Some suggestions are as follows:
- AI development needs to adopt a climate justice framework, informed by diverse contexts and perspectives and working in tandem with the UN's Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialized AI accelerators and next-generation GPUs, can also help mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. The scale of the problem is growing: the World Economic Forum (WEF) projects that by 2050 the total amount of e-waste generated will have surpassed 120 million metric tonnes.
- Employing techniques such as model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computation. Optimized models are faster and require less hardware, thus consuming less energy (see the sketch after this list).
- Implementing federated or otherwise decentralized learning approaches, where models are trained across distributed devices rather than in centralized data centers, can spread the energy load more evenly and reduce the overall environmental impact.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
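As a minimal sketch of the model-compression recommendation above, the snippet below applies PyTorch's post-training dynamic quantization to a small stand-in network, converting its linear-layer weights to 8-bit integers. The toy model is an assumption; in practice a trained production model would be quantized.

```python
# Minimal sketch: post-training dynamic quantization of a toy model's
# Linear layers from float32 to int8 to cut memory and compute cost.
import torch
import torch.nn as nn

model = nn.Sequential(          # stand-in for a trained model
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Weights of Linear layers are stored as int8; activations are
# quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)       # same interface, smaller and cheaper to run
```

The trade-off is a small potential loss in accuracy in exchange for a lighter model, which is usually evaluated on a validation set before deployment.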
Final Words
The UN Sustainable Development Goals (SDGs) are as crucial for the AI industry as for any other, because they guide responsible innovation. Aligning AI development with the SDGs will encourage ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. At present, however, AI consumes enormous amounts of power without using it efficiently. If AI and its derivatives continue to stress the environment in this manner, they will strain clean water resources and deepen reliance on non-renewable power generation, adding to the already large carbon footprint of the AI industry as a whole.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/