#FactCheck -Old Video of Benjamin Netanyahu Running in Knesset Falsely Linked to Iran-Israel Tensions
Executive Summary
Amid the ongoing tensions between the United States, Israel, and Iran, a video circulating on social media claims that Israeli Prime Minister Benjamin Netanyahu was seen running after Iran launched an attack on Israel. However, research by CyberPeace found the viral claim to be misleading. Our research revealed that the video has no connection with the current tensions between the United States, Israel, and Iran. In reality, the clip dates back to 2021, when Netanyahu was rushing through Israel’s parliament to cast his vote after arriving late.
Claim:
On the social media platform X (formerly Twitter), a user shared the video on March 5, 2026, claiming that Netanyahu had fled and gone into hiding due to fear of Iran. The post included inflammatory remarks suggesting that Iran had demonstrated its power and that Netanyahu had abandoned his country out of fear.

Fact Check
To verify the authenticity of the video, we extracted several keyframes and conducted a reverse image search on Google. During the research, we found the same video on the official X account of Benjamin Netanyahu, posted on December 14, 2021. In the post, Netanyahu wrote in Hebrew, which translates to, “I am always proud to run for you. Photographed half an hour ago in the Knesset.”
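The keyframe-extraction step described above can be sketched in a few lines. This is a minimal illustration of the sampling logic only: it treats an already-decoded video as a plain sequence of frames (in practice a tool such as OpenCV would supply the decoded frames), and the function name and parameters are our own, not part of any specific tool.

```python
def sample_keyframes(frames, fps=30.0, every_n_seconds=2.0):
    """Pick one frame every N seconds from a decoded frame sequence.

    `frames` can be any sequence of decoded frames; the saved frames
    are then suitable for uploading to a reverse image search.
    """
    step = max(1, int(fps * every_n_seconds))
    return [frame for i, frame in enumerate(frames) if i % step == 0]

# e.g. a 10-second clip at 30 fps, with frames represented by their indices
clip = list(range(300))
keyframes = sample_keyframes(clip, fps=30.0, every_n_seconds=2.0)
# keeps indices 0, 60, 120, 180, 240
```

Sampling a handful of evenly spaced frames is usually enough, since a reverse image search only needs a few representative stills rather than every frame.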

Further research also led us to a Hebrew news website where the same video was published.

According to the report, voting in the Knesset (Israel’s parliament) continued throughout the night, and an explosives-related bill was passed by a very narrow margin. At the time, opposition leader Benjamin Netanyahu was in his room inside the Knesset building. When he was called for the vote, he hurried through the parliament corridors to reach the chamber in time to cast his vote.
Conclusion:
Our research found that the viral video is unrelated to the ongoing tensions involving the United States, Israel, and Iran. The footage is from 2021 and shows Benjamin Netanyahu rushing inside the Knesset to participate in a parliamentary vote after being called in at the last moment.
Related Blogs
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. The familiar prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With Google handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithm bias, refers to systematically skewed results that arise when human biases embedded in the training data or in the algorithm itself distort a model's outputs, potentially creating harmful outcomes. It can take the form of algorithmic bias, data bias, or interpretation bias, emerging from user history, geographical data, and broader societal biases reflected in training data.
A common consequence is the exclusion of certain groups of people from opportunities. In healthcare, underrepresentation of women or minority groups in datasets can skew predictive AI algorithms. Similarly, while AI helps streamline resume screening to identify ideal candidates, a biased dataset or other bias in the input data can cause qualified applicants to be screened out.
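The screening risk above can be seen even in a deliberately tiny toy model. This sketch uses hypothetical, made-up hiring data in which one group is underrepresented; the "model" simply predicts the most common label it saw per group, which is enough to show how a skewed sample, not candidate merit, drives the outcome.

```python
from collections import Counter

def train_majority_label(rows):
    """Toy 'model': for each group, predict the most common label seen in training."""
    by_group = {}
    for group, label in rows:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

# Hypothetical hiring data: group B is heavily underrepresented, and the
# few examples present happen to carry negative labels.
training = (
    [("A", "hire")] * 80 + [("A", "reject")] * 20 +
    [("B", "reject")] * 4 + [("B", "hire")] * 1
)
model = train_majority_label(training)
# model["A"] is "hire" while model["B"] is "reject": the skew in the
# sample, not the candidates themselves, determines the prediction.
```

Real screening systems are far more complex, but the underlying failure mode is the same: whatever imbalance the training data contains is reproduced, and often amplified, in the output.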
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
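The feedback loop just described can be sketched as a toy ranking simulation. This is not Google's actual algorithm, only an illustration of the general mechanism: results are ordered by past engagement, the user tends to click the top result, and a single early preference comes to dominate.

```python
def rank(items, clicks):
    """Order results by past engagement: more clicks -> higher rank."""
    return sorted(items, key=lambda it: clicks.get(it, 0), reverse=True)

items = ["viewpoint_a", "viewpoint_b", "viewpoint_c"]
clicks = {"viewpoint_b": 1}   # one early click on viewpoint_b
for _ in range(50):           # the user keeps clicking the top-ranked result
    top = rank(items, clicks)[0]
    clicks[top] = clicks.get(top, 0) + 1
# clicks ends as {"viewpoint_b": 51}: the loop never resurfaces
# the alternative viewpoints, regardless of their quality or accuracy.
```

The self-reinforcing loop is the essence of a filter bubble: once engagement feeds rank and rank feeds engagement, diversity of results collapses without any explicit intent to exclude.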
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU’s Artificial Intelligence Act establishes a regulatory framework that categorises AI systems based on risk and enforces strict standards for transparency, accountability, and fairness, especially for high-risk AI applications, which may include search engines. India proposed the Digital India Act in 2023, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias#:~:text=In%20healthcare%2C%20underrepresenting%20data%20of,can%20skew%20predictive%20AI%20algorithms

Introduction
A bill requiring social media companies, providers of encrypted communications, and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) has advanced to the Senate floor, alarming privacy advocates who argue that the legislation transforms businesses into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.
Why is there a requirement for online companies to report drug activity?
The bill was prompted by the death of a Kansas teenager who unknowingly took a fentanyl-laced pill he had purchased on Snapchat. It requires social media companies and other web communication providers to give the DEA users’ names and other information when the companies have “actual knowledge” that illicit drugs are being distributed on their platforms.
There is an urgent need to address this issue, as platforms like Snapchat and Instagram are in constant daily use. If such apps facilitate the sale of drugs, they risk becoming major drug-selling vehicles in their own right.
Threat to end-to-end encryption
End-to-end encryption has long been criticised by law enforcement for creating a “lawless space” that criminals, terrorists, and other bad actors can exploit for illicit purposes. While end-to-end encryption is essential for privacy, critics argue that the same protection can shield cyber fraud and other cybercrimes.
Cases of drug peddling on social media platforms
Getting drugs on social media has been described as being as easy as calling an Uber. A survey found that access to illegal drugs on social media applications is “staggering”, contributing to a rising number of fentanyl overdoses, alongside deaths from suicide, gun violence, and accidents.
According to another survey, drug dealers use slang, emoticons, QR codes, and disappearing messages to reach customers while evading content moderation on social networking platforms. Dealers are frequently active on several platforms at once, advertising their products on Instagram while providing their WhatsApp or Snapchat handles for queries, making it difficult for law enforcement officials to crack down on the transactions.
Social media platforms therefore need to report such drug-selling activity to the Drug Enforcement Administration. The bill requires online companies to report drug cases occurring on their services, such as the Snapchat case mentioned above. In many other cases, dealers sell drugs through Instagram, Snapchat, and similar apps; when one account is blocked, they simply create another. Blocking accounts alone does not stop drug trafficking on social media platforms.
Will this put the privacy of users at risk?
Reporting drug-selling activity on social media platforms is an important step against this form of cybercrime. Companies would detect and report only activity related to drug sales, targeting bad actors and cybercriminals rather than ordinary users, and detection would be confined to the specific activities on the applications where they occur. At present, social media platforms lack regulations governing such activity, and their convenience has made them a major vehicle for drug sales.
Conclusion
Social media companies should promptly report such activities on their platforms to the Drug Enforcement Administration so that the DEA can take the required steps, rather than merely blocking accounts; blocking alone does not shut down these online drug markets. Proper reporting mechanisms, along with broader social media regulation, are needed, given the enormous influence these platforms have on people.

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly being provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) processes to simulate conversations between the platform and a user. Thus, AI chatbots can provide mental health support from the comfort of the home, at any time of the day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots can present technical and ethical challenges as well.
Background
According to the WHO’s World Mental Health Report of 2022, an estimated 1 in 8 people globally suffers from some form of mental health disorder. The need for mental health services worldwide is high, but the care ecosystem is inadequate in both availability and quality. In India, it is estimated that there are only 0.75 psychiatrists per 100,000 patients, and only about 30% of people with mental health conditions receive help. Considering the slow thawing of social stigma around mental health, especially among younger demographics, and the fact that support services remain confined to urban Indian centres, the telehealth market is only projected to grow. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and AI-induced oversight can exacerbate distress due to some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
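The non-contextual learning risk above shows up even in the simplest possible design. The toy keyword-matching bot below (the responses and function are invented for illustration, not drawn from any real product) matches surface words rather than meaning, so a negated statement still triggers the wrong advice; statistical chatbots fail in subtler but analogous ways.

```python
RESPONSES = {
    "anxious": "Try a short breathing exercise: inhale 4s, hold 4s, exhale 4s.",
    "sad": "It can help to talk to someone you trust about how you feel.",
}
FALLBACK = "I'm sorry you're going through this. Can you tell me more?"

def reply(message):
    """Toy keyword bot: matches surface words, not context or negation."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK

# "I'm not anxious at all" still triggers the 'anxious' response:
# the bot reads keywords, not meaning.
```

A user saying "I'm not anxious at all" receives breathing-exercise advice anyway, a small-scale version of the over-generalised responses described above.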
Way Forward
- Addressing Therapeutic Misconception: A lack of understanding of the purpose and capabilities of such chatbots, in terms of the care expectations and treatments they can offer, can jeopardize user health. Platforms providing such services should be required to display clear, easy-to-understand disclaimers about the limitations of the therapeutic relationship between the platform and its users.
- Improved Algorithm Design: Training data for these models must be regularly updated and audited to enhance accuracy, incorporate contextual socio-cultural factors into profile analysis, and draw on feedback loops from users and mental health professionals.
- Human Oversight: Models of therapy in which AI chatbots supplement treatment rather than replace human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required.
Conclusion
It is important to recognize that, so far, there is no substitute for professional mental health services. Chatbots can help users gain awareness of their mental health condition, play an educational role, nudge them in the right direction, and assist both the practitioner and the client/patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing, and arguably already critical, global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767#:~:text=Indian%20mental%20health%20market%20is,access%20to%20better%20quality%20services.
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full