#FactCheck - Debunking the AI-Generated Image of an Alleged Israeli Army Dog Attack
Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical anomalies and the watermark ("IN.VISUALART"). Although several news channels have reported on a real incident, the viral image was not taken during that event. This underscores the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.



Fact Check:
Upon receiving the posts, we closely analysed the image and found discrepancies commonly seen in AI-generated images: the watermark “IN.VISUALART” is clearly visible, and the elderly woman’s hand appears distorted.

We then checked the image with two AI-image detection tools, True Media and the Content at Scale AI detector. Both found potential AI manipulation in the image.



We then ran a keyword search for news related to the viral photo. Although we found relevant news reports, none of them traced the image to a credible source.

The photograph shared across the internet has no credible source. Hence, the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. According to several news channels, the incident itself did occur, but the photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading

Executive Summary:
On 20th May 2024, Iranian President Ebrahim Raisi and several others died in a helicopter crash in northwestern Iran. Images circulated on social media claiming to show the crash site are false. The CyberPeace Research Team’s investigation revealed that these images show the wreckage of a training plane crash in Iran's Mazandaran province in 2019 or 2020. Reverse image searches and confirmations from the Tehran-based Rokna Press and Ten News verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.



Fact Check:
After receiving the posts, we reverse-searched each of the images. All of them, except the blue plane visible in the viral image, traced back to the 2020 air crash incident. We found a website that had published the viral plane crash images on April 22, 2020.

According to the website, a police training plane crashed near Swan Motel in the forests of Mazandaran. We also found the images on another Iranian news outlet, ‘Ten News’.

The photos uploaded to this second website were posted in May 2019. The news report reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we concluded that the viral photos do not show Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
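Reverse image searches like the ones used above typically rely on perceptual hashing, which gives near-duplicate images nearly identical fingerprints. As an illustration only (the pixel grids, values and interpretation below are invented for this sketch; real pipelines first decode and downscale the actual photograph, e.g. with Pillow), here is a minimal difference-hash (dHash) comparison in pure Python:

```python
# Minimal difference-hash (dHash) sketch. The "images" are tiny grayscale
# pixel grids invented for this example.

def dhash(pixels):
    """Hash a grid of grayscale values by comparing each pixel with its
    right-hand neighbour: 1 if the left pixel is brighter, else 0."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# A "viral" image and a slightly re-encoded archive copy (toy 4x4 grids).
viral   = [[10, 20, 30, 40], [40, 30, 20, 10], [5, 50, 5, 50], [0, 0, 0, 0]]
archive = [[12, 22, 28, 41], [39, 31, 19, 11], [6, 49, 4, 52], [1, 0, 1, 0]]

print(hamming(dhash(viral), dhash(archive)))  # 2 -> likely the same photo
```

Because the hash captures brightness gradients rather than exact pixel values, recompression or light editing leaves the fingerprint almost unchanged, which is why archived copies of a photo can still be matched years later.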
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash in Mazandaran province, dated to 2019 or 2020 in the original reports. This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Raisi's helicopter crash is false and misleading.
- Claim: Viral images of Iranian President Raisi's fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading

Introduction
The growth of online interaction and the popularity of social media platforms have created a breeding ground for the generation and spread of misinformation. Misinformation propagates faster and more easily on social media than through traditional news sources such as newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems make it possible to gather, combine, analyse and store massive volumes of data indefinitely. Constant monitoring of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a great deal of misinformation spread on large platforms such as X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading confusion and heightening tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top spots, with Instagram third and TikTok and X fourth and fifth. Social media platforms provide users with instant connectivity, allowing them to share information quickly without needing the permission of a gatekeeper such as an editor, as is the case with traditional media channels.
Between the elections held in 2024 (in more than 100 countries), the COVID-19 public health crisis and the conflicts in the West Bank and Gaza Strip, the sheer volume of information generated, both true and false, has been immense. Identifying accurate information amid real-time misinformation is challenging, and traditional content moderation techniques alone may not be sufficient to curb it. A dedicated, real-time misinformation surveillance system, backed by AI with human oversight, could be an effective mechanism for countering misinformation on large platforms, provided it also safeguards the privacy of users' data. Data privacy concerns must be addressed before deploying such technologies on platforms with large user bases.
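As a purely illustrative sketch of the kind of AI-plus-human-oversight pipeline described above (the claim fingerprints, thresholds and similarity metric are assumptions invented for this example, not any platform's real system), incoming posts could be scored against already-debunked claims, with borderline cases routed to human reviewers:

```python
from difflib import SequenceMatcher

# Hypothetical fingerprints of already-debunked claims (illustrative only).
DEBUNKED = [
    "israeli army dog attacking elderly palestinian woman photo",
    "images of iranian president raisi helicopter crash site",
]

def similarity(post, claim):
    """Crude textual similarity; real systems would use embeddings."""
    return SequenceMatcher(None, post.lower(), claim.lower()).ratio()

def triage(post, auto_flag=0.8, human_review=0.5):
    """Return 'flag', 'review' or 'pass' for an incoming post. Borderline
    scores go to a human moderator, keeping oversight in the loop."""
    score = max(similarity(post, claim) for claim in DEBUNKED)
    if score >= auto_flag:
        return "flag"
    if score >= human_review:
        return "review"
    return "pass"

print(triage("photo of israeli army dog attacking elderly palestinian woman"))  # flag
print(triage("good morning"))  # pass
```

The two-threshold design matters for the ethics discussed below: only near-certain matches are acted on automatically, while uncertain cases are escalated to a person rather than silently suppressed.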
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternative perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only the data that is necessary and adopting a consent-based approach protects user privacy and enhances transparency and trust. It also guards users against the stifling of dissent and against profiling for targeted ads.
- An independent oversight body can be created to monitor surveillance activities, ensure accountability and prevent misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. Examples such as the EU’s Digital Services Act and Singapore’s POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability and user rights, ensuring that algorithms are fair, oversight is independent and user data is protected. By embedding these safeguards, we can build a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible and ethical online ecosystem. Balancing ethics, privacy and policy-driven AI solutions for real-time misinformation monitoring is the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL

Biological data includes biometric information such as fingerprints, facial recognition, DNA sequences, and behavioral traits. Genetic data can be extracted from an individual’s remains long after their death and can continue to identify both that individual and an expanding pool of their living relatives. This persistent identification can significantly reduce privacy over time, revealing genetic characteristics and familial relationships across successive generations.
Key Developments in Privacy Protection for Biological Data:
Legal frameworks for personal data protection and privacy have generally been drafted broadly, and can prove to be poor fits for ‘biometric data’ specifically and its safety. Some examples are discussed below:
- EU and UK- GDPR
The EU’s General Data Protection Regulation (GDPR) defines “personal data” as any information relating to an identifiable person, including names, identification numbers, location data and other structured and unstructured data. In addition, the GDPR imposes stricter requirements on the processing of sensitive or “special categories of personal data”, which include genetic and biometric data. For biometric security to work well, citizens' rights must be appropriately protected, and the data collected by private and public entities must be managed carefully and responsibly.
- USA
The California Consumer Privacy Act (CCPA) grants California consumers rights over their personal information and biometric data, including the right to disclosure or access, the right to be forgotten, and data portability. Consumers are also given the option to opt out of the sale of their personal information. Additionally, it provides a right to take legal action, with penalties imposed for violations.
The California Privacy Rights Act was passed on November 3, 2020, and took effect on January 1, 2023, with a lookback period starting January 1, 2022. It introduces sensitive personal information which includes biometric data and other sensitive details.
Virginia's Consumer Data Protection Act, effective from January 1, 2023, designates genetic and biometric data as sensitive data that must be protected.
Illinois' Biometric Information Privacy Act (BIPA) is recognised as the most robust biometric privacy law in the United States. The significance of the Rosenbach v. Six Flags case lies in the Illinois Supreme Court's ruling that a plaintiff does not need to demonstrate additional harm to impose penalties on a BIPA violator: a mere loss of statutory biometric privacy rights is sufficient to warrant penalties.
- India
Under Rule 2(1)(b) of the SPDI Rules, biometric data falls within the meaning of Sensitive Personal Data or Information. The term ‘biometric data’ is not defined in the Digital Personal Data Protection Act, 2023 (DPDP Act). The DPDP Act's protections, including notice and consent requirements and penalties, apply only once such data is digitised through extraction and manipulation.
The Biotech-PRIDE (Promotion of Research and Innovation through Data Exchange) Guidelines of 2021 are aimed at fostering the exchange of information to enhance research and innovation among research groups nationwide. These guidelines do not govern the generation of biological data but provide a mechanism for sharing and exchanging information and knowledge generated in accordance with the country's existing laws, rules, regulations and norms. They aim to ensure the benefits of data sharing, maximise use, avoid duplication, maximise integration, establish ownership of information, support better decision-making and promote equity of access.
How is Biological Data vulnerable?
- Biological data is effectively immutable: unlike a password, it cannot be changed once compromised. Compromised biometric data therefore poses a permanent risk, making its protection paramount.
- The use of facial recognition technology by law enforcement agencies and the creation of databases by the same also highlights the urgent need for stringent privacy protections.
- Advances in technology, particularly AI and ML, make it easier to collect, analyse and manipulate biometric data. This is enabling new forms of identity theft and fraud, necessitating stronger security measures and ethical safeguards to prevent abuse.
- Cross-border data transfers raise serious privacy concerns, especially as countries have varying levels and standards of data protection.
- Wearable health-related biometric devices often lack adequate privacy protections, leaving the data they collect vulnerable to misuse and breaches.
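The immutability point above can be made concrete. Passwords can be stored as exact cryptographic hashes and simply rotated after a breach, but two captures of the same biometric are never bit-identical, so biometric systems must match on distance instead, and a leaked template cannot be "reset". A toy sketch (the bit-string templates and the threshold are invented purely for illustration):

```python
import hashlib

# Toy biometric "templates" as bit strings: two scans of the same finger
# differ slightly between captures (values are illustrative only).
enrolled = "1011001110001011"
fresh    = "1011001010001011"   # one bit differs: same finger, new scan

# Exact hashing, as used for passwords, rejects the genuine user because
# biometric captures are never bit-identical.
exact_match = (hashlib.sha256(enrolled.encode()).hexdigest()
               == hashlib.sha256(fresh.encode()).hexdigest())
print(exact_match)  # False

def hamming(a, b):
    """Count differing bits between two templates."""
    return sum(x != y for x, y in zip(a, b))

# Real systems therefore match on distance, which requires the stored
# template to be usable at verification time -- and once leaked, it
# cannot be rotated the way a password can.
genuine = hamming(enrolled, fresh) <= 2   # threshold chosen for the sketch
print(genuine)  # True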
Future Outlook
With the growing use of biological data, there is likely to be increased pressure on regulatory bodies to strengthen privacy protections. This calls for enhanced security measures to protect users' identities and prevent unauthorised access. Future developments should include strict consent requirements and stronger data security measures, especially for wearable devices. A new legal framework specifically designed to address the challenges posed by biometric data would be welcome. Biological data protection is an emerging need in the digital environment we live in today.
References
- https://www.cnbc.com/2024/08/17/new-privacy-battle-is-underway-as-tech-gadgets-capture-our-brain-waves.html
- https://www.snrlaw.in/sense-and-sensitivity-sensitive-information-under-indias-new-data-regime/
- https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/biometrics/biometric-data
- https://www.business-standard.com/article/economy-policy/govt-releases-guideline-to-provide-framework-for-sharing-of-biological-data-121073001467_1.html