# FactCheck - Debunking the AI-Generated Image of an Alleged Israeli Army Dog Attack
Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical anomalies and the watermark ("IN.VISUALART"). Although several news channels have reported on a real incident, the viral image was not taken during that event. This emphasizes the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.



Fact Check:
Upon receiving the posts, we closely analyzed the image and found discrepancies commonly seen in AI-generated images: the watermark "IN.VISUALART" is clearly visible, and the elderly woman's hand looks anatomically odd.

We then ran the image through two AI-image detection tools, True Media and the contentatscale AI detector. Both flagged potential AI manipulation in the image.
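For readers who want a feel for what such detectors examine, below is a minimal sketch of one generic image-forensics heuristic, Error Level Analysis (ELA), implemented with the Pillow library. This is not how True Media or the contentatscale detector work internally, and the file name `viral_image.jpg` is a placeholder.

```python
# A minimal Error Level Analysis (ELA) sketch using Pillow.
# ELA re-saves a JPEG at a known quality and inspects how much each
# region changes: edited or synthetic regions often compress
# differently from untouched camera output.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    # Pixel-wise difference between the original and the re-saved copy.
    diff = ImageChops.difference(original, resaved)
    # Scale the (usually faint) differences so they become visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    error_level_analysis("viral_image.jpg").save("ela_result.png")
```

Regions that re-compress very differently from their surroundings show up as bright areas in the output and merit closer inspection; ELA is only a heuristic and is not conclusive on its own.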



With both tools indicating AI manipulation, we ran a keyword search for news related to the viral photo. We found coverage of the underlying incident, but no credible source for the image itself.

The photograph being shared across the internet has no credible source. Hence, the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. Several news channels report that such an incident did occur, but the photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading
Related Blogs
What is a Deepfake?
Deepfakes are a fascinating but unsettling phenomenon now prominent in the digital age. These incredibly convincing videos have drawn attention and blended seamlessly into our high-tech surroundings. The lifelike yet completely fabricated quality of deepfake videos has become an inescapable feature of the digital society we navigate. While these creations have an undeniably captivating charm, their misuse has serious ramifications for our globalized digital culture. After several actors, business tycoon Ratan Tata has become the latest victim of a deepfake: he called out a post that used a fabricated interview of him in a video recommending investments.
Case Study
The nuisance of deepfakes spares no one: from actors to politicians to entrepreneurs, everyone is getting caught in the trap. Soon after actresses Rashmika Mandanna, Katrina Kaif, Kajol and others fell prey to deepfakes, a new case emerged, this time targeting Mr. Ratan Tata. The business tycoon took to Instagram to share a screenshot of a fabricated interview in which he appeared to urge people to invest money in a project. In the video, posted by an Instagram user, Tata was seen offering his followers a once-in-a-million opportunity to "exaggerate investments risk-free."
In the video, Ratan Tata appeared to advise people across India of an opportunity to grow their money with no risk and a 100% guarantee. The caption of the clip read, "Go to the channel right now."
Tata annotated both the video and the screenshot of the caption with the word "FAKE."
Ongoing Deepfake Assaults in India
Deepfake videos continue to target celebrities, and Priyanka Chopra is among the recent victims of this unsettling trend. Priyanka's deepfake takes a different approach from other examples involving actresses like Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance the same but modifies her voice and replaces real interview quotes with fabricated commercial lines. The deceptive video shows Priyanka promoting a product and discussing her annual income, highlighting the worrying evolution of deepfake technology and its potential effects on prominent personalities.
Prevention and Detection
To effectively combat the growing threat posed by deepfake technology, people and institutions should prioritise developing critical-thinking habits: carefully examining visual and auditory cues for discrepancies, making use of tools like reverse image searches, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging candid communication and cooperation are further important steps toward resilience against deepfake threats. By combining these tactics and adapting to the constantly changing terrain, we can manage the problems presented by deepfake technology effectively and mindfully.
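As one concrete, automatable instance of the reverse-image-search habit recommended above, here is a minimal sketch that compares a suspect image against a candidate original using a perceptual hash from the third-party `imagehash` library (installable with `pip install imagehash pillow`). The file names and the distance threshold are illustrative assumptions; real reverse image search services use far richer features.

```python
# Compare a suspect image with a candidate original via perceptual
# hashing: near-duplicates (crops, recompressions) yield a small
# Hamming distance, while unrelated images yield a large one.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect.jpg"))
candidate = imagehash.phash(Image.open("candidate_original.jpg"))

# Subtracting two ImageHash objects gives their Hamming distance.
distance = suspect - candidate
verdict = "likely same image" if distance <= 10 else "likely different"
print(f"pHash distance: {distance} ({verdict})")
```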
Conclusion
The recent instance involving Ratan Tata illustrates how deepfake technology poses an imminent danger to our digital society. The fake video, posted to Instagram, showed the business tycoon giving financial advice and luring followers with low-risk investment options. Tata quickly called the footage out as "FAKE," highlighting the need for careful media consumption. The incident is a reminder of the damage deepfakes can do to prominent people's reputations, and it demands that public figures be more mindful of the potential misuse of their virtual identities. By emphasising preventive measures such as strict safety regulations and the deployment of state-of-the-art deepfake detection technologies, we can collectively strengthen our defences against this insidious phenomenon and preserve the trustworthiness of our internet-based culture in the face of ever-changing technological challenges.
References
- https://economictimes.indiatimes.com/magazines/panache/ratan-tata-slams-deepfake-video-that-features-him-giving-risk-free-investment-advice/articleshow/105805223.cms
- https://www.ndtv.com/india-news/ratan-tata-flags-deepfake-video-of-his-interview-recommending-investments-4640515
- https://www.businesstoday.in/bt-tv/short-video/viralvideo-business-tycoon-ratan-tata-falls-victim-to-deepfake-408557-2023-12-07
- https://www.livemint.com/news/india/false-ratan-tata-calls-out-a-deepfake-video-of-him-giving-investment-advice-11701926766285.html

Introduction
Attempts at countering the spread of misinformation can involve various methods and differing degrees of engagement by different stakeholders. The use of Artificial Intelligence, greater user awareness, broader public-level initiatives, and a focus on innovation that facilitates clear communication can all be considered in the fight against misinformation. This becomes even more important in spaces that deal with matters of national security, such as the Indian Army.
IIT Indore’s Intelligent Communication System
As per a report in Hindustan Times on 14th November 2024, IIT Indore has achieved a breakthrough in its project on Intelligent Communication Systems. The project is supported by the Department of Telecommunications (DoT), the Ministry of Electronics and Information Technology (MeitY), and the Council of Scientific and Industrial Research (CSIR), as part of a specialised 6G research initiative (the Bharat 6G Alliance) for innovation in 6G technology.
Professors at IIT Indore claim that the system they are working on differs from those currently in use. They state that the receiver can jointly recognise the coding, interleaving (a technique used to enhance existing error-correcting codes), and modulation methods even in challenging environments, which makes it useful for transmitting information efficiently and securely; it could thus serve not only telecommunications but the Army as well. They also note that, previously, different receivers were required for different scenarios, whereas they aim to build a single receiver that can adapt to any situation.
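To make the interleaving concept concrete, here is a toy block interleaver in Python. It is a generic textbook illustration, not the IIT Indore project's actual scheme: symbols are written into a grid row by row and read out column by column, so a burst of channel errors is spread across several codewords, where an error-correcting code can handle them.

```python
# Toy block interleaver: row-major write, column-major read.
def interleave(symbols: list, rows: int, cols: int) -> list:
    assert len(symbols) == rows * cols
    # Read column by column from a row-major grid.
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols: list, rows: int, cols: int) -> list:
    # The inverse operation simply swaps the roles of rows and columns.
    return interleave(symbols, cols, rows)

data = list("ABCDEFGHIJKL")            # 12 symbols
sent = interleave(data, rows=3, cols=4)
# A burst of errors in `sent` lands on non-adjacent positions in `data`.
assert deinterleave(sent, rows=3, cols=4) == data
```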
Previously, in another move addressing misinformation in the army, the Ministry of Defence designated the Additional Directorate General of Strategic Communication in the Indian Army as the authorised officer to issue take-down notices for posts containing illegal content and misinformation concerning the Army.
Recommendations
Here are a few policy implications and deliberations one can explore with respect to innovations geared toward tackling misinformation within the army:
- Research and Development: Investment in and research on better communication systems through institutes has enabled encrypted, secure communication, which in turn helps the Army combat misinformation.
- Strategic Deployment: New systems can first be trialled in pilot studies on sensitive data within military settings to assess their effectiveness.
- Standardisation: Once tested, a defined set of standards for the intelligent communication systems in use can be encouraged.
- Cybersecurity Integration: As misinformation largely spreads online, innovations in this field should explore deeper integration with cybersecurity.
Conclusion
The spread of misinformation during modern warfare can have severe repercussions. Clear, secure handling of sensitive data is crucial for safe and efficient communication, as much is at stake. Innovations geared toward combating such issues must be encouraged, for they not only ensure efficiency and security in matters of defence but also help combat misinformation as a whole.
References
- https://timesofindia.indiatimes.com/city/indore/iit-indore-unveils-groundbreaking-intelligent-receivers-for-enhanced-6g-and-military-communication-security/articleshow/115265902.cms
- https://www.hindustantimes.com/technology/6g-technology-and-intelligent-receivers-will-ease-way-for-army-intelligence-operations-iit-official-101731574418660.html

Executive Summary:
A viral clip in which Indian batsman Virat Kohli appears to endorse an online casino and guarantee a Rs 50,000 jackpot within three days has been proven fake. In the clip, which is accompanied by manipulated captions, Kohli supposedly admits to being involved in the launch of an online casino during an interview with Graham Bensinger, but this is not true. An investigation showed that the original interview, published on Bensinger's YouTube channel in the last quarter of 2023, contains no such statements by Kohli. In addition, the AI deepfake analysis tool Deepware labelled the viral video a deepfake.

Claims:
The viral video claims that cricket star Virat Kohli is promoting an online casino and assures users of the site that they can make a profit of Rs 50,000 within three days. However, the CyberPeace Research Team has found that the video is a deepfake, and there is no credible evidence of Kohli's participation in any such endorsement. Many users are nonetheless sharing the video with misleading titles across different social media platforms.


Fact Check:
As soon as we were informed of the claim, we ran a keyword search for any credible news report about Virat Kohli promoting a casino app and found nothing. We then performed a reverse image search on a frame of the video showing Kohli in a black T-shirt to learn more about its origin. This led us to a YouTube video by Graham Bensinger, an American journalist; the viral clip was taken from this original video.

In that interview, Kohli discussed his childhood, his diet, his cricket training, his marriage, and more, but said nothing about launching a casino app.
On close scrutiny of the viral video, we noticed inconsistencies in the lip-sync and the voice. We then ran the clip through the Deepware deepfake detection tool, which flagged it as a deepfake.
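For readers curious how such manual scrutiny can be made systematic, below is a minimal sketch that samples evenly spaced frames from a clip with OpenCV (`pip install opencv-python`), so lip-sync and facial artefacts can be inspected frame by frame before running a detector such as Deepware. The file name `viral_clip.mp4` is a placeholder.

```python
# Dump ~20 evenly spaced frames from a suspect clip for inspection.
import cv2

cap = cv2.VideoCapture("viral_clip.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(total // 20, 1)            # sample roughly 20 frames

for i in range(0, total, step):
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)   # seek to frame i
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"frame_{i:05d}.png", frame)
cap.release()
```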


We therefore conclude that the viral video is a deepfake and the statement attributed to Kohli is false.
Conclusion:
The viral video claims that cricketer Virat Kohli is endorsing an online casino and promising a guaranteed Rs 50,000 win within three days. This is entirely fake. The incident demonstrates the necessity of checking facts and sources before believing any information, and of remaining sceptical about deepfakes and other AI (artificial intelligence) techniques now being used to spread misinformation.