#FactCheck - Viral Clip Not Harish Rana’s Farewell, Linked to Surat Woman’s Organ Donation
Executive Summary:
Amid reports that AIIMS Delhi has initiated the process to implement the Supreme Court’s decision allowing passive euthanasia for Harish Rana, a video is being shared on social media claiming to show his emotional farewell. However, research by the CyberPeace Research Team found the viral claim to be misleading. The video has no connection to the Harish Rana case; in reality, the footage is from Surat, Gujarat, where the family of a 45-year-old brain-dead woman donated her organs.
Claim:
On the social media platform X (formerly Twitter), a user shared the viral video on March 16, 2026, with the caption:
“Harish Rana… struggled for life for 13 years… in the end said goodbye to the world through euthanasia… but even while leaving, gave new life to many through organ donation… eyes, liver, kidneys and several other organs will give a new life to many…”
Post link and archive link are given below:

Fact Check:
We took screenshots from the viral video and conducted a reverse image search. This led us to an Instagram handle where the same video was uploaded on January 27, 2026.
- https://www.instagram.com/reels/DUAt_42k2ME/

According to the caption on the Instagram post, the video shows a brain-dead woman in Surat whose liver, both kidneys, and eyes were donated. The post also included an image of the woman. Based on clues from the viral video, we conducted a keyword search on Google and found a report on the website of News18 Gujarati.

According to the report, organ donation by Ritaben Hareshbhai Korat in Surat gave new life to five patients. The report also carried her photograph, which matches the visuals seen in the viral video.
Conclusion:
Our research found that the viral video has no connection to Harish Rana. It actually shows an incident from Surat, Gujarat, where the family of a 45-year-old brain-dead woman, Ritaben Korat, donated her organs.
Related Blogs

Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The pictures making the rounds on social media were produced with AI manipulation tools; the original image, available on several news websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks fired at Trump during a rally in Butler, Pennsylvania; one attendee was killed and two were critically injured before the Secret Service stopped the shooter. The circulating photos in which smiles were faked stirred up suspicion, and the CyberPeace Research Team debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

One such image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, contentatscale AI image detection, which also found the image to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources supporting the claim and the detection of AI manipulation, we concluded that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team’s investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: the probability of hallucination can be reduced, but not its occurrence altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming, and seemingly counterintuitive, is that newer and more advanced models are producing more hallucinations, not fewer. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for errors at each step, especially when no factual retrieval or grounding is involved.
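To make the “word prediction” point concrete, here is a toy Python sketch, not the code of any real model: it learns nothing but which word tends to follow which in a tiny invented corpus, then generates a fluent-sounding continuation with no notion of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus: the "model" learns only word-to-word
# co-occurrence statistics from this text.
corpus = (
    "the court ruled in favour of the petition "
    "the court dismissed the petition "
    "the court ruled against the appeal"
).split()

# Count how often each word follows each other word (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str | None:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    if not counts:  # dead end: this word was never followed by anything
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a fluent-sounding continuation. Nothing here checks whether
# the sentence is true; it only checks statistical plausibility.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Real LLMs operate on this same principle at a vastly larger scale: output is chosen for statistical plausibility, which is exactly why a fluent but false statement can emerge with complete confidence.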
According to reports shared on TechCrunch, when users asked AI models for short answers, hallucinations increased by up to 30%. A study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions; nor was this limited to that particular Large Language Model, as similar ones like DeepSeek showed the same behaviour. Even more concerning are hallucinations in multimodal models like those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but can also contribute to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other high-stakes events.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. However, developers are aware of the problem and are actively charting out ways to reduce the probability of such errors. Some of these are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (see the sketch after this list).
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and their training data is better curated.
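Below is a minimal Python sketch of the RAG idea. The document corpus, file names, and prompt wording are invented for illustration; a production system would use a vector database for retrieval and pass the prompt to a real LLM, but the grounding principle is the same: retrieve trusted text first, then constrain the model to answer from it and cite it.

```python
# Minimal RAG sketch. TRUSTED_DOCS and the prompt format are hypothetical
# examples; real systems use vector search and an actual LLM call.

TRUSTED_DOCS = [
    {"source": "handbook/leave-policy.md",
     "text": "Employees accrue 1.5 days of paid leave per month of service."},
    {"source": "handbook/expenses.md",
     "text": "Travel expenses require manager approval within 30 days."},
]

def retrieve(query: str, k: int = 1) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        TRUSTED_DOCS,
        key=lambda doc: len(words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Anchor the model in retrieved text and require citations."""
    context = "\n".join(
        f"[{doc['source']}] {doc['text']}" for doc in retrieve(query)
    )
    return (
        "Answer ONLY from the context below and cite the source in "
        "brackets. If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("How many days of paid leave do employees accrue?"))
```

Because the model is instructed to answer only from the supplied context and to admit when the context is silent, hallucinated answers become easier to detect and verify against the cited source.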
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction:
Cybercriminals can compromise your phone by exploiting public charging stations at airports, shopping malls, hotels, and other public places that offer free charging for mobile, tablet, and laptop devices. When you plug your phone or laptop into a public USB charger, you may be plugging into a hacker.
Cybercriminals can tamper with a public charging spot so that, while your device charges, malware or viruses are pushed through the USB port into your system. Once installed, the malware can give hackers access to your personal information and passwords, and with those they can even reach your bank account and carry out unauthorised transactions.
Hence, it is important to think twice before using public charging spots, as doing so can lead to serious consequences such as malware infection, data leaks, and hacking. Attackers can gain unauthorised access to your personal information by installing malware on your device, or monitor it by installing monitoring software or spyware. This scam is referred to as juice jacking.
FBI issued an advisory warning about using public charging stations:
In May 2023, the Federal Bureau of Investigation (FBI) advised users to avoid free charging stations in airports, hotels, or shopping centres. The warning came as threat actors had figured out ways to inject malware into devices attached to publicly installed USB ports.
Updated security measures:
Public charging points at airports, shopping malls, metro stations, and other public places are convenient, but they can be a threat to the data stored on your device: during charging, data can be transferred, which can ultimately lead to a data breach. Hence, utmost care should be taken to protect your information. iPhones and other devices do have security measures in place; when you plug your phone into a charging power source, a pop-up asks for permission to allow or disallow data transfer, and data transfer is disabled by default. In the latest models, when you plug your device into a new port or a computer, a pop-up asks whether the connected device should be trusted.
Two major risks are involved in the threat of juice jacking:
- Malware installation: Bad actors can use malware apps to clone your phone’s data to their own device, transferring your personal data and causing a data breach. Common types of malware include Trojans, adware, spyware, and crypto-miners. Once such malware is injected into your device, it is easy for cybercriminals to demand a ransom to restore the information they have gained unauthorised access to.
- Data theft: It is worth asking whether your data is protected at public charging stations. When you connect to a public charging port with a USB cable, cybercriminals who have injected malware into the charging-port system can push that malware onto your device or siphon your data off to themselves. USB cords can thus be exploited for malicious activity.
Best practices:
- Avoid using public charging stations: It is entirely possible for a cybercriminal to load malware into a charging station via a USB cord, so using public charging stations is not safe. It is advisable to charge your phone and laptop in your car, at home, or at the office instead.
- Alternative method of charging: You can carry a power bank along with you to avoid the use of public charging stations.
- Lock your phone: Keep your device locked while it is connected to a charging port; a locked device cannot sync or transfer data.
- Software updates: Keep your device’s software up to date and enable its built-in security measures; mobile devices carry technical protections against such vulnerabilities and security threats.
- Review settings: Disable your device’s option to automatically transfer data when a charging cable is connected; this is the default on iOS devices, and Android users can disable it in the Settings app. If your device displays a prompt asking you to “trust this computer,” it means you are connected to another device, not simply a power outlet; trusting the computer would enable data transfers to and from your device. So when a prompt asks you to “share data,” “trust this computer,” or “charge only,” always select “charge only.”
Conclusion:
Cybercriminals and other bad actors exploit public charging stations, and there have been incidents where malware was planted in such systems through a USB cord. During charging, the USB cord opens a path into your device that a cybercriminal can exploit, because the connection allows the devices to exchange data; this is what juice jacking means. Hence, avoid using public charging stations: our safety is in our own hands, and it is important to prioritise best practices and stay protected in the evolving digital landscape.
References:
- https://www.cbsnews.com/philadelphia/news/fbi-issue-warning-about-juice-jacking-when-using-free-cell-phone-charging-kiosks/
- https://www.comparitech.com/blog/information-security/juice-jacking/#:~:text=Avoid%20public%20charging%20stations,guaranteed%20success%20with%20this%20method
- https://www.fcc.gov/juice-jacking-tips-to-avoid-it