#FactCheck - Suryakumar Yadav–Salman Ali Agha Handshake Row: Viral Image Found AI-Generated
Executive Summary
An image circulating on social media claims to show Suryakumar Yadav, captain of the Indian cricket team, extending his hand to greet Pakistan’s skipper Salman Ali Agha, who allegedly refused the gesture during the India–Pakistan T20 World Cup match held on February 15. Users shared the image as evidence of a real incident from the high-profile clash. However, research by CyberPeace found that the image is AI-generated and was falsely circulated to mislead viewers.
Claim
On February 15, an X account named “@iffiViews,” reportedly operated from Pakistan, shared the image claiming it was taken during the India–Pakistan T20 World Cup match at the R. Premadasa Stadium in Colombo. The viral image appeared to show Yadav attempting to shake hands with Agha, who seemed to decline the gesture. The post quickly gained significant traction online, attracting around one million views at the time of reporting. Here is the link and archive link to the post, along with a screenshot.
- https://x.com/iffiViews/status/2023024665770484206?s=20
- https://archive.ph/xvtBs

Fact Check
To verify the authenticity of the image, researchers closely examined the visual and identified a watermark associated with an AI image-generation tool. This raised strong indications that the image was digitally created and did not depict an actual event.

The image was further analysed using an AI detection tool, which indicated a 99.9 percent probability that the content was artificially generated or manipulated.

Researchers also conducted keyword searches to check whether the two captains had exchanged a handshake during the match. The search revealed media reports confirming that the traditional handshake between players has been discontinued since the Asia Cup 2025 in both men’s and women’s cricket. A report published by The Times of India on February 15 confirmed that no such customary exchange took place during the match between the two teams in Colombo.

Conclusion
The viral image claiming to show Suryakumar Yadav attempting to shake hands with Salman Ali Agha is not authentic. The visual is AI-generated and has been shared online with misleading claims.
As technological advancements continue to shape the future, the rise of artificial intelligence brings significant potential benefits but also raises concerns about the spread of misinformation. Recognising the need for accountability on both ends, the European Broadcasting Union (EBU) and the World Association of News Publishers (WAN-IFRA) announced, on 5th May during the three-day World News Media Congress 2025 in Kraków, Poland, the five core principles of their joint initiative, News Integrity in the Age of AI. The initiative aims to foster dialogue and cooperation between media organisations and technology platforms, and the announced principles are to serve as a code of practice for all participants. With thousands of public and private media outlets around the world joining the effort, the initiative highlights the shared responsibility of AI developers to ensure that AI systems are trustworthy, safe, and supportive of a reliable news ecosystem. It represents a global call to action to uphold the integrity of news in this age of information influx and to curb the growing challenge of misinformation.
The five core principles released focus on:
1. Content originators must authorise the use of their content before it is used in generative AI tools and models.
2. Third parties that benefit from high-quality, up-to-date news content must recognise its value.
3. Accuracy and attribution must be prioritised, making the original sources of news apparent to the public and promoting transparency.
4. The plurality of news perspectives should be harnessed, which will help AI-driven tools perform better.
5. Tech companies are invited to an open dialogue with news outlets, facilitating collaboration on standards of transparency, accuracy, and safety.
As this initiative provides a unified platform to address and deliberate on issues affecting the integrity of news, there are also some other technical ways in which misinformation in news caused by AI can be curbed:
1. Encourage the usage of smaller generative AI models: Large Language Models (LLMs) are trained on a vast range of topics, but most businesses do not require such an expanse of information, only the slice that is relevant to them. Sourcing from a narrower context allows better content navigation and reduces the chance of mix-ups.
2. Fighting AI hallucination: Hallucination is a phenomenon in which generative AI (such as chatbots and computer vision tools) presents nonsensical or inaccurate outputs because the system perceives objects or patterns that do not actually exist. It occurs because the system prioritises language fluency while stitching together information from different sources. To deal with this, one can deploy retrieval augmented generation (RAG), which connects the model to external data sources, such as academic journals or a company’s organisational data, helping it produce more accurate, domain-specific content.
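The retrieval step behind RAG can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not a production design: the tiny in-memory corpus, the token-overlap scoring (a toy stand-in for the embedding-based vector search real systems use), and the prompt format. The grounded prompt would then be passed to an actual LLM.

```python
import re

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=1):
    """Rank documents by token overlap with the query (a toy stand-in
    for embedding similarity) and return the top-k matches."""
    ranked = sorted(corpus,
                    key=lambda doc: len(tokenize(query) & tokenize(doc)),
                    reverse=True)
    return ranked[:k]

def build_grounded_prompt(query, corpus):
    """Prepend retrieved context so the model answers from known
    sources instead of hallucinating an answer."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {query}")

# Hypothetical two-document corpus for demonstration.
corpus = [
    "The customary handshake was discontinued from the Asia Cup 2025.",
    "RAG connects generative models to external, domain-specific data.",
]
prompt = build_grounded_prompt("Why was the handshake discontinued?", corpus)
```

The key design point is that the model's answer is constrained to retrieved, verifiable sources, which is what reduces hallucination relative to free generation.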
Conclusion
This global call to action marks an important step toward fostering unified efforts to combat misinformation. The set of principles introduced is designed to be adaptable, providing a flexible framework that can evolve to address emerging challenges (through dialogue and discussion), including issues like copyright infringement. While AI offers powerful tools to support the news industry, it is essential to emphasise that human oversight remains crucial. These technological advancements are meant to enhance and augment the work of journalists, not replace it, ensuring that the core values of journalism, such as accuracy and integrity, are preserved in the age of AI.
References
● https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-ethics-8-biggest-concerns
● https://trilateralresearch.com/responsible-ai/using-responsible-ai-to-combat-misinformation
● https://www.omdena.com/blog/the-ethical-role-of-ai-in-media-combating-misformation
● https://2024.jou.ufl.edu/page/ai-and-misinformation
● https://techxplore.com/news/2025-05-ai-counter-misinformation-fact-based.html
● https://www.advanced-television.com/2025/05/06/media-outlets-call-for-ai-companies-news-integrity-protection/
● https://www.ibm.com/think/insights/ai-misinformation

A video has been going viral on social media in recent days in which Republic TV’s Editor-in-Chief and anchor Arnab Goswami can allegedly be heard using objectionable language against Prime Minister Narendra Modi. While sharing the video, users are claiming that Arnab Goswami publicly made controversial remarks about the Prime Minister.
An investigation by CyberPeace Foundation found this claim to be completely false. Our probe revealed that the viral video is edited and is being circulated on social media with a misleading narrative. In the original video, Arnab Goswami was not making any personal statement; rather, he was referring to an old statement made by Congress leader Rahul Gandhi.
Viral Claim
An Instagram user posted this video on 5 January 2026. In the video, a voice resembling Arnab Goswami is heard saying, “Ye jo Narendra Modi hain, ye chhe mahine baad ghar se nahi nikal paayenge aur Hindustan ke log inhein danda maarenge.”
(Translation: “This Narendra Modi will not be able to step out of his house after six months, and the people of India will beat him with sticks.”)
The post link, its archive link, and screenshots can be seen below:
- Instagram link: https://www.instagram.com/reel/DTHrO7bk7Rf/?igsh=MThzbzBlcm82eWN0ZA%3D%3D
- Archive link: https://archive.ph/oaYsf

Fact Check
To verify the viral claim, we first examined the video using Google Lens search. During this process, we found a video published on 18 July 2024 on the official YouTube channel of Republic Bharat. The investigation revealed that this video is the longer (extended) version of the viral clip.
After carefully watching the full video, it became clear that Arnab Goswami was not making the statement himself. Instead, he was referring to a remark made by Congress leader Rahul Gandhi during the 2020 Delhi Assembly elections against Prime Minister Narendra Modi. This confirms that the viral video was clipped and presented out of context.
The related video link can be seen below: https://www.youtube.com/shorts/KlQV25-3l8s

In the next step of the investigation, to verify whether Rahul Gandhi had indeed made such a statement, we conducted a customized keyword search on Google. During this, we found a video published on 6 February 2020 on the official YouTube channel of India Today.
In this video, recorded during a public event ahead of the 2020 Delhi Assembly elections, Rahul Gandhi is seen sharply attacking Prime Minister Narendra Modi, stating that if the Prime Minister fails to resolve the issue of unemployment in the country, the youth would beat him with sticks.
The video link is given below: https://www.youtube.com/watch?v=t5qCSA5nG9Y

Conclusion
The CyberPeace Foundation’s investigation found this claim to be completely fake. The viral video is edited and is being shared in a misleading context. In the original video, Arnab Goswami was referring to an old statement made by Rahul Gandhi, which was selectively clipped and presented in a way that falsely suggests Arnab Goswami himself made objectionable remarks against Prime Minister Narendra Modi.
Introduction
The scam involving "drugs in parcels" has resurfaced with a new face. Cybercriminals impersonate FedEx, the police, and various other authorities; in reality, they are the bad actors behind this renewed "drugs in parcel" scam, which entails pressuring victims into sending money and divulging private information in order to escape fictitious legal repercussions.
Modus operandi
The modus operandi in this scam usually begins with a scammer calling the victim's cell phone while posing as FedEx. The caller claims that a package in the victim's name contains illegal goods such as jewellery or narcotics. By this point, the victim feels afraid and apprehensive. The call then moves to a video call with another person posing as a police officer, and this "fake officer" asks the victim to keep the matter confidential while it is being investigated.
After the call, the victim receives falsified paperwork from the CBI and RBI stating that an arrest warrant has been issued. Once the victim has fallen entirely under their sway, the fraudsters claim that the victim's Aadhaar has been used to carry out the unlawful activity and request the victim's bank account information and Aadhaar data for "investigation". Subsequently, they ask the victim to transfer funds to a bank account for "RBI validation". Believing this to be true, victims send money to the fraudsters in the hope of clearing their name.
Recent incidents:
In the most recent instance of the "drugs in parcel" scam, an IT professional in Pune was defrauded of Rs 27.9 lakh by online con artists posing as members of the Mumbai police's Cyber Crime Cell. The victim filed a First Information Report (FIR) at the police station, stating that on November 11, 2023, he received a call from a fraudster posing as a Cyber Crime Cell officer. The scammer falsely claimed to have discovered illegal narcotics, an expired passport, and an SBI card in a package addressed in the complainant's name and sent from Mumbai to Taiwan. To avoid arrest in the fabricated drug case, the fraudster coerced the complainant into providing bank account information under the guise of "verification". Fearing legal consequences, the victim transferred Rs 27,98,776 in ten online transactions to two separate bank accounts as instructed. Upon realising the deception, the complainant reported the incident to the police, leading to an investigation.
In another such incident, in October 2023, scammers phoned the victim and sent him a bogus identity card online. Claiming to have seized narcotics in a shipment sent from Mumbai to Thailand in his name, the fraudster persuaded the victim to wire money to a bank account in order to "clear the case" and obtain a "no-objection certificate (NOC)". The fraudsters threatened to arrest the victim for mailing the narcotics package if the money was not paid.
Furthermore, in August 2023, fraudsters posing as police officers and courier-company executives defrauded a 25-year-old advertising student of Rs 53 lakh. They informed her that narcotics had been discovered in a package she had sent to Taiwan and extorted money from her under the pretext of helping her avoid legal action, including arrest. According to the police, the callers threatened to arrest her and forced her to complete 34 transactions totalling Rs 53.63 lakh from her and her mother's bank accounts to different bank accounts.
Measures to protect oneself from such scams
Call Verification:
- Be sure to always confirm the legitimacy of unexpected calls, particularly those purporting to be from law enforcement or delivery services. Make use of official contact information obtained from reliable sources to confirm the information presented.
Confidentiality:
- Use caution while disclosing personal information online or over the phone, particularly Aadhaar and bank account information. In general, legitimate authorities don't ask for private information in this way.
Official Documentation:
- Request official documents via the appropriate means. Make sure that any documents—such as arrest warrants or other government documents—are authentic by getting in touch with the relevant authorities.
No Haste in Transactions:
- Do not act hastily on requests for money or quick fixes. Creating a sense of urgency is a common tactic scammers use to coerce victims into acting quickly.
Knowledge and Awareness:
- Stay informed about common fraud schemes. Keep up with the most recent tactics employed by online fraudsters to avoid falling for fresh scam iterations.
Report Suspicious Activity:
- Notify the local police or other appropriate authorities of any suspicious calls or activities. Reports received in a timely manner can help investigations and shield others from falling for the same fraud.
2FA:
- Enable two-factor authentication (2FA) wherever you can to give online accounts and transactions an additional layer of protection. This may lessen the chance of unauthorised access.
Cybersecurity Software:
- To defend against malware, phishing attempts, and other online risks, install and update reputable antivirus and anti-malware software on a regular basis.
Educate Friends and Family:
- Inform friends and family about typical scams and how to avoid falling victim to fraud. A safer online environment can be achieved through increased collective knowledge.
Be Skeptical:
- Whenever anything looks strange or too good to be true, it most often is. Trust your instincts and confirm the information before acting.
By taking these precautions and exercising caution, people may lessen their vulnerability to scams and safeguard their money and personal data from online fraudsters.
Conclusion:
Verifying calls, keeping personal information confidential, checking official documents, transacting without haste, and staying informed are all protective measures against such scams. Using cybersecurity software, enabling two-factor authentication, and reporting suspicious activity are essential to stopping these frauds. Raising awareness and working together are key to making the internet a safer place and resisting the activities of cybercriminals.
References:
- https://indianexpress.com/article/cities/pune/pune-cybercrime-drug-in-parcel-cyber-scam-it-duping-9058298/#:~:text=In%20August%20this%20year%2C%20a,avoiding%20legal%20action%20including%20arrest.
- https://www.the420.in/pune-it-professional-duped-of-rs-27-9-lakh-in-drug-in-parcel-scam/
- https://www.newindianexpress.com/states/tamil-nadu/2023/oct/16/the-return-of-drugs-in-parcel-scam-2624323.html
- https://timesofindia.indiatimes.com/city/hyderabad/2-techies-fall-prey-to-drug-parcel-scam/articleshow/102786234.cms