#FactCheck - Suryakumar Yadav–Salman Ali Agha Handshake Row: Viral Image Found AI-Generated
Executive Summary
An image circulating on social media claims to show Suryakumar Yadav, captain of the Indian cricket team, extending his hand to greet Pakistan’s skipper Salman Ali Agha, who allegedly refused the gesture during the India–Pakistan T20 World Cup match held on February 15. Users shared the image as evidence of a real incident from the high-profile clash. However, research by CyberPeace found that the image is AI-generated and was falsely circulated to mislead viewers.
Claim
On February 15, an X account named “@iffiViews,” reportedly operated from Pakistan, shared the image claiming it was taken during the India–Pakistan T20 World Cup match at the R. Premadasa Stadium in Colombo. The viral image appeared to show Yadav attempting to shake hands with Agha, who seemed to decline the gesture. The post quickly gained significant traction online, attracting around one million views at the time of reporting. The link and archive link to the post are given below.
- https://x.com/iffiViews/status/2023024665770484206?s=20
- https://archive.ph/xvtBs

Fact Check
To verify the authenticity of the image, researchers closely examined the visual and identified a watermark associated with an AI image-generation tool. This raised strong indications that the image was digitally created and did not depict an actual event.

The image was further analysed using an AI detection tool, which indicated a 99.9 percent probability that the content was artificially generated or manipulated.

Researchers also conducted keyword searches to check whether the two captains had exchanged a handshake during the match. The search revealed media reports confirming that the traditional handshake between players has been discontinued since the Asia Cup 2025 in both men’s and women’s cricket. A report published by The Times of India on February 15 confirmed that no such customary exchange took place during the match between the two teams in Colombo.

Conclusion
The viral image claiming to show Suryakumar Yadav attempting to shake hands with Salman Ali Agha is not authentic. The visual is AI-generated and has been shared online with misleading claims.
Related Blogs
Introduction and Brief Analysis
The movie “The Artifice Girl” portrays a law enforcement agency developing an AI-based persona of a 12-year-old girl who appears exactly like a real person. Believing her to be an actual girl, perpetrators of child sexual exploitation were caught attempting to solicit sexual favours. The movie showed AI aiding law enforcement, but in reality the emergence of Artificial Intelligence has posed numerous challenges in multiple directions. This example illustrates both the promise and the complexity of using AI in sensitive areas like law enforcement, where technological innovation must be carefully balanced with ethical and legal considerations.
Detection and protection tools are constantly competing with technologies that generate content, automate grooming, and challenge legal boundaries. Such technological advancements have provided fertile ground for the proliferation of Child Sexual Exploitation and Abuse Material (CSEAM). CSEAM is referred to as “child pornography” under Section 2(da) of the Protection of Children from Sexual Offences Act, 2012 (POCSO), which defines it as “any visual depiction of sexually explicit conduct involving a child, including a photograph, video, digital or computer-generated image indistinguishable from an actual child, and an image created, adapted, or modified that appears to depict a child.”
Artificial Intelligence is a category of technologies that attempt to simulate aspects of human thought and behaviour using algorithms and datasets. Two primary applications are relevant in the context of CSEAM: classifiers and content generators. Classifiers are programs that learn from large datasets, which may be labelled or unlabelled, and then classify what is restricted or illegal. Generative AI is likewise trained on large datasets, but it uses that knowledge to create new content. The majority of current AI research related to CSEAM relies on artificial neural networks (ANNs), a type of AI that can be trained both to identify connections between items (classification) and to generate novel combinations of items (e.g., elements of a picture) based on the training data used.
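The classifier idea described above, learning from labelled examples and then sorting new items into allowed or restricted categories, can be illustrated with a deliberately tiny sketch. The feature vectors, labels, and nearest-centroid method below are invented for illustration; real content classifiers are deep neural networks trained on vastly richer data.

```python
import math

# Toy illustration of a classifier: learn per-class "centroids" from
# labelled feature vectors, then label new items by the nearest centroid.
# All data here is hypothetical; this only demonstrates the
# learn-from-labelled-data idea described in the text.

def train(samples):
    """samples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest (Euclidean distance)."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, features)))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Hypothetical feature vectors (e.g. outputs of an upstream feature extractor).
training = [
    ([0.9, 0.8], "restricted"),
    ([0.8, 0.9], "restricted"),
    ([0.1, 0.2], "allowed"),
    ([0.2, 0.1], "allowed"),
]
model = train(training)
print(classify(model, [0.85, 0.9]))   # lands near the "restricted" centroid
print(classify(model, [0.15, 0.1]))   # lands near the "allowed" centroid
```

The same learned-from-data structure, scaled up enormously, is what lets production classifiers flag likely CSEAM for human review rather than relying on exact-match lists.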
Current Legal Landscape
The legal landscape for AI remains unclear and evolving, with different nations trying to track the evolution of AI and develop laws accordingly. However, some laws directly address CSEAM. The International Centre for Missing and Exploited Children (ICMEC) combats illegal sexual content involving children and has published “Model Legislation” recommending sanctions and sentencing. According to research conducted in 2018, illegal sexual content involving children is criminalised in 118 of the 196 Interpol member states; this figure represents countries whose legislation meets at least 4 of the 5 criteria defined by the ICMEC.
In India, CSEAM can be reported on portals such as the National Cyber Crime Reporting Portal at cybercrime.gov.in, which covers online crimes against children and allows anonymous reporting, automatic FIR registration, and complaint tracking. The I4C Sahyog Portal, managed by the Indian Cyber Crime Coordination Centre (I4C), is another platform, and it integrates with social media platforms.
The Indian legal front for AI is evolving, and CSEAM is well addressed in Indian law and through judicial pronouncements. The Supreme Court judgment in Just Rights for Children Alliance and Anr. v. S. Harish and Ors. is a landmark in this regard. The following principles were highlighted in the judgment:
- The term “child pornography” should be substituted by “Child Sexual Exploitation and Abuse Material” (CSEAM) and should not be used in any further judicial proceeding, order, or judgment. Parliament should likewise amend POCSO to endorse the term CSEAM.
- Parliament should consider amending Section 15(1) of POCSO to make it more convenient for the general public to report offences by way of an online portal.
- Sex education programs should be implemented to give young people a clear understanding of consent and the consequences of exploitation. To help prevent problematic sexual behaviour (PSB), schools should teach students about consent, healthy relationships, and appropriate behaviour.
- Support services to the victims and rehabilitation programs for the offenders are essential.
- Early identification of at-risk individuals and implementation of intervention strategies for youth.
Distinctive Challenges
According to a report by the National Centre for Missing and Exploited Children (NCMEC), a significant number of reports of child sexual exploitation and abuse material (CSEAM) are linked to perpetrators based outside the country, which highlights major challenges of jurisdiction and anonymity in addressing such crimes. Since the issue concerns children, and given the cross-border nature of the internet and the emergence of AI, nations across the globe need to come together to solve this matter. Delays in extradition procedures and inconsistent legal processes across jurisdictions hinder the apprehension of offenders and the delivery of justice to victims.
CyberPeace Recommendations
For effective regulation of AI-generated CSEAM, laws must be strengthened so that AI developers and trainers prevent misuse of their tools. AI should be designed with ethical considerations built in, ensuring respect for privacy, consent, and child rights. Self-regulation mechanisms could enable AI models to recognise and restrict red flags related to CSEAM and to flag grooming or potential abuse.
A distinct Indian CSEAM reporting portal is urgently needed as cybercrimes increase throughout the nation. Relying on the integrated portal alone risks AI-based CSEAM cases being overlooked, whereas a dedicated portal would enable faster response and focused tracking. Since AI-generated content is detectable, the portal should also include an automated AI-content detection system linked directly to law enforcement for swift action.
Furthermore, international cooperation is of utmost importance to overcome AI-enabled challenges and to fill jurisdictional gaps; a united global effort, built on common technology and harmonised international laws, is essential to tackle AI-driven child sexual exploitation across borders and protect children everywhere. CSEAM is an extremely serious issue, and children are among the most vulnerable to such harmful content. This threat must be addressed without delay, through stronger policies, dedicated reporting mechanisms, and swift action to protect children from exploitation.
References:
- https://www.sciencedirect.com/science/article/pii/S2950193824000433?ref=pdf_download&fr=RR-2&rr=94efffff09e95975
- https://aasc.assam.gov.in/sites/default/files/swf_utility_folder/departments/aasc_webcomindia_org_oid_4/portlet/level_2/pocso_act.pdf
- https://www.manupatracademy.com/assets/pdf/legalpost/just-rights-for-children-alliance-and-anr-vs-sharish-and-ors.pdf
- https://www.icmec.org
- https://www.missingkids.org/theissues/generative-ai

Introduction
Words come easily, but not necessarily the consequences that follow. Imagine a 15-year-old on the internet hoping that the world will be kind and help him gain confidence; instead, someone chooses to be mean, or the child becomes the victim of a newer kind of cyberbullying: online trolling. Trolling can have serious repercussions, including anxiety, depression, social isolation, eating disorders, substance abuse, conduct issues, body dysmorphia, negative self-esteem, and, in tragic cases, self-harm and suicide attempts in vulnerable individuals. This is one example, but hate speech and online abuse can touch anyone, regardless of age, background, or status. The damage may take different forms, but its impact is far-reaching. In today’s digital age, hate speech spreads rapidly through online platforms, often amplified by AI algorithms.
As we observe the International Day for Countering Hate Speech today, 18th June, let us pledge: if we have ever been mean to someone on the internet, never to repeat that behaviour; and if we have been a victim, to stand against the perpetrator and report it.
This year, the theme for the International Day for Countering Hate Speech is “Hate Speech and Artificial Intelligence Nexus: Building coalitions to reclaim inclusive and secure environments free of hatred.” UN Secretary-General Antonio Guterres said in his statement, “Today, as this year’s theme reminds us, hate speech travels faster and farther than ever, amplified by Artificial Intelligence. Biased algorithms and digital platforms are spreading toxic content and creating new spaces for harassment and abuse.”
Coded Convictions: How AI Reflects and Reinforces Ideologies
Algorithms have swiftly taken the place of human intuition; they shape your tastes with a light, invisible touch. They have become an important component of social media user interaction and content distribution. While these tools are designed to improve the user experience, they frequently, if inadvertently, spread divisive ideologies and push extremist propaganda. This amplification can strengthen extremist organisations, spread misinformation, and deepen societal tensions. The phenomenon, known as “algorithmic radicalisation,” demonstrates how social media companies’ selective content-curation strategies can draw people down ideological rabbit holes and shape their ideas. AI-driven algorithms often prioritise engagement over ethics, enabling divisive and toxic content to trend and placing vulnerable groups, especially youth and minorities, at risk. The UN’s Strategy and Plan of Action on Hate Speech, launched on June 18, 2019, recognises that while AI holds promise for early detection and prevention of harmful speech, it also demands stringent human rights safeguards. Without regulation, these tools can themselves become purveyors of bias and exclusion.
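The “engagement over ethics” dynamic can be sketched with a toy ranking function: if a feed orders posts purely by predicted engagement, content that provokes strong reactions rises regardless of its harm. The posts, interaction counts, and scoring weights below are invented for illustration and do not describe any real platform’s algorithm.

```python
# Toy illustration of engagement-only ranking: posts are ordered purely by
# interaction counts, so provocative content outranks benign content.
# All posts, counts, and weights are hypothetical.

def engagement_score(post):
    # Shares and comments are weighted more heavily than likes, mirroring
    # the common observation that strong reactions drive amplification.
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

def rank_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm-explainer", "likes": 120, "comments": 10, "shares": 5},
    {"id": "divisive-rant", "likes": 80, "comments": 90, "shares": 60},
    {"id": "cat-photo", "likes": 200, "comments": 5, "shares": 2},
]

for post in rank_feed(posts):
    print(post["id"], engagement_score(post))
# The divisive post tops the feed despite having the fewest likes,
# because outrage generates comments and shares.
```

Nothing in such a scoring function asks whether the content is divisive or harmful, which is precisely why regulation and human-rights safeguards around ranking systems matter.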
India’s Constitutional Resolve and Civilizational Ethos against Hate
India has always taken pride in being inclusive and united rather than divided, and its stand on hate speech is no different: like the United Nations, India believes in these same values. Although India has won many battles against hate speech, the war is not over, and it is now more prominent than ever due to advances in communication technologies. In India, while the right to freedom of speech and expression is protected under Article 19(1)(a), its exercise is subject to reasonable restrictions under Article 19(2). Landmark rulings such as Ramji Lal Modi v. State of U.P. and Amish Devgan v. Union of India have clarified that speech can be curbed if it incites violence or undermines public order. Section 69A of the IT Act, 2000 empowers the government to block content, and these principles are also reflected in Section 196 of the BNS, 2023 (formerly Section 153A IPC) and Section 299 of the BNS, 2023 (formerly Section 295A IPC). Platforms are also required to trace the creators of harmful content, remove it within a reasonable time, and fulfil their due-diligence requirements under the IT Rules.
While India must remain normatively well-equipped and prepared to tackle hate propaganda and divisive forces, its rich culture and history, rooted in philosophies of Vasudhaiva Kutumbakam (the world is one family) and pluralistic traditions, have long stood as a beacon of tolerance and coexistence. By revisiting these civilizational values, we can resist divisive forces and renew our collective journey toward harmony and peaceful living.
CyberPeace Message
The ultimate goal is to create internet and social media platforms that are better, safer, and more harmonious for every individual, irrespective of his/her/their social and cultural background. CyberPeace stands resolute in promoting digital media literacy and cyber resilience, and in consistently pushing for greater accountability from social media platforms.
References
- https://www.un.org/en/observances/countering-hate-speech
- https://www.artemishospitals.com/blog/the-impact-of-trolling-on-teen-mental-health
- https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
- https://www.techpolicy.press/indias-courts-must-hold-social-media-platforms-accountable-for-hate-speech/

Introduction
The Phishing-as-a-Service (PhaaS) platform ‘LabHost’ has been a significant player in cybercrime targeting North American banks, particularly financial institutions in Canada. LabHost offers turnkey phishing kits, infrastructure for hosting pages, email content generation, and campaign overview services to cybercriminals in exchange for a monthly subscription. The platform’s popularity surged after it introduced custom phishing kits for Canadian banks in the first half of 2023. Fortra reports that LabHost has overtaken Frappo, cybercriminals’ previous favorite PhaaS platform, and is now the primary driving force behind most phishing attacks targeting Canadian bank customers.
We live in a digital realm where the barriers to entry for nefarious activities are crumbling and the tools of the trade are packaged and sold with the same customer service one might expect from a legitimate software company. This is the world of Phishing-as-a-Service (PhaaS), and at the forefront of this ominous trend is LabHost, a platform that has been instrumental in escalating attacks on North American banks, with a particular focus on Canadian financial institutions.
LabHost is not a newcomer to the cybercrime scene, but its ascent to infamy was catalyzed by the introduction of custom phishing kits tailored for Canadian banks in the first half of 2023. The platform operates on a subscription model, offering turnkey solutions that include phishing kits, infrastructure for hosting malicious pages, email content generation, and campaign overview services. For a monthly fee, cybercriminals are handed the keys to a kingdom of deception and theft.
Emergence of LabHost
The rise of LabHost has been meticulously chronicled by cybersecurity firms such as Fortra, which report that LabHost has dethroned the previously favored PhaaS platform, Frappo, and has become the primary driving force behind the majority of phishing attacks targeting customers of Canadian banks. Despite suffering a disruptive outage in early October 2023, LabHost has rebounded with vigor, orchestrating several hundred attacks per month.
Fortra’s investigation into LabHost’s operations reveals a tiered membership system: Standard, Premium, and World, with monthly fees of $179, $249, and $300, respectively. Each tier offers an escalating scope of targets, from Canadian banks to 70 institutions worldwide, excluding North America. The phishing templates provided by LabHost are not limited to financial entities; they also cover online services such as Spotify, postal delivery services such as DHL, and regional telecommunication providers.
LabRat
The true ingenuity of LabHost lies in its integration with 'LabRat,' a real-time phishing management tool that enables cybercriminals to monitor and control an active phishing attack. This tool is a linchpin in man-in-the-middle style attacks, designed to capture two-factor authentication codes, validate credentials, and bypass additional security measures. In essence, LabRat is the puppeteer's strings, allowing the phisher to manipulate the attack with precision and evade the safeguards that are the bulwarks of our digital fortresses.
LabSend
In the aftermath of its October disruption, LabHost unveiled 'LabSend,' an SMS spamming tool that embeds links to LabHost phishing pages in text messages. This tool orchestrates a symphony of automated smishing campaigns, randomizing portions of text messages to slip past the vigilant eyes of spam detection systems. Once the SMS lure is cast, LabSend responds to victims with customizable message templates, a Machiavellian touch to an already insidious scheme.
The Proliferation of PhaaS
The proliferation of PhaaS platforms like LabHost, 'Greatness,' and 'RobinBanks' has democratized cybercrime, lowering the threshold for entry and enabling even the most unskilled hackers to launch sophisticated attacks. These platforms are the catalysts for an exponential increase in the pool of threat actors, thereby magnifying the impact of cybersecurity on a global scale.
The ease with which these services can be accessed and utilized belies the complexity and skill traditionally required to execute successful phishing campaigns. Stephanie Carruthers, who leads an IBM X-Force phishing research project, notes that crafting a single phishing email can consume upwards of 16 hours, not accounting for the time and resources needed to establish the infrastructure for sending the email and harvesting credentials.
PhaaS platforms like LabHost have commoditized this process, offering a buffet of malevolent tools that can be customized and deployed with a few clicks. The implications are stark: the security measures that businesses and individuals have come to rely on, such as multi-factor authentication (MFA), are no longer impenetrable. PhaaS platforms have engineered ways to circumvent these defenses, rendering them vulnerable to exploitation.
Emerging Cyber Defense
In the face of this escalating threat, a multi-faceted defense strategy is imperative. Cybersecurity solutions like SpamTitan employ advanced AI and machine learning to identify and block phishing threats, while end-user training platforms like SafeTitan provide ongoing education to help individuals recognize and respond to phishing attempts. However, with phishing kits now capable of bypassing MFA, it is clear that more robust solutions, such as phishing-resistant MFA based on FIDO/WebAuthn authentication or Public Key Infrastructure (PKI), are necessary to thwart these advanced attacks.
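Why does FIDO/WebAuthn resist the real-time man-in-the-middle capture that tools like LabRat perform? Because the authenticator’s response is cryptographically bound to the origin the browser is actually on, so a response produced while the victim sits on a look-alike phishing domain fails verification at the real site. The sketch below illustrates only this origin-binding idea, using a shared-key HMAC as a stand-in; real WebAuthn uses per-site asymmetric key pairs and a structured client-data payload, and the domain names here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Conceptual sketch of WebAuthn-style origin binding (NOT real WebAuthn:
# actual authenticators sign structured client data with asymmetric keys).
# The point: the response covers the origin the client actually sees, so a
# response relayed from a phishing domain fails verification at the real site.

KEY = secrets.token_bytes(32)  # stands in for the credential's key material

def authenticator_sign(key, challenge, origin):
    # The authenticator binds its response to the observed origin.
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(key, challenge, expected_origin, response):
    # The server recomputes over the origin it expects and compares.
    good = hmac.new(key, challenge + expected_origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(good, response)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser is on the real origin.
resp = authenticator_sign(KEY, challenge, "https://bank.example")
print(server_verify(KEY, challenge, "https://bank.example", resp))      # True

# Phished login: response produced on a look-alike domain, relayed live.
phished = authenticator_sign(KEY, challenge, "https://bank-login.example")
print(server_verify(KEY, challenge, "https://bank.example", phished))   # False
```

This origin binding is what a relayed one-time code lacks: a 6-digit OTP typed into a phishing page verifies just as well at the real bank, whereas an origin-bound response does not.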
Conclusion
The emergence of PhaaS platforms represents a significant shift in the landscape of cybercrime, one that requires a vigilant and sophisticated response. As we navigate this treacherous terrain, it is incumbent upon us to fortify our defenses, educate our users, and remain ever-watchful of the evolving tactics of cyber adversaries.
References
- https://www-bleepingcomputer-com.cdn.ampproject.org/c/s/www.bleepingcomputer.com/news/security/labhost-cybercrime-service-lets-anyone-phish-canadian-bank-users/amp/
- https://www.techtimes.com/articles/302130/20240228/phishing-platform-labhost-allows-cybercriminals-target-banks-canada.htm
- https://www.spamtitan.com/blog/phishing-as-a-service-threat/
- https://timesofindia.indiatimes.com/gadgets-news/five-government-provided-botnet-and-malware-cleaning-tools/articleshow/107951686.cms