#FactCheck - AI-Generated Image Falsely Shows Mohammed Siraj Offering Namaz During Net Practice
A photo circulating on social media claims to show Indian cricketer Mohammed Siraj offering namaz during net practice, while teammates Rohit Sharma, Virat Kohli and Shubman Gill are seen taking a selfie with him. Several users are sharing the image as a “beautiful moment,” portraying it as a symbol of faith, unity and sportsmanship. However, research by the Cyber Peace Foundation has found that the viral image is not genuine and has been AI-generated.
Claim
On January 14, 2026, multiple Facebook users shared the viral image with captions describing it as a touching scene from Rajkot’s Saurashtra Stadium. The posts claim that Mohammed Siraj took time out during net practice to offer prayers, reflecting his strong faith, while fellow cricketers Rohit Sharma, Virat Kohli and Shubman Gill respectfully captured the moment on camera.
Users praised the image as a rare blend of spirituality, discipline, teamwork and mutual respect, calling it a “beautiful confluence of sport and faith.” (Links to the post, archived version and screenshots are provided below.)

Fact Check:
Close examination of the viral image revealed several visual inconsistencies and unnatural elements, raising suspicion that the picture might not be authentic. To verify this, the Cyber Peace Foundation analysed the image using the AI detection tool Hive Moderation. According to the tool’s assessment, the image showed a 99% likelihood of being AI-generated.

To further strengthen the verification, the image was also scanned using another AI detection platform, Sightengine. The results indicated a 96% probability that the image was generated using artificial intelligence.
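For readers who want to automate this kind of check, below is a minimal sketch of querying an AI-image-detection service programmatically. The endpoint, parameter names, and response fields are assumptions modelled on Sightengine’s publicly documented pattern, not a verified integration; consult the provider’s current documentation before relying on it.

```python
# Hedged sketch: submit an image to an AI-detection service over HTTP.
# The endpoint, parameters, and response shape are assumptions modelled on
# Sightengine's documented "genai" check; verify against current docs.
import requests

API_URL = "https://api.sightengine.com/1.0/check.json"  # assumed endpoint

def ai_generated_probability(image_path: str, user: str, secret: str) -> float:
    with open(image_path, "rb") as image:
        response = requests.post(
            API_URL,
            files={"media": image},
            data={"models": "genai", "api_user": user, "api_secret": secret},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"type": {"ai_generated": 0.96}, ...}
    return response.json()["type"]["ai_generated"]

# Usage (hypothetical credentials):
# print(ai_generated_probability("viral_image.jpg", "API_USER", "API_SECRET"))
```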

Conclusion:
The research confirms that the viral image claiming to show Mohammed Siraj offering namaz during net practice, with Rohit Sharma, Virat Kohli and Shubman Gill taking a selfie, is not real. The photograph has been created using AI tools and falsely shared on social media, misleading users by presenting a fabricated scene as an authentic moment.
Related Blogs

Executive Summary:
A critical vulnerability, CVE-2024-3400, was recently discovered in PAN-OS, the software that powers Palo Alto Networks' next-generation firewalls. It is a command injection vulnerability that allows unauthenticated attackers to execute arbitrary code with root privileges on the attacked system. It has been actively exploited by threat actors, leaving many organizations at risk of severe cyberattacks. This report explains how the vulnerability is exploited and detected, and provides mitigations and recommendations.

Understanding The CVE-2024-3400 Vulnerability:
CVE-2024-3400 affects specific versions and configurations of PAN-OS. It is a command injection vulnerability in the GlobalProtect module of the PAN-OS software and can be exploited by an unauthenticated attacker to run arbitrary code on the firewall with root privileges. Observed exploitation has targeted the Active Directory database (ntds.dit), DPAPI-protected data, and Windows event logs (Microsoft-Windows-TerminalServices-LocalSessionManager%4Operational.evtx), as well as login data, cookies, and local state data for Chrome and Microsoft Edge on specific targets, allowing attackers to capture the browser master key and steal sensitive organizational information.
CVE-2024-3400 has been assigned a critical severity rating of 10.0. The following two weaknesses make this CVE highly severe:
- CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')
- CWE-20: Improper Input Validation
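To make the CWE-77 weakness class concrete, here is a minimal, generic Python sketch of command injection. It is illustrative only; it is not PAN-OS code and not the actual CVE-2024-3400 exploit path.

```python
# Generic illustration of CWE-77 (command injection), not PAN-OS code.
import subprocess

def vulnerable_ping(hostname: str) -> str:
    # BAD: user input is interpolated into a shell string. Input such as
    # "8.8.8.8; id" makes the shell run a second, attacker-chosen command.
    result = subprocess.run(f"ping -c 1 {hostname}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def safe_ping(hostname: str) -> str:
    # GOOD: an argument vector with no shell; the input is passed to ping
    # as a single argument and cannot break out into a new command.
    result = subprocess.run(["ping", "-c", "1", hostname],
                            capture_output=True, text=True)
    return result.stdout
```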
Impacted Products:
The PAN-OS versions affected by CVE-2024-3400 are as follows: only versions 10.2, 11.0, and 11.1, when configured with a GlobalProtect Gateway or GlobalProtect Portal, are exploitable through this vulnerability. Cloud NGFW, Panorama appliances, and Prisma Access are not affected.
Detecting Potential Exploitation:
Palo Alto Networks has confirmed that it is aware of active exploitation of this vulnerability by threat actors and, in a recent publication, credited Volexity with identifying it. A growing number of organizations face severe and immediate risk from this exploitation, and third parties have released proof-of-concept code for the vulnerability.
Palo Alto Networks has provided guidance for detecting attempted exploitation. To check for it, run the following command on the command-line interface of the PAN-OS device:
grep pattern "failed to unmarshal session(.\+.\/" mp-log gpsvc.log*
This command searches the device logs for specific entries related to the vulnerability.
These log entries should contain a long, random-looking code called a GUID (Globally Unique Identifier) between the words "session(" and ")". If an attacker has tried to exploit the vulnerability, this section might contain a file path or malicious code instead of a GUID.
The presence of such entries in your logs could be a sign of an attempted attack on your device, and may look like this:
- failed to unmarshal session(../../some/path)
A normal, harmless log entry would look like this:
- failed to unmarshal session(01234567-89ab-cdef-1234-567890abcdef)
If the log entries contain a file path or other suspicious content instead of a GUID, further investigation and remediation will be needed to secure the system.
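As a complement to the vendor's grep check, the following minimal Python sketch scans an exported log file for "failed to unmarshal session(...)" entries and flags any whose payload does not parse as a GUID. The file name is an illustrative assumption; adapt it to however you export logs from your device.

```python
# Triage sketch: flag "failed to unmarshal session(...)" log entries whose
# payload is not a valid GUID. "gpsvc.log" is an illustrative file name.
import re
import uuid

SESSION_RE = re.compile(r"failed to unmarshal session\(([^)]*)\)")

def is_guid(value: str) -> bool:
    """Return True if the captured payload parses as a GUID."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

def suspicious_entries(path: str) -> list[str]:
    """Return log lines where the session payload is not a GUID."""
    hits = []
    with open(path, errors="replace") as log:
        for line in log:
            match = SESSION_RE.search(line)
            if match and not is_guid(match.group(1)):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for entry in suspicious_entries("gpsvc.log"):
        print("SUSPICIOUS:", entry)
```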
Mitigation and Recommendations:
The risks posed by the critical CVE-2024-3400 vulnerability can be mitigated by following these recommended steps:
- Immediately update software: This vulnerability is fixed in PAN-OS 10.2.9-h1, PAN-OS 11.0.4-h1, PAN-OS 11.1.2-h3, and all later versions. Updating to these versions will fully protect your systems against potential exploitation.
- Leverage hotfixes: Palo Alto Networks has released hotfixes for commonly deployed maintenance releases of PAN-OS 10.2, 11.0, and 11.1 for users who cannot upgrade to the latest versions immediately. These hotfixes provide a temporary solution while you prepare for the full upgrade.
- Enable Threat Prevention: If you have a Threat Prevention subscription, enable Threat IDs 95187, 95189, and 95191 to block attacks targeting the CVE-2024-3400 vulnerability. These Threat IDs are available in Applications and Threats content version 8836-8695 and later.
- Apply Vulnerability Protection: Ensure that a Vulnerability Protection profile is applied to the GlobalProtect interface to prevent exploitation on the device. It can be implemented using these instructions.
- Monitor advisory updates: Regularly check for updates to Palo Alto Networks' official advisory. This helps you stay up to date with new guidance and Threat Prevention IDs for CVE-2024-3400.
- Disable device telemetry (optional): Disabling device telemetry is suggested as an additional precautionary measure.
- Remediation: If active exploitation is observed, follow the steps mentioned in this Knowledge Base article by Palo Alto Networks.
Implementing the above mitigation measures and recommendations will greatly reduce the risk of a cyberattack exploiting the CVE-2024-3400 vulnerability in Palo Alto Networks' PAN-OS software.
Conclusion:
Organizations should respond immediately to the active exploitation of the critical CVE-2024-3400 vulnerability in Palo Alto Networks' PAN-OS platform by implementing the suggested mitigation measures, such as upgrading to the patched versions, enabling Threat Prevention, and applying vulnerability protection. Regular monitoring, robust security defence mechanisms, and security audits are necessary measures that help combat emerging threats and protect critical resources.

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new ChatGPT models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work: their probability can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
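A toy calculation illustrates why step-by-step generation can compound error. Assuming, as a simplification, that each reasoning step is independently correct with probability p, an n-step chain is fully correct with probability p**n:

```python
# Toy model: if each reasoning step is independently correct with
# probability p, an n-step chain is fully correct with probability p**n.
p = 0.95  # assumed per-step accuracy, for illustration only
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps -> chain correct with probability {p**n:.2f}")
# Output: 1 -> 0.95, 5 -> 0.77, 10 -> 0.60, 20 -> 0.36
```

Even 95% per-step accuracy leaves a 20-step chain fully correct only about a third of the time, which is why grounding each step matters.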
Reports on TechCrunch mention that when users asked AI models for short answers, hallucinations increased by up to 30%. A study reported by eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This was not limited to that particular Large Language Model; similar models such as DeepSeek showed the same behaviour. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only looks real but can also contribute to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other events.
It is also notable that AI models are continually improving with each version, with a focus on reducing hallucinations and enhancing accuracy. New features, such as source links and citations, are being implemented to increase the transparency and reliability of responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. It helps, however, that developers are already aware of such instances and are actively charting out ways to reduce the probability of this error. Some of these are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data (a minimal sketch follows this list).
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and their training data better curated.
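As referenced in the RAG item above, here is a minimal sketch of the idea in Python. The retrieval step uses TF-IDF over a toy corpus via scikit-learn; the generation step is left as a placeholder since any LLM API could be substituted, and the corpus contents are illustrative assumptions.

```python
# Minimal RAG sketch: TF-IDF retrieval over a toy corpus (scikit-learn),
# with the LLM call left as a placeholder. Corpus contents are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The Pact for the Future was adopted at the UN Summit of the Future.",
    "CVE-2024-3400 is a command injection flaw in PAN-OS GlobalProtect.",
    "UNESCO's media literacy course for adult educators is self-paced.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def grounded_prompt(query: str) -> str:
    """Build a prompt that anchors the model to retrieved context."""
    context = "\n".join(retrieve(query))
    # Hand this prompt to whichever LLM you use; the instruction to answer
    # only from the retrieved context is what reduces hallucination.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(grounded_prompt("What kind of flaw is CVE-2024-3400?"))
```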
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction
Entrusted with the responsibility of leading the Global Education 2030 Agenda through Sustainable Development Goal 4, UNESCO’s Institute for Lifelong Learning, in collaboration with the Media and Information Literacy and Digital Competencies Unit, has recently launched a Media and Information Literacy Course for Adult Educators. The course aligns with the Pact for the Future, adopted at the United Nations Summit of the Future in September 2024, which asks member countries for increased efforts towards media and information literacy. The course is free for adult educators to access and is available until 31st May 2025.
The Course
According to a report by Statista, 67.5% of the global population uses the internet. Regardless of users’ age and background, there is a general lack of understanding of how to spot misinformation and targeted hate, and how to navigate online environments securely and efficiently. Since misinformation (largely spread online) is enabled by this lack of awareness, digital literacy becomes increasingly important. The course is designed with the understanding that many active adult educators have not yet had an opportunity to hone their media and information skills through formal education. Self-paced and totalling 10 hours, the course covers basics such as the concepts of misinformation and disinformation, artificial intelligence, and combating hate speech, and offers a certificate on completion.
CyberPeace Recommendations
As this course is free of cost, can be taken remotely, and covers the basics of digital literacy, all who are eligible are encouraged to take it up to familiarise themselves with these topics. However, awareness of the course’s availability, and of who can avail themselves of this opportunity, can be improved so that a larger number of people can benefit from it.
CyberPeace Recommendations To Enhance Positive Impact
- Further Collaboration: As this course is open to adult educators, one could consider widening its scope through active engagement with independent organisations and even individual internet users who are willing to learn.
- Engagement with Educational Institutions: After launching a course, an interactive outreach programme and connections with relevant stakeholders can prove beneficial. Since this course requires each adult educator to sign up individually, partnering with universities, institutes, and similar bodies is encouraged. In the Indian context, active involvement with training institutes such as DIET (District Institute of Education and Training), SCERT (State Council of Educational Research and Training), NCERT (National Council of Educational Research and Training), and Open Universities could be initiated, facilitating greater awareness and participation.
- Engagement through NGOs: NGOs focused on digital literacy that have a tie-up with UNESCO can aid in implementation and in encouraging awareness. Offering the course in localised languages could also be considered for greater inclusion.
Conclusion
Though a long process, tackling misinformation through education deals with the issue at its source. A strong foundation in awareness and media literacy is imperative in the age of fake news, misinformation, and sensitive data being peddled online. UNESCO’s course garners attention as it comes from an international platform, is free of cost, recognises the gravity of the situation, and calls for action in the field of education, encouraging others to do the same.
References
- https://www.uil.unesco.org/en/articles/media-and-information-literacy-course-adult-educators-launched
- https://www.unesco.org/en/articles/celebrating-global-media-and-information-literacy-week-2024
- https://www.unesco.org/en/node/559#:~:text=UNESCO%20believes%20that%20education%20is,must%20be%20matched%20by%20quality.