#FactCheck: Debunking the Edited Image Claim of PM Modi with Hafiz Saeed
Executive Summary:
A photoshopped image circulating online suggests Prime Minister Narendra Modi met with militant leader Hafiz Saeed. The actual photograph features PM Modi greeting former Pakistani Prime Minister Nawaz Sharif during a surprise diplomatic stopover in Lahore on December 25, 2015.
The Claim:
A widely shared image on social media purportedly shows PM Modi meeting Hafiz Saeed, a declared terrorist. The claim implies that Modi is aligned with terrorists or acting against India's interests.

Fact Check:
Through our research and a reverse image search, we found that the Press Information Bureau (PIB) had tweeted about the visit on 25 December 2015, noting that PM Narendra Modi was warmly welcomed by then-Pakistani PM Nawaz Sharif in Lahore. The tweet included several images of the original meeting between Modi and Sharif, taken from various angles. On the same day, PM Modi also posted a tweet stating he had spoken with Nawaz Sharif and extended birthday wishes. Additionally, there are no credible reports of any meeting between Modi and Hafiz Saeed, further confirming that the viral image is digitally altered.


In our further research, we found an identical photo with former Pakistan Prime Minister Nawaz Sharif in place of Hafiz Saeed. This post was shared by Hindustan Times on X on 26 December 2015, confirming that the viral image has been manipulated.
Conclusion:
The viral image claiming to show PM Modi with Hafiz Saeed is digitally manipulated. A reverse image search and official posts from the PIB and PM Modi confirm the original photo was taken during Modi’s visit to Lahore in December 2015, where he met Nawaz Sharif. No credible source supports any meeting between Modi and Hafiz Saeed, clearly proving the image is fake.
- Claim: A viral image shows PM Modi meeting Hafiz Saeed
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
26th November 2024 marked a historic milestone for India as TakeMe2Space, a Hyderabad-based space technology firm, announced the forthcoming launch of MOI-TD (My Orbital Infrastructure - Technology Demonstrator), India's first AI lab in space. According to the company, the mission will demonstrate real-time data processing in orbit, making space research more affordable and accessible. The launch is scheduled for mid-December 2024 aboard ISRO's PSLV-C60 launch vehicle. It represents a transformative phase for innovation and exploration in the integration of AI and space technology in India.
The Vision Behind the Initiative
The AI Laboratory in orbit is designed to enable autonomous decision-making, revolutionising satellite exploration and advancing cutting-edge space research. It signifies a major step toward establishing space-based data centres, paving the way for computing capabilities that will support a variety of applications.
While space-based data centres currently cost 10–15 times more than terrestrial alternatives, this initiative leverages high-intensity solar power in orbit to significantly reduce energy consumption. Training AI models in space could lower energy costs by up to 95% and cut carbon emissions by at least tenfold, even when factoring in launch emissions. It positions MOI-TD as an eco-friendly and cost-efficient solution.
Technological Innovations and Future Impact of AI in Space
This AI laboratory, MOI-TD, includes control software and hardware components such as reaction wheels, magnetometers, an advanced onboard computer, and an AI accelerator. The satellite also features flexible solar cells that could power future satellites. It will enable real-time processing of space data, pattern recognition, and autonomous decision-making, addressing latency issues to ensure faster and more efficient data analysis, while robust hardware designs tackle the challenges posed by radiation and extreme space environments. Advanced sensor integration will further enhance data collection, facilitating AI model training and validation.
These innovations drive key applications with transformative potential. Users will access the satellite platform through OrbitLaw, a web-based console for uploading AI models to aid climate monitoring, disaster prediction, urban growth analysis, and custom Earth observation use cases. TakeMe2Space has already partnered with a leading Malaysian university and an Indian school (grades 9 and 10) to showcase the satellite's potential for democratising space research.
Future Prospects and India’s Global Leadership in AI and Space Research
As per Stanford’s HAI Global AI Vibrancy rankings, India secured 4th place due to its R&D leadership, vibrant AI ecosystem, and public engagement with AI. This AI laboratory is a step further in advancing India’s role in the development of regulatory frameworks for ethical AI use, fostering robust public-private partnerships, and promoting international cooperation to establish global standards for AI applications.
Cost-effectiveness and technological expertise are among India’s unique strengths; they could position the country as a key player in the global AI and space research arena and draw favourable comparisons with initiatives by NASA, ESA, and private entities like SpaceX. By prioritising ethical and sustainable practices and fostering collaboration, India can lead in shaping the future of AI-driven space exploration.
Conclusion
India’s first AI laboratory in space, MOI-TD, represents a transformative milestone in integrating AI with space technology. This ambitious project promises to advance autonomous decision-making, enhance satellite exploration, and democratise space research. Additionally, factors such as data security, international collaboration, and sustainability must be taken into account while pursuing such innovations. With this, India can establish itself as a leader in space research and AI innovation, setting new global standards while inspiring a future where technology expands humanity’s frontiers and enriches life on Earth.
References
- https://www.ptinews.com/story/national/start-up-to-launch-ai-lab-in-space-in-december/2017534
- https://economictimes.indiatimes.com/tech/startups/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/articleshow/115701888.cms?from=mdr
- https://www.ibm.com/think/news/data-centers-space
- https://cio.economictimes.indiatimes.com/amp/news/next-gen-technologies/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/115718230

Executive Summary:
Recently, CyberPeace handled a case involving a fraudulent Android application imitating Punjab National Bank (PNB). The victim was tricked into downloading an APK file named "PNB.apk" via WhatsApp. Once installed, the app enabled multiple unauthorized transactions across multiple credit cards.
Case Study – The Attack: Social Engineering Meets Malware
The incident started when the victim clicked on a Facebook ad for a PNB credit card. After submitting basic personal information, the victim received a WhatsApp call from a profile displaying the PNB logo. The attacker, posing as a bank representative, touted fake benefits and features of the credit card and convinced the victim to install an application named PNB.apk. The so-called bank representative sent the app through WhatsApp, claiming it would expedite the credit card application. The application installed on the mobile device disguised as a customer care application. It requested permission to send and view SMS messages and would open only if the user granted it.

The app then extracted personal details from the user, such as full name and mobile number, and presented the same data-harvesting pages regardless of whether the user selected Refund, Pay, or Other. On subsequent screens, it asked for further information: the credit card number, expiry date, and CVV.



At this point, the scammer had full access to the credit card details and could read incoming SMS messages to intercept OTPs.
The victim, thinking they were securely navigating the official PNB website, was unaware that the malware was granting the attacker remote access to their phone. This led to 11 unauthorized transactions worth ₹4 lakh across three credit cards.
The Investigation & Analysis:
Upon receiving the case through the CyberPeace Helpline, the CyberPeace Research Team acted swiftly to neutralize the threat and secure the victim’s device. Using a secure remote access tool, we gained control of the phone with the victim’s consent. Our first step was identifying and removing the malicious "PNB.apk" file, ensuring no residual malware was left behind.
Next, we implemented crucial cyber hygiene practices:
- Revoking unnecessary permissions – to prevent further unauthorized access.
- Running antivirus scans – to detect any remaining threats.
- Clearing sensitive data caches – to remove stored credentials and tokens.
The CyberPeace Helpline team assisted the victim in reporting the fraud to the National Cybercrime Reporting Portal and helpline (1930), and the compromised credit cards were promptly blocked.
We then carried out a technical analysis of the app using its MD5 hash as a file identifier. The file was flagged as malware on VirusTotal and requested sensitive permissions such as Send/Receive/Read SMS and System Alert Window.
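As a rough illustration of this step, the sketch below computes a file's MD5 hash with Python's standard library and builds the corresponding VirusTotal v3 file-report URL (querying it requires an analyst's own API key sent in an `x-apikey` header; the sample path shown is hypothetical, not from the case):

```python
import hashlib


def md5_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks
    so large APKs do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def virustotal_lookup_url(file_hash: str) -> str:
    """VirusTotal's v3 API exposes file reports by hash (MD5/SHA-1/SHA-256).
    A GET request to this URL, with an 'x-apikey' header, returns the
    scan verdicts and the permissions flagged for an Android sample."""
    return f"https://www.virustotal.com/api/v3/files/{file_hash}"


# Usage sketch (sample path is hypothetical):
#   digest = md5_of_file("PNB.apk")
#   url = virustotal_lookup_url(digest)
```

The same hash can also be pasted directly into the VirusTotal web interface, which is how a non-programmer can reproduce this lookup.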


Similarly, we found another application, named “Axis Bank” and also circulated through WhatsApp, with comparable permission access; its VirusTotal details are as follows:



Recommendations:
This case study illustrates the increasingly sophisticated methods used by cybercriminals, blending social engineering with advanced malware. Key lessons include:
- Be vigilant when downloading applications, even if they appear to come from legitimate sources. Install apps only from official app stores, never from files shared over social media or messaging platforms.
- Always review app permissions before granting access.
- Verify the identity of anyone claiming to represent financial institutions.
- Use remote access tools responsibly for effective intervention during a cyber incident.
By acting quickly and following the proper protocols, we successfully secured the victim’s device and prevented further financial loss.

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly being provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) processes to simulate conversations between the platform and a user. Thus, AI chatbots can provide mental health support from the comfort of the home, at any time of the day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots can present technical and ethical challenges as well.
Background
According to the WHO’s World Mental Health Report of 2022, 1 in 8 people globally is estimated to be living with some form of mental health disorder. The need for mental health services worldwide is high, but the supply of a care ecosystem is inadequate in terms of both availability and quality. In India, it is estimated that there are only 0.75 psychiatrists per 100,000 people, and only about 30% of people with mental health conditions receive help. Considering the slow thawing of social stigma regarding mental health, especially among younger demographics, and support services being confined to urban Indian centres, the demand for a telehealth market is only projected to grow. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and AI-induced oversight can exacerbate distress due to some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, thereby discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
Way Forward
- Addressing Therapeutic Misconception: A lack of understanding of the purpose and capabilities of such chatbots, in terms of care expectations and treatments they can offer, can jeopardize user health. Platforms providing such services should be mandated to lay disclaimers about the limitations of the therapeutic relationship between the platform and its users in a manner that is easy to understand.
- Improved Algorithm Design: Training data for these models must be regularly updated and audited to enhance accuracy and incorporate contextual socio-cultural factors into profile analysis, with feedback loops from clients and mental health professionals built into the design.
- Human Oversight: Models of therapy where AI chatbots supplement treatment instead of replacing human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required.
Conclusion
It is important to recognize that so far, there is no substitute for professional mental health services. Chatbots can help users gain awareness of their mental health condition and play an educational role in this regard, nudging them in the right direction, and provide assistance to both the practitioner and the client/patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing —and arguably already critical— global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767#:~:text=Indian%20mental%20health%20market%20is,access%20to%20better%20quality%20services.
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full