# FactCheck - Misleading Claim Uses 2013 Army Coffin Photo to Spread False Ceasefire Narrative
Executive Summary
Social media users, particularly Pakistani propaganda accounts, shared an image showing coffins wrapped in the Indian tricolour and claimed that India violated the ceasefire along the Line of Control (LoC). According to the posts, Pakistan retaliated with heavy firing, captured the Indian Army’s Kumar Top post, and killed several Indian soldiers in the exchange.
One user wrote, “Breaking News: Indian Army once again violated the ceasefire in the Mandal sector, targeting civilians with mortar shelling. Pakistan responded strongly, captured the Indian Army’s Kumar Top post, and several soldiers were reportedly killed. Calm has now been restored after Pakistan’s response.”

Fact Check
Research by CyberPeace found the viral claim to be false. Using reverse image search, we traced the viral photo to the Shutterstock website. The image description states that it was taken on August 6, 2013, and shows Indian Army personnel standing near the coffins of soldiers who were killed by Pakistani infiltrators at a brigade headquarters in Poonch, located about 240 km from Jammu. This confirms that the image is old and unrelated to recent developments along the Line of Control.
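Reverse image search works on perceptual similarity rather than exact byte matching, which is why it can trace a recompressed or resized copy back to its source. The snippet below is a minimal sketch of that underlying idea using the open-source Python libraries Pillow and imagehash; the file names are placeholders, and production search engines use far more sophisticated indexing.

```python
# Minimal sketch of perceptual-hash comparison, the core idea behind
# reverse image search. Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Placeholder file names, not actual evidence files.
viral = imagehash.phash(Image.open("viral_post.jpg"))
archive = imagehash.phash(Image.open("archive_2013.jpg"))

# Subtracting two hashes gives the Hamming distance between their
# 64-bit fingerprints; small distances mean near-duplicate images.
distance = viral - archive
print(f"Hamming distance: {distance}")
if distance <= 8:  # common heuristic threshold for near-duplicates
    print("Likely the same underlying photograph (recompressed/resized).")
else:
    print("Probably different images.")
```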

Further verification led us to a report published by NBC News on August 8, 2013, which also featured the same visual in connection with the 2013 cross-border attack.

Additionally, posts from the official X (formerly Twitter) handle of the Indian Army 16 Corps (White Knight Corps) stated that based on intelligence inputs and continuous surveillance, suspicious terrorist activity was detected near Nathua Tibba in the Sunderbani sector close to the LoC in the early hours of February 19, 2026. Alert troops responded promptly and successfully foiled the infiltration attempt. The Army also confirmed that operational vigilance remains high across the sector. However, there were no reports of casualties due to Pakistani firing.

Conclusion
The viral image showing coffins of Indian soldiers is not recent but dates back to 2013. There are no confirmed reports of casualties from Pakistani firing along the Line of Control in the current context. Therefore, the claim circulating on social media is misleading.
Related Blogs

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit images and videos, and their use for sextortion has increased at an alarming rate.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.

The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, to develop detection and prevention tools, and to strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions. A toy illustration of one classical detection heuristic is sketched below.
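As a toy illustration of what basic detection tooling can look like, the sketch below implements Error Level Analysis (ELA), a classical image-forensics heuristic that highlights regions of a JPEG recompressed differently from the rest; bright patches in the output can hint at local edits. This is a weak, first-pass signal rather than a production deepfake detector, and the file name is a placeholder.

```python
# Error Level Analysis (ELA): re-save a JPEG and amplify the difference
# from the original; unevenly bright regions can hint at local edits.
# Requires: pip install Pillow
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an amplified difference image between `path` and a
    re-saved copy; inspect it visually for suspiciously bright areas."""
    original = Image.open(path).convert("RGB")
    original.save("resaved_tmp.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved_tmp.jpg")

    diff = ImageChops.difference(original, resaved)
    # Differences are usually faint, so scale them up to be visible.
    extrema = diff.getextrema()  # per-channel (min, max) tuples
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# Placeholder input; the result is an image to inspect by eye.
error_level_analysis("suspect.jpg").save("suspect_ela.png")
```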
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks that hold offenders accountable, awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Executive Summary
The IT giant Apple has alerted customers to the impending threat of "mercenary spyware" assaults in 92 countries, including India. These highly skilled attacks, which are frequently linked to both private and state actors (such as the NSO Group’s Pegasus spyware), target specific individuals, including politicians, journalists, activists and diplomats. In sharp contrast to consumer-grade malware, these attacks are in a league unto themselves: highly-customized to fit the individual target and involving significant resources to create and use.
As the incidence of such attacks rises, it is important that all persons, businesses, and officials equip themselves with information about how such mercenary spyware programs work, what the most-used methods are, how these attacks can be prevented, and what one must do if targeted. Individuals and organizations can begin protecting themselves against these attacks by enabling "Lockdown Mode" to provide an extra layer of security to their devices, by frequently changing passwords, and by not opening suspicious URLs or attachments.
Introduction: Understanding Mercenary Spyware
Mercenary spyware is a special kind of spyware that is developed exclusively for law enforcement and government organizations. This kind of spyware is not available in app stores; it is built to attack particular individuals and requires a significant investment of resources and advanced technologies. Mercenary spyware hackers infiltrate systems by means of techniques such as phishing (sending malicious links or attachments), pretexting (manipulating individuals into sharing personal information) or baiting (using tempting offers). They often mount Advanced Persistent Threats (APTs), in which the hackers remain undetected for a prolonged period, stealing data through continuous, stealthy infiltration of the target’s network. Another method of gaining access is through zero-day vulnerabilities: flaws in software that are not yet known to the vendor and therefore remain unpatched. A well-known example of mercenary spyware is the infamous Pegasus by the NSO Group.
Actions: By Apple against Mercenary Spyware
Apple has introduced an advanced, optional protection feature in its newer product versions (including iOS 16, iPadOS 16, and macOS Ventura) to combat mercenary spyware attacks. These features are aimed at users who are at risk of targeted cyber attacks.
Apple released a statement on the matter, sharing, “mercenary spyware attackers apply exceptional resources to target a very small number of specific individuals and their devices. Mercenary spyware attacks cost millions of dollars and often have a short shelf life, making them much harder to detect and prevent.”
When Apple's internal threat intelligence and investigations detect these highly-targeted attacks, they take immediate action to notify the affected users. The notification process involves:
- Displaying a "Threat Notification" at the top of the user's Apple ID page after they sign in.
- Sending an email and iMessage alert to the addresses and phone numbers associated with the user's Apple ID.
- Providing clear instructions on steps the user should take to protect their devices, including enabling "Lockdown Mode" for the strongest available security.
- Apple stresses that these threat notifications are "high-confidence alerts" - meaning they have strong evidence that the user has been deliberately targeted by mercenary spyware. As such, these alerts should be taken extremely seriously by recipients.
Modus Operandi of Mercenary Spyware
- Installing advanced surveillance equipment remotely and covertly.
- Using zero-click or one-click attacks to take advantage of device vulnerabilities.
- Gaining access to a variety of data on the device, including location tracking, call logs, text messages, passwords, microphone, camera, and app information.
- Installing the spyware by exploiting multiple system vulnerabilities on devices running particular iOS and Android versions.
Such attacks are typically mitigated by patching the exploited vulnerabilities with security updates (e.g., CVE-2023-41991, CVE-2023-41992, CVE-2023-41993) and by using defensive DNS services, non-signature-based endpoint technologies, and frequent device reboots.
Prevention Measures: Safeguarding Your Devices
- Turn on security measures: Make use of the security features supplied by the device maker, such as Apple's Lockdown Mode, which hardens iPhones and other Apple devices by sharply limiting the functionality that spyware most often exploits.
- Frequent software upgrades: Make sure the newest security and software updates are installed on your devices. This aids in patching holes that mercenary malware could exploit.
- Steer clear of suspicious links: Exercise caution when opening attachments or accessing links from unidentified sources; phishing links or attachments can be used to install mercenary spyware.
- Limit app permissions: Reassess and restrict app permissions to avoid unwanted access to private information.
- Use secure networks: To reduce the chance of data interception, connect to secure Wi-Fi networks and stay away from public or unprotected connections.
- Install security applications: To identify and stop any spyware attacks, think about installing reliable security programs from reliable sources.
- Be alert: If Apple or other device makers send you a threat notice, consider it carefully and take the advised security precautions.
- Two-factor authentication: To provide an extra degree of protection against unwanted access, enable two-factor authentication (2FA) on your Apple ID and other significant accounts.
- Consider additional security measures: High-risk individuals should consider additional safeguards such as encrypted communication apps and secure file storage services.
Way Forward: Strengthening Digital Defenses, Strengthening Democracy
People, businesses and administrations must prioritize cyber security measures and keep up with emerging dangers as mercenary spyware attacks continue to develop and spread. To effectively address the growing threat of digital espionage, cooperation between government agencies, cybersecurity specialists, and technology businesses is essential.
In the Indian context, the update carries significant policy implications and must inspire a discussion on legal frameworks for government surveillance practices and cyber security protocols in the nation. As the public becomes more informed about such sophisticated cyber threats, we can expect a greater push for oversight mechanisms and regulatory protocols. The misuse of surveillance technology poses a significant threat to individuals and institutions alike. Policy reforms concerning surveillance tech must be tailored to address the specific concerns of the use of such methods by state actors vs. private players.
There is a pressing need for electoral reforms that help safeguard democratic processes in the current digital age. There has been a paradigm shift in how political activities are conducted: the advent of the digital domain has seen parties and leaders pivot their campaigning efforts to court the online audience as enthusiastically as they campaign offline. Given that this is an election year, quite possibly the most significant one in modern Indian history, digital outreach and online public engagement are expected to be at an all-time high. It is therefore imperative to protect the electoral process against cyber threats so that public trust in the legitimacy of India’s democratic processes is preserved and the digital domain remains an asset, and not a threat, to good governance.

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology is a method for manipulating videos, images, and audio utilising powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings; cybercriminals use Artificial Intelligence technology to overlay a digital composite onto an existing video, picture, or audio clip. The term 'deepfake' was first coined in 2017 by an anonymous Reddit user who called himself 'deepfake'.
Deepfakes work on a combination of AI and ML, which makes the technology hard to detect by Web 2.0 applications, and it is almost impossible for a layman to tell whether an image or video is fake or has been created using deepfakes. In recent times, we have seen a wave of AI-driven tools impact industries and professions across the globe. Deepfakes are often created to spread misinformation. There lies a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, wherein a deepfake of Ukraine’s president, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-based misappropriation among Ukrainians.
- Instigation against the Union of India: Deepfakes pose a massive threat to the integrity of the Union of India, as they are one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Bad actors often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern for Indian netizens, as it keeps them from understanding the technology, which results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected in the future and to address this issue before it is too late. The following aspects can be kept in mind while differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake (a toy automated version of this check is sketched after this list).
- Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video; check whether the sound is in congruence with the actions or gestures in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most cases it is generated using virtual effects, so deepfakes tend to have irregularities and an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have been focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber safety and digital hygiene protocol, one should always fact-check each and every piece of information encountered on social media. As a preventive measure, always fact-check any information or post before sharing it with your known ones.
- AI Tools: When in doubt, check it out; never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos, combating technology with technology.
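By way of illustration, the toy sketch below automates a crude version of the first heuristic in the list above: it runs OpenCV's stock face detector over a video and flags frames where the detected face count flickers, one kind of instability sometimes seen in manipulated footage. It is a rough demonstration under stated assumptions, not a substitute for the dedicated tools named above; the video path is a placeholder.

```python
# Toy frame-consistency check: a face detector that flickers between
# consecutive frames can be one weak hint of manipulated footage.
# Requires: pip install opencv-python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("suspect_video.mp4")  # placeholder path
prev_count, flicker_frames, frame_idx = None, [], 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Record frames where the number of detected faces suddenly changes.
    if prev_count is not None and len(faces) != prev_count:
        flicker_frames.append(frame_idx)
    prev_count, frame_idx = len(faces), frame_idx + 1

cap.release()
print(f"Face-count flicker in {len(flicker_frames)} of {frame_idx} frames")
```

Flicker alone does not prove a deepfake (lighting changes and occlusion also cause it), but clusters of instability are a reasonable cue to escalate a video for closer inspection.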
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is ₹2 lakh in fines or three years in prison. With the DPDP Act's applicability in 2023, the creation of deepfakes directly affects an individual's right to digital privacy and also violates the IT Rules (Intermediary Guidelines), as platforms are required to exercise caution while disseminating and publishing misinformation through deepfakes. The indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly influencing the delivery of property, and forgery with the intent to defame, are the only other legal remedies available for deepfakes. Deepfakes must be recognised legally due to the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues/threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so they can stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.