#FactCheck - Misleading Claim Uses 2013 Army Coffin Photo to Spread False Ceasefire Narrative
Executive Summary
Social media users, particularly Pakistani propaganda accounts, shared an image showing coffins wrapped in the Indian tricolour and claimed that India violated the ceasefire along the Line of Control (LoC). According to the posts, Pakistan retaliated with heavy firing, captured the Indian Army’s Kumar Top post, and several Indian soldiers were killed in the exchange.
One user wrote, “Breaking News: Indian Army once again violated the ceasefire in the Mandal sector, targeting civilians with mortar shelling. Pakistan responded strongly, captured the Indian Army’s Kumar Top post, and several soldiers were reportedly killed. Calm has now been restored after Pakistan’s response.”

Fact Check
Research by CyberPeace found the viral claim to be false. Using reverse image search, we traced the viral photo to the Shutterstock website. The image description states that it was taken on August 6, 2013, and shows Indian Army personnel standing near the coffins of soldiers who were killed by Pakistani infiltrators at a brigade headquarters in Poonch, located about 240 km from Jammu. This confirms that the image is old and unrelated to recent developments along the Line of Control.

Further verification led us to a report published by NBC News on August 8, 2013, which also featured the same visual in connection with the 2013 cross-border attack.

Additionally, posts from the official X (formerly Twitter) handle of the Indian Army 16 Corps (White Knight Corps) stated that based on intelligence inputs and continuous surveillance, suspicious terrorist activity was detected near Nathua Tibba in the Sunderbani sector close to the LoC in the early hours of February 19, 2026. Alert troops responded promptly and successfully foiled the infiltration attempt. The Army also confirmed that operational vigilance remains high across the sector. However, there were no reports of casualties due to Pakistani firing.

Conclusion:
The viral image showing coffins of Indian soldiers is not recent but dates back to 2013. There are no confirmed reports of casualties from Pakistani firing along the Line of Control in the current context. Therefore, the claim circulating on social media is misleading.

Executive Summary:
The rise in cybercrime targeting vulnerable individuals, particularly students and their families, has reached alarming levels. Impersonation scams, where fraudsters pose as Law Enforcement Officers, have become increasingly sophisticated, exploiting fear, urgency, and social stigma. This report delves into recent incidents of ransom scams involving fake CBI officers, highlighting the execution methods, psychological impact on victims, and preventive measures. The goal is to raise public awareness and equip individuals with the knowledge needed to protect themselves from such fraudulent activities.
Introduction:
Cybercriminals are evolving their tactics, with impersonation and social engineering at the forefront. Scams involving fake law enforcement officers have become rampant, preying on the fear of legal repercussions and the desire to protect loved ones. This report examines incidents where scammers impersonated CBI officers to extort money from families of students, emphasizing the urgent need for awareness, verification, and preventive measures.
Case Study:
This case study explains how scammers impersonate law enforcement officers to extort money from students' families.
Targets receive calls from scammers posing as CBI officers. The fraudsters mostly target the families of students, using sophisticated impersonation and emotional manipulation tactics. In the cases we studied, the targets received calls from unknown international numbers falsely claiming that the students, along with their friends, were involved in a fabricated rape case. The parents received these calls during school or college hours, a time when it is particularly difficult for them to reach their children, adding to the panic and sense of urgency. The scammers then claimed that, because of the students' clean records, they had not been formally arrested but would face severe legal consequences unless a sum of money was paid immediately.
Although the parents in these specific cases did not pay, many parents in our country fall victim to such scams, paying large sums out of fear and desperation to protect their children's futures. Exploiting the fear of legal repercussions, social stigma, and potential damage to the students' reputations, the scammers used high-pressure tactics to force compliance.
These incidents may result in significant financial losses, emotional trauma, and a profound loss of trust in communication channels and authorities. This underscores the urgent need for awareness, verification of authority, and prompt reporting of such scams to prevent further victimisation.
Modus Operandi:
- Caller ID Spoofing: The scammer used an unknown number and spoofing techniques to mimic a legitimate law enforcement authority.
- Fear Induction: The fraudster played on the family's fear of social stigma, manipulating them into compliance through emotional blackmail.
Analysis:
Our research found that the unknown international numbers used in these scams are not genuine subscriber numbers but virtual "puppet" numbers often used for prank calls and fraudulent activities. These incidents also raise concerns about data breaches, as the scammers accurately recited students' details, including their names and their parents' information, adding a layer of credibility and increasing the pressure on the victims.
Impact on Victims:
- Financial and Psychological Losses: The family may face substantial financial losses, coupled with emotional and psychological distress.
- Loss of Trust in Authorities: Such scams undermine trust in official communication and law enforcement channels.
- Exploitation of Fear and Urgency: Scammers prey on emotions such as fear, urgency, and social stigma to manipulate victims.
- Sophisticated Impersonation Techniques: The use of caller ID spoofing, virtual or temporary numbers, and impersonation of law enforcement officers adds credibility to the scam.
- Lack of Verification: Victims often do not verify the caller's identity, leading to successful scams.
- Significant Psychological Impact: Beyond financial losses, these scams cause lasting emotional trauma and distrust in institutions.
Recommendations:
- Cross-Verification: Always cross-verify such claims with official sources before acting on them. Contact the official numbers listed on trusted government websites to verify any caller posing as law enforcement.
- Promote Awareness: Educational institutions should conduct regular awareness programs to help students and families recognize and respond to scams.
- Encourage Prompt Reporting: Encourage victims to report incidents promptly to local authorities and cybercrime units; timely reporting helps track scammers and prevent future cases.
- Enhance Public Awareness: Continuous public awareness campaigns are essential to educate people about the risks and signs of impersonation scams.
- Educational Outreach: Schools and colleges should include cybersecurity awareness in their curriculum, focusing on identifying and responding to scams.
- Parental Guidance and Support: Parents should be encouraged to discuss online safety and scam tactics with their children regularly, fostering a vigilant mindset.
Conclusion:
The rise of impersonation scams targeting students and their families is a growing concern that demands immediate attention. By raising awareness, encouraging verification of claims, and promoting proactive reporting, we can protect vulnerable individuals from falling victim to these manipulative and harmful tactics. It is high time for the authorities, educational institutions, and the public to collaborate in combating these scams and safeguarding our communities. Strengthening data protection measures and enhancing public education on the importance of verifying claims can significantly reduce the impact of these fraudulent schemes and prevent further victimisation.

Introduction
The rapid growth of high-capability AI systems has raised concerns about safety, accountability, and governance worldwide. In response, California has passed the Transparency in Frontier Artificial Intelligence Act (TFAIA), the first state statute focused on "frontier" (highly capable) AI models. The statute is unique in that, unlike most state AI laws, it does not only target harms in the form of consumer protection; it addresses the catastrophic and systemic societal risks associated with large-scale AI systems. As California is a global technology leader, the TFAIA is positioned to have a significant impact on both domestic regulation and the evolution of international legal frameworks for AI, with the potential to influence corporate compliance practices and the establishment of global norms for the use of AI.
Understanding the Transparency in Frontier Artificial Intelligence Act
The Transparency in Frontier Artificial Intelligence Act establishes a specific regulatory process for companies that create sophisticated AI systems with societal, economic, or national security implications. Covered developers are required to publish an extensive safety and transparency policy detailing how they manage risk throughout the AI lifecycle. The act also requires developers to notify the government of any significant incidents or failures involving their deployed frontier models in a timely manner.
A significant aspect of the TFAIA is its concept of "process transparency": the act does not explicitly control how AI developers create their models, but holds them accountable for their internal safety governance by mandating documented safety frameworks that outline risk assessment, mitigation, and monitoring processes. The act allows developers to protect trade secrets, patents, and national defense concerns through limited exemptions and redactions, maintaining a balance between transparency and the safeguarding of sensitive information.
Extraterritorial Impact on Global AI Developers
While the Act is a state law, its implementation has far-reaching effects. Many of the largest AI companies have facilities, research labs, or customers in California, so complying with the TFAIA becomes a commercial necessity for them. Rather than maintaining duplicate compliance regimes in different regions, many of these companies are likely to adopt a single unified compliance model worldwide.
The same pattern has occurred in other regulatory areas, such as data protection, where one region's regulations effectively became global compliance benchmarks. The TFAIA could similarly serve as a global standard for transparency in frontier AI, shaping how companies build their governance structures worldwide even in regions without explicit regulations.
Influence on International AI Regulatory Models
The TFAIA offers a unique perspective in global discussions about regulating AI. In contrast to legislation that defines different levels of risk depending on the type of AI, the TFAIA specifically targets high-impact frontier technologies. Other nations may see value in this model of tiered, capability-based regulation and apply it to their own AI laws, placing the strictest obligations on systems with the greatest potential for harm.
The TFAIA may also serve as a guide for international policymakers by showing how regulations can reference existing standards and best practices, improving interoperability and potentially lessening regulatory barriers to cross-border AI innovation.
Corporate Governance, Compliance Costs, and Competition
From an industry perspective, the Act reshapes corporate self-governance. Developers are now required to conduct thorough risk assessments and red-teaming exercises, maintain incident response protocols, and provide board oversight of AI safety and compliance. These requirements increase accountability but also impose significant compliance costs on everyone involved.
The burden of compliance will fall more lightly on large tech companies than on smaller firms and start-ups, potentially solidifying large companies' dominance over frontier AI development. Smaller and newer developers may be blocked from entering the market unless some form of proportional or scaled compliance mechanism emerges. These developments raise issues of innovation policy and competition law at a global scale that regulators will need to address alongside AI safety concerns.
Transparency, Public Trust, and Accountability
By requiring public disclosure of the safety frameworks of AI systems, the TFAIA strengthens the ability of citizens, researchers, and journalists to oversee the development and use of artificial intelligence. These disclosures allow them to critically evaluate corporate claims of responsible AI development. Over time, such scrutiny could increase trust in publicly regulated AI systems and expose businesses with poor risk management processes.
However, the usefulness of this transparency depends on the quality and comparability of the information disclosed. Many current disclosures are either too vague or too complex, limiting meaningful oversight. Clearer guidance, or the establishment of standardised disclosure formats, would support public accountability and uniformity across countries.
Conclusion
The Transparency in Frontier Artificial Intelligence Act is a transformative development in the regulation of artificial intelligence, addressing the distinct risk profile of a new generation of highly capable AI systems. Because the law will change how technology companies operate, it is likely to have global impact, shaping regulatory frameworks and the standards used to govern frontier AI. The Act relies on transparent, process-based means of regulating these systems rather than purely technical controls. As other jurisdictions confront similar challenges in writing laws for this new generation of AI, California's approach will likely serve as an example for how future AI laws are drafted, contributing to a more unified and responsible international AI regulatory framework.
References
- https://www.whitecase.com/insight-alert/california-enacts-landmark-ai-transparency-law-transparency-frontier-artificial
- https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/
- https://www.mofo.com/resources/insights/251001-california-enacts-ai-safety-transparency-regulation-tfaia-sb-53
- https://www.dlapiper.com/en/insights/publications/2025/10/california-law-mandates-increased-developer-transparency-for-large-ai-models

A video circulating on social media claims to show former Indian cricketer Sachin Tendulkar commenting on England batter Joe Root’s batting feats. In the clip, Tendulkar is allegedly heard saying that if Joe Root continues scoring centuries, even his (Tendulkar’s) record would be broken. The video further claims that Tendulkar says if Root scores another century, he would give up the bat’s grip, after which the clip abruptly ends.
Users sharing the video are claiming that Sachin Tendulkar has taken a dig at Joe Root through this remark.
Cyber Peace Foundation’s research found the claim to be misleading. Our research clearly establishes that the viral video is not authentic but has been created using Artificial Intelligence (AI) tools and is being shared online with a false narrative.
CLAIM
On January 5, 2025, several users shared the viral video on Instagram, claiming it shows Sachin Tendulkar making remarks about Joe Root’s century-scoring spree.
(Post link and archive link available.)

FACT CHECK
To verify the claim, we extracted keyframes from the viral video and conducted a Google Reverse Image Search. This led us to an interview of Sachin Tendulkar published on the official BBC News YouTube channel on November 18, 2013. The visuals from that interview match exactly with those seen in the viral clip.
This establishes that the visuals used in the viral video are old and have been repurposed with manipulated audio to create a misleading narrative.
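Reverse image search engines such as those used in this verification typically match images not byte-for-byte but by perceptual fingerprints that survive re-encoding and resizing. The following is a minimal sketch of one such fingerprint, the average hash ("aHash"), in pure Python. The 8-pixel grayscale grids are an illustrative assumption to keep the example self-contained; real tools decode and downscale actual image frames first.

```python
# Minimal average-hash ("aHash") sketch: each bit of the hash records
# whether a pixel is brighter than the image's mean brightness.
# Visually similar frames produce hashes with a small Hamming distance.

def average_hash(pixels: list) -> int:
    """Hash a grayscale grid (list of 0-255 ints) into an integer."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# Illustrative pixel grids (assumed values, not real frame data):
frame_a = [10, 200, 30, 220, 15, 210, 25, 230]   # frame from the viral clip
frame_b = [12, 198, 28, 225, 14, 205, 27, 228]   # same frame, re-encoded
frame_c = [200, 10, 220, 30, 210, 15, 230, 25]   # unrelated content

d_same = hamming_distance(average_hash(frame_a), average_hash(frame_b))
d_diff = hamming_distance(average_hash(frame_a), average_hash(frame_c))
print(d_same, d_diff)  # → 0 8
```

The re-encoded frame hashes identically to the original despite small pixel differences, while the unrelated frame differs in every bit, which is why keyframes lifted from an old interview can be matched to their 2013 source even after compression and re-uploading.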
Further, Joe Root made his Test debut in 2012. At that time, he had not scored multiple Test centuries and was nowhere close to Sachin Tendulkar’s record tally of hundreds. This timeline itself makes the viral claim factually incorrect.
(Link to the original BBC interview available.)
https://www.youtube.com/watch?v=v6Rz4pgR9UQ

Upon closely examining the viral clip, we noticed that Sachin Tendulkar’s voice sounded unnatural and inconsistent. This raised suspicion of audio manipulation.
We then ran the viral video through an AI detection tool, Aurigin AI. According to the results, the audio in the video was found to be 100 percent AI-generated, confirming that Tendulkar never made the statements attributed to him in the clip.

Conclusion
Our research confirms that the viral video claiming Sachin Tendulkar commented on Joe Root’s centuries is fake. The video has been created using AI-generated audio and misleadingly combined with visuals from a 2013 interview. Users are sharing this manipulated clip on social media with a false claim.