#FactCheck - AI-Generated Image Falsely Linked to Doda Army Vehicle Accident
Executive Summary
On January 22, 2026, an Indian Army vehicle met with an accident in Jammu and Kashmir’s Doda district, resulting in the death of 10 soldiers, while several others were injured. In connection with this tragic incident, a photograph is now going viral on social media. The viral image shows an Army vehicle that appears to have fallen into a deep gorge, with several soldiers visible around the site. Users sharing the image claim that it depicts the actual scene of the Doda accident.
However, research by CyberPeace has found that the viral image is not genuine. The photograph was generated using Artificial Intelligence (AI) and does not depict the real accident. Hence, the viral post is misleading.
Claim
An Instagram user shared the viral image on January 22, 2026, writing: “Deeply saddened by the tragic accident in Doda, Jammu & Kashmir today, in which 10 brave soldiers lost their lives. My heartfelt tribute to the martyrs who laid down their lives in the line of duty. Sincere condolences to the bereaved families, and prayers for the speedy recovery of the injured soldiers. The nation will forever remember your sacrifice.”
The link and screenshot of the post can be seen below.
- https://www.instagram.com/p/DT0UBIRk_3k/
- https://archive.ph/submit/?url=https%3A%2F%2Fwww.instagram.com%2Fp%2FDT0UBIRk_3k%2F+

Fact Check
To verify the claim, we first closely examined the viral image. Several visual inconsistencies were observed. The figure of the soldier visible inside the damaged vehicle appears distorted, and the hands and limbs of the people involved in the rescue operation look unnatural. These anomalies raised suspicion that the image might be AI-generated. Based on this, we ran the image through the AI detection tool Hive Moderation, which indicated that the image is over 99.9% likely to be AI-generated.

Another AI image detection tool, Sightengine, also flagged the image as 99% AI-generated.
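For readers who want to reproduce this kind of check programmatically, the sketch below shows how an image could be submitted to an AI-image-detection service such as Sightengine. It is only an illustration: the endpoint, the "genai" model name, the "type.ai_generated" response field, and the credentials are assumptions drawn from the tool's public documentation, and the file name is a placeholder.

```python
# Minimal sketch of the automated detection step, assuming Sightengine's
# documented image-check API. The endpoint, the 'genai' model name, the
# 'type.ai_generated' response field, and the credentials below are
# assumptions / placeholders, not verified values.
import requests

API_USER = "YOUR_API_USER"      # placeholder credential
API_SECRET = "YOUR_API_SECRET"  # placeholder credential

def ai_generated_score(image_path: str) -> float:
    """Submit a local image and return the estimated probability (0-1)
    that it was AI-generated."""
    with open(image_path, "rb") as fh:
        response = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": fh},
            data={"models": "genai", "api_user": API_USER, "api_secret": API_SECRET},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # The AI-generation score is expected under result['type']['ai_generated']
    return float(result.get("type", {}).get("ai_generated", 0.0))

if __name__ == "__main__":
    score = ai_generated_score("viral_image.jpg")  # placeholder file name
    print(f"Estimated likelihood of AI generation: {score:.1%}")
```

A score close to 1 corresponds to the "over 99.9% likely to be AI-generated" result reported by the tools above.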

During further research, we found a report published by Navbharat Times on January 22, 2026, which confirmed that an Indian Army vehicle had indeed fallen into a deep gorge in Doda district. According to officials, 10 soldiers were killed and 7 others were injured, and rescue operations were immediately launched.
However, it is important to note that the image circulating on social media is not an actual photograph from the incident.

Conclusion
CyberPeace research confirms that the viral image linked to the Doda Army vehicle accident has been created using Artificial Intelligence. It is not a real photograph from the incident, and therefore, the viral post is misleading.
Related Blogs

A video clip of journalist Palki Sharma is being widely shared on social media. Along with the video, it is being claimed that during Prime Minister Narendra Modi’s recent Middle East visit, she questioned Jordan’s diplomatic protocol.
In the viral clip, Palki Sharma is allegedly seen asking why Jordan’s King Abdullah II did not come to the airport to receive Prime Minister Modi, and whether this indicated a downgrade in the level of welcome.
However, an investigation by the Cyber Peace Foundation found this claim to be misleading. The probe revealed that while the visuals in the viral video are genuine, the audio has been altered using Artificial Intelligence (AI).
On the social media platform ‘X’, a user named “Ammar Solangi” shared this video on 18 December. The post claimed that the video was related to questions raised about Jordan’s diplomatic protocol during Prime Minister Modi’s visit. According to the post, Palki Sharma questioned why King Abdullah II did not receive Prime Minister Modi at the airport. The archive link of the viral post can be seen here: https://ghostarchive.org/archive/26aK0
Verification
During the investigation, the fact-check desk noticed the ‘Firstpost’ logo in the top-left corner of the viral video. Based on this clue, a customized Google search was conducted, which led to the original news report.
The investigation revealed that the viral video was taken from an episode of journalist Palki Sharma’s show “Vantage with Palki Sharma”, which aired on 17 December.
Analysis of the video showed that the visuals appearing at the 33 minutes 30 seconds timestamp in the original report exactly match those used in the viral clip. However, in the original broadcast, Palki Sharma neither questioned Jordan’s protocol nor made any comment about King Abdullah II not being present at the airport.
In the original video, Palki Sharma says:
“Prime Minister Modi was on a diplomatic tour of Jordan, Ethiopia, and Oman, and in Jordan he was received at the airport by the country’s Prime Minister…” The link to the original report can be seen here: https://www.youtube.com/watch?v=-VYZYe9l6Bs

AI Audio Examination
Further investigation involved separating the audio from the viral video and analyzing it using the AI voice detection tool ‘Resemble AI’. The tool’s results confirmed that fake, AI-generated audio had been added over the real footage in the viral clip to spread a misleading claim. A screenshot of the results from this examination can be seen below.
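For readers interested in the mechanics, the first step, separating the audio track so that it can be analysed on its own, can be reproduced with the widely used open-source tool ffmpeg, as in the sketch below. The file names are placeholders, and the detection itself was performed through Resemble AI's own interface, which is not shown here.

```python
# Minimal sketch of the audio-separation step using ffmpeg (must be installed
# separately). File names are placeholders; the resulting WAV file can then be
# uploaded to a voice-detection tool such as Resemble AI for analysis.
import subprocess

def extract_audio(video_path: str, audio_path: str = "extracted_audio.wav") -> str:
    """Strip the video stream and save the audio as 16 kHz mono WAV,
    a common input format for speech-analysis tools."""
    subprocess.run(
        [
            "ffmpeg",
            "-y",                    # overwrite the output file if it exists
            "-i", video_path,        # input: the viral clip
            "-vn",                   # drop the video stream
            "-acodec", "pcm_s16le",  # uncompressed 16-bit PCM audio
            "-ar", "16000",          # resample to 16 kHz
            "-ac", "1",              # mix down to mono
            audio_path,
        ],
        check=True,
    )
    return audio_path

if __name__ == "__main__":
    extract_audio("viral_clip.mp4")  # placeholder file name
```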

Conclusion
The video being circulated in the name of journalist Palki Sharma has been tampered with. Her voice has been altered using AI technology, and the claim made regarding the Jordan visit is completely misleading.

Introduction
Rajeev Chandrasekhar, the Union Minister of State for Information Technology (IT), announced on 13 December 2023 that the Global Partnership on Artificial Intelligence (GPAI) Summit, which brings together 29 member governments including the European Union, had adopted the New Delhi Declaration. The declaration committed members to jointly developing AI applications for healthcare and agriculture and to taking the needs of the Global South into account when developing AI.
In addition, signatory countries committed to leveraging the GPAI infrastructure to establish a global framework for AI safety and trust, and to make the benefits of AI accessible to all. India also submitted a proposal to host the GPAI Global Governance Summit in order to complete the recommended framework within six months.
“The New Delhi Declaration, which aims to place GPAI at the forefront of defining the future of AI in terms of both development and building cooperative AI across the partner states, has been unanimously endorsed by 29 GPAI member countries. Nations have come to an agreement to develop AI applications in healthcare, agriculture, and numerous other fields that affect all of our nations and citizens,” Chandrasekhar stated.
The declaration highlights GPAI's critical role in tackling contemporary AI challenges, such as generative AI, through applied AI projects meant to maximize benefits and minimize related risks while addressing community problems and global challenges.
GPAI
The Global Partnership on Artificial Intelligence (GPAI) is an organisation of 29 countries from the Americas (North and South), Europe, and Asia. It includes important players such as the US, France, Japan, and India, but excludes China. The previous meeting took place in Japan. In 2024, India will preside over GPAI.
The forum was established in 2020 to promote and steer the responsible implementation of artificial intelligence based on human rights, multiculturalism, gender equality, innovation, economic growth, the environment, and social impact. Its goal is to bring together elected officials and experts in order to make tangible contributions to the 2030 Agenda and the UN Sustainable Development Goals (SDGs).
Given the quick and significant advancements in artificial intelligence over the previous year, the meeting in New Delhi attracted particular attention. These advancements have sparked worries about AI's misuse as well as enthusiasm about its potential benefits.
The Summit
The G20 summit, which India hosted in September 2023, set the backdrop for the discussions at the GPAI summit. There, participants in that worldwide economic forum came to an agreement on how to safely use AI for "Good and for All."
In order to safeguard people's freedoms and security, member governments pledged to address AI-related issues "in a responsible, inclusive, and human-centric manner."
The key strategy devised is to distribute AI's benefits fairly while reducing its risks. Promoting international collaboration and dialogue on the global governance of AI is the first step toward accomplishing this goal.
A major milestone in that approach was the GPAI summit.
The conversation on AI was opened by India's Prime Minister Narendra Modi, undoubtedly one of the most technology-conscious world leaders.
He noted that every system needs to be revolutionary, honest, and trustworthy in order to be sustained.
"There is no doubt that AI is transformative, but it is up to us to make it more and more transparent." He continued by saying that when associated social, ethical, and financial concerns are appropriately addressed, trust will increase.
After extensive discussions, the summit attendees decided on a strategy to establish global collaboration on a number of AI-related issues. The proclamation pledged to place GPAI at the leading edge of defining AI in terms of creativity and cooperation while expanding possibilities for AI in healthcare, agriculture, and other areas of interest, according to Union Minister Rajeev Chandrasekhar.
There was an open discussion of a number of issues, including disinformation, joblessness and bias, protection of sensitive information, and violations of human rights. The participants reaffirmed their dedication to fostering dependable, safe, and secure AI within their respective domains.
Concerns raised by AI
- The issue of legislation comes first. There are now three approaches in use. To best promote innovation, the UK government takes a "less is more" approach to regulation. Conversely, the European Union (EU) is taking a strong stance, planning to propose a new Artificial Intelligence Act that might categorize AI 'in accordance with use-case situations based essentially on the degree of interference and vulnerability'.
- Second, analysts say that India has the potential to lead the world in discussions about AI. For example, India has an advantage when it comes to AI discussions because of its personnel, educational system, technological stack, and populace, according to Markham Erickson of Google's Centers for Excellence. However, he voiced the hope that Indian regulations will be “interoperable” with those of other countries in order to maximize the benefits for small and medium-sized enterprises in the nation.
- Third, there is a general fear about how AI will affect jobs, just as there was in the early years of the Internet's development. Most people appear to agree that while many jobs won't be impacted, certain jobs might be lost as artificial intelligence develops and gets smarter. According to Erickson, the solution to the new circumstances is to create "a more AI-skilled workforce."
- Finally, a major concern relates to deepfakes defined as 'digital media, video, audio and images, edited and manipulated, using Artificial Intelligence (AI).'
Need for AI Strategy in Commercial Businesses
Firstly, astute corporate executives such as Shailendra Singh, managing director of Peak XV Partners, feel that all organisations must now have 'an AI strategy'.
Second, it is now impossible to isolate the influence of digital technology and artificial intelligence from the study of international relations (IR), foreign policy, and diplomacy. Academics have been contemplating and penning works on "the geopolitics of AI."
Combat Strategies
"We will talk about how to combine OECD capabilities to maximize our capacity to develop the finest approaches to the application and management of AI for the benefit of our people. The French Minister of Digital Transition and Telecommunications", Jean-Noël Barrot, informed reporters.
Hiroshi Yoshida, Vice-Minister for International Affairs at Japan's Ministry of Internal Affairs and Communications, stated, "We particularly think GPAI should be more inclusive so that we encourage more developing countries to join." Mr Chandrasekhar stated, "Inclusion of lower and middle-income countries is absolutely core to the GPAI mission," and added that Senegal has become a member of the steering group.
The declaration devotes a paragraph to India's role in integrating agriculture into the AI agenda. It states, "We embrace the use of AI innovation in supporting sustainable agriculture as a new thematic priority for GPAI."
Conclusion
The New Delhi Declaration, which was adopted at the GPAI Summit, highlights the cooperative determination of 29 member nations to use AI for the benefit of all people. GPAI, which will be led by India in 2024, intends to shape AI development with an emphasis on healthcare, agriculture, and the resolution of ethical issues. Prime Minister Narendra Modi stressed the need to use AI responsibly and to build transparency and trust. Regulatory concerns, India's potential for leadership, effects on employment, and the challenge of deepfakes were noted. The summit emphasised the importance of an AI strategy for enterprises and discussed strategies to address AI's risks, with a focus on GPAI's objective of including developing nations. Taken as a whole, the summit positions GPAI as an essential vehicle for navigating the rapidly changing AI field.
References
- https://www.thehindu.com/news/national/ai-summit-adopts-new-delhi-declaration-on-inclusiveness-collaboration/article67635398.ece
- https://www.livemint.com/news/india/gpai-meet-adopts-new-delhi-ai-declaration-11702487342900.html
- https://startup.outlookindia.com/sector/policy/global-partnership-on-ai-member-nations-unanimously-adopt-new-delhi-declaration-news-10065
- https://gpai.ai/

Introduction
In the contemporary information environment, misinformation has emerged as a subtle yet powerful force capable of shaping public perception, influencing behavior, and undermining institutional credibility. Unlike overt falsehoods, misinformation often gains traction because it appears authentic, familiar, and authoritative. The rapid circulation of content through digital platforms has intensified this challenge, allowing altered or misleading material to reach wide audiences before verification mechanisms can respond. When misinformation mimics official communication, its impact becomes especially concerning, as citizens tend to place implicit trust in documents that carry the appearance of state authority. This growing vulnerability of public information systems was illustrated by the calendar incident in Himachal Pradesh in January 2026.
The calendar incident of Himachal Pradesh in January 2026 shows how a small lie can lead to large social and governance problems. A person whose identity is still unknown posted a modified version of the Government Calendar 2026, changing the official dates and resulting in public confusion and reputational damage to the Printing and Stationery Department. The incident may not appear very serious at first sight, but it indicates a deeper systemic issue. Misinformation is posing increasing dangers to public information ecosystems, especially when official documents are misrepresented and disseminated through digital platforms.
Misinformation as a Governance Challenge
Government calendars and official documents are essential for public awareness and administrative coordination, and their manipulation undermines the credibility of institutions and the trustworthiness of governance. In Himachal Pradesh, the modified dates could have caused confusion about public holidays, disrupted school and administrative planning, and misled the public. Such misinformation directly interferes with the social contract between citizens and the State, in which accurate information is the foundation of trust, compliance, and participation.
Impact on Citizens: Confusion, Distrust, and Digital Fatigue
For the general public, the dissemination of fake government information creates confusion and, at the same time, erodes trust in government communication channels. When people repeatedly encounter altered or misleading information presented as credible, they eventually find it hard to distinguish truth from falsehood.
This results in:
- Decision paralysis, where people postpone or refrain from acting because they are unsure what to believe
- Erosion of trust, not only in one department but in government communication as a whole
- Digital fatigue, where people stop following public information altogether because they assume any content may be unreliable
Misinformation in a digital society is not confined to a single platform. It spreads quickly through direct messaging apps, community groups, and social networks, creating widespread confusion before official clarifications can reach the same audience.
Institutional Harm and Reputational Damage
Intentional tampering with official documents is not only an ethical violation but also a crime with serious governance implications. The Printing and Stationery Department noted that such practices tarnish the public image of government bodies, whose work depends on accuracy, neutrality, and trust.
When false material circulates as official content:
- Departments have to communicate reactively.
- Money and manpower that could have gone to routine administrative work are diverted to containing the situation.
The registration of a First Information Report (FIR) in this matter reflects a gradual shift in how law enforcement agencies view misinformation: not as a playful act but as a technology-assisted crime with serious consequences.
The Role of Verifiable Information and Trusted Sources
Such incidents underline the need for verifiable information and trusted sources at the centre of the digital era. Authorities should guide and enable citizens to rely on official websites, verified social media accounts, government portals, and press releases for authentication.
Platform Responsibility and Digital Literacy
The spread of misinformation poses a significant challenge for social media platforms, which frequently amplify highly engaging content. Platforms can limit the damage by labelling unverified material, restricting its sharing, and cooperating with authorities on fact-checking. Equally crucial is public awareness of how digital platforms work, since even the unintentional dissemination of fake “official” material can carry legal and social repercussions. The Himachal Pradesh government’s advisory is a welcome step, but sustained public education remains necessary.
Legal Accountability as a Deterrent
The active involvement of the Cyber Crime Cells makes clear that digital misinformation, especially involving government documents, will face severe consequences. Establishing legal accountability acts as a deterrent and reiterates that the right to free expression does not cover the right to deceive or to undermine public institutions. Nonetheless, for enforcement to be effective, it must be accompanied by preventive measures such as clear communication, strong governance, and public trust-building. Consistent enforcement against digital misinformation can contribute to greater accountability within society, and digital literacy programmes should be conducted periodically for netizens and institutions.
Conclusion
The fake calendar incident in Himachal Pradesh is a signal for authorities to adopt accurate and proactive communication strategies. Countering misinformation is possible only through the shared participation of governments, digital platforms, citizens, and civil society. The ultimate goal is to preserve public trust and the integrity of information in democratic processes.