# Fact Check: Pakistan’s Airstrike Claim Uses Video Game Footage
Executive Summary:
A widely circulated claim on social media, including a post from the official X account of Pakistan, alleges that the Pakistan Air Force (PAF) carried out an airstrike on India, supported by a viral video. However, according to our research, the video used in these posts is actually footage from the video game Arma-3 and has no connection to any real-world military operation. The use of such misleading content contributes to the spread of false narratives about a conflict between India and Pakistan and has the potential to create unnecessary fear and confusion among the public.

Claim:
Viral social media posts, including one from the official Government of Pakistan X handle, claim that the PAF launched a successful airstrike against Indian military targets. The footage accompanying the claim shows jets firing missiles and explosions on the ground. The video is presented as recent, factual evidence of heightened military tensions.


Fact Check:
As per our research using reverse image search, the videos circulating online that claim to show Pakistan launching an attack on India under the name 'Operation Sindoor' are misleading. There is no credible evidence or reliable reporting to support the existence of any such operation. The Press Information Bureau (PIB) has also verified that the video being shared is false and misleading. During our research, we also came across footage from the video game Arma-3 on YouTube, which appears to have been repurposed to create the illusion of a real military conflict. This strongly indicates that fictional content is being used to propagate a false narrative. The likely intention behind this misinformation is to spread fear and confusion by portraying a conflict that never actually took place.


Conclusion:
The widely shared videos do not depict a real PAF airstrike on India; they are repurposed video game footage. There is no reliable evidence to support the claim, and according to authorities and fact-checking groups, no such operation is occurring. Such false information must be countered quickly because it has the potential to cause needless panic.
- Claim: Viral social media posts claim PAF attack on India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
With the rise of AI deepfakes and manipulated media, it has become difficult for the average internet user to know what they can trust online. Synthetic media can have serious consequences, from virally spreading election disinformation or medical misinformation to revenge porn and financial fraud. Recently, a Pune man lost ₹43 lakh when he invested money based on a deepfake video of Infosys founder Narayana Murthy. In another case, a woman from Assam, known online as 'Babydoll Archi', had her likeness deepfaked by an ex-boyfriend to create revenge porn.
Image or video manipulation used to leave observable traces. Online sources may advise examining the edges of objects in the image, checking for inconsistent patterns, lighting differences, observing the lip movements of the speaker in a video or counting the number of fingers on a person’s hand. Unfortunately, as the technology improves, such folk advice might not always help users identify synthetic and manipulated media.
The Coalition for Content Provenance and Authenticity (C2PA)
One interesting project in the area of trust-building under these circumstances has been the Coalition for Content Provenance and Authenticity (C2PA). Formed in 2021 by a coalition including Adobe and Microsoft, C2PA is a collaboration between major players in AI, social media, journalism, and photography, among others. It set out to create a standard that lets publishers prove the authenticity of digital media and track changes as they occur.
When photos and videos are captured, they generally store metadata like the date and time of capture, the location, the device it was taken on, etc. C2PA developed a standard for sharing and checking the validity of this metadata, and adding additional layers of metadata whenever a new user makes any edits. This creates a digital record of any and all changes made. Additionally, the original media is bundled with this metadata. This makes it easy to verify the source of the image and check if the edits change the meaning or impact of the media. This standard allows different validation software, content publishers and content creation tools to be interoperable in terms of maintaining and displaying proof of authenticity.
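The layered edit record described above can be illustrated with a small sketch. This is not the actual C2PA manifest format (C2PA uses signed, structured manifests); it is a simplified, hypothetical hash chain showing the underlying idea: each edit record commits to the previous record's hash, so any later tampering with the history becomes detectable.

```python
import hashlib
import json

def record_edit(chain, editor, action):
    """Append an edit record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "origin"
    record = {"editor": editor, "action": action, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain):
    """Recompute every hash; tampering with any earlier record breaks the links."""
    prev_hash = "origin"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
record_edit(chain, "camera", "capture")
record_edit(chain, "editor-app", "crop")
print(verify_chain(chain))         # True: the edit trail validates
chain[0]["action"] = "synthesise"  # tamper with the recorded history
print(verify_chain(chain))         # False: the chain no longer validates
```

A plain hash chain like this only detects accidental or naive tampering; the real C2PA standard additionally applies digital signatures so that an attacker cannot simply recompute the whole chain after editing it.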

The standard is intended to be used on an opt-in basis and can be likened to a nutrition label for digital media. Importantly, it does not limit the creativity of fledgling photo editors or generative AI enthusiasts; it simply provides consumers with more information about the media they come across.
Could C2PA be Useful in an Indian Context?
The World Economic Forum’s Global Risks Report 2024 identifies India as a significant hotspot for misinformation. The recent AI Regulation report by MeitY indicates an interest in tools for watermarking AI-based synthetic content for ease of detecting and tracking harmful outcomes. Perhaps C2PA can be useful in this regard, as it takes a holistic approach to tracking media manipulation, even in cases where AI is not the medium.
Currently, 26 India-based organisations, such as the Times of India and Truefy AI, have signed up to the Content Authenticity Initiative (CAI), a community that contributes to the development and adoption of tools and standards like C2PA. However, people increasingly use platforms like WhatsApp and Instagram as sources of information; both are owned by Meta, which has not yet implemented the standard in its products.
India also has low digital literacy rates and low resistance to misinformation. Part of the challenge would be teaching people how to read this nutrition label, so that they can make better decisions online. As such, C2PA is just one part of an online trust-building strategy: education around digital literacy and policy around organisational adoption of the standard must also be part of it.
The standard is also not foolproof. Current iterations may still struggle when presented with screenshots of digital media and other non-technical digital manipulation. Linking media to their creator may also put journalists and whistleblowers at risk. Actual use in context will show us more about how to improve future versions of digital provenance tools, though these improvements are not guarantees of a safer internet.
The largest advantage of C2PA adoption would be the democratisation of fact-checking infrastructure. Since media is shared at a significantly faster rate than it can be verified by professionals, putting the verification tools in the hands of people makes the process a lot more scalable. It empowers citizen journalists and leaves a public trail for any media consumer to look into.
Conclusion
From basic colour filters that make a scene more engaging, to removing a crowd from a social media post, to splicing together videos of a politician so it sounds like they are singing a song, we have grown accustomed to the media we consume being altered in some way. The C2PA is just one way to bring transparency to how media is altered. It is not a one-stop solution, but it is a viable starting point for creating a fairer, more democratic internet and increasing trust online. While there are risks to its adoption, it is promising to see organisations across different sectors collaborating on this project to be more transparent about the media we consume.
References
- https://c2pa.org/
- https://contentauthenticity.org/
- https://indianexpress.com/article/technology/tech-news-technology/kate-middleton-9-signs-edited-photo-9211799/
- https://photography.tutsplus.com/articles/fakes-frauds-and-forgeries-how-to-detect-image-manipulation--cms-22230
- https://www.media.mit.edu/projects/detect-fakes/overview/
- https://www.youtube.com/watch?v=qO0WvudbO04
- https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- https://indianexpress.com/article/technology/tech-news-technology/ai-law-may-not-prescribe-penal-consequences-for-violations-9457780/
- https://thesecretariat.in/article/meity-s-ai-regulation-report-ambitious-but-no-concrete-solutions
- https://www.ndtv.com/lifestyle/assam-what-babydoll-archi-viral-fame-says-about-india-porn-problem-8878689
- https://www.meity.gov.in/static/uploads/2024/02/9f6e99572739a3024c9cdaec53a0a0ef.pdf

Introduction
In the labyrinthine expanse of the digital age, where the ethereal threads of our connections weave a tapestry of virtual existence, there lies a sinister phenomenon that preys upon the vulnerabilities of human emotion and trust. This phenomenon, known as cyber kidnapping, recently ensnared a 17-year-old Chinese exchange student, Kai Zhuang, in its deceptive grip, leading to an $80,000 extortion from his distraught parents. The chilling narrative of Zhuang, found cold and scared in a tent in the Utah wilderness, serves as a stark reminder of the pernicious reach of cybercrime.
The Cyber Kidnapping
The term 'cyber kidnapping' typically denotes a form of cybercrime where malefactors gain unauthorised access to computer systems or data, holding it hostage for ransom. Yet, in the context of Zhuang's ordeal, it took on a more harrowing dimension—a psychological manipulation through online communication that convinced his family of his peril, despite his physical safety before the scam.
The Incident
The incident unfolded like a modern-day thriller, with Zhuang's parents in China alerting officials at his host high school in Riverdale, Utah, of his disappearance on 28 December 2023. A meticulous investigation ensued, tracing bank records, purchases, and phone data, leading authorities to Zhuang's isolated encampment, 25 miles north of Brigham City. In the frigid embrace of Utah's winter, Zhuang awaited rescue, armed only with a heat blanket, a sleeping bag, limited provisions, and the very phones used to orchestrate his cyber kidnapping.
Upon his rescue, Zhuang's first requests were poignantly human—a warm cheeseburger and a conversation with his family, who had been manipulated into paying the hefty ransom during the cyber-kidnapping scam. This incident not only highlights the emotional toll of such crimes but also the urgent need for awareness and preventative measures.
The Aftermath
To navigate the treacherous waters of cyber threats, one must adopt the scepticism of a seasoned detective when confronted with unsolicited messages that reek of urgency or threat. The verification of identities becomes a crucial shield, a bulwark against deception. Sharing sensitive information online is akin to casting pearls before swine, where once relinquished, control is lost forever. Privacy settings on social media are the ramparts that must be fortified, and the education of family and friends becomes a communal armour against the onslaught of cyber threats.
The Chinese embassy in Washington has sounded the alarm, warning its citizens in the U.S. about the risks of 'virtual kidnapping' and other online frauds. This scam is but one fragment of a larger criminal mosaic that threatens to ensnare parents worldwide.
Kai Zhuang's story, while unique in its details, is not an isolated event. Experts warn that technological advancements have made it easier for criminals to pursue cyber kidnapping schemes. The impersonation of loved ones' voices using artificial intelligence, the mining of social media for personal data, and the spoofing of phone numbers are all tools in the cyber kidnapper's arsenal.
The Way Forward
The crimes have evolved, targeting not just the vulnerable but also those who might seem beyond reach, demanding larger ransoms and leaving a trail of psychological devastation in their wake. Cybercrime, as one expert chillingly notes, may well be the most lucrative of crimes, transcending borders, languages, and identities.
In the face of such threats, awareness is the first line of defence. Reporting suspicious activity to the FBI's Internet Crime Complaint Center, verifying the whereabouts of loved ones, and establishing emergency protocols are all steps that can fortify one's digital fortress. Telecommunications companies and law enforcement agencies also have a role to play in authenticating and tracing the source of calls, adding another layer of protection.
Conclusion
The surreal experience of reading about cyber kidnapping belies the very real danger it poses. It is a crime that thrives in the shadows of our interconnected world, a reminder that our digital lives are as vulnerable as our physical ones. As we navigate this complex web, let us arm ourselves with knowledge, vigilance, and the resolve to protect not just our data, but the very essence of our human connections.
References
- https://www.bbc.com/news/world-us-canada-67869517
- https://www.ndtv.com/feature/what-is-cyber-kidnapping-and-how-it-can-be-avoided-4792135

Introduction
In the era of digitalisation, social media has become an essential part of our lives, with people spending a lot of time documenting every moment on these platforms. Social media networks such as WhatsApp, Facebook, and YouTube have emerged as significant sources of information. However, the proliferation of misinformation is alarming, since misinformation can have grave consequences for individuals, organisations, and society as a whole. Misinformation spreads rapidly via social media, reaching and influencing large audiences. Bad actors exploit platform algorithms for their own benefit or agenda, using tactics such as clickbait headlines, emotionally charged language, and coordinated amplification to spread false information.
Impact
The impact of misinformation on our lives is devastating, affecting individuals, communities, and society as a whole. False or misleading health information can have serious consequences: it can lead people to rely on unproven remedies, and vaccine misinformation can result in serious illness, disability, or even death. Misinformation about a financial scheme or investment can lead to poor financial decisions, bankruptcy, and the loss of long-term savings.
In a democratic nation, misinformation can distort the formation of political opinion; misinformation spread on social media during elections can affect voter behaviour, damage trust, and cause political instability.
Mitigating strategies
Minimising the spread of misinformation requires a multi-faceted approach. Key strategies include promoting media literacy and critical thinking, verifying information before sharing, holding social media platforms accountable, regulating misinformation, supporting critical research, and fostering healthy means of communication to build a resilient society.
Breaking the cycle of misinformation and moving towards a better future will require coordinated action from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.
The widespread circulation of false information on social media platforms presents serious problems for people, groups, and society as a whole. The deeper we go into the nuances of this problem, the clearer it becomes that battling false information necessitates a thorough and multifaceted strategy.
Encouraging consumers to develop media literacy and critical thinking abilities is essential to preventing the spread of false information. Education equips people to distinguish reliable sources from false information, and the ability to assess information critically enables them to make informed choices about the content they share and consume. Initiatives that improve media literacy should be promoted through public awareness campaigns and included in school curricula.
Ways to Stop Misinformation
As we have seen, misinformation can have serious implications. Minimising or stopping its spread requires a multifaceted approach; here are some strategies to combat misinformation.
- Promote Media Literacy with Critical Thinking: Educate individuals on how to critically evaluate information, fact-check claims, and recognise common tactics used to spread misinformation. Users must apply critical thinking before forming an opinion and before sharing content.
- Verify Information: Encourage people to verify information before sharing it, especially if it seems sensational or controversial, and to consume news from reputable sources that follow ethical journalistic standards.
- Accountability: Advocate for social media networks' openness and responsibility in the fight against misinformation. Encourage platforms to put in place procedures to detect and delete fraudulent content while boosting credible sources.
- Regulate Misinformation: Given the current situation, it is important to advocate for policies and regulations that address the spread of misinformation while safeguarding freedom of expression, and to promote transparency in online communication by identifying the source of information and disclosing any conflicts of interest.
- Support Critical Research: Invest in research on the sources, impacts, and remedies of misinformation. Support collaborative initiatives by social scientists, psychologists, journalists, and technologists to create evidence-based techniques for countering misinformation.
Conclusion
To prevent the cycle of misinformation and move towards responsible use of the Internet, we must create strategies to combat the spread of false information. This will require coordinated actions from individuals, communities, tech companies, and institutions to promote a culture of information accuracy and responsible behaviour.