# Fact Check: Pakistan’s Airstrike Claim Uses Video Game Footage
Executive Summary:
A widely circulated claim on social media, including a post from the official X account of the Government of Pakistan, alleges that the Pakistan Air Force (PAF) carried out an airstrike on India, supported by a viral video. However, our research found that the video used in these posts is footage from the video game Arma 3 and has no connection to any real-world military operation. The use of such misleading content fuels false narratives about a conflict between India and Pakistan and has the potential to create unnecessary fear and confusion among the public.

Claim:
Viral social media posts, including one from the official Government of Pakistan X handle, claim that the PAF launched a successful airstrike against Indian military targets. The footage accompanying the claim shows jets firing missiles and explosions on the ground. The video is presented as recent, factual evidence of heightened military tensions.


Fact Check:
As per our research using reverse image search, the videos circulating online that claim to show Pakistan launching an attack on India under the name 'Operation Sindoor' are misleading. There is no credible evidence or reliable reporting to support the existence of any such operation. The Press Information Bureau (PIB) has also confirmed that the video being shared is false and misleading. During our research, we also came across footage from the video game Arma 3 on YouTube, which appears to have been repurposed to create the illusion of a real military conflict. This strongly indicates that fictional content is being used to propagate a false narrative, most likely to spread fear and confusion by portraying a conflict that never actually took place.
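As a rough illustration of the first step in this kind of verification, the short Python sketch below extracts frames from a video clip at fixed intervals so that each frame can be run through a reverse image search engine. The file name and interval are placeholders rather than details from the actual investigation, and the snippet assumes the opencv-python package is installed.

```python
# Sketch: extract frames from a viral clip so they can be uploaded to a
# reverse image search engine. "viral_clip.mp4" is a placeholder path.
import cv2


def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved, frame_index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % step == 0:
            name = f"frame_{frame_index:06d}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
        frame_index += 1
    cap.release()
    return saved


if __name__ == "__main__":
    frames = extract_keyframes("viral_clip.mp4")
    print(f"Extracted {len(frames)} frames for manual reverse image search.")
```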


Conclusion:
The widely shared videos do not show a Pakistani airstrike on India; they are repurposed video game footage being used to spread false information. There is no reliable evidence to support the claim, and the visuals are misleading and unrelated to any real operation. Such false information must be countered quickly because it has the potential to cause needless panic. According to authorities and fact-checking groups, no such operation has taken place.
- Claim: Viral social media posts claim PAF attack on India
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

THREE CENTRES OF EXCELLENCE IN ARTIFICIAL INTELLIGENCE:
India’s Finance Minister, Mrs. Nirmala Sitharaman, with a vision of ‘Make AI in India’ and ‘Make AI work for India,’ announced during the presentation of the Union Budget 2023 that the Indian Government is planning to set up three ‘Centres of Excellence’ for Artificial Intelligence in top educational institutions to revolutionise fields such as health, agriculture, etc.
Under ‘Amrit Kaal,’ the Budget 2023 is a stepping stone by the government towards a technology-driven, knowledge-based economy. The seven priorities set by the government, called ‘Saptarishi’ (inclusive development, reaching the last mile, infrastructure and investment, unleashing the potential, green growth, youth power, and the financial sector), will guide the nation in this endeavor, along with leading industry players that will partner in conducting interdisciplinary research, developing cutting-edge applications, and building scalable solutions in these areas.
The government has already formed a roadmap for AI in the nation through MeitY, NASSCOM, and DRDO, indicating that the AI revolution is already underway. The Centre for Artificial Intelligence and Robotics (CAIR) has already been established for AI-related research and development, and AI is currently being applied to biometric identification, facial recognition, criminal investigation, crowd and traffic management, agriculture, healthcare, education, and other areas.
A task force on artificial intelligence (AI) was also established on August 24, 2017. The government had promised to set up Centres of Excellence (CoEs) for research, education, and skill development in robotics, artificial intelligence (AI), digital manufacturing, big data analytics, quantum communication, and the Internet of Things (IoT), and the announcement in the current Union Budget is a step towards fulfilling that promise.
The government has also announced the development of 100 labs in engineering institutions for developing applications using 5G services that will collaborate with various authorities, regulators, banks, and other businesses.
The aim of developing such labs is to create new business models and employment opportunities. Among other things, they will enable smart classrooms, precision farming, intelligent transport systems, and healthcare applications; new pedagogy, curricula, continual professional development dipstick surveys, and ICT implementation will also be introduced for training teachers.
POSSIBLE ROLES OF AI:
The use of AI in top educational institutions will help students learn at their own pace, with AI algorithms providing customised feedback and recommendations based on their performance. It can also help students identify their strengths and weaknesses so they can focus their study efforts more effectively and efficiently, and it will help train students in AI, making the country future-ready.
In healthcare, agriculture, and sustainable cities, the main focus would be researching and developing practical AI applications for these sectors. In healthcare, AI can help medical professionals diagnose diseases faster and more accurately by analysing medical images and patient data. It can also be used to identify the most effective treatments for specific patients based on their genetic and medical history.
Artificial Intelligence (AI) has the potential to revolutionise the agriculture industry by improving yields, reducing costs, and increasing efficiency. AI algorithms can collect and analyse data on soil moisture, crop health, and weather patterns to optimise crop management practices; they can also monitor the health and well-being of livestock, predict potential health issues, and increase productivity. These algorithms can identify and target weeds and pests, reducing the need for harmful chemicals and increasing sustainability.
ROLE OF AI IN CYBERSPACE:
Artificial Intelligence (AI) plays a crucial role in cyberspace. AI technology can enhance security in cyberspace, prevent cyber-attacks, detect and respond to security threats, and improve overall cybersecurity. Some of the specific applications of AI in cyberspace include the following (a minimal code sketch of the underlying anomaly detection idea appears after the list):
- Intrusion Detection: AI-powered systems can analyse large amounts of data and detect signs of potential cyber-attacks.
- Threat Analysis: AI algorithms can help identify patterns of behaviour that may indicate a potential threat and then take appropriate action.
- Fraud Detection: AI can identify and prevent fraudulent activities, such as identity theft and phishing, by analysing large amounts of data and detecting unusual behaviour patterns.
- Network Security: AI can monitor and secure networks against potential cyber-attacks by detecting and blocking malicious traffic.
- Data Security: AI can be used to protect sensitive data and ensure that it is only accessible to authorised personnel.
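As a minimal illustration of the anomaly detection idea behind several of the applications listed above (intrusion detection, fraud detection, and blocking malicious traffic), the Python sketch below trains an Isolation Forest from scikit-learn on a handful of "normal" records and flags unusual new events. The feature values are synthetic and purely illustrative, not real network data.

```python
# Sketch: anomaly-based detection of unusual network records with an
# Isolation Forest. All feature values below are synthetic examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, failed_logins] (illustrative features)
normal_traffic = np.array([
    [500, 1200, 0],
    [450, 1100, 0],
    [520, 1300, 1],
    [480, 1250, 0],
])

new_events = np.array([
    [510, 1180, 0],     # looks like ordinary traffic
    [90000, 150, 25],   # large upload plus many failed logins
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies
for event, label in zip(new_events, model.predict(new_events)):
    status = "suspicious" if label == -1 else "normal"
    print(event, "->", status)
```

In a real deployment, the features would come from network logs or transaction records, and flagged events would feed an analyst's review queue rather than triggering automatic blocking.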
CONCLUSION:
Introducing AI in top educational institutions and partnering with leading industries will prove to be a stepping stone in revolutionising the country's development, as Artificial Intelligence (AI) has the potential to play a significant role in a nation's progress by improving various sectors and addressing societal challenges. Overall, we hope to see increased efficiency and productivity across industries, leading to greater economic growth and job creation; improved delivery of healthcare services through better access to care and better patient outcomes; and more accessible and effective education. In this way, AI can improve various sectors of a country and contribute to its overall development and progress. However, it is important to ensure that AI is developed and used ethically, considering its potential consequences and impact on society.

EXECUTIVE SUMMARY:
A viral video claims to capture an aerial view of Mount Kailash, with breathtaking scenery that apparently provides a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, as if exposing viewers to the natural beauty of the hallowed mountain. The video was circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulated on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created by Sonam and Namgyal, two Tibet-based graphic artists, using Midjourney. The advanced digital techniques used give the video a realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Besides, several visual aspects, including lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. It was found to be AI-generated.
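As a rough sketch of how such a check could be automated, the Python snippet below submits a single extracted frame to a generic AI-content detection endpoint. The URL, authentication header, and response field are hypothetical placeholders and do not represent Hive Moderation's actual API.

```python
# Sketch: submit one extracted frame to an AI-content detection service.
# The endpoint, header, and response field below are hypothetical
# placeholders, not the actual Hive Moderation API.
import requests

API_URL = "https://example-detector.invalid/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                   # placeholder credential


def check_frame(image_path: str) -> float:
    """Return the service's 'AI-generated' score for a single frame."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assume the hypothetical service returns {"ai_generated_score": <0..1>}.
    return response.json()["ai_generated_score"]


if __name__ == "__main__":
    score = check_frame("kailash_frame.png")  # placeholder file name
    print("Likely AI-generated" if score > 0.5 else "Likely real", score)
```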

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Introduction
As we delve deeper into the intricate, almost esoteric digital landscape of the 21st century, we are confronted by a new and troubling phenomenon that threatens the very bastions of our personal security. This is not a mere subplot in some dystopian novel but a harsh and palpable reality firmly rooted in today's technologically driven society. We must grapple with the consequences of the alarming evolution of cyber threats, particularly the sophisticated use of artificial intelligence in creating face swaps, a technique now cleverly harnessed by nefarious actors to undermine the bedrock of biometric security systems.
What is GoldPickaxe?
It was amidst the hum of countless servers and data centers that the term 'GoldPickaxe' began to echo, sending shivers down the spines of cybersecurity experts. Originating from the intricate web spun by a group of Chinese hackers, as reported in Dark Reading, GoldPickaxe represents the latest in a long lineage of digital predators. It is an astute embodiment of disguise, blending into the digital environment as a seemingly harmless government service app. But behind its innocuous facade, it bears the intent to ensnare and deceive, with the elderly demographic being especially susceptible to its trap.
Victims, unassuming and trustful, are cajoled into revealing their most sensitive information: phone numbers, private details, and, most alarmingly, their facial data. These virtual reflections, intended to be the safeguard of one's digital persona, are snatched away and misused in a perilous transformation. The attackers harness such biometric data, feeding it into the arcane furnaces of deepfake technology, wherein AI face-swapping crafts eerily accurate and deceptive facsimiles. These digital doppelgängers become the master keys, effortlessly bypassing the sentinel eyes of facial recognition systems that lock the vaults of Southeast Asia's financial institutions.
Through the diligent and unyielding work of the research team at Group-IB, the trajectory of one victim's harrowing ordeal, a Vietnamese individual pilfered of a life-altering $40,000, sheds light on the severity of this technological betrayal. The advancements in deepfake face-swapping technology, once seen as a marvel of AI, now present a clear and present danger, outpacing the mechanisms meant to deter unauthorized access and leaving the unenlightened multitude unaware and exposed.
Adding weight to the discussion, an expert in biometric technology commented in a somber tone: 'This is why we see face swaps as a tool of choice for hackers. It gives the threat actor this incredible level of power and control.' This chilling testament to the potency of digital fraudulence further emphasizes that even seemingly impregnable ecosystems, such as Apple's, are not beyond the reach of these relentless invaders.
New Threat
Emerging from this landscape is a doppelgänger of GoldPickaxe tailored specifically for iOS: GoldDigger's mutation into GoldPickaxe for Apple's hallowed platform is nothing short of a wake-up call. It engenders not just a single threat but an evolving suite of menaces, including its uncanny offspring, 'GoldDiggerPlus,' which wields the terrifying power to piggyback on real-time communications of the affected devices. Continuously refined and updated, these threats become chimeras, each iteration more elusive and more formidable than its predecessor.
One ingenious and insidious tactic exploited by these cyber adversaries is the diversionary use of Apple's TestFlight, a trusted beta testing platform, as a trojan horse for their malware. Upon clampdown by Apple, the hackers, exhibiting an unsettling level of adaptability, inveigle users to endorse MDM profiles, hitherto reserved for corporate device management, thereby chaining these unknowing participants to their will.
How To Protect
Against this stark backdrop, the question of how one might armor oneself against such predation looms large. It is a question with no simple answer, demanding vigilance and proactive measures.
- General Vigilance: Aware of the Trojan's advance, Apple is striving to devise countermeasures, yet individuals can take concrete steps to safeguard their digital lives.
- Consider Lockdown Mode: It is imperative to exhibit discernment with TestFlight installations, to warily examine MDM profiles, and to seriously consider the protective embrace of Lockdown Mode. Activating Lockdown Mode on an iPhone is akin to drawing the portcullis and manning the battlements of one's digital stronghold. The process is straightforward: a journey to the Settings menu, a descent into Privacy & Security, and finally, the sanctification of Lockdown Mode, followed by a device restart. It is a curtailment of convenience, yes, but a potent defense against the malevolence lurking in the unseen digital thicket.
As 'GoldPickaxe' insidiously carves its path into the iOS realm—a rare and unsettling occurrence—it flags the possible twilight of the iPhone's vaunted reputation for tight security. Should these shadow operators set their sights beyond Southeast Asia, angling their digital scalpels towards the U.S., Canada, and other English-speaking enclaves, the consequences could be dire.
Conclusion
Thus, it is imperative that as digital citizens, we fortify ourselves with best practices in cybersecurity. Our journey through cyberspace must be cautious, our digital trails deliberate and sparse. Let the specter of iPhone malware serve as a compelling reason to arm ourselves with knowledge and prudence, the twin guardians that will let us navigate the murky waters of the internet with assurance, outwitting those who weave webs of deceit. In heeding these words, we preserve not only our financial assets but the sanctity of our digital identities against the underhanded schemes of those who would see them usurped.
References
- https://www.timesnownews.com/technology-science/new-ios-malware-stealing-face-id-data-bank-infos-on-iphones-how-to-protect-yourself-article-107761568
- https://www.darkreading.com/application-security/ios-malware-steals-faces-defeat-biometrics-ai-swaps
- https://www.tomsguide.com/computing/malware-adware/first-ever-ios-trojan-discovered-and-its-stealing-face-id-data-to-break-into-bank-accounts