#FactCheck - Digitally Altered Video of Olympic Medalist Arshad Nadeem’s Independence Day Message
Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day, allegedly with snoring audio in the background, is going viral. The CyberPeace Research Team found that the viral video was digitally edited to add the snoring sound in the background. The original video published on Arshad Nadeem's Instagram account has no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing the people of Pakistan a happy Independence Day has snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly checked the video and then analyzed it in TrueMedia, an AI video detection tool, which found little evidence of manipulation of the voice or the face, consistent with only the background audio having been altered.


We then checked Arshad Nadeem's social media accounts and found the video uploaded on his Instagram account on 14th August 2024. In that video, no snoring sound can be heard.
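For illustration, the audio of the two versions can also be compared programmatically. The sketch below is a hypothetical check, not the tool used in this fact check; it assumes the audio tracks of the verified Instagram clip and the viral clip have already been extracted locally (for example with ffmpeg) as original.wav and viral.wav, and it compares frame-level loudness to surface any added background sound such as snoring.

```python
# Hypothetical verification sketch: compare per-frame loudness of the verified
# clip's audio against the viral clip's audio. File names are assumptions, and
# the audio is expected to be pre-extracted, e.g. `ffmpeg -i clip.mp4 clip.wav`.
import librosa
import numpy as np

def frame_rms(path, sr=16000):
    """Load a mono audio file and return its per-frame RMS energy."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    return librosa.feature.rms(y=y, frame_length=2048, hop_length=512)[0]

original = frame_rms("original.wav")  # audio from the verified Instagram post
viral = frame_rms("viral.wav")        # audio from the clip circulating on X

# Compare over the overlapping duration: frames where the viral clip is
# noticeably louder than the original point to an added background track.
n = min(len(original), len(viral))
louder = int(np.sum(viral[:n] > original[:n] + 0.01))
print(f"{louder} of {n} frames are louder in the viral clip than in the original")
```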

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is digitally altered. The CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim false and misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading

Introduction
Global cybersecurity spending is expected to exceed USD 210 billion in 2025, a ~10% increase from 2024 (Gartner). This is the result of an evolving and increasingly critical threat landscape, enabled by factors such as the proliferation of IoT devices, the adoption of cloud networks, and the growing size of the internet itself. Yet breaches, misuse, and resistance persist. In 2025, global attack pressure rose ~21% year-on-year based on Q2 averages (CheckPoint), and confirmed breaches climbed ~15% (Verizon DBIR). This suggests that rising investment in cybersecurity may not be yielding proportionate reductions in risk. But while mechanisms to strengthen technical defences and regulatory frameworks are constantly evolving, the social element of trust, and how to embed it into cybersecurity systems, remains largely overlooked.
Human Error and Digital Trust (Individual Trust)
Human error is consistently recognised as the weakest link in cybersecurity. While campaigns focusing on phishing prevention, urging password updates, and promoting two-factor authentication (2FA) exist, relying solely on awareness measures to address human error in cyberspace is like putting a Band-Aid on a bullet wound. Rather, the problem needs to be examined through the lens of digital trust. As Chui (2022) notes, digital trust rests on security, dependability, integrity, and authenticity. These factors determine whether users comply with cybersecurity protocols. When people view rules as opaque, inconvenient, or imposed without accountability, they are more likely to cut corners, which creates vulnerabilities. Therefore, building digital trust means shifting the focus from blaming people to improving design: embedding transparency, usability, and shared responsibility into a culture of cybersecurity so that users are incentivised to make secure choices.
Organisational Trust and Insider Threats (Institutional Trust)
At the organisational level, compliance with cybersecurity protocols is significantly tied to whether employees trust employers and platforms to safeguard their data and treat them with integrity. Insider threats, stemming from both malicious and non-malicious actors, account for nearly 60% of all corporate breaches (Verizon DBIR 2024). A lack of trust in leadership may cause employees to feel disengaged or even act maliciously. Further, a 2022 Harvard Business Review study finds that adhering to cybersecurity protocols adds to employee workload. When protocols are perceived as hindering productivity, employees are more likely to intentionally violate them. The stress of working under surveillance systems that feel cumbersome or unreasonable, especially when working remotely, also reduces employee trust and, hence, compliance.
Trust, Inequality, and Vulnerability (Structural Trust)
Cyberspace encompasses a social system of its own, since it involves patterned interactions and relationships between human beings. It also reproduces the social structures and resultant vulnerabilities of the physical world. As a result, different sections of society place varying levels of trust in digital systems. Women, rural, and marginalised groups often distrust existing digital security provisions more, and with reason: they are targeted disproportionately by cyber attackers, yet remain underprotected by systems designed with urban, male, or elite users in mind. This leads citizens to adopt workarounds like password sharing for “safety” and to disengage from the cyber safety discourse, as they find existing systems inaccessible or irrelevant to their realities. Cybersecurity governance that ignores these divides deepens exclusion and mistrust.
Laws and Compliances (Regulatory Trust)
Cybersecurity governance is operationalised in the form of laws, rules, and guidelines. However, these may backfire when inadequately designed, reducing overall trust in governance mechanisms. For example, CERT-In’s mandate to report breaches within six hours of “noticing” them has been criticised on the grounds that such a steep timeframe is insufficient to generate an effective breach analysis report. Further, the multiplicity of regulatory frameworks in cross-border interactions can be costly and lead to compliance fatigue for organisations. Such factors can undermine organisational and user trust in the regulation's ability to protect them from cyber attacks, fuelling a box-ticking culture in cybersecurity.
Conclusion
Cybersecurity is addressed today primarily through code, firewalls, and compliance. But evidence suggests that technological and regulatory fixes, while essential, are insufficient to guarantee secure behaviour and resilient systems. Without trust in institutions, technologies, laws, or each other, cybersecurity governance will remain a cat-and-mouse game. Building a trust-based architecture requires mechanisms to improve accountability, reliability, and transparency. It requires participatory design of security systems and the recognition of unequal vulnerabilities. Thus, unless cybersecurity governance acknowledges that cyberspace is deeply social, investment may not be able to prevent the harms it seeks to curb.
References
- https://www.gartner.com/en/newsroom/press-releases/2025-07-29
- https://blog.checkpoint.com/research/global-cyber-attacks-surge-21-in-q2-2025
- https://www.verizon.com/business/resources/reports/2024-dbir-executive-summary.pdf
- https://www.verizon.com/business/resources/reports/2025-dbir-executive-summary.pdf
- https://insights2techinfo.com/wp-content/uploads/2023/08/Building-Digital-Trust-Challenges-and-Strategies-in-Cybersecurity.pdf
- https://www.coe.int/en/web/cyberviolence/cyberviolence-against-women
- https://www.upguard.com/blog/indias-6-hour-data-breach-reporting-rule

Introduction
The US National Cybersecurity Strategy was released at the beginning of March this year. Its aim is to build a more defensible and resilient digital ecosystem through broad investment in cybersecurity infrastructure. The strategy emphasises investing in a resilient future, expanding digital diplomacy and private-sector partnerships, regulating critical industries, and holding software companies accountable if their products let hackers in.
What is a cybersecurity strategy?
A cybersecurity strategy is the plan an organisation or government pursues to defend against cyberattacks and cyber threats, coupled with a forward-looking risk assessment designed to keep those defences resilient. A well-designed cybersecurity strategy ensures appropriate defences against cyber threats.
US National Cybersecurity Strategy-
The National Cybersecurity Strategy rests on five pillars:
- Defend critical infrastructure: The strategy intends to protect essential infrastructure, such as hospitals and clean energy installations, from cyberattacks. This pillar focuses on the security and resilience of critical systems and services.
- Disrupt and dismantle threat actors: This pillar seeks to address and eliminate cyber attackers who endanger national security and public safety.
- Shape market forces to drive security and resilience.
- Invest in a resilient future.
- Forge international partnerships to pursue shared goals.
Need for a National Cybersecurity Strategy in India –
India is becoming more reliant on technology for day-to-day activities such as communication and banking. As per the Indian Computer Emergency Response Team (CERT-In), ransomware attacks in India increased by 50% in 2022, and cybercrimes against individuals are also rising rapidly. To build a safe cyberspace, India too requires a national cybersecurity strategy that develops trust and confidence in IT systems.
Learnings for India-
While India is still shaping its own national cybersecurity strategy, it can draw on the strategy the US has just released. Stronger threat assessment and more resilient future outcomes will require countering cybercrimes and cyber threats in India.
Shortcomings of the US National Cybersecurity Strategy-
The implementation of the United States National Cybersecurity Strategy has some shortcomings and areas that could be improved, including the following:
- Significant difficulties: The strategy has proved difficult for government entities to operationalise, and the guidelines provided do not match the complexity of growing cyber threats.
- Gaps in key aspects: The implementation plan fails to resolve some aspects that a national cybersecurity strategy is expected to address, such as clearly defined goals and resource allocation.
- Lack of specific objectives: The guidelines should track cybersecurity progress, and the implementation plan should define specific, measurable objectives.
- Implementation alone is insufficient: Cyberattacks and cybercrimes are increasing daily, and the strategy cannot rely on implementation alone; supporting legislation, public-private collaboration, and continued technological advancement are also required.
- Undefined enforcement: The strategy calls for critical infrastructure owners and software companies to meet minimum security standards and be held liable for flaws in their products, but how these standards and liability measures will be implemented and enforced is yet to be clearly defined.
Conclusion
There is a legitimate need for a national cybersecurity strategy to guard against the future consequences of a cyber pandemic and to plan proper strategies and defences. India is increasingly dependent on technology, and cybercrimes are rising against individuals as well as in the healthcare and education sectors. Resolving these complexities requires a well-designed strategy and its proper implementation.

A Foray into the Digital Labyrinth
In our digital age, the silhouette of truth is often obfuscated by a fog of technological prowess and cunning deception. With each passing moment, the digital expanse sprawls wider, and within it, synthetic media, known most infamously as 'deepfakes', emerge like phantoms from the machine. These adept forgeries, melding authenticity with fabrication, represent a new frontier in the malleable narrative of understood reality. Grappling with the specter of such virtual deceit, social media behemoths Facebook and YouTube have embarked on a prodigious quest. Their mission? To formulate robust bulwarks around the sanctity of fact and fiction, all the while fostering seamless communication across channels that billions consider an inextricable part of their daily lives.
In an exploration of this digital fortress besieged by illusion, we unpeel the layers of strategy that Facebook and YouTube have unfurled in their bid to stymie the proliferation of these insidious technical marvels. Though each platform approaches the issue through markedly different prisms, a shared undercurrent of necessity and urgency harmonizes their efforts.
The Detailing of Facebook's Strategy
Facebook's encampment against these modern-day chimaeras teems with algorithmic sentinels and human overseers alike—a union of steel and soul. The company’s layer upon layer of sophisticated artificial intelligence is designed to scrupulously survey, identify, and flag potential deepfake content with a precision that borders on the prophetic. Employing advanced AI systems, Facebook endeavours to preempt the chaos sown by manipulated media by detecting even the slightest signs of digital tampering.
However, in an expression of profound acumen, Facebook also serves as a reminder of AI's fallibility by entwining human discernment into its fabric. Each flagged video wages its battle for existence within the realm of these custodians of reality—individuals entrusted with the hefty responsibility of parsing truth from technologically enabled fiction.
Facebook does not rest on the laurels of established defence mechanisms. The platform is in a perpetual state of flux, with policies and AI models adapting to the serpentine nature of the digital threat landscape. By embracing this cyclical metamorphosis, Facebook not only sharpens its detection tools but also weaves a more resilient protective web, one capable of absorbing the shockwaves of an evolving battlefield.
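To make the shape of such a layered defence concrete, the following is a minimal, purely illustrative sketch of a flag-and-review triage pipeline; the detector, score thresholds, actions, and review queue are all assumptions and do not describe Facebook's actual systems. A hypothetical manipulation score routes each video: high-confidence detections are actioned automatically, borderline cases are escalated to human reviewers, and reviewer verdicts are retained so the model can later be retrained.

```python
# Illustrative flag-and-review triage pipeline. The detector score, thresholds,
# actions, and review queue are hypothetical; this sketches the architecture
# described above, not any platform's real system.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds borderline videos and human verdicts used for later retraining."""
    pending: list = field(default_factory=list)   # (video_id, score)
    labeled: list = field(default_factory=list)   # (video_id, score, is_deepfake)

    def submit(self, video_id: str, score: float) -> None:
        self.pending.append((video_id, score))

    def record_verdict(self, video_id: str, score: float, is_deepfake: bool) -> None:
        self.labeled.append((video_id, score, is_deepfake))

def triage(video_id: str, score: float, queue: ReviewQueue,
           auto_action: float = 0.95, needs_review: float = 0.60) -> str:
    """Route a video based on its (hypothetical) manipulation score."""
    if score >= auto_action:
        return "label-and-limit"        # high confidence: act automatically
    if score >= needs_review:
        queue.submit(video_id, score)   # borderline: escalate to a human reviewer
        return "human-review"
    return "allow"                      # low score: leave the video untouched

queue = ReviewQueue()
print(triage("vid-001", 0.97, queue))   # -> label-and-limit
print(triage("vid-002", 0.72, queue))   # -> human-review
print(triage("vid-003", 0.12, queue))   # -> allow
```

The design choice worth noting is the feedback loop: verdicts recorded by reviewers become labelled data, which is how the cyclical adaptation described above would translate into periodically retrained detection models.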
YouTube’s Overture of Transparency and the Exposition of AI
Turning to the amphitheatre of YouTube, the stage is set for an overt commitment to candour. Against the stark backdrop of deepfake dilemmas, YouTube demands the unveiling of the strings that guide the puppets, insisting on full disclosure whenever AI's invisible hands sculpt the content that engages its diverse viewership.
YouTube's doctrine is straightforward: creators must lift the curtain and reveal the role of any artificial manipulation behind the scenes. With clarity as its vanguard, this requirement is not just procedural but an ethical invocation to showcase veracity—a beacon to guide viewers through the murky waters of potential deceit.
The iron fist within the velvet glove of YouTube's policy manifests through a graded punitive protocol. Should a creator falter in disclosing the machine's influence, repercussions follow, ensuring that the ecosystem remains vigilant against hidden manipulation.
But YouTube's policy is one that distinguishes between malevolence and benign use. Artistic endeavours, satirical commentary, and other legitimate expositions are spared the policy's wrath, provided they adhere to the overarching principle of transparency.
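To illustrate how a disclosure rule with graded penalties and carve-outs might be expressed in code, the sketch below is entirely hypothetical; the categories, strike counts, and penalty ladder are assumptions and do not reflect YouTube's internal systems. Undisclosed synthetic uploads accrue escalating strikes, while content openly presented as satire, parody, or art is treated as honouring the transparency principle and spared.

```python
# Hypothetical sketch of a disclosure policy with a graded penalty ladder and
# carve-outs for openly declared satire, parody, or artistic work. All values
# are illustrative and do not reflect YouTube's internal systems.
from collections import defaultdict

EXEMPT_CATEGORIES = {"satire", "parody", "artistic"}  # assumed carve-outs
PENALTIES = ["warning", "temporary-restriction", "channel-suspension"]

strikes = defaultdict(int)  # creator_id -> count of disclosure violations

def enforce(creator_id: str, is_synthetic: bool, disclosed: bool,
            declared_category: str) -> str:
    """Return the action for one upload under this illustrative policy."""
    if not is_synthetic:
        return "no-action"
    if disclosed or declared_category in EXEMPT_CATEGORIES:
        return "no-action"                  # transparent or openly declared use
    strikes[creator_id] += 1                # undisclosed synthetic media
    level = min(strikes[creator_id], len(PENALTIES)) - 1
    return PENALTIES[level]                 # graded, escalating penalty

print(enforce("creator-a", True, False, "news"))    # -> warning
print(enforce("creator-a", True, False, "news"))    # -> temporary-restriction
print(enforce("creator-b", True, False, "satire"))  # -> no-action (exempt)
```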
The Symbiosis of Technology and Policy in a Morphing Domain
YouTube's commitment to refining the coordination between human insight and computerised examination is unwavering. As AI's role in both the generation and moderation of content deepens, YouTube—which, like a skilled cartographer, must increasingly redraw its policies—traverses this ever-mutating landscape with a proactive stance.
In a Comparative Light: Tracing the Convergence of Giants
Although Facebook and YouTube choreograph their steps to different rhythms, together they compose an intricate dance aimed at nurturing trust and authenticity. Facebook leans into the proactive might of its AI algorithms, reinforced by continual updates and human intervention, while YouTube wields the virtue of transparency as its sword, cutting through masquerades and empowering its users to partake in storylines that are continually rewritten.
Together on the Stage of Our Digital Epoch
The sum of Facebook and YouTube's policies is integral to the pastiche of our digital experience, a multifarious quilt shielding the sanctum of factuality from the interloping specters of deception. As humanity treads the line between the veracious and the fantastic, these platforms stand as vigilant sentinels, guiding us in our pursuit of an age-old treasure within our novel digital bazaar—the treasure of truth. In this labyrinthine quest, it is not merely about unmasking deceivers but nurturing a wisdom that appreciates the shimmering possibilities—and inherent risks—of our evolving connection with the machine.
Conclusion
The struggle against deepfakes is a complex, many-headed challenge that will necessitate a united front spanning technologists, lawmakers, and the public. In this digital epoch, where the veneer of authenticity is perilously thin, the valiant endeavours of these tech goliaths serve as a lighthouse in a storm-tossed sea. These efforts echo the importance of evergreen vigilance in discerning truth from artfully crafted deception.
References
- https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/
- https://indianexpress.com/article/technology/artificial-intelligence/google-sheds-light-on-how-its-fighting-deep-fakes-and-ai-generated-misinformation-in-india-9047211/
- https://techcrunch.com/2023/11/14/youtube-adapts-its-policies-for-the-coming-surge-of-ai-videos/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/youtube-twitter-hunt-down-deepfakes