#FactCheck - Digitally Altered Video of Olympic Medalist, Arshad Nadeem’s Independence Day Message
Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day is going viral with the claim that snoring can be heard in the background. The CyberPeace Research Team found that the viral video was digitally edited by adding the snoring sound to the background. The original video, published on Arshad Nadeem's Instagram account, contains no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing Independence Day with snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly reviewed the video and then analyzed it with TrueMedia, an AI video detection tool, which found little evidence of manipulation in either the voice or the face. This suggests that the footage and speech themselves are authentic and that the alteration lies in the added background audio.


We then checked Arshad Nadeem's social media accounts and found the video uploaded to his Instagram account on 14th August 2024. In that video, no snoring sound can be heard.

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is false. CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading

Introduction
Global cybersecurity spending is expected to breach USD 210 billion in 2025, a ~10% increase from 2024 (Gartner). This is a result of an evolving and increasingly critical threat landscape enabled by factors such as the proliferation of IoT devices, the adoption of cloud networks, and the increasing size of the internet itself. Yet, breaches, misuse, and resistance persist. In 2025, global attack pressure rose ~21% Y-o-Y (Q2 averages) (Check Point) and confirmed breaches climbed ~15% (Verizon DBIR). This means that rising investment in cybersecurity may not be yielding proportionate reductions in risk. But while mechanisms to strengthen technical defences and regulatory frameworks are constantly evolving, the social element of trust and how to embed it into cybersecurity systems remain largely overlooked.
Human Error and Digital Trust (Individual Trust)
Human error is consistently recognised as the weakest link in cybersecurity. While campaigns focusing on phishing prevention, urging password updates, and promoting two-factor authentication (2FA) exist, relying solely on awareness measures to address human error in cyberspace is like putting a Band-Aid on a bullet wound. Rather, the problem needs to be examined through the lens of digital trust. As Chui (2022) notes, digital trust rests on security, dependability, integrity, and authenticity. These factors determine whether users comply with cybersecurity protocols. When people view rules as opaque, inconvenient, or imposed without accountability, they are more likely to cut corners, which creates vulnerabilities. Therefore, building digital trust means shifting the focus from blaming people to improving design: embedding transparency, usability, and shared responsibility into a culture of cybersecurity so that users are incentivised to make secure choices.
Organisational Trust and Insider Threats (Institutional Trust)
At the organisational level, compliance with cybersecurity protocols is significantly tied to whether employees trust their employers and platforms to safeguard their data and treat them with integrity. Insider threats, stemming from both malicious and non-malicious actors, account for nearly 60% of all corporate breaches (Verizon DBIR 2024). A lack of trust in leadership may cause employees to feel disengaged or even act maliciously. Further, a 2022 study published in Harvard Business Review found that adhering to cybersecurity protocols adds to employee workload; when protocols are perceived as hindering productivity, employees are more likely to intentionally violate them. The stress of working under surveillance systems that feel cumbersome or unreasonable, especially when working remotely, also reduces employee trust and, hence, compliance.
Trust, Inequality, and Vulnerability (Structural Trust)
Cyberspace encompasses a social system of its own, since it involves patterned interactions and relationships between human beings. It also reproduces the social structures and resultant vulnerabilities of the physical world. As a result, different sections of society place varying levels of trust in digital systems. Women, rural populations, and marginalised groups often distrust existing digital security provisions more, and with good reason: they are targeted disproportionately by cyber attackers, yet remain underprotected by systems designed primarily with urban, male, or elite users in mind. This leads citizens to adopt workarounds like password sharing for “safety” and to disengage from cyber safety discourse, as they find existing systems inaccessible or irrelevant to their realities. Cybersecurity governance that ignores these divides deepens exclusion and mistrust.
Laws and Compliances (Regulatory Trust)
Cybersecurity governance is operationalised in the form of laws, rules, and guidelines. However, these may often backfire due to inadequate design, reducing overall trust in governance mechanisms. For example, CERT-In’s mandate to report breaches within six hours of “noticing” them has been criticised because such a steep timeframe is insufficient to produce an effective breach analysis report. Further, the multiplicity of regulatory frameworks in cross-border interactions can be costly and lead to compliance fatigue for organisations. Such factors can undermine organisational and user trust in a regulation’s ability to protect them from cyber attacks, fuelling a box-ticking culture in cybersecurity.
Conclusion
Cybersecurity is addressed today primarily through code, firewalls, and compliance. But evidence suggests that technological and regulatory fixes, while essential, are insufficient to guarantee secure behaviour and resilient systems. Without trust in institutions, technologies, laws, or each other, cybersecurity governance will remain a cat-and-mouse game. Building a trust-based architecture requires mechanisms to improve accountability, reliability, and transparency. It requires participatory design of security systems and the recognition of unequal vulnerabilities. Thus, unless cybersecurity governance acknowledges that cyberspace is deeply social, investment may not be able to prevent the harms it seeks to curb.
References
- https://www.gartner.com/en/newsroom/press-releases/2025-07-29
- https://blog.checkpoint.com/research/global-cyber-attacks-surge-21-in-q2-2025
- https://www.verizon.com/business/resources/reports/2024-dbir-executive-summary.pdf
- https://www.verizon.com/business/resources/reports/2025-dbir-executive-summary.pdf
- https://insights2techinfo.com/wp-content/uploads/2023/08/Building-Digital-Trust-Challenges-and-Strategies-in-Cybersecurity.pdf
- https://www.coe.int/en/web/cyberviolence/cyberviolence-against-women
- https://www.upguard.com/blog/indias-6-hour-data-breach-reporting-rule

Introduction
With the increasing frequency and severity of cyber-attacks on critical sectors, the government of India has formulated the National Cyber Security Reference Framework (NCRF) 2023, aimed to address cybersecurity concerns in India. In today’s digital age, the security of critical sectors is paramount due to the ever-evolving landscape of cyber threats. Cybersecurity measures are crucial for protecting essential sectors such as banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises. This is an essential step towards safeguarding these critical sectors and preparing for the challenges they face in the face of cyber threats. Protecting critical sectors from cyber threats is an urgent priority that requires the development of robust cybersecurity practices and the implementation of effective measures to mitigate risks.
Overview of the National Cyber Security Policy 2013
The National Cyber Security Policy of 2013 was the first attempt to address cybersecurity concerns in India. However, it had several drawbacks that limited its effectiveness in mitigating cyber risks in the contemporary digital age. The policy’s outdated guidelines, insufficient prevention and response measures, and lack of legal implications hindered its ability to protect critical sectors adequately. Moreover, the policy failed to keep pace with the rapidly evolving cyber threat landscape and emerging technologies, leaving organisations without updated guidelines to combat new and sophisticated cyber-attacks.
As a result, an updated and more comprehensive policy, the National Cyber Security Reference Framework 2023, was necessary to address emerging challenges and provide strategic guidance for protecting critical sectors against cyber threats.
Highlights of NCRF 2023
- Strategic Guidance: NCRF 2023 has been developed to provide organisations with strategic guidance to address their cybersecurity concerns in a structured manner.
- Common but Differentiated Responsibility (CBDR): The policy is based on a CBDR approach, recognising that different organisations have varying levels of cybersecurity needs and responsibilities.
- Update of National Cyber Security Policy 2013: NCRF supersedes the National Cyber Security Policy 2013, which was due for an update to align with the evolving cyber threat landscape and emerging challenges.
- Different from CERT-In Directives: NCRF is distinct from the directives issued by the Indian Computer Emergency Response Team (CERT-In) in April 2022. It provides a comprehensive framework rather than specific directives for reporting cyber incidents.
- Combination of robust strategies: National Cyber Security Reference Framework 2023 will provide strategic guidance, a revised structure, and a proactive approach to cybersecurity, enabling organisations to tackle the growing cyberattacks in India better and safeguard critical sectors.
Rising incidents of malware attacks on critical sectors
In recent years, there has been a significant increase in malware attacks targeting critical sectors. These sectors, including banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises, play a crucial role in the functioning of economies and the well-being of societies. The escalating incidents of malware attacks on these sectors have raised concerns about the security and resilience of critical infrastructure.
- Banking: The banking sector handles sensitive financial data and is a prime target for cybercriminals due to the potential for financial fraud and theft.
- Energy: The energy sector, including power grids and oil companies, is critical for the functioning of economies, and disruptions can have severe consequences for national security and public safety.
- Healthcare: The healthcare sector holds valuable patient data, and cyber-attacks can compromise patient privacy and disrupt healthcare services. Malware attacks on healthcare organisations can result in the theft of patient records, ransomware incidents that cripple healthcare operations, and compromise medical devices.
- Telecommunications: Telecommunications infrastructure is vital for reliable communication, and attacks targeting this sector can lead to communication disruptions and compromise the privacy of transmitted data. The interconnectedness of telecommunications networks globally presents opportunities for cybercriminals to launch large-scale attacks, such as Distributed Denial-of-Service (DDoS) attacks.
- Transportation: Malware attacks on transportation systems can lead to service disruptions, compromise control systems, and pose safety risks.
- Strategic Enterprises: Strategic enterprises, including defence, aerospace, intelligence agencies, and other sectors vital to national security, face sophisticated malware attacks with potentially severe consequences. Cyber adversaries target these enterprises to gain unauthorised access to classified information, compromise critical infrastructure, or sabotage national security operations.
- Government Enterprises: Government organisations hold a vast amount of sensitive data and provide essential services to citizens, making them targets for data breaches and attacks that can disrupt critical services.
Conclusion
The sectors of banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises face unique vulnerabilities and challenges in the face of cyber-attacks. By recognising the significance of safeguarding these sectors, we can emphasise the need for proactive cybersecurity measures and collaborative efforts between public and private entities. Strengthening regulatory frameworks, sharing threat intelligence, and adopting best practices are essential to ensure our critical infrastructure’s resilience and security. Through these concerted efforts, we can create a safer digital environment for these sectors, protecting vital services and preserving the integrity of our economy and society.

The rising incidents of malware attacks on critical sectors emphasise the urgent need for an updated cybersecurity policy, enhanced cybersecurity measures, collaboration between public and private entities, and the development of proactive defence strategies. The National Cyber Security Reference Framework 2023 will help address the evolving cyber threat landscape, protect critical sectors, fill the gaps in sector-specific best practices, promote collaboration, establish a regulatory framework, and address the challenges posed by emerging technologies. By providing strategic guidance, this framework will enhance organisations’ cybersecurity posture and ensure the protection of critical infrastructure in an increasingly digitised world.
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. The prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With Google handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithmic bias, refers to systematically skewed results caused by human biases embedded in the training data or in the algorithm itself, which distort outputs and can produce harmful outcomes. It can take the form of algorithmic bias, data bias, or interpretation bias, and it emerges from sources such as user history, geographical data, and broader societal biases reflected in training data.
A common consequence of AI bias is the exclusion of certain groups of people from opportunities. In healthcare, underrepresenting data from women or minority groups can skew predictive AI algorithms. Similarly, while AI helps automate resume screening to identify ideal candidates, a biased dataset or other bias in the input data can cause qualified applicants to be screened out unfairly.
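The resume-screening problem can be illustrated with a minimal, hypothetical sketch. Nothing below corresponds to a real hiring system: the group labels, scores, and mean-of-past-hires threshold rule are invented purely to show how underrepresentation in training data skews what a model learns.

```python
# Hypothetical sketch: a toy screening model trained on a skewed sample.
# Group B is underrepresented, so the learned "qualified" threshold
# reflects almost exclusively Group A's score range.

from statistics import mean

def train_threshold(training_data):
    """Learn a pass threshold as the mean score of previously hired candidates."""
    hired_scores = [score for score, hired, _group in training_data if hired]
    return mean(hired_scores)

# Invented history: 9 Group A records, only 1 Group B record.
history = [
    (80, True, "A"), (82, True, "A"), (85, True, "A"), (78, True, "A"),
    (81, True, "A"), (79, True, "A"), (84, True, "A"), (83, True, "A"),
    (60, False, "A"),
    (70, True, "B"),  # qualified Group B hires historically scored lower
]

threshold = train_threshold(history)

def screen(score):
    """Pass a candidate only if they meet the learned threshold."""
    return score >= threshold

# A Group B candidate scoring like past successful Group B hires is
# rejected, because the threshold was learned mostly from Group A data.
print(round(threshold, 1))
print(screen(72))   # False: screened out despite matching B's hired range
```

Because nine of the ten training records come from Group A, the learned threshold tracks Group A's distribution, and candidates resembling Group B's historically successful profile never pass the screen. The fix is not more automation but more representative data.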
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
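The feedback loop behind a filter bubble can be sketched in a few lines. The scoring rule below is a deliberately crude stand-in for personalisation, not Google's actual ranking logic, and the result titles and topics are invented for illustration.

```python
# Minimal sketch of a personalised ranker: results whose topic the user
# has clicked before get boosted, regardless of factual accuracy.

from collections import Counter

def personalised_rank(results, click_history):
    """Order results by how often the user previously clicked each topic."""
    topic_clicks = Counter(click_history)
    # Counter returns 0 for unseen topics, so unfamiliar results sink.
    return sorted(results, key=lambda r: topic_clicks[r["topic"]], reverse=True)

results = [
    {"title": "Vaccines are safe, large study finds", "topic": "mainstream"},
    {"title": "Vaccine sceptic forum thread",          "topic": "sceptic"},
]

# A user who has repeatedly clicked sceptic content...
history = ["sceptic", "sceptic", "sceptic", "mainstream"]
ranked = personalised_rank(results, history)
print(ranked[0]["title"])  # the sceptic result now appears first
```

Each click strengthens the boost for that topic, so the next ranking is even more skewed toward it: the bubble reinforces itself with every interaction, which is exactly why satisfaction-driven optimisation can drift away from accuracy.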
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU’s Artificial Intelligence Act aims to establish a regulatory framework that categorises AI systems based on risk and enforces strict standards of transparency, accountability, and fairness, especially for high-risk AI applications, which may include search engines. India proposed the Digital India Act in 2023, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias#:~:text=In%20healthcare%2C%20underrepresenting%20data%20of,can%20skew%20predictive%20AI%20algorithms