#FactCheck - Viral Claim of Highway in J&K Proven Misleading
Executive Summary:
A viral post on social media, shared with misleading captions, claimed to show a National Highway being built with large bridges over a mountainside in Jammu and Kashmir. However, our investigation of the claim shows that the bridge is in China. The video is therefore false and misleading.

Claim:
A circulating video claims to show the construction of National Highway 14 on a mountainside in Jammu and Kashmir.

Fact Check:
Upon receiving the post, a Reverse Image Search was carried out on frames from the video. The footage of an under-construction road, falsely linked to Jammu and Kashmir, was traced to a different location: the G6911 Ankang-Laifeng Expressway in China. This finding highlights the need to verify information before sharing.
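Reverse image search engines typically match near-duplicate images by comparing compact perceptual fingerprints rather than raw pixels. The sketch below illustrates that idea with a toy "average hash" over small grayscale grids; the frames and values here are invented for illustration and are not the actual tool or images used in this fact check.

```python
# Minimal sketch of perceptual "average hashing", the kind of fingerprint
# reverse-image-search tools use to match near-duplicate frames.
# Toy 4x4 grayscale grids stand in for downscaled video frames.

def average_hash(pixels):
    """Hash a 2D grid of grayscale values: 1 if pixel >= mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means a likely match."""
    return sum(a != b for a, b in zip(h1, h2))

viral_frame = [
    [200, 198, 60, 55],
    [201, 197, 58, 52],
    [90, 88, 40, 38],
    [92, 86, 41, 36],
]
# Slightly re-encoded copy of the same frame (compression noise).
matched_frame = [[p + 3 for p in row] for row in viral_frame]
# A genuinely different scene.
other_frame = [
    [10, 240, 12, 238],
    [242, 11, 239, 13],
    [9, 241, 10, 240],
    [243, 12, 238, 11],
]

d_same = hamming_distance(average_hash(viral_frame), average_hash(matched_frame))
d_diff = hamming_distance(average_hash(viral_frame), average_hash(other_frame))
print(d_same, d_diff)  # the re-encoded copy hashes identically; the other scene does not
```

Because the hash depends only on each pixel's relation to the frame's mean brightness, it survives re-encoding and mild edits, which is why search engines can trace a re-uploaded clip back to its original source.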


Conclusion:
The viral claim that the video shows an under-construction highway in Jammu and Kashmir is false. The footage is actually from China, not J&K. Misinformation like this can mislead the public, so take a brief moment to verify the facts before sharing viral posts. This case highlights the importance of verifying information and relying on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Related Blogs

Democratic societies, where the right to free expression is a fundamental value, have struggled to create legal frameworks that define where free speech ends and harmful misinformation begins. Platforms like YouTube, Wikipedia, and Facebook have built huge user bases by hosting user-generated content, which includes anything a visitor posts to a website or social media page.
The legal and ethical landscape surrounding misinformation is dependent on creating a fine balance between freedom of speech and expression while protecting public interests, such as truthfulness and social stability. This blog is focused on examining the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The deliberate spread of false information is closely interwoven with the mining of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news benefit from the reach of social networks and from technology that speeds up distribution and makes it harder to distinguish fake news from hard news.
Multiple challenges emerge that are unique to social media platforms regulating misinformation while balancing freedom of speech and expression and user engagement. The scale at which content is created and published, the different regulatory standards, and moderating misinformation without infringing on freedom of expression complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, which influence public opinion, electoral outcomes, and market behaviours, underscore the urgent need for effective regulation, as the costs of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. They are embodied in Section 230 of the US Communications Decency Act (CDA) and Section 79 of India's Information Technology Act, and play a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in its nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It also allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulation has been observed in recent times. One example is the European Union's Digital Services Act of 2022, which requires companies with at least 45 million monthly users to build systems that control the spread of misinformation, hate speech and terrorist propaganda, among other things. Companies that fail to comply risk penalties of up to 6% of global annual revenue, or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a major challenge, primarily because of the sheer quantity of data involved and the speed at which it is generated. Moderation failures expose platforms to legal liability, operational costs and reputational risk.
- Platforms face potential backlash for both over-moderation and under-moderation. Over-moderation can be perceived as censorship and is often burdensome, while under-moderation can be seen as insufficient governance that fails to protect users' rights, including privacy.
- Another challenge is technical: AI and algorithmic moderation have limitations in detecting nuanced misinformation. This points to the need for human oversight, especially to sift through misinformation produced by AI-generated content.
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
Navigating the legal risks that user-generated misinformation poses requires a balance between protecting free speech and safeguarding the public interest. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm

Introduction
In recent years, India has seen tremendous growth in its space industry. India's satellite infrastructure now provides key services to a variety of sectors, including communication, navigation, broadcasting, disaster management and national security operations. Satellite communications connect remote communities, aid the delivery of digital governance and support India's strategic military capabilities. Given India's expanding space ecosystem, involving the public sector, the private sector and research institutions, the security of satellite communications is becoming increasingly important.
At the same time, as satellite communication technologies become more pervasive, the risk of cyber threats targeting space systems increases. Cyberattacks against satellites, ground terminals or communication networks may disrupt, damage or destroy essential services and expose sensitive information. To mitigate these risks, CERT-In (the Indian Computer Emergency Response Team), in collaboration with the SatCom Industry Association of India, released the Cyber Security Framework and Guidelines for Space Platforms/Systems, including Satellite Communication, in 2026. This framework aims to establish and enhance cybersecurity measures throughout India's space ecosystem, while providing guidance on preparing for and responding to the growing volume of cyber threat activity targeting space systems.
Overview of the CERT-In Space Cybersecurity Framework
CERT-In introduced a dedicated cybersecurity framework for space systems in February 2026. Developed in collaboration with industry stakeholders, the framework provides guidelines to strengthen the security of satellite communication infrastructure across India. Although the guidelines are advisory in nature, they are designed to promote best practices and encourage organisations to adopt robust cybersecurity measures.
The framework targets a wide range of stakeholders involved in satellite communication operations. These include government agencies, satellite operators, ground station operators, equipment manufacturers, technology vendors, and emerging space startups. By outlining cybersecurity principles, technical controls, and governance mechanisms, the framework aims to create a coordinated approach to protecting space assets.
Another key objective of the guidelines is to foster collaboration between the public and private sectors. As India’s space industry expands and private participation increases, maintaining a secure and resilient ecosystem becomes essential. The framework, therefore, emphasises risk management, incident reporting, and continuous monitoring to strengthen the overall cybersecurity posture of the space sector.
Key Components of Satellite Communication Systems
Satellite communication systems are made up of multiple interconnected elements that work together to deliver communication services. The cybersecurity framework groups these elements into three categories: the space segment, the ground segment, and the user segment.
The space segment covers the satellite itself and its onboard systems, including the communication payload, telemetry systems, antennas, power systems, and the software that controls its operation. Because satellites operate remotely in space with little opportunity for physical maintenance, securing these systems against unauthorised access or control is critical.
The ground segment comprises the terrestrial infrastructure responsible for controlling the satellite's operations. It consists of satellite mission control centres, ground stations, network gateways and data processing facilities. The ground stations send commands to the satellites and receive telemetry data from the satellites, which makes the ground station a very important physical interface point between the satellite asset located in outer space and a terrestrial network.
The user segment contains any device terminal being used by either an individual or an organisation that is accessing a satellite service. Examples of user devices are satellite phones, VSAT terminals, modems, and IoT devices connected to satellite networks. Since these devices connect directly to the communication networks, vulnerabilities in user equipment could also represent a significant threat to the cybersecurity of satellite communications.
Major Cyber Threats to Space Infrastructure
The space systems that support satellite communications are increasingly targeted by multiple types of cyber threats. A major category comprises attacks on the communication links between satellites and ground stations: attackers can attempt to jam a satellite's communication link, intercept its signals, or replay previously captured transmissions in order to disrupt the affected satellite's operation.
Attacks on the systems that control a satellite are equally serious threats to its operation. Cybercriminals and hostile actors can perform command injection attacks, in which malicious commands sent to a satellite cause it to carry out unintended actions. If attackers gain access to the telemetry or command channels, they can potentially disrupt the satellite's operation or alter the telemetry data received from it.
The ground infrastructure that supports satellite communications is still a major target for cybercriminals. Mission control networks and data centres are susceptible to malware, ransomware, phishing, and insider threats. Attackers will frequently target ground stations because they provide a connection point to terrestrial networks and can exploit vulnerabilities from the ground station’s IT systems into the satellite control systems. The combination of these threats illustrates the need for an overall security strategy that encompasses all parts of the satellite communications ecosystem.
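One standard mitigation against the command injection and replay attacks described above is to authenticate every uplinked command with a keyed message authentication code and a monotonically increasing counter. The sketch below shows that principle using Python's standard `hmac` module; the key, command names and message layout are illustrative assumptions, not a design mandated by the CERT-In framework.

```python
import hashlib
import hmac

# Illustrative defence for a satellite command link: every command carries
# a strictly increasing counter and an HMAC over (counter + command),
# computed with a key shared between mission control and the satellite.

SHARED_KEY = b"ground-to-space-demo-key"  # hypothetical key, provisioned pre-launch

def sign_command(counter: int, command: bytes) -> bytes:
    """Ground-side: compute the authentication tag for a command."""
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

class CommandReceiver:
    """Satellite-side check: valid MAC and strictly increasing counter."""

    def __init__(self):
        self.last_counter = -1

    def accept(self, counter: int, command: bytes, tag: bytes) -> bool:
        expected = sign_command(counter, command)
        if not hmac.compare_digest(expected, tag):
            return False  # forged or corrupted command (command injection)
        if counter <= self.last_counter:
            return False  # previously seen counter (replay attack)
        self.last_counter = counter
        return True

receiver = CommandReceiver()
tag = sign_command(1, b"ADJUST_ATTITUDE +0.5")
print(receiver.accept(1, b"ADJUST_ATTITUDE +0.5", tag))  # genuine -> True
print(receiver.accept(1, b"ADJUST_ATTITUDE +0.5", tag))  # replay  -> False
print(receiver.accept(2, b"SHUTDOWN_PAYLOAD", tag))      # forged  -> False
```

The constant-time comparison (`hmac.compare_digest`) and the counter check illustrate why an attacker who merely records and re-sends legitimate traffic still cannot take control of the satellite.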
Key Security Principles and Measures
The CERT-In framework provides a comprehensive overview of several principles designed to strengthen the security of satellite communications. The first, security by design, requires that cybersecurity controls be built in from a system's initial design and development rather than added afterwards, and that they be maintained throughout the satellite system's entire lifecycle.
The second principle, which is known as Defense-in-Depth, consists of implementing many different layers or tiers of security controls to protect a system against cyber threats or attacks. An example of the different categories of security controls includes physical security, network security, and access control, among others. By combining security controls across multiple categories, an organisation may be able to reduce the chance that one single vulnerability will result in the loss of the entire system.
The third principle, Zero Trust Architecture (ZTA), holds that users and devices inside a network should not be granted implicit trust: every access request must be verified, and activity must be continuously monitored for potential threats.
The remaining principles call for satellite communications to be secured with strong encryption, authentication and secure communication methods, and for enterprise monitoring systems to be put in place to detect anomalies or suspicious behaviour.
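The continuous-monitoring principle can be made concrete with a simple statistical check that flags telemetry readings deviating sharply from the baseline. The sketch below uses a z-score test from Python's standard `statistics` module; the telemetry values and the 2.5-sigma threshold are illustrative assumptions, not parameters from the CERT-In guidelines.

```python
import statistics

# Sketch of anomaly detection for satellite telemetry: flag readings that
# sit far outside the statistical spread of the series. Real monitoring
# systems use rolling baselines and many channels; this shows the core idea.

def find_anomalies(readings, threshold=2.5):
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the series."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# Hypothetical bus-voltage telemetry: stable near 28 V, one suspicious spike.
bus_voltage = [28.1, 28.0, 27.9, 28.2, 28.0, 41.7, 28.1, 27.8, 28.0, 28.1]
print(find_anomalies(bus_voltage))  # flags the index of the spike
```

An alert on such an outlier would prompt operators to investigate whether the reading reflects a hardware fault or tampering with the telemetry channel.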
Conclusion
India is taking an important step toward protecting its expanding space ecosystem by creating a cybersecurity framework to safeguard space systems from cyber threats. The CERT-In guidelines offer a structured means of reducing the likelihood of cyber threats impacting satellite communication infrastructure through secure system design, continuous monitoring, and consistent partnerships among organisations. They also reflect that government and private sector organisations share a collective responsibility for the protection of space assets and must work together collaboratively.
As India expands its space infrastructure, it will need to implement rigorous cybersecurity measures to ensure the continued availability of critical space services and to grow its commercial satellite operations with the highest level of safety and security.
References
- https://www.cert-in.org.in/s2cMainServlet?pageid=GUIDLNVIEW02&refcode=CISG-2026-01
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2233122&reg=3&lang=1

Introduction
In an age where the lines between truth and fiction blur with an alarming regularity, we stand at the precipice of a new and dangerous era. Amidst the wealth of information that characterizes the digital age, deep fakes and disinformation rise like ghosts, haunting our shared reality. These manifestations of a technological revolution that promised enlightenment instead threaten the foundations upon which our societies are built: trust, truth, and collective understanding.
These digital doppelgängers, enabled by advanced artificial intelligence, and their deceitful companion—disinformation—are not mere ghosts in the machine. They are active agents of chaos, capable of undermining the core of democratic values, human rights, and even the safety of individuals who dare to question the status quo.
The Perils of False Narratives in the Digital Age
As a society, we often throw around terms such as 'fake news' with a mixture of disdain and a weary acceptance of their omnipresence. However, we must not understate their gravity. Misinformation and disinformation represent the vanguard of the digital duplicitous tide, a phenomenon growing more complex and dire each day. Misinformation, often spread without malicious intent but with no less damage, can be likened to a digital 'slip of the tongue' — an error in dissemination or interpretation. Disinformation, its darker counterpart, is born of deliberate intent to deceive, a calculated move in the chess game of information warfare.
Their arsenal is varied and ever-evolving: from misleading memes and misattributed quotations to wholesale fabrications in the form of bogus news sites and carefully crafted narratives. Among these weapons of deceit, deepfakes stand out for their audacity and the striking challenge they pose to the principle that seeing is believing. Through the unwelcome alchemy of algorithms, these video and audio forgeries place public figures, celebrities, and even everyday individuals into scenarios they never experienced, uttering words they never said.
The Human Cost: Threats to Rights and Liberties
The impact of this disinformation campaign transcends inconvenience or mere confusion; it strikes at the heart of human rights and civil liberties. It particularly festers at the crossroads of major democratic exercises, such as elections, where the right to a truthful, unmanipulated narrative is not just a political nicety but a fundamental human right, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR).
In moments of political change, whether during elections or pivotal referenda, the deliberate seeding of false narratives is a direct assault on the electorate's ability to make informed decisions. This subversion of truth infects the electoral process, rendering hollow the promise of democratic choice.
This era of computational propaganda has especially chilling implications for those at the frontline of accountability—journalists and human rights defenders. They find themselves targets of character assassinations and smear campaigns that not only put their safety at risk but also threaten to silence the crucial voices of dissent.
It should not be overlooked that the term 'fake news' has, paradoxically, been weaponized by governments and political entities against their detractors. In a perverse twist, this label becomes a tool to shut down legitimate debate and shield human rights violations from scrutiny, allowing for censorship and the suppression of opposition under the guise of combatting disinformation.
Deepening societal schisms, a significant portion of this digital deceit traffics in hate speech. Its content is laden with xenophobia, racism, and calls to violence, all given a megaphone through the anonymity and reach the internet so readily provides, feeding a cycle of intolerance and violence vastly disproportionate to that seen in traditional media.
Legislative and Technological Countermeasures: The Ongoing Struggle
The fight against this pervasive threat, as illustrated by recent actions and statements by the Indian government, is multifaceted. Notably, Union Minister Rajeev Chandrasekhar's commitment to safeguarding the Indian populace from the dangers of AI-generated misinformation signals an important step in the legislative and policy framework necessary to combat deepfakes.
Likewise, Prime Minister Narendra Modi's personal experience with a deepfake video accentuates the urgency with which policymakers, technologists, and citizens alike must view this evolving threat. The disconcerting experience of actor Rashmika Mandanna serves as a sobering reminder of the individual harm these false narratives can inflict and reinforces the necessity of a robust response.
In their pursuit to negate these virtual apparitions, policymakers have explored various avenues ranging from legislative action to penalizing offenders and advancing digital watermarks. However, it is not merely in the realm of technology that solutions must be sought. Rather, the confrontation with deepfakes and disinformation is also a battle for the collective soul of societies across the globe.
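The digital watermarks mentioned above work by binding a provenance signal to the media itself. The toy sketch below shows the simplest version of the idea, a least-significant-bit mark over a list of pixel values; production watermarking and provenance schemes are far more robust, and the tag and pixel values here are invented for illustration only.

```python
# Toy illustration of digital watermarking: hide a provenance bit-string
# in the least significant bits of pixel values. The change is at most 1
# grey level per pixel, so the mark is imperceptible to the eye.

def embed_watermark(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with the mark."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(pixels, length):
    """Read the least significant bits back out."""
    return [p & 1 for p in pixels[:length]]

mark = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit provenance tag
frame = [200, 201, 202, 203, 204, 205, 206, 207, 208, 209]

marked = embed_watermark(frame, mark)
print(extract_watermark(marked, len(mark)))  # recovers the tag
print(max(abs(a - b) for a, b in zip(frame, marked)))  # worst-case pixel change
```

A verifier that fails to extract a valid tag from a clip claiming official provenance has a strong signal that the media was synthesised or altered, which is the role policymakers envision for watermarking in the deepfake fight.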
As technological advancements continue to reshape the battleground, figures like Kris Gopalakrishnan and Manish Gangwar posit that only a mix of rigorous regulatory frameworks and savvy technological innovation can hold the front line against this rising tidal wave of digital distrust.
This narrative is not a dystopian vision of a distant future - it is the stark reality of our present. And as we navigate this new terrain, our best defenses are not just technological safeguards, but also the nurturing of an informed and critical citizenry. It is essential to foster media literacy, to temper the human inclination to accept narratives at face value and to embolden the values that encourage transparency and the robust exchange of ideas.
As we peer into the shadowy recesses of our increasingly digital existence, may we hold fast to our dedication to the truth, and in doing so, preserve the essence of our democratic societies. For at stake is not just a technological arms race, but the very quality of our democratic discourse and the universal human rights that give it credibility and strength.
Conclusion
In this age of digital deceit, it is crucial to remember that the battle against deep fakes and disinformation is not just a technological one. It is also a battle for our collective consciousness, a battle to preserve the sanctity of truth in an era of falsehoods. As we navigate the labyrinthine corridors of the digital world, let us arm ourselves with the weapons of awareness, critical thinking, and a steadfast commitment to truth. In the end, it is not just about winning the battle against deep fakes and disinformation, but about preserving the very essence of our democratic societies and the human rights that underpin them.