#FactCheck - Viral Claim of Highway in J&K Proven Misleading
Executive Summary:
A viral post on social media, shared with misleading captions, claims to show a National Highway being built with large bridges over a mountainside in Jammu and Kashmir. However, an investigation of the claim shows that the bridge is in China. The video is therefore false and misleading.

Claim:
A circulating video claims to show the construction of National Highway 14 on a mountainside in Jammu and Kashmir.

Fact Check:
Upon receiving the image, we carried out a reverse image search. The image of an under-construction road, falsely linked to Jammu and Kashmir, proved to be inaccurate: our investigation confirmed that the road is at a different location, the G6911 Ankang-Laifeng Expressway in China, highlighting the need to verify information before sharing.


Conclusion:
The viral claim that the video shows an under-construction highway in Jammu and Kashmir is false; the footage is actually from China, not J&K. Misinformation like this can mislead the public. Before sharing viral posts, take a brief moment to verify the facts. This highlights the importance of verifying information and relying on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry and has been seen to affect everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation about all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation which may harm reputations, disrupt businesses, and reduce consumer trust. MSMEs are particularly susceptible since they have minimal resources at their disposal and cannot afford to invest in the kind of talent, technology and training that is needed for a business to be able to protect itself in today’s digital-first ecosystem. Mis/disinformation for MSMEs can arise from internal communications, supply chain partners, social media, competitors, etc. To address these dangers, MSMEs must take proactive steps such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage which can include financial loss, reputational damage, operational damages, and regulatory noncompliance. Various assessment methodologies can be used to analyze the impact of misinformation, including surveys, interviews, case studies, social media and news data analysis, and risk analysis practices.
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship program of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’. The Digital Personal Data Protection Act, 2023 is a new law aimed at protecting personal data. Fact-check units (FCUs) are government and independent private bodies that verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific: they lack targeted guidelines, their awareness initiatives on misinformation have limited reach, and they provide an insufficient support structure for MSMEs to verify information and protect themselves.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation for MSMEs, recommendations include creating a dedicated Misinformation Helpline, promoting awareness campaigns, creating regulatory support and guidelines, and collaborating with tech platforms and expert organisations for the identification and curbing of misinformation.
Organisational recommendations include establishing information verification protocols so that consumers of information verify critical information before acting on it, conducting regular employee training on identifying and managing misinformation, creating a crisis management plan for misinformation incidents, and forming collaboration networks with other MSMEs to share verified information and best practices.
MSMEs should also engage with technological solutions, such as AI and ML tools that detect and flag potential misinformation, alongside fact-checking tools and cybersecurity measures that prevent the spread of misinformation through digital channels.
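Production-grade AI/ML detection tools are far more sophisticated, but a minimal heuristic sketch can illustrate the basic idea of automatically scoring content and routing suspect posts to a human reviewer. The keyword list, signals, and threshold below are illustrative assumptions, not a validated model:

```python
import re

# Hypothetical signal list for sensationalist vocabulary (illustrative only).
SENSATIONAL_TERMS = {"shocking", "exposed", "banned", "miracle", "secret"}

def misinformation_score(text: str) -> float:
    """Score text on simple signals often associated with misleading posts."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    score = 0.0
    # Signal 1: density of sensational vocabulary.
    score += sum(w in SENSATIONAL_TERMS for w in words) / len(words)
    # Signal 2: heavy use of exclamation marks (capped contribution).
    score += min(text.count("!") / 10, 0.3)
    # Signal 3: ALL-CAPS shouting (capped contribution).
    caps = [t for t in text.split() if t.isupper() and len(t) > 3]
    score += min(len(caps) / 10, 0.3)
    return round(score, 3)

def flag_for_review(text: str, threshold: float = 0.1) -> bool:
    """Route high-scoring posts to a human fact-checker."""
    return misinformation_score(text) >= threshold
```

In practice, a real deployment would replace these hand-written signals with a trained classifier and pair automated flagging with human verification, since heuristics like these produce both false positives and false negatives.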
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing the impact on MSMEs, identifying current policies and gaps, and providing actionable recommendations. Implementation can begin with pilot programs in key MSME clusters and stakeholder engagement involving industry associations, tech companies, and government bodies, followed by a feedback mechanism for continuous improvement of the framework and, finally, a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx

India’s online gaming industry has grown at lightning speed, drawing millions of users across age groups. From casual games and e-sports to fantasy leagues and online poker, digital entertainment has become both a social and economic phenomenon. But with this growth came rising concerns of addiction, financial loss, misleading ads, and even criminal misuse of gaming platforms for illegal betting. To address these concerns, the Government of India introduced the Promotion and Regulation of Online Gaming Act and draft Rules in October 2025. While the Act represents a crucial step toward accountability and user protection, it also raises difficult questions about freedom, innovation, and investor confidence.
The Current Legal Framework
The 2025 Act, along with corresponding changes in the Information Technology and GST laws, aims to create a safer and more transparent gaming environment.
1. Ban on real-money games:
Any online game where money is involved, whether it’s entry fees, bets, or prizes, is now banned, regardless of whether it is based on skill or chance. As a result, previously permitted formats such as fantasy sports, rummy, and poker once defended as “games of skill” now fall within the category of banned activities.
2. Promotion of e-sports and social gaming:
Not all gaming is banned. Casual games, e-sports, and social games that don’t involve money are fully allowed. The government is encouraging these as part of India’s growing digital economy.
3. Advertising and financial restrictions: Banks, payment gateways, and advertisers cannot facilitate or promote real-money games. Any platform offering deposits or prize pools can be blocked.
4. Central regulatory authority: The law establishes a national body to classify games, monitor compliance, and address complaints. It has the power to order the blocking of violative content and websites.
Why Regulation Was Needed
The push for regulation came after a surge in online betting scams, debt-related suicides, and disputes about whether certain apps were skill-based or chance-based. State governments had taken conflicting positions, some banning, others licensing such games. Meanwhile, offshore gaming apps operated freely in India’s grey market.
The 2025 Act thus attempts to impose uniformity, protect minors, and bring moral and fiscal discipline to a rapidly expanding digital frontier. Its underlying philosophy resembles that of the Digital Personal Data Protection Act, encouraging responsible use of technology rather than an unregulated free-for-all.
Key Challenges and Gaps
(a) Clarity of Definitions
The Act bans all real-money games, ignoring the difference between skill-based and chance-based games. This could lead to legal challenges under Article 19(1)(g), which protects the right to carry on a trade or business. Games like rummy or fantasy cricket, which require real skill, arguably should not be banned outright.
(b) Weak Consumer and Child Protection
Although age verification and KYC are mandated, compliance at the user-end remains uncertain. India needs a Responsible Gaming Code covering:
- Spending limits and cooling-off periods;
- Self-exclusion options;
- Transparent disclosure of odds; and
- Algorithmic fairness audits.
These measures can help mitigate addiction and prevent exploitation of minors.
(c) Federal Conflicts
“Betting and gambling” fall within the State List under India’s Constitution, yet the 2025 Act seeks national uniformity. States like Tamil Nadu and Karnataka already have independent bans. Without harmonisation, legal disputes between state and central authorities could multiply. A cooperative federal framework allowing states to adopt central norms voluntarily could offer flexibility without fragmentation.
(d) Regulatory Transparency
The gaming regulator has a lot of power, like deciding which games are allowed and blocking websites. But it’s not clear who chooses its members or how people can challenge its decisions. Including court oversight, public input, and regular reporting would make the regulator fairer and more reliable.
What’s Next for India’s Online Gaming
India’s online gaming scene is at a turning point. Banning all money-based games might reduce risks, but it also slows innovation and limits opportunities. A better approach could be to license skill-based or low-risk games with proper KYC and audits, set up a Responsible Gaming Charter with input from government, industry, and civil society, and create rules for offshore platforms targeting Indian players. Player data should be protected under the Digital Personal Data Protection Act, 2023, and the law should be reviewed every few years to keep up with new tech like the metaverse, NFTs, and AI-powered games.
Conclusion
CyberPeace provided its detailed feedback to MeitY on 30th October 2025 and hopes the finalised rules, when released, will acknowledge the challenges discussed. The Promotion and Regulation of Online Gaming Act, 2025 marks an important turning point: it is India’s first serious attempt to bring order to a chaotic digital arena. The goal is to keep players safe, stop crime, and hold platforms accountable. But the tricky part is moving away from blanket bans. We need rules that let new ideas grow, respect people’s rights, and keep players safe. With a few smart changes and fair enforcement, India could have a gaming industry that is safe, responsible, and ready to compete globally.
References
- https://ssrana.in/articles/indias-online-gaming-bill-2025-regulation-prohibition-and-the-future-of-digital-play/
- https://www.google.com/amp/s/m.economictimes.com/news/economy/policy/new-online-gaming-law-takes-effect-money-games-banned-from-today/amp_articleshow/124255401.cms
- https://www.google.com/amp/s/timesofindia.indiatimes.com/technology/tech-news/government-proposes-to-make-violation-of-online-money-game-rules-non-bailable-draft-rules-ban-/amp_articleshow/124277740.cms
- https://www.egf.org.in/
- https://www.pib.gov.in/PressNoteDetails.aspx?NoteId=155075&ModuleId=3

The World Economic Forum reported that AI-generated misinformation and disinformation were the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news far faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles but disseminates false information about grave subjects such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s AI interface ‘CoPilot’ were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation is therefore needed.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfil its positive potential, because there is widespread, and justified, cynicism about it. The general public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries; the most prominent recent examples come from fintech, such as the UK Financial Conduct Authority’s sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that regulatory sandboxes facilitate firm financing and market entry and increase speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
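One concrete form such a sandbox evaluation might take is scoring a candidate detection tool against a small labelled benchmark before approving wider deployment. The detector and sample data below are illustrative stand-ins, not a real tool or dataset:

```python
def evaluate_detector(detector, labelled_samples):
    """Return (precision, recall) of `detector` over (text, is_misinfo) pairs."""
    tp = fp = fn = 0
    for text, is_misinfo in labelled_samples:
        flagged = detector(text)
        if flagged and is_misinfo:
            tp += 1          # correctly flagged misinformation
        elif flagged and not is_misinfo:
            fp += 1          # legitimate content wrongly flagged
        elif not flagged and is_misinfo:
            fn += 1          # misinformation missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Tiny illustrative benchmark and a deliberately naive stand-in detector.
samples = [
    ("Miracle cure doctors won't tell you!", True),
    ("Central bank keeps rates unchanged.", False),
    ("Secret plot behind the election!", True),
    ("Sale ends tonight!", False),
]
naive_detector = lambda text: "!" in text
precision, recall = evaluate_detector(naive_detector, samples)
```

Reporting precision and recall separately matters in this setting: a sandbox lets regulators see not only how much misinformation a tool catches, but how much legitimate speech it would wrongly suppress before the tool reaches the public.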
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism for testing solutions to regulate the misinformation that AI technologies create. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, which would ensure consistency in tackling AI-driven misinformation.
- Regulators can propose to offer incentives to companies that participate in sandboxes. This would encourage innovation in developing anti-misinformation tools, which could include tax breaks or grants.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, and explaining the role of regulatory sandboxes can help manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions