#FactCheck! Viral Image Claiming Virat Kohli and Rohit Sharma Visited Kedarnath Is AI-Generated
A photo featuring Indian cricketers Virat Kohli and Rohit Sharma is being widely shared on social media. In the image, both players are seen holding a Shivling, with the Kedarnath temple visible in the background. Users sharing the image claim that Virat Kohli and Rohit Sharma recently visited Kedarnath.
However, CyberPeace Foundation’s investigation found the claim to be false. Our verification established that the viral image is not real but has been created using Artificial Intelligence (AI) and is being circulated with a misleading narrative.
The Claim
An Instagram user shared the viral image on December 22, 2025, with the caption stating that Rohit Sharma and Virat Kohli are in Kedarnath. The post has since been widely reshared by other users, who assumed the image to be authentic. Link, archive link, screenshot:

Fact Check
On closely examining the viral image, the Desk noticed visual inconsistencies suggesting it may be AI-generated. To verify this, the image was scanned with the AI detection tool HIVE Moderation, which rated it as 99 per cent likely to be AI-generated.

Further verification was conducted using another AI detection tool, Sightengine. The analysis revealed that the image was 93 per cent likely to be AI-generated, reinforcing the findings from the previous tool.
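Checks like the two above can also be scripted. The sketch below queries an image-detection API and converts its score into the kind of call a fact-checking desk makes; the endpoint, parameter names, and response fields are assumptions modelled on Sightengine's public API, not a verified client, and the 0.90 threshold simply mirrors the 90-plus per cent scores reported above.

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint, modelled on Sightengine's public API documentation.
API_URL = "https://api.sightengine.com/1.0/check.json"

def ai_generated_score(image_url: str, api_user: str, api_secret: str) -> float:
    """Return the probability (0..1) that the image at `image_url` is AI-generated."""
    query = urllib.parse.urlencode({
        "url": image_url,
        "models": "genai",        # assumed model name for AI-image detection
        "api_user": api_user,
        "api_secret": api_secret,
    })
    with urllib.request.urlopen(f"{API_URL}?{query}", timeout=30) as resp:
        data = json.load(resp)
    return data["type"]["ai_generated"]  # assumed response field

def verdict(score: float, threshold: float = 0.90) -> str:
    """Map a detector score to a human-readable verdict."""
    return "likely AI-generated" if score >= threshold else "inconclusive"
```

Note that a single tool's score is only a signal: as in the fact-check above, a responsible workflow cross-checks at least two detectors before publishing a verdict.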

Conclusion
CyberPeace Foundation’s research confirms that the viral image claiming Virat Kohli and Rohit Sharma visited Kedarnath is fabricated. The image has been generated using AI technology and is being falsely shared on social media as a real photograph.
Related Blogs

Introduction
Twitter Inc.’s appeal against account-blocking orders issued by the Ministry of Electronics and Information Technology was dismissed by a single-judge bench of the Karnataka High Court. Justice Krishna Dixit also imposed a fine of Rs. 50 lakh on Twitter Inc., observing that the social media company had approached the court while defying government directives.
The government had questioned Twitter’s locus standi as a foreign corporation, arguing that it could not invoke Articles 19 and 21 in its favour. Additionally, the government claimed that because Twitter was designed only to serve as an intermediary, there was no “jural relationship” between Twitter and its users.
The Issue
The Ministry issued the directives under Section 69A of the Information Technology Act. Nevertheless, Twitter had argued in its appeal that the orders “fall foul of Section 69A both substantially and procedurally.” Twitter contended that Section 69A required account holders to be notified before their tweets and accounts were removed; however, the Ministry failed to provide these account holders with any notices.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter’s compliance officer requesting that they come before them and provide an explanation for why the Blocking Orders were not followed and why no action should be taken against them.
Twitter replied on June 9 that the content covered by the blocking orders it had not followed did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating that Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the direction on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew blocking orders on ten account-level URLs but issued an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders “under protest,” Twitter approached the High Court with a petition challenging them.
Legality
Government attorney Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that its users had asserted these rights. It also maintained that, even as a foreign company, it was entitled to certain rights under Article 14 of the Constitution, such as the right to equality. Twitter further argued that the orders did not state the reason for blocking each account, and that Section 69A’s blocking power should apply only to the offending URL rather than the entire account: blocking an offending tweet affects only information already created, whereas blocking the whole account prevents the creation of future information.
Conclusion
The evolution of cyberspace has been shaped by big tech companies like Facebook, Google, Twitter, Amazon and many more. These companies have been instrumental in leading the spectrum of emerging technologies and creating a blanket of ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and the new bills and policies are empowering Indian cyberspace. Non-compliance will be taken very seriously, as legalised under the Intermediary Guidelines 2021 and 2022 issued by MeitY. Referring to Section 79 of the Information Technology Act, which provides an exemption from liability for intermediaries in some instances, the court said, “Intermediary is bound to obey the orders which the designate authority/agency which the government fixes from time to time.”

Introduction: Why These Amendments Have Been Proposed
The suggested changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic information and deepfakes. These reforms stem from the pressing need to govern risks within the digital ecosystem, rather than being routine reform.
The Emergence of the Digital Menace
Generative AI tools have made it possible to produce highly realistic images, videos, audio, and text in recent years. Such synthetic media have been abused to portray people in situations they were never in or making statements they never made. The generative AI market is projected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400 billion by 2031. Tight regulatory controls are therefore necessary to curb the prevalence of such harm in the Indian digital world.
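As a quick arithmetic check on that projection, the 2031 figure and the CAGR together imply a 2025 base of roughly US$59 billion. This back-of-the-envelope figure is my computation from the two cited numbers, not a value from the source:

```python
def implied_start(end_value: float, cagr: float, years: int) -> float:
    """Back out the starting value implied by an end value and a
    compound annual growth rate: start = end / (1 + cagr)^years."""
    return end_value / (1 + cagr) ** years

# US$400 bn in 2031, growing at 37.57% per year over the six years from 2025:
start_2025 = implied_start(400.0, 0.3757, 6)
print(round(start_2025, 1))  # roughly 59 (US$ bn)
```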
The Gap in Law and Institution
None of the IT Rules, 2021, clearly addressed synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation and violation of privacy, intermediaries were under no explicit obligation with respect to synthetic media. This left a loophole in enforcement, particularly since AI-generated content could evade the old system of moderation. These amendments bring India closer to international standards, including the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting those requirements to India’s constitutional and digital ecosystem needs.
II. Explanation of the Amendments
The 2025 amendments introduce five distinct changes to the current IT Rules framework, addressing various areas of synthetic media regulation.
A. Definitional Clarification: Introducing “Synthetically Generated Information”
Rule 2(1)(wa) Amendment:
The amendments provide an all-inclusive definition of “synthetically generated information”: information that is created, generated, modified or altered using a computer resource, in a manner that it may reasonably be perceived to be genuine. This definition is intentionally broad: it is not limited to deepfakes in the strict sense but covers any synthetic media that has undergone algorithmic manipulation so as to have a semblance of authenticity.
Expansion of Legal Scope:
Rule 2(1A) also makes it clear that any reference to information in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), should be understood to include synthetically generated information. This is a pivotal interpretative safeguard: intermediaries cannot argue that synthetic versions of illegal material fall outside the regulation merely because they are algorithmic creations rather than depictions of what actually occurred.
B. Safe Harbour Protection and Content Removal Requirements
Amendment, rule 3(1)(b)- Safe Harbour Clarification:
The amendments add a proviso to Rule 3(1)(b) clarifying that the removal or disabling of access to synthetically generated information (or any information falling within the specified categories), done by an intermediary in good faith as part of reasonable efforts or upon receipt of a complaint, shall not be considered a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is especially relevant because it shields intermediaries from liability when they act on synthetic content before a court order or government notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable the Creation of Synthetic Content
The amendments establish a new due diligence framework in Rule 3(3) for intermediaries that offer tools to generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must:
- be visibly displayed, or made audible, in a prominent manner on or within the synthetically generated information;
- cover at least 10% of the surface of the visual display or, in the case of audio content, play during the initial 10% of its duration; and
- make it immediately identifiable that the information is synthetically generated and has been created, generated, modified, or altered using the computer resource of the intermediary.
- The intermediary must not enable the modification, suppression or removal of such label, permanent unique metadata or identifier, by whatever name called.
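As a rough illustration of how a platform might operationalise the 10% requirement, the sketch below computes the minimum size of a full-width disclosure banner for an image and the disclosure duration for an audio clip. The helper names are hypothetical and the full-width-banner layout is just one compliant design, not something the rules prescribe:

```python
import math

def label_banner_height(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Minimum height in pixels of a full-width banner that covers at least
    `coverage` of the image area. For a full-width banner, its share of the
    image area is simply its height divided by the image height."""
    return math.ceil(height_px * coverage)

def audio_label_seconds(total_seconds: float, coverage: float = 0.10) -> float:
    """Length of the audible disclosure at the start of an audio clip."""
    return total_seconds * coverage

# For a 1920x1080 image the banner must be at least 108 px tall,
# and a 60-second clip needs a disclosure over its first 6 seconds.
```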
D. Important Social Media Intermediaries- Pre-Publication Checking Responsibilities
The amendments introduce, under Rule 4(1A), a three-step verification mechanism that Significant Social Media Intermediaries (SSMIs) must follow before content is displayed, uploaded, or published on their computer resources.
Step 1 - User Declaration: Users must be required to declare whether the material they are posting is synthetically created. This places the initial burden on users.
Step 2 - Technical Verification: To check the accuracy of the user’s declaration, SSMIs must deploy reasonable technical measures, such as automated tools or other mechanisms. This duty is contextual, based on the nature, format and source of the content; it recognises that not every type of content can be verified by the same standards, without allowing intermediaries to escape the obligation altogether.
Step 3 - Prominent Labelling: Where the synthetic origin is confirmed, whether by user declaration or technical verification, SSMIs must display a notice or label prominently so that users see it before publication.
The amendments strengthen accountability by providing that intermediaries will be deemed to have failed in their due diligence where it is established that they knowingly permitted, promoted or failed to act on synthetically generated information in contravention of these requirements. This introduces a knowledge element: liability attaches to knowing failures, so platforms cannot hide deliberate non-compliance behind claims of accidental error.
An explanation clause makes clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations, and must not allow synthetic content to be published without adequate declaration or labelling. This removes confusion about the intermediaries’ role with respect to declarations.
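The three-step flow described above can be sketched as a simple pre-publication gate. All names here are hypothetical, and the detector is a stub standing in for whatever “reasonable and proportionate technical measures” a platform actually deploys:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # Step 1: the user's own declaration

def detector_score(upload: Upload) -> float:
    """Step 2 stub: a real SSMI would run media-forensics or AI-detection
    models appropriate to the content's nature, format and source."""
    return 0.0

def prepublication_gate(upload: Upload, threshold: float = 0.90) -> Optional[str]:
    """Return the label to display before publication (Step 3), or None
    if the content is not identified as synthetic by either step."""
    is_synthetic = (upload.user_declared_synthetic
                    or detector_score(upload) >= threshold)
    return "Synthetically generated information" if is_synthetic else None
```

The key design point mirrored here is that either signal, declaration or detection, suffices to trigger labelling, so an undeclared upload can still be caught by the technical check.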
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability.
The amendments commendably balance two extreme regulatory postures, neither prohibiting synthetic media outright nor allowing it to run unchecked. They recognise the legitimate use of synthetic media in entertainment, education, research and artistic expression by adopting a transparency-and-traceability mandate that preserves innovation while ensuring accountability.
- Explicit Intermediary Liability and a Knowledge-Based Deeming Rule
Rule 4(1A) introduces a highly significant deeming rule: where an intermediary permits or fails to act on synthetic content knowing that the rules are being violated, it is deemed to have failed to comply with the due diligence provisions. This closes the loophole of wilfully blind supervision, under which intermediaries could plead ignorance. The scienter standard encourages material investment in detection tools and moderation mechanisms, while still protecting platforms with sound systems even when those tools occasionally fail to capture violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of “synthetically generated information” and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the confusion of the previous regulatory framework. Instead of forcing readers through conflicting case law or regulatory direction, the amendments set out specific definitional limits. The purposefully broad formulation (artificially or algorithmically created, generated, modified or altered) ensures that the framework cannot be avoided through semantic games over what counts as real synthetic content versus a slight algorithmic alteration.
- Liability Protection that Encourages Proactive Moderation
The safe harbour clarification in the Rule 3(1)(b) amendment clearly protects intermediaries that voluntarily remove synthetic content without waiting for a court order or government notification. This is an important incentive scheme that prompts platforms to implement sound self-regulation measures. Without such protection, platforms might rationally adopt a passive compliance posture, deleting content only under pressure from an external authority; with it, they can be far more effective in keeping users safe from dangerous synthetic media.
IV. Conclusion
The proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules offer a structured, transparent, and accountable approach to curbing the rising problems of synthetic media and deepfakes. They address long-standing regulatory and interpretative gaps in determining what counts as synthetically generated information, the scope of intermediary liability, and the mandatory labelling and metadata requirements. The safe-harbour protection encourages proactive moderation, and the scienter-based liability rule does not permit intermediaries to escape liability when they are aware of non-compliance yet tolerate it. The pre-publication verification requirement for Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, make the process more transparent through proper definitions, promote responsible conduct by platforms, and set new standards for India in the sphere of synthetic media regulation. Together, they enhance the trustworthiness, user protection, and transparency of India’s digital ecosystem.
V. References
1. https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide

Introduction
Netizens across the globe have been enjoying the fruits of technological advancements in the digital century. Our personal and professional lives have been deeply impacted by new technologies. The previous year saw an exponential rise in blockchain integration and the applications of Web 3.0. There is no denying that the Covid-19 pandemic caused a rapid rise in technology and internet penetration all across the globe, bringing the world closer with respect to connectivity and the exchange of ideas and knowledge. Tech advancements have definitely made our lives easier, but they have also opened the doors to various vulnerabilities and new potential threats. As cyberspace expands, so do the vulnerabilities associated with it, and it is critical that we take note of such issues and create safeguards so that incidents are prevented before they occur. We need to create a sustainable and secure cyberspace for future generations.
Metaverse in 2023
The metaverse was introduced by Facebook (now Meta) in 2021 as a peek into the future of cyberspace. Since then, tech developers have been working towards arming the metaverse with extraordinary innovations and applications. Netizens came across news of someone buying a house or a plot in the metaverse, someone buying a car in the metaverse, and so on; such news was taken as evidence of the netizens’ transition towards the new digital age, as seen in sci-fi movies. But today this type of news has become history, and the metaverse is expanding faster than ever. Let us look at the latest developments and trends in the metaverse:
- Avatar creation - Avatar creation in the metaverse will be a pivotal move, as avatars will represent the user: essentially, each avatar will be the digital version of the user and will mirror the user’s personal and physical traits to maintain realism in the metaverse.
- Architecture firms - The metaverse has its own set of architects who work towards creating your dream home or property in the metaverse; these heavy code-based services are now being sold just as if they were in physical space.
- Mining - The metaverse already has companies that mine gold, silver, petroleum, and other resources for avatars in the metaverse; for instance, if someone has bought a car in the metaverse, it will still need fuel to run.
- Security firms - These firms are the first line of defenders in the metaverse as they provide tech-based solutions and protocols to secure one’s avatar and belongings in the metaverse.
- Metaverse Police - Interpol, along with its global partner organisations, has created the metaverse police, who will work towards creating a safe cyber ecosystem by maintaining compliance with digital laws and ethics.
Advancements beyond metaverse in 2023
Technology continues to be a critical force for change in the world. Technology breakthroughs give enterprises more possibilities to boost their productivity and invent new offerings. And while it remains difficult to forecast how technology trends will play out, business leaders can plan ahead better by watching the development of new technologies, anticipating how companies could utilise them, and understanding the factors that impact innovation and adoption.
- Applied observability
Applied observability advances the practice of pattern recognition: foreseeing and identifying abnormalities, and offering solutions, requires the capacity to delve deeply into complicated systems and streams of data. Data fuels this aspect of future tech growth.
- Digital Immune System
To ensure that all major systems operate round-the-clock to deliver uninterrupted services, Digital Immune System will combine observability, AI-augmented testing, chaos engineering, site reliability engineering (SRE), and software supply chain security. This will take the efficiency of the systems to a new level.
- Super apps
These represent the upcoming shift in application usage, design, and development, where consumers will utilise a single app to manage most systems in an enterprise ecosystem. Analysts predict that over 50% of the world’s population will use super apps on a daily basis to fulfil their personal and professional needs.
- AR/VR and BlockChain technology
Combining AR/VR, AI/ML, IoT, and blockchain will create better interconnected, safe, and immersive virtual environments where people and businesses may recreate real-life scenarios, opening a new vertical of innovation built on the key technologies of Web 3.0.
- AAI
The next level of AI, Advanced Artificial Intelligence (AAI), will revolutionise machine learning, pattern recognition, and computing. It aims to fully automate processes without requiring any manual input, greatly reducing the scope for human error and bad-actor influence.
- Corporate Metaverse
Aside from its power as a marketing tool, the metaverse promises to provide platforms, tools, and entire virtual worlds where business can be done remotely, efficiently, and intelligently. We can expect to see the metaverse concept merge with the idea of the “digital twin” – virtual simulations of real-world products, processes, or operations that can be used to test and prototype new ideas in the safe environment of the digital domain. From wind farms to Formula 1 cars, designers are recreating physical objects inside virtual worlds where their efficiency can be stress-tested under any conceivable condition without the resource costs that would be incurred by testing them in the physical world.
Conclusion
In 2023, we will see more advanced use cases for technology such as motion capture, which will mean that as well as looking and sounding more like us, our avatars will adopt our own unique gestures and body language. We may even start to see further developments in the field of autonomous avatars – meaning they won’t be under our direct control but will be enabled by AI to act as our representatives in the digital world while we ourselves get on with other, completely unrelated tasks. As we go deeper into cyberspace, we need to remember basic safety practices and inculcate them with respect to cyberspace, and work towards creating strong policies and legislation to safeguard the digital rights and duties of netizens to create a wholesome and interdependent cyber ecosystem.