#FactCheck - Viral image claiming to show injury marks on MP Kangana Ranaut after the alleged slap is fake & misleading
Executive Summary:
A viral image circulating on social media depicts injuries on the face of MP (Member of Parliament, Lok Sabha) Kangana Ranaut, who was alleged to have been slapped by a CISF officer at Chandigarh airport. A reverse image search traced the photo back to 2006: it was part of an anti-mosquito spray commercial and does not feature Kangana Ranaut. The findings contradict the claim that the photo is evidence of injuries from the incident involving the MP. It is always important to verify the truthfulness of visual content before sharing it, to prevent the spread of misinformation.

Claims:
Images circulating on social media platforms claim that the injuries on MP Kangana Ranaut's face were caused by an assault by a female CISF officer at Chandigarh airport. The claim suggests that the photos are evidence of the physical altercation and the resulting injuries suffered by the MP.



Fact Check:
When we received the posts, we reverse-searched the image and found another photo that closely resembled the viral one. The matching earrings in the viral image and the newly found image allowed us to verify the connection.

The reverse image search revealed that the photo was originally uploaded in 2006 and is unrelated to MP Kangana Ranaut; it depicts a model in an advertisement for an anti-mosquito spray campaign.
A side-by-side comparison of the earrings in the two photos validates this finding.

Hence, we can confirm that the viral image of injury marks on MP Kangana Ranaut is fake and misleading: it was cropped from the original 2006 photo to misrepresent the context.
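As an aside, the manual comparison described above (matching details such as earrings across two photos) can be approximated programmatically with perceptual hashing, a common technique in image fact-checking. The sketch below is illustrative only and not part of the original fact-check workflow; it implements a minimal average-hash comparison over small synthetic grayscale pixel grids, using no external image library:

```python
def average_hash(pixels):
    """Simple average hash: one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count of differing bits; small distance suggests the same photo."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 4x4 grayscale "images" (flattened rows). The second is a
# brightness-shifted copy of the first; the third is unrelated.
original  = [10, 200, 30, 220, 15, 210, 25, 230,
             12, 205, 28, 225, 11, 199, 31, 218]
re_shared = [p + 5 for p in original]   # re-exposed / re-saved copy
unrelated = [200, 10, 220, 30, 210, 15, 230, 25,
             205, 12, 225, 28, 199, 11, 218, 31]

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(re_shared)))  # small: same photo
print(hamming_distance(h_orig, average_hash(unrelated)))  # large: different photo
```

Real fact-checking tools use the same idea on full images (e.g. via perceptual-hash libraries over decoded image data); a uniform brightness shift leaves the hash unchanged because every pixel moves relative to the mean by the same amount.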
Conclusion:
Therefore, the viral photos on social media claimed to show injuries on MP Kangana Ranaut's face after she was allegedly assaulted by a CISF officer at Chandigarh airport are fake. Detailed analysis established that the pictures have no connection with Ranaut: the photo comes from a 2006 anti-mosquito spray advertisement. The allegations presenting these images as Ranaut's injuries are therefore fake and misleading.
- Claim: Photos circulating on social media claim to show injuries on MP Kangana Ranaut's face following an alleged assault by a female CISF officer at Chandigarh airport.
- Claimed on: X (formerly known as Twitter), Threads, Facebook
- Fact Check: Fake & Misleading
Related Blogs

Overview of the Advisory
On 18 November 2025, the Ministry of Information and Broadcasting (I&B) published an advisory addressed to all private satellite television channels in India. It is a critical institutional intervention concerning the broadcast of sensitive content related to recent security incidents, in particular the blast at the Red Fort on 10 November 2025. The advisory came after the Ministry observed that some news channels had been broadcasting content about persons allegedly involved in the Red Fort blast, including material justifying their acts of violence as well as information and video on explosive material. Such broadcasting in a critical situation may inadvertently encourage or incite violence, disrupt public order, and pose risks to national security.
Key Instructions under the Advisory
The advisory directs TV channels to ensure strict compliance with the Programme and Advertising Code under the Cable Television Networks (Regulation) Act, 1995. Channels are advised to exercise the highest possible level of discretion and sensitivity when reporting on alleged perpetrators of violence, especially on matters involving the justification of acts of violence or instructional material on making explosives. The fundamental requirement is strict adherence to the Programme and Advertising Code as stipulated in the Cable Television Network Rules. In particular, broadcasters should not carry programming that:
- Contains anything obscene, defamatory, deliberately false, or suggestive innuendos and half-truths.
- Is likely to encourage or incite violence, contains anything against the maintenance of law and order, or promotes an anti-national attitude.
- Contains anything affecting the integrity of the Nation.
- Could aid, abet or promote unlawful activities.
Responsible Reporting Framework
The advisory does not constitute outright censorship but rather establishes a self-regulatory framework that relies on the discretion and sensitivity of the TV channels, focused on distinguishing legitimate news broadcasting from content that crosses the threshold from information dissemination into incitement.
Why This Advisory is Important in a Digital Age
In modern media systems, the line between traditional broadcast journalism and digital virality has eroded. Television content is no longer confined to scheduled programmes or cable distribution channels. A single news piece, especially one of a dramatic or contentious nature, can be clipped, edited and repackaged on social media networks within minutes of airing, often without the original context, editorial discretion or timing indicators.
This gives sensitive content a multiplier effect. A short news item featuring a suspect justifying violence, or showing explosive material, can be viewed by millions on YouTube, WhatsApp, Twitter/X and Facebook, spreading organically and being amplified by algorithms. Studies have shown that misinformation and sensational reporting circulate much faster than factual corrections, a pattern observed during recent conflicts and crises in India and other parts of the world.
Vulnerabilities of Information Ecosystems
The advisory arrives in an information environment characterised by:
- Rapid Viral Mechanism: Content spreads faster than the process of verification.
- Algorithmic-driven amplification: Platform mechanism boosts emotionally charged content.
- Coordinated amplification networks: Organised groups work to make posts and videos go viral in order to set a narrative for the general public.
- Deepfake and synthetic media risks: Original broadcasts can be manipulated and reposted with false attribution.
Interconnection with Cybersecurity and National Security
Unverified or sensationalised reporting of security incidents creates specific vulnerabilities:
- Trust Erosion: Trust breaks down when audiences see broadcasters air unverified claims or emotional accounts as fact; this extends to security agencies, law enforcement and government institutions themselves. Distrust of official information creates gaps that are filled by rumours, conspiracy theories and hostile narratives.
- Cognitive Fragmentation: Misinformation creates multiple versions of the truth among the public; the narratives citizens receive vary with the media sources they consume. This fragmentation complicates organising a collective societal response to an actual security threat, because populations may mobilise around misguided stories rather than accurate information.
- Radicalisation Pipeline: Individuals seeking ideological justification for violent action may encounter broadcast material that, carefully distorted or stripped of context, appears to present terrorism as a legitimate political or religious position.
How Social Instability Is Exploited in Cyber Operations and Influence Campaigns
Misinformation creates exploitable vulnerability in three phases:
- First, conflicting unverified accounts fragment the information environment: populations are presented with contradictory versions of events by different media sources.
- Second, institutional trust in media and security agencies is shaken by exposure to false information that is only later corrected, creating an information vacuum.
- Third, in such a distrustful and confused setting, the population becomes susceptible to organised manipulation by malicious actors.
Sensationalised broadcasting hands adversaries content assets, narrative frameworks, and information gaps that they can use to drive destabilisation campaigns. Responsible broadcasting directly counters these mechanisms of exploitation.
Media Literacy and Audience Responsibility
Structural Information Vulnerabilities
A major part of the Indian population is structurally disadvantaged in information access:
- Language barriers: Fact-checking infrastructure remains heavily centralised in English and Hindi, while vernacular-language misinformation goes viral in Tamil, Telugu, Marathi, Punjabi, and other languages.
- Digital literacy gaps: An estimated 40 million people in India have received digital literacy training, yet more than 900 million Indians access digital content with widely varying ability to evaluate it critically.
- Rural-urban divide: Rural citizens and less affluent people face greater difficulty accessing verification tools and media literacy resources.
- Algorithmic capture: Social media platforms optimise for engagement over accuracy, actively promoting emotionally inflammatory or divisive content to users based on their engagement history.
Conclusion
The Ministry of Information and Broadcasting's advisory acknowledges that media accountability is part of national security in the information era. It states the principles of responsible reporting without interfering with editorial autonomy, a balance that all stakeholders should uphold. Implementing the advisory requires concerted action by broadcasters, platforms, civil society, government and educational institutions; information integrity cannot be secured by a single player. Without media literacy resources, citizens cannot responsibly evaluate information, and without open and fast communication with media stakeholders, government agencies cannot combat misinformation.
The way forward is collaborative governance: institutional arrangements in which media self-regulation, technological safeguards, user empowerment, and policy frameworks work together rather than in competition. Successful implementation will determine whether India can preserve open and free media while maintaining the information integrity needed for national security, democratic governance and social stability in an era of high-speed information flows, algorithmic amplification, and information warfare.
References
https://mib.gov.in/sites/default/files/2025-11/advisory-18.11.2025.pdf

Introduction
The Ministry of Electronics and Information Technology (MeitY) issued an advisory on 1 March 2024, urging platforms to prevent bias, discrimination, and threats to electoral integrity arising from the use of AI, generative AI, LLMs, or other algorithms. The advisory requires that AI models deemed unreliable or under-tested in India obtain explicit government permission before deployment. While leveraging Artificial Intelligence models, generative AI, software, or algorithms in their computer resources, intermediaries and platforms need to ensure that they prevent bias, discrimination, and threats to electoral integrity. Intermediaries are required to follow the due diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as updated on 06.04.2023, and this advisory urges them to abide by those rules and ensure compliance.
Key Highlights of the Advisories
- Intermediaries and platforms must ensure that their use of Artificial Intelligence models/LLMs/Generative AI, software, or algorithms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share unlawful content, as per Rule 3(1)(b) of the IT Rules.
- The government urges intermediaries and platforms to prevent bias or discrimination in their use of Artificial Intelligence models, LLMs, Generative AI, software, or algorithms, ensuring they do not threaten the integrity of the electoral process.
- The government requires explicit permission before under-tested or unreliable AI models, LLMs, or algorithms are deployed on the Indian internet. Such models must be deployed with proper labelling of their potential fallibility or unreliability, and users can be informed through a consent popup mechanism.
- The advisory specifies that all users should be clearly informed, through terms of service and user agreements, of the consequences of dealing with unlawful information on platforms: disabling access, removal of non-compliant information, suspension or termination of the user's account, and punishment under applicable law.
- The advisory also sets out measures to combat deepfakes and misinformation. It calls for identifying synthetically created content across various formats, advising platforms to employ labels, unique identifiers, or metadata to ensure transparency. Furthermore, it mandates the disclosure of software details and the tracing of the first originator of such synthetically created content.
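The labelling and unique-identifier idea can be illustrated with a small sketch. The field names and scheme below are hypothetical, not taken from the advisory or any mandated standard; the sketch simply attaches a SHA-256 content identifier, a synthetic-content label, and the generating software's name as metadata, using only the Python standard library:

```python
import hashlib
import json

def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Attach a hypothetical provenance record to a piece of content:
    a SHA-256 unique identifier plus a plain 'AI-generated' label.
    Field names here are illustrative, not mandated by the advisory."""
    return {
        # Unique identifier derived from the content itself
        "content_id": hashlib.sha256(content).hexdigest(),
        # Transparency label marking the content as synthetic
        "label": "AI-generated",
        # Software details, which the advisory asks platforms to disclose
        "generator": generator,
    }

record = label_synthetic_content(b"example synthetic image bytes",
                                 generator="demo-model-v1")
print(json.dumps(record, indent=2))
```

Because the identifier is derived from the content bytes, any platform can recompute it to check that a label still refers to the same, unmodified piece of content.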
Rajeev Chandrasekhar, Union Minister of State for IT, specified that
“Advisory is aimed at the Significant platforms, and permission seeking from Meity is only for large platforms and will not apply to startups. Advisory is aimed at untested AI platforms from deploying on the Indian Internet. Process of seeking permission , labelling & consent based disclosure to user about untested platforms is insurance policy to platforms who can otherwise be sued by consumers. Safety & Trust of India's Internet is a shared and common goal for Govt, users and Platforms.”
Conclusion
MeitY's advisory sets the stage for a more regulated AI landscape. The Indian government requires explicit permission for the deployment of under-tested or unreliable Artificial Intelligence models on the Indian Internet. Alongside intermediaries, the advisory also applies to digital platforms that incorporate AI elements. The advisory is aimed at significant platforms and will not apply to startups. This move safeguards users and fosters innovation by promoting responsible AI practices, paving the way for a more secure and inclusive digital environment.
References
- https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf
- https://economictimes.indiatimes.com/tech/technology/govts-ai-advisory-will-not-apply-to-startups-mos-it-rajeev-chandrasekhar/articleshow/108197797.cms?from=mdr
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf

Pretext
On 20th October 2022, the Competition Commission of India (CCI) imposed a penalty of Rs. 1,337.76 crores on Google for abusing its dominant position in multiple markets in the Android Mobile device ecosystem, apart from issuing cease and desist orders. The CCI also directed Google to modify its conduct within a defined timeline. Smart mobile devices need an operating system (OS) to run applications (apps) and programs. Android is one such mobile operating system that Google acquired in 2005. In the instant matter, the CCI examined various practices of Google w.r.t. licensing of this Android mobile operating system and various proprietary mobile applications of Google (e.g., Play Store, Google Search, Google Chrome, YouTube, etc.).
The Issue
Google was penalised for misusing its dominant position in the tech market. Google argued that it faced competitive constraints from Apple. To understand the extent of competition between Google's Android ecosystem and Apple's iOS ecosystem, the CCI noted the differences in the two business models, which affect the underlying incentives behind business decisions. Apple's business is primarily based on a vertically integrated smart device ecosystem focused on the sale of high-end devices with state-of-the-art software components. In contrast, Google's business was found to be driven by the ultimate intent of increasing the number of users on its platforms so that they interact with its revenue-earning service, i.e., online search, which directly affects Google's sale of online advertising services. The CCI found that Google had created a dominant position among Android phone manufacturers by requiring them to pre-install a set of Google apps, increasing users' dependency on Google services. Google replied that Apple follows the same practice, to which the Commission responded that Apple pre-installs Apple-owned apps only on Apple devices, whereas Google had imposed a pseudo-mandate on Android manufacturers to pre-install Google apps, which can disrupt market equilibrium and violates fair market practices. The CCI imposed a penalty of Rs. 1,337.76 crore for abuse of dominance in multiple markets in India, delineating the following five relevant markets in the present matter:

- The market for licensable OS for smart mobile devices in India
- The market for app store for Android smart mobile OS in India
- The market for general web search services in India
- The market for non-OS specific mobile web browsers in India
- The market for online video hosting platforms (OVHP) in India.
Supreme Court's Opinion
In October 2022, the Competition Commission of India (CCI) ruled that Google, owned by Alphabet Inc., had exploited its dominant position in Android and directed it to remove restrictions on device makers, including those related to the pre-installation of apps and the exclusivity of its search service. Google lost its challenge in the Supreme Court to block the directives: the court refused to stay the penalty and gave Google seven days to comply. The Supreme Court also said a lower tribunal, where Google first challenged the Android directives, could continue to hear the company's appeal and must rule by March 31.
Counterpoint Research estimates that about 97% of the 600 million smartphones in India run on Android, with Apple holding just a 3% share. Hoping to block the implementation of the CCI directives, Google challenged the order in the Supreme Court, warning that it could stall the growth of the Android ecosystem, and said it would be forced to alter arrangements with more than 1,100 device manufacturers and thousands of app developers if the directives took effect. Google has been concerned because India's steps are seen as more sweeping than those imposed in the European Commission's 2018 ruling, in which it was fined for what the Commission called unlawful restrictions on Android mobile device makers. Google is still challenging the record $4.3 billion fine in that case. In Europe, Google later made changes, including letting Android device users pick their default search engine, and said device makers would be able to license the Google mobile application suite separately from the Google Search App or the Chrome browser.
Conclusion
As the world moves deeper into cyberspace, big tech companies gain more control over industries and markets, but this must not turn into anarchy in global markets. Tech giants need to understand that compliance is a paramount duty for all companies, and that the law of the land will be enforced no matter what. India earlier lacked policies and legislation to govern cyberspace, but with the government's recent proactive stance, several new instruments have been introduced, among them the Intermediary Rules 2021, which lay down the obligations and duties of companies setting up as intermediaries in the country. Such rules, coupled with crucial judgments against tech giants, will act as a test and a deterrent for other tech companies that try to flout the rules and avoid compliance.