#FactCheck - A manipulated image showing Indian cricketer Virat Kohli allegedly watching Rahul Gandhi's media briefing on his mobile phone has been widely shared online.
Executive Summary:
A manipulated photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone with no trace of Gandhi. The incident is claimed to have taken place on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image and made it go viral.

Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media handles shared it to suggest Kohli's interest in politics, and the photo appeared on various platforms, including some online news websites.




Fact Check:
After coming across the viral image posted by social media users, we ran a reverse image search on it, which led us to the original image posted by an Instagram account named virat__.forever_ on March 21.

The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”

Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image was published after the original image, which was posted on March 21.

Therefore, it is apparent that the viral image was created by altering the original image shared on March 21.
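Beyond reverse image search, fact-checkers often compare a suspect image against the original using a perceptual hash. The sketch below is purely illustrative: it runs average-hashing ("aHash") on hypothetical 4x4 grayscale grids standing in for decoded thumbnails (a real workflow would decode actual images with a library such as Pillow); a small, localised hash difference is consistent with a targeted edit like the altered phone screen here.

```python
# Illustrative average-hash ("aHash") comparison on hypothetical pixel data.
# This is a conceptual sketch, not a production manipulation detector.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale thumbnails of the original and viral images.
original = [[200, 198, 60, 58], [199, 197, 61, 59],
            [30, 32, 120, 118], [31, 33, 121, 119]]
tampered = [[200, 198, 60, 58], [199, 197, 61, 59],
            [30, 32, 10, 12], [31, 33, 11, 9]]  # "phone screen" region edited

distance = hamming_distance(average_hash(original), average_hash(tampered))
print(distance)  # a small non-zero distance flags a localised edit
```

Identical images hash to distance 0; a few flipped bits point at a confined region being changed, while a large distance suggests a different image altogether.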
Conclusion:
To sum up, the viral image is an altered version of the original. The original caption says cricketer Virat Kohli was relaxing before a Jio advertisement shoot, not watching any politician's interview. This shows that in the age of social media, where false information can spread quickly, critical thinking and fact-checking are more important than ever. It is crucial to verify content before sharing it, to avoid spreading false stories.

Overview of the Advisory
On 18 November 2025, the Ministry of Information and Broadcasting (I&B) published an advisory addressed to all private satellite television channels in India. The advisory is a critical institutional intervention concerning the broadcast of sensitive content on recent security incidents, specifically the blast at the Red Fort on November 10, 2025. It came after the Ministry noticed that some news channels had been broadcasting content related to persons allegedly involved in the Red Fort blast, justifications of their acts of violence, and information or video on explosive material. Such broadcasting at a critical juncture may inadvertently encourage or incite violence, disrupt public order, and pose risks to national security.
Key Instructions under the Advisory
The advisory directs TV channels to ensure strict compliance with the Programme and Advertising Code under the Cable Television Networks (Regulation) Act, 1995. Channels are advised to exercise the highest level of discretion and sensitivity when reporting on issues involving alleged perpetrators of violence, especially matters involving the justification of acts of violence or instructional material on making explosives. The fundamental focus is strict adherence to the Programme and Advertising Code as stipulated in the Cable Television Network Rules. In particular, broadcasters should not carry programming that:
- Contains anything obscene, defamatory, deliberately false, or suggestive innuendos and half-truths.
- Is likely to encourage or incite violence, contains anything against the maintenance of law and order, or promotes an anti-national attitude.
- Contains anything that affects the integrity of the Nation.
- Could aid, abet or promote unlawful activities.
Responsible Reporting Framework
The advisory does not constitute outright censorship. Instead, it establishes a self-regulatory framework that depends on the discretion and sensitivity of TV channels, focused on differentiating legitimate news from content that crosses the threshold from information dissemination to incitement.
Why This Advisory is Important in a Digital Age
In modern media systems, the line between traditional broadcast journalism and digital virality has eroded. Television content is no longer confined to scheduled programmes or cable distribution channels. A single news piece, especially one of a dramatic or contentious nature, can be clipped, edited and repackaged on social media networks within minutes of airing, often stripped of context, editorial discretion and timing indicators.
This gives sensitive content a multiplier effect. A short news item about a suspect justifying violence, or containing bomb-making details, can be viewed by millions on YouTube, WhatsApp, Twitter/X and Facebook, spreading organically and being amplified by algorithms. Studies have shown that misinformation and sensational reporting circulate much faster than factual corrections, a pattern observed during recent conflicts and crises in India and elsewhere.
Vulnerabilities of Information Ecosystems
The advisory was created in an information environment characterised by:
- Rapid Viral Mechanism: Content spreads faster than the process of verification.
- Algorithmic-driven amplification: Platform mechanism boosts emotionally charged content.
- Coordinated amplification networks: Organised groups work to make posts and videos go viral and to set a narrative for the general public.
- Deepfake and synthetic media risks: Original broadcasts can be manipulated and reposted with false attribution.
Interconnection with Cybersecurity and National Security
Unverified or sensationalised reporting of security incidents creates specific vulnerabilities:
- Trust Erosion: Trust is broken when the public sees broadcasters airing unverified claims or emotional accounts as facts. This erosion extends to security agencies, law enforcement and government institutions themselves. Distrust of official information creates information gaps, which are then filled by rumours, conspiracy theories, and hostile narratives.
- Cognitive Fragmentation: Misinformation creates multiple versions of the truth among the public. The narratives citizens receive vary with the media sources they watch or read. This fragmentation complicates organising society's collective response to an actual security threat, because populations can be mobilised around misguided stories rather than correct information.
- Radicalisation Pipeline: People seeking ideological justifications for violent action may be exposed to media-derived material that has been carefully distorted to present terrorism as a valid political or religious stand.
How Social Instability Is Exploited in Cyber Operations and Influence Campaigns
Misinformation creates exploitable vulnerability in three phases:
- First, conflicting unverified accounts fragment the information environment: populations are presented with conflicting versions of events by various media sources.
- Second, exposure to false information that is later corrected shakes institutional trust in media and security agencies, resulting in an information vacuum.
- Third, in such a distrustful and confused setting, the population becomes susceptible to organised manipulation by malicious actors.
Sensationalised broadcasting hands adversaries content assets, narrative frameworks, and information gaps that they can use to drive destabilisation campaigns. Responsible broadcasting directly counters these mechanisms of exploitation.
Media Literacy and Audience Responsibility
Structural Information Vulnerabilities-
A major part of the Indian population is structurally disadvantaged in information access:
- Language barriers: Fact-checking infrastructure remains highly centralised in English and Hindi, while vernacular-language misinformation goes viral in Tamil, Telugu, Marathi, Punjabi, and other languages.
- Digital literacy gaps: An estimated 40 million people in India have received digital literacy training, yet more than 900 million Indians access digital content with varying ability to evaluate it critically.
- Rural-urban divides: Rural citizens and less affluent people have greater difficulty accessing verification tools and media literacy resources.
- Algorithmic capture: Social media platforms maximise engagement over accuracy, actively promoting emotionally inflammatory or divisive content to users based on their engagement history.
Conclusion
The Ministry of Information and Broadcasting's advisory is an acknowledgment that media accountability is part of state security in the information era. It states the principles of responsible reporting without interfering with editorial autonomy, a balance that all stakeholders should uphold. Implementation must be carried out in concert by broadcasters, platforms, civil society, government and educational institutions; information integrity cannot be handled by a single player. Without media literacy resources, citizens cannot responsibly evaluate information, and without open and fast communication with media stakeholders, government agencies cannot combat misinformation.
The recommendations point to collaborative governance, i.e., institutional arrangements in which media self-regulation, technological protection, user empowerment, and policy frameworks collaborate rather than compete. Successful deployment of these measures will decide whether India can maintain open and free media while preserving information integrity sufficient to support national security, democratic governance and social stability during a period of high-speed information flow, algorithmic amplification, and information warfare.
References
https://mib.gov.in/sites/default/files/2025-11/advisory-18.11.2025.pdf

Introduction
Social media has become integral to our lives and livelihoods in today's digital world. Influencers are now powerful figures who shape trends, views, and consumer behaviour, and their significant internet presence has made them targets for bad actors aiming to exploit their fame. Unfortunately, account hacking has become increasingly common, with significant ramifications for influencers and their followers. The emergence of social media platforms in recent years has paved the way for influencer culture: influencers exert power over their followers' ideas, lifestyle choices, and purchase decisions, and influencers and brands frequently collaborate to leverage this reach, creating a mutually beneficial environment. As a result, the value of influencer accounts has risen dramatically, attracting hackers seeking to abuse them for financial gain or personal advantage.
Instances of recent attacks
Places of worship
Hackers have targeted renowned temples for their malicious activities. A recent attack hit the Khautji Shyam Temple, a famous religious institution with enormous cultural and spiritual value for its adherents. It serves as a place of worship, community events, and numerous religious activities. As technology has entered all sectors of life, the temple's online presence has grown, giving worshippers access to information, virtual darshans (holy viewings), and interactive forums. Unfortunately, this digital growth has also left the shrine vulnerable to cyber threats. Hackers compromised the temple's Facebook page twice in a month, demanded donations, and misappropriated cheques that devotees had given to the trust. In the second incident, they posted objectionable images on the page, hurting the sentiments of devotees. The temple committee has filed an FIR under various charges and is also seeking help from the cyber cell.
Social media Influencers
Influencers enjoy a vast online following worldwide, but their presence is largely limited to the digital space; hence every video and photo matters to them. In one incident, the YouTube channel of leading news anchor and reporter Barkha Dutt was hacked and all the posts on the channel were deleted. The hackers also replaced the channel's logo with Tesla's and streamed a live video on the channel featuring Elon Musk. A similar incident was reported by influencer Tanmay Bhatt, who also lost all the content he had posted on his channel. Hackers commonly use the following methods to con social media influencers:
- Social engineering
- Phishing
- Brute Force Attacks
Such attacks can harm influencers' reputations, cause financial loss, and erode the trust of their viewers and followers, further impacting brand collaborations.
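Of the methods listed above, brute-force attacks in particular are blunted by throttling repeated login failures. The sketch below is a minimal illustration of that idea; the account name, thresholds, and lockout window are made-up values, not any platform's actual policy.

```python
# Minimal sketch of a login throttle that blunts brute-force attacks by
# locking an account after repeated failures. Thresholds are illustrative.
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 900  # 15-minute lockout window

failures = {}  # username -> (failure_count, first_failure_timestamp)

def allow_attempt(username, now=None):
    """Return True if a login attempt may proceed for this user."""
    now = time.time() if now is None else now
    count, first = failures.get(username, (0, now))
    if count >= MAX_ATTEMPTS and now - first < LOCKOUT_SECONDS:
        return False  # still locked out
    if now - first >= LOCKOUT_SECONDS:
        failures.pop(username, None)  # window expired, reset the counter
    return True

def record_failure(username, now=None):
    """Register one failed login for this user."""
    now = time.time() if now is None else now
    count, first = failures.get(username, (0, now))
    failures[username] = (count + 1, first)

# Simulate six rapid failed guesses against one (hypothetical) account.
t = 1000.0
for _ in range(6):
    if allow_attempt("influencer_account", now=t):
        record_failure("influencer_account", now=t)
    t += 1.0

print(allow_attempt("influencer_account", now=t))  # False: account locked
```

Real platforms layer this with CAPTCHAs, IP reputation, and alerting, but even this simple counter makes exhaustive password guessing impractical.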

Safeguards
Social media influencers need to be very careful about their cyber security, as their prominence lies in the online world. Influencers across platforms should practise the following safeguards to better protect themselves and their content online.
Secure your accounts
Protecting your accounts with passphrases or strong passwords is the first step. The best strategy is to create a passphrase: a phrase only you know. We advise choosing a passphrase with at least four words and 15 characters.
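The four-word, 15-character guidance above can be sketched as a small generator. This is an illustration only: the tiny word list is a made-up sample, and real generators draw from lists of thousands of words.

```python
# Sketch of a passphrase generator following the "at least four words and
# 15 characters" guidance. WORDS is a tiny illustrative sample; real tools
# use large word lists (e.g. diceware-style lists) for sufficient entropy.
import secrets

WORDS = ["ocean", "crimson", "ladder", "thunder", "orchid",
         "basalt", "meadow", "signal", "copper", "lantern"]

def make_passphrase(num_words=4, min_length=15):
    """Pick random words until the joined phrase meets the length floor."""
    while True:
        words = [secrets.choice(WORDS) for _ in range(num_words)]
        phrase = "-".join(words)
        if len(phrase) >= min_length:
            return phrase

print(make_passphrase())  # e.g. "thunder-orchid-copper-signal"
```

Using `secrets` rather than `random` matters here: it draws from the operating system's cryptographically secure source, which is the right choice for credentials.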
The second step is to enable multi-factor authentication.
With it, a hacker must both guess your password and provide a second authentication factor (such as a face scan or fingerprint) that matches yours to access your account.
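The second factor mentioned above is biometric, but a common software second factor is a time-based one-time password (TOTP, RFC 6238). This minimal sketch shows why a stolen password alone is not enough: the attacker would also need the shared secret that generates the rotating six-digit code. The secret value is a placeholder for illustration.

```python
# Minimal TOTP (RFC 6238) sketch: an HMAC-SHA1 over a 30-second time
# counter, dynamically truncated to a six-digit rotating code.
import hmac, hashlib, struct, time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"influencer-shared-secret"  # provisioned once, never sent at login
print(totp(secret, for_time=59))  # six-digit code valid for one 30s window
```

Because the code changes every 30 seconds, a phished password becomes useless within moments unless the attacker also controls the device holding the secret.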
Be careful about who has access
Many social media influencers work with a team to help generate and post content while building their personal brands.
Some limit the risk by having team members write and produce material that the influencer then shares personally, so the influencer remains the only person with access to the account.
The more people who have access, the more potential weak spots there are, and the more ways a password or account credentials could fall into a cybercriminal's hands. Not every staff member will be as cautious about password security as you are.
Stay up-to-date on the threats
What’s the most significant way to combat threats to computer security? Information.
Cybercriminals constantly adapt their methods, so it is crucial to stay informed about emerging threats and how they can be used against you.
But it is not just about threats: social media platforms and other service providers likewise update their offerings to counter these challenges.
Educate yourself to protect yourself. By continuously learning, you can stay one step ahead of the hazards cybercriminals pose.
Preach cybersecurity
As an influencer, preach cyber security, whatever your agenda.
This will help your audience inculcate best practices for digital hygiene.
It will also boost reporting numbers and increase public awareness, helping drive bad actors out of our cyberspace.
Acknowledge the risks
Turning a blind eye always hurts safety; ignorance invites trouble.
Keep risks in mind while building your digital routine and netiquette.
Always inform your users of existing and potential risks.
Monitor threats
After acknowledging the risks, it is essential to monitor threats.
An active lookout for threats lets you understand attackers' modus operandi and the vulnerabilities they exploit.
Threat monitoring is also a basic netizen's responsibility, ensuring threats are reported as they emerge.
Interpret the data
Cyber nodal agencies release data and trends on cybercrime; study these trends to understand and protect against your own vulnerabilities.
Interpreting this data can lead to early flagging of threats and issues, protecting the cyber ecosystem at large.
Create risk profiles
All influencers should create risk profiles and backup profiles.
This also helps protect one's data, as it can be stored across different profiles.
Maintaining risk profiles and a private profile is essential to safeguarding an influencer's basic cyber interests.

Conclusion
As we go deeper into the digital age, new technologies keep emerging, and with them a new generation of cyber threats and challenges. Physical space and cyberspace are now intertwined and interdependent. Practising basic cyber security, hygiene, netiquette, and threat monitoring will go a long way in protecting influencers' online interests, and will encourage their followers to adopt best practices, safeguarding the cyber ecosystem at large.

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. To ascertain credibility, it has become important to differentiate genuine content from manipulated or curated content widely shared on social media platforms. AI-generated fake voice clones and videos are proliferating on the Internet and social media, created with sophisticated AI algorithms that manipulate or generate synthetic multimedia such as audio, video and images. As a result, it has become increasingly difficult to tell genuine, altered, and fake multimedia content apart. McAfee Corp., a well-known global leader in online protection, recently launched an AI-powered deepfake audio detection technology under Project "Mockingbird", intended to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or obtain their personal information without authorisation. McAfee Corp. announced Project Mockingbird at the Consumer Electronics Show 2024.
What is voice cloning?
Audio, too, can be deepfaked to create a clone of anyone's voice: a fake voice, produced with deepfake technology, that closely resembles the real one.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for all kinds of things, from smart tech to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning to commit cyber fraud. Artificial intelligence can manipulate an individual's lips so it looks like they are saying something different; it can enable identity fraud by impersonating someone during remote verification for a bank; and it makes traditional hacking more convenient. Cybercriminals' misuse of advanced technologies such as AI has increased the speed and volume of cyber attacks, a recurring theme in recent times.
Technical Analysis
To combat audio-cloning fraud, McAfee Labs has developed a robust AI model that precisely detects artificially generated audio used in videos or elsewhere.
- Context-Based Recognition: The model uses contextual assessment to examine audio components within the overall setting of a recording. Evaluating the surrounding information improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Examination: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from an individual's typical patterns. Examining speech patterns, tempo, and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise auditory components according to established traits of human communication. The technology differentiates real voices from AI-synthesised ones by comparing them against an extensive library of legitimate human speech features.
- Accuracy Outcomes: McAfee Labs' deepfake voice recognition solution, which boasts an impressive ninety per cent success rate, is based on a combined approach incorporating behavioural, context-specific, and classification models. By examining audio components in the larger video context and analysing speech characteristics such as intonation, rhythm, and pronunciation, the system can identify discrepancies that may signal AI-produced audio. Classification models contribute further by categorising audio according to known characteristics of human speech. This all-encompassing strategy is essential for precisely recognising and reducing the risks connected with AI-generated audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technique protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
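McAfee's actual models are proprietary, so the toy sketch below only illustrates the general idea behind the classification approach described above: computing a measurable speech feature and thresholding it. Here a single hand-picked feature, zero-crossing rate, separates two synthetic test signals; the threshold and the "suspect"/"clean" labels are invented for illustration.

```python
# Toy illustration of feature-based audio classification: compute the
# zero-crossing rate of a signal and flag it against a made-up threshold.
# Real detectors learn many such features from large labelled datasets.
import math

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose sign changes."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

def classify(samples, threshold=0.1):
    """Flag signals whose zero-crossing rate exceeds the toy threshold."""
    return "suspect" if zero_crossing_rate(samples) > threshold else "clean"

rate = 8000  # samples per second
low = [math.sin(2 * math.pi * 100 * t / rate) for t in range(rate)]    # 100 Hz
high = [math.sin(2 * math.pi * 1500 * t / rate) for t in range(rate)]  # 1500 Hz

print(classify(low), classify(high))  # → clean suspect
```

Production systems replace this single heuristic with learned models over spectral, prosodic and contextual features, but the underlying pattern of extracting features and classifying against known human speech is the same.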
Conclusion
It is important to foster ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about the appropriate role and boundaries of AI. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to safeguard against cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724