#FactCheck: A viral claim suggests that turning on Advanced Chat Privacy can stop Meta AI from reading WhatsApp chats.
Executive Summary:
A viral social media video falsely claims that Meta AI reads all WhatsApp group and individual chats by default, and that enabling “Advanced Chat Privacy” can stop this. A reverse image search led us to a WhatsApp blog post from April 2025, which states that all personal and group chats remain protected with end-to-end (E2E) encryption, accessible only to the sender and recipient. Meta AI can interact only with messages explicitly sent to it or tagged with @MetaAI. The “Advanced Chat Privacy” feature is designed to prevent external sharing of chats, not to restrict Meta AI access. The viral claim is therefore misleading and factually incorrect, aimed at creating unnecessary fear among users.
Claim:
A viral social media video [archived link] alleges that Meta AI is actively accessing private conversations on WhatsApp, including both group and individual chats, due to the current default settings. The video further claims that users can safeguard their privacy by enabling the “Advanced Chat Privacy” feature, which purportedly prevents such access.

Fact Check:
A reverse image search on a keyframe of the viral video led us to a WhatsApp blog post from April 2025 that explains new privacy features designed to help users control their chats and data. It states that Meta AI can only see messages directly sent to it or tagged with @MetaAI. All personal and group chats are secured with end-to-end encryption, so only the sender and receiver can read them. The "Advanced Chat Privacy" setting stops chats from being shared outside WhatsApp, for example by blocking exports and media auto-downloads, but it has no bearing on Meta AI, which cannot read chats in the first place. This shows that the viral claim is false and intended to confuse people.


Conclusion:
The claim that Meta AI is reading WhatsApp group chats, and that enabling the "Advanced Chat Privacy" setting can prevent this, is false and misleading. WhatsApp has officially confirmed that Meta AI only accesses messages explicitly shared with it, and all chats remain protected by end-to-end encryption, ensuring privacy. The "Advanced Chat Privacy" setting does not relate to Meta AI access, which is already restricted by default.
- Claim: A viral social media video claims that WhatsApp group chats are being read by Meta AI due to current settings, and that enabling the "Advanced Chat Privacy" setting can prevent this.
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related mis/disinformation. Programs that teach people to use social media responsibly and to check facts are essential, but they do not always engage people deeply. Reading articles, attending lectures, and using fact-checking tools are the standard passive learning methods of traditional media literacy programs.
Adding game-like features to non-game settings, known as "gamification," could be a new and engaging way to address this challenge. Gamification engages people by making them active players instead of passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and diverse cultural contexts. AI-powered voice conversation and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, where players face realistic decisions about whether to ignore, fact-check, or forward viral messages, could be particularly effective in India.
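The triage mechanic described above can be sketched in a few lines. This is a hypothetical illustration, not an existing game: the messages, labels, and point values are all invented to show how such a quiz could score player choices.

```python
# Hypothetical sketch of a WhatsApp-style triage quiz: the player decides
# whether to "forward", "verify", or "ignore" each incoming viral message.
# Messages, labels, and point values below are invented for illustration.

MESSAGES = [
    ("Forward to 10 groups to win free mobile data!", "fake"),
    ("Election commission publishes official polling dates.", "real"),
    ("Miracle herb cures all fevers overnight, doctors shocked!", "fake"),
]

def score_choice(label: str, choice: str) -> int:
    """Reward fact-checking, penalise forwarding fake messages."""
    if choice == "verify":
        return 2                      # checking facts is always rewarded
    if choice == "forward":
        return 1 if label == "real" else -2
    return 0                          # ignoring is neutral

def play(choices: list) -> int:
    """Score one session: one choice per message, in order."""
    total = 0
    for (text, label), choice in zip(MESSAGES, choices):
        total += score_choice(label, choice)
    return total

# Example session: verify, forward the real story, verify the health rumour.
print(play(["verify", "forward", "verify"]))  # -> 5
```

The key design point is that the scoring makes verification the dominant strategy, so repeated play nudges habits in the intended direction.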
- Detecting False Information
In a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories, using real-life tools such as reverse image searches and reliable fact-checking websites. Research shows that interactive tasks for identifying fake news increase people's awareness of it over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives. Partnerships with telecom providers could make this easier to implement. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
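A badge system like the one suggested above could be as simple as a threshold table. The tier names and point cut-offs here are invented for illustration, not taken from any real programme.

```python
# Illustrative badge tiers for a misinformation challenge; names and
# thresholds are hypothetical. Listed highest-first so the first match wins.
BADGES = [
    (20, "Fact-Check Champion"),
    (10, "Rumour Buster"),
    (5, "Sceptic"),
]

def award_badge(points: int):
    """Return the highest badge the player's points qualify for, else None."""
    for threshold, badge in BADGES:
        if points >= threshold:
            return badge
    return None

print(award_badge(12))  # -> Rumour Buster
```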
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could guide participants through scenarios tailored to their learning level, ensuring they are consistently challenged. Recent developments in natural language processing (NLP) enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
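The core of such adaptivity, stripped of any NLP, is choosing the next question's difficulty from the player's recent accuracy. The sketch below is a minimal illustration under assumed cut-offs (70% and 40%); a real system would tune these and use richer signals.

```python
# Minimal sketch of adaptive difficulty: map a rolling window of
# correct/incorrect answers to the next question level. The 0.7/0.4
# cut-offs are illustrative assumptions, not a published algorithm.

def next_level(recent_results: list) -> str:
    """recent_results is a list of booleans, True = answered correctly."""
    if not recent_results:
        return "easy"                 # no history yet: start gently
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy >= 0.7:
        return "hard"                 # doing well: show subtler misinformation
    if accuracy >= 0.4:
        return "medium"
    return "easy"

print(next_level([True, True, True, False]))  # -> hard
```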
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation events, and educational misinformation tournaments are all examples of games that help fight misinformation. India can help millions, especially young people, think critically and combat the spread of false information by making media literacy fun and interesting. Using Artificial Intelligence (AI) in gamified treatments for misinformation could be a fascinating area of study in the future. AI-powered bots could mimic real-time cases of misinformation and give quick feedback, which would help students learn more.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it comes with problems that must be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts poses significant challenges.
- Assessing Impact: There is a necessity for rigorous research approaches to evaluate the efficacy of gamified treatments in altering misinformation-related behaviours, keeping cultural and socio-economic contexts in the picture.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Introduction
Rajeev Chandrasekhar, Minister of State at the Ministry of Electronics and Information Technology, has emphasised the need for an open internet. He stated that no platform can deny content creators access to distribute and monetise content and that large technology companies have begun to play a significant role in the digital evolution. Chandrasekhar emphasised that the government does not want the internet or monetisation to be in the purview of just one or two companies and does not want 120 crore Indians on the internet in 2025 to be catered to by big islands on the internet.
The Voice for Open Internet
India's Minister of State for IT, Rajeev Chandrasekhar, has stated that no technology company or social media platform can deny content creators access to distribute and monetise their content. Speaking at the Digital News Publishers Association Conference in Delhi, Chandrasekhar emphasized that the government does not want the internet or monetization of the internet to be in the hands of just one or two companies. He argued that the government does not like monopoly or duopoly and does not want 120 crore Indians on the Internet in 2025 to be catered to by big islands on the internet.
Chandrasekhar highlighted that large technology companies have begun to exert influence when it comes to the dissemination of content, which has become an area of concern for publishers and content creators. He stated that if any platform finds it necessary to block any content, they need to give reasons or grounds to the creators, stating that the content is violating norms.
As India tries to establish itself as an innovator in the technology sector, the government announced a corpus of Rs 1 lakh crore in the interim Budget of 2024-25. As big companies continue to tighten their hold on the sector, content moderation has become crucial. Under the IT Rules, 11 categories of content are unlawful under the IT Act and criminal law. Platforms must ensure no user posts content that falls under these categories, take down any such content, and escalate violations, whether through de-platforming or prosecution. Chandrasekhar believes that the government has to protect the fundamental rights of people, and he emphasises legislative guardrails to ensure platforms are accountable for the correctness of content.
Monetizing Content on the Platform
'No platform can deny a content creator access to the platform to distribute and monetise it,' Chandrasekhar declared, boldly laying down a gauntlet that defies the prevailing norms. This tenet signals a nascent dawn in which creators may reap the rewards of their creative endeavours unfettered by platform restrictions.
An increasingly contentious issue that shadows this debate is the moderation of content within the digital realm. In this vast uncharted expanse, the powers that be within these monolithic platforms assume the mantle of vigilance, policing the digital avenues for transgressions against a prescribed code of conduct. Under the stipulations of India's IT Rules, for example, platforms are duty-bound to interdict user content that strays into any of 11 delineated unlawful categories. Violations span the gamut from the infringement of intellectual property rights to the propagation of misinformation, each category necessitating swift and decisive intervention. Chandrasekhar raised the alarm against misinformation, a phenomenon wherein media reports chillingly suggest that up to half of the information circulating on the internet might be a mere fabrication, a misleading simulacrum of authenticity.
The government's stance, as expounded by Chandrasekhar, pivots on an axis of safeguarding citizens' fundamental rights, compelling digital platforms to shoulder the responsibility of arbiters of truth. 'We are a nation of over 90 crores today, a nation progressing with vigour, yet we find ourselves beset by those who wish us ill,' he remarked.
Upcoming Digital India Act
Looming on the horizon, India's proposed Digital India Act (DIA), still in its embryonic pre-consultation stage, seeks to sculpt these asymmetries into a more balanced form. Chandrasekhar hinted at the potential inclusion within the DIA of regulatory measures that would shape the interactions between platforms and the mosaic of content creators who inhabit them. Although specifics await the crucible of public discourse and the formalities of consultation, indications of a maturing framework are palpable.
Conclusion
It is essential that the fable of digital transformation reverberates with the voices of individual creators, the very lifeblood propelling the vibrant heartbeat of the internet's culture. These are the voices that must echo at the centre stage of policy deliberations and legislative assembly halls; these are the visions that must guide us, and these are the rights that we must uphold. As we stand upon the precipice of a nascent digital age, the decisions we forge at this moment will cascade into the morrow and define the internet of our future. This internet must eternally stand as a bastion of freedom, of ceaseless innovation and as a realm of boundless opportunity for every soul that ventures into its infinite expanse with responsible use.
References
- https://www.financialexpress.com/business/brandwagon-no-platform-can-deny-a-content-creator-access-to-distribute-and-monetise-content-says-mos-it-rajeev-chandrasekhar-3386388/
- https://indianexpress.com/article/india/meta-content-monetisation-social-media-it-rules-rajeev-chandrasekhar-9147334/
- https://www.medianama.com/2024/02/223-rajeev-chandrasekhar-content-creators-publishers/

Overview:
In today’s digital landscape, safeguarding personal data and communications is more crucial than ever. WhatsApp, as one of the world’s leading messaging platforms, consistently enhances its security features to protect user interactions, offering a seamless and private messaging experience.
App Lock: Secure Access with Biometric Authentication
To fortify security at the device level, WhatsApp offers an app lock feature, enabling users to protect their app with biometric authentication such as fingerprint or Face ID. This feature ensures that only authorized users can access the app, adding an additional layer of protection to private conversations.
How to Enable App Lock:
- Open WhatsApp and navigate to Settings.
- Select Privacy.
- Scroll down and tap App Lock.
- Activate Fingerprint Lock or Face ID and follow the on-screen instructions.

Chat Lock: Restrict Access to Private Conversations
WhatsApp allows users to lock specific chats, moving them to a secured folder that requires biometric authentication or a passcode for access. This feature is ideal for safeguarding sensitive conversations from unauthorized viewing.
How to Lock a Chat:
- Open WhatsApp and select the chat to be locked.
- Tap on the three dots (Android) or More Options (iPhone).
- Select Lock Chat.
- Enable the lock using Fingerprint or Face ID.

Privacy Checkup: Strengthening Security Preferences
The privacy checkup tool assists users in reviewing and customizing essential security settings. It provides guidance on adjusting visibility preferences, call security, and blocked contacts, ensuring a personalized and secure communication experience.
How to Run Privacy Checkup:
- Open WhatsApp and navigate to Settings.
- Tap Privacy.
- Select Privacy Checkup and follow the prompts to adjust settings.

Automatic Blocking of Unknown Accounts and Messages
To combat spam and potential security threats, WhatsApp automatically restricts unknown accounts that send excessive messages. Users can also manually block or report suspicious contacts to further enhance security.
How to Manage Blocking of Unknown Accounts:
- Open WhatsApp and go to Settings.
- Select Privacy.
- Tap Advanced.
- Enable Block unknown account messages.

IP Address Protection in Calls
To prevent tracking and enhance privacy, WhatsApp provides an option to hide IP addresses during calls. When enabled, calls are routed through WhatsApp’s servers, preventing location exposure via direct connections.
How to Enable IP Address Protection in Calls:
- Open WhatsApp and go to Settings.
- Select Privacy, then tap Advanced.
- Enable Protect IP Address in Calls.

Disappearing Messages: Auto-Deleting Conversations
Disappearing messages help maintain confidentiality by automatically deleting sent messages after a predefined period—24 hours, 7 days, or 90 days. This feature is particularly beneficial for reducing digital footprints.
How to Enable Disappearing Messages:
- Open the chat and tap the Chat Name.
- Select Disappearing Messages.
- Choose the preferred duration before messages disappear.

View Once: One-Time Access to Media Files
The ‘View Once’ feature ensures that shared photos and videos can only be viewed a single time before being automatically deleted, reducing the risk of unauthorized storage or redistribution.
How to Send View Once Media:
- Open a chat and tap the attachment icon.
- Choose Camera or Gallery to select media.
- Tap the ‘1’ icon before sending the media file.

Group Privacy Controls: Manage Who Can Add You
WhatsApp provides users with the ability to control group invitations, preventing unwanted additions by unknown individuals. Users can restrict group invitations to ‘Everyone,’ ‘My Contacts,’ or ‘My Contacts Except…’ for enhanced privacy.
How to Adjust Group Privacy Settings:
- Open WhatsApp and go to Settings.
- Select Privacy and tap Groups.
- Choose from the available options: Everyone, My Contacts, or My Contacts Except.

Conclusion
WhatsApp continuously enhances its security features to protect user privacy and ensure safe communication. With tools like App Lock, Chat Lock, Privacy Checkup, IP Address Protection, and Disappearing Messages, users can safeguard their data and interactions. Features like View Once and Group Privacy Controls further enhance confidentiality. By enabling these settings, users can maintain a secure and private messaging experience, effectively reducing risks associated with unauthorized access, tracking, and digital footprints. Stay updated and leverage these features for enhanced security.