#FactCheck: A viral claim suggests that turning on Advanced Chat Privacy stops Meta AI from reading WhatsApp chats.
Executive Summary:
A viral social media video falsely claims that Meta AI reads all WhatsApp group and individual chats by default, and that enabling “Advanced Chat Privacy” can stop this. A reverse image search led us to a WhatsApp blog post from April 2025, which clarifies that all personal and group chats remain protected by end-to-end (E2E) encryption and are accessible only to the sender and recipient. Meta AI can interact only with messages explicitly sent to it or tagged with @Meta AI. The “Advanced Chat Privacy” feature is designed to prevent external sharing of chats, not to restrict Meta AI access. Therefore, the viral claim is misleading and factually incorrect, aimed at creating unnecessary fear among users.
Claim:
A viral social media video [archived link] alleges that Meta AI is actively accessing private conversations on WhatsApp, including both group and individual chats, due to the current default settings. The video further claims that users can safeguard their privacy by enabling the “Advanced Chat Privacy” feature, which purportedly prevents such access.

Fact Check:
A reverse image search on a keyframe from the viral video led us to a WhatsApp blog post from April 2025 that explains new privacy features designed to help users control their chats and data. It states that Meta AI can only see messages directly sent to it or tagged with @Meta AI. All personal and group chats are secured with end-to-end encryption, so only the sender and receiver can read them. The “Advanced Chat Privacy” setting helps stop chats from being shared outside WhatsApp, for example by blocking exports or auto-downloads, but it does not change anything about Meta AI, which is already unable to read chats. This shows the viral claim is false and intended to confuse people.
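For readers who want to replicate this kind of check, the sketch below shows one way to grab a keyframe from a locally saved copy of a viral video so it can be uploaded to a reverse image search engine. This is a minimal sketch, assuming Python with the opencv-python package installed; the video file name is a placeholder, not a reference to the actual clip.

```python
# Minimal sketch: extract a keyframe from a locally saved video so it can be
# uploaded to a reverse image search engine (e.g. Google Lens or TinEye).
# Assumes opencv-python is installed; "viral_video.mp4" is a placeholder path.
import cv2

cap = cv2.VideoCapture("viral_video.mp4")  # path to the downloaded clip
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Jump to the middle of the clip, which usually shows the main subject clearly.
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count // 2)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("keyframe.jpg", frame)  # upload this image to a reverse image search
    print("Saved keyframe.jpg for reverse image search.")
else:
    print("Could not read a frame from the video.")
```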


Conclusion:
The claim that Meta AI is reading WhatsApp group chats, and that enabling the "Advanced Chat Privacy" setting can prevent this, is false and misleading. WhatsApp has officially confirmed that Meta AI only accesses messages explicitly shared with it, and all chats remain protected by end-to-end encryption, ensuring privacy. The "Advanced Chat Privacy" setting does not relate to Meta AI access, which is already restricted by default.
- Claim: A viral social media video claims that WhatsApp group chats are being read by Meta AI due to current settings, and that enabling the "Advanced Chat Privacy" setting can prevent this.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
As e-sports flourish in India, mobile gaming platforms and apps have contributed massively to the boom. The wave of online mobile gaming has brought new recognition to e-sports. With the Sports Ministry being very proactive about e-sports and e-athletes, it is pertinent to ensure that we do not compromise our cybersecurity for the sake of these games. When we talk about online mobile gaming, the most common names that come to mind are PUBG and BGMI. In welcome news for Indian gamers, BGMI is set to be relaunched in India after approval from the Ministry of Electronics and Information Technology.
Why was BGMI banned?
The Government banned Battlegrounds Mobile India on the grounds that it was a Chinese application and that all of its data was hosted in China. This caused a cascade of compliance and user-safety issues, as the data was stored outside India. Since 2020, the Indian Government has been proactive in banning Chinese applications that might have an adverse effect on national security and Indian citizens. More than 200 applications have been banned, most of them because their data hubs were located in China. Cross-border data flow has been a key issue in geopolitics: whoever hosts the data virtually owns it as well, and in view of this threat, apps hosting their data in China were banned.
Why is BGMI coming back?
BGMI was banned for not hosting data in India, and since the ban, Krafton Inc., the game's owner, has been working in India to set up data centres and servers so that Indian players have a separate gaming server. These moves will lead to a safer gaming ecosystem and better adherence to the laws and policies of the land. The developers have not announced a relaunch date yet, but the game is expected to be available for download for iOS and Android users in the coming days. The game will be back on app stores, as the Ministry of Electronics and Information Technology has issued a letter stating that the game may be allowed and made available for download on the respective app stores.
Grounds for BGMI's Return
BGMI has to comply with all the laws, policies and guidelines in India and demonstrate this to the Ministry to get its approval extended. The game has been permitted for only 90 days (3 months). Hon’ble MoS MeitY Rajeev Chandrashekhar stated in a tweet, “This is a 3-month trial approval of #BGMI after it has complied with issues of server locations and data security etc. We will keep a close watch on other issues of User harm, Addiction etc., in the next 3 months before a final decision is taken”. This clearly shows the seriousness of the action against Chinese apps. The Ministry and the Government will not take a soft line now; it is all about compliance and safeguarding users’ data.
Way Forward
This move will play a significant role in the future, not only for gaming companies but also for other online industries, in ensuring compliance. It will act as a precedent on the issue of cross-border data flow and the advantages of data localisation, and it will go a long way in advocacy for the betterment of the Indian cyber ecosystem. MeitY alone cannot safeguard the space completely; it is a shared responsibility of the Government, industry and netizens.
Conclusion
The advent of online mobile gaming has taken the nation by storm, and thus being safe and secure in this ecosystem is paramount. The provisional permission for BGMI shows the stance of the Government and its no-tolerance policy for non-compliance with the law. The latest policies and bills, like the Digital India Act and the Digital Personal Data Protection Act, will go a long way in securing the interests and rights of Indian netizens and will create a blanket of safety and prevention against future issues and threats.

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology is a method for manipulating videos, images, and audio utilising powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. It overlays a digital composite on an already-existing video, picture, or audio clip; cybercriminals use Artificial Intelligence technology to do this. The term ‘deepfake’ was first coined in 2017 by an anonymous Reddit user who went by the same name.
Deepfakes work on a combination of AI and ML, which makes the technology hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake techniques. In recent times, we have seen a wave of AI-driven tools impact all industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, wherein a deepfake of Ukraine’s President, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-driven misinformation among Ukrainians.
- Instigation against the Union of India: Deepfakes pose a massive threat to the integrity of the Union of India, as they are one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of such attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created using existing videos. Bad actors often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern for Indian netizens, as it keeps them from understanding the technology and results in under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected in the future and to address this issue before it is too late. The following aspects can be kept in mind while differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face are telltale signs of a deepfake (see the frame-inspection sketch after this list).
- Listen to the audio: The audio in a deepfake also shows variations, as it is superimposed on an existing video, so check whether the sound is in congruence with the actions and gestures in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background; in most deepfakes you can spot irregularities there, because the background is usually created using virtual effects and therefore carries an element of artificiality.
- Context and Content: Most instances of deepfakes are focused on creating or spreading misinformation; hence, the context and content of a video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber safety and digital hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, make sure to fact-check any information or post before sharing it with others.
- AI Tools: When in doubt, check it out: never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos, combating technology with technology.
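As a practical aid to the manual checks above, the sketch below samples a handful of frames from a suspect clip and saves face crops so the facial regions and backgrounds can be reviewed frame by frame. It is only a review aid, not a deepfake detector; it is a minimal sketch assuming Python with opencv-python installed, and the file name is a placeholder.

```python
# Sketch: sample frames from a suspect video and save face crops for manual
# inspection of expressions and backgrounds. This is a review aid, not a detector.
# Assumes opencv-python is installed; "suspect_clip.mp4" is a placeholder path.
import cv2

# Haar cascade face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("suspect_clip.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Take roughly 10 evenly spaced frames across the clip.
for i, pos in enumerate(range(0, total, max(total // 10, 1))):
    cap.set(cv2.CAP_PROP_POS_FRAMES, pos)
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frame_{i:02d}.jpg", frame)  # full frame: inspect the background here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for j, (x, y, w, h) in enumerate(face_cascade.detectMultiScale(gray, 1.1, 5)):
        # Face crop: inspect eyes, blending edges and skin texture for irregularities.
        cv2.imwrite(f"face_{i:02d}_{j}.jpg", frame[y:y + h, x:x + w])

cap.release()
print("Saved sampled frames and face crops for manual review.")
```

The saved keyframes can also be fed into reverse image search or the detection tools named above for a second opinion.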
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a fine of ₹2 lakh or three years in prison. With the DPDP Act, 2023 becoming applicable, the creation of deepfakes directly affects an individual's right to digital privacy and also implicates the IT Intermediary Guidelines, as platforms are required to exercise caution against the dissemination and publication of misinformation through deepfakes. Beyond this, the only legal remedies available are the indirect provisions of the Indian Penal Code covering the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they can stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

Introduction
The Department of Telecommunications (DoT) has launched the 'Digital Intelligence Platform (DIP)' and the 'Chakshu' facility on the Sanchar Saathi portal to combat cybercrimes and financial frauds. Union Telecom, IT and Railways Minister Ashwini Vaishnaw announced the initiatives, stating that the government has been working to counter cyber frauds at the national, organizational, and individual levels. The Sanchar Saathi portal has successfully tackled such attacks, and the two new initiatives will further enhance the capacity to counter any kind of cybersecurity threat.
The Digital Intelligence Platform is a secure and integrated platform for real-time intelligence sharing, information exchange, and coordination among stakeholders, including telecom operators, law enforcement agencies, banks, financial institutions, social media platforms, and identity document issuing authorities. It also contains information regarding cases detected as misuse of telecom resources.
The 'Chakshu' facility allows citizens to report suspected fraudulent communication received over calls, SMS, or WhatsApp that is intended to defraud, such as messages about KYC expiry or updates to bank accounts, payment wallets, SIMs, gas connections or electricity connections, sextortion, impersonation of a government official or relative to solicit money, and threats of disconnection of all mobile numbers by the Department of Telecommunications.
The launch of these proactive initiatives represents another significant stride by the Ministry of Communications and the Department of Telecommunications in combating cybersecurity threats to citizens' digital assets.
In this age of technology, there is reason to be concerned about the threats posed by cybercrooks to individuals and organizations. The risks involved in using digital means for communication, e-commerce, and critical infrastructure have increased significantly, and it is important to have proper measures in place to prevent cybercrime and destructive behavior. The Department of Telecommunications has unveiled "Chakshu," a digital intelligence facility aimed at combating cybercrimes. This platform seeks to enhance the country's cyber defense capabilities by providing enforcement agencies with effective tools and actionable intelligence for countering cybercrimes, including financial frauds.
Digital Intelligence Platform (DIP)
The Digital Intelligence Platform (DIP), developed by the Department of Telecommunications, is a secure and integrated platform for real-time intelligence sharing, information exchange and coordination among stakeholders, i.e. Telecom Service Providers (TSPs), law enforcement agencies (LEAs), banks and financial institutions (FIs), social media platforms, identity document issuing authorities, etc. The portal also contains information regarding cases detected as misuse of telecom resources. The shared information could be useful to the stakeholders in their respective domains. It also works as a backend repository for citizen-initiated requests on the Sanchar Saathi portal for action by the stakeholders. The DIP is accessible to the stakeholders through secure connectivity, and the relevant information is shared based on their respective roles. However, the platform is not accessible to citizens.
What is Chakshu?
Chakshu, which means “eye” in Hindi, is a new feature on the Sanchar Saathi portal. This citizen-friendly platform allows you to report suspicious communication received via calls, SMS, or WhatsApp. Chakshu is an advanced tool to safeguard against modern-day cybercriminal activity: it uses the latest technologies to assemble and analyze digital information and provides law enforcement agencies with useful data on what should be done next.
Here are some examples of what you can report:
- Fraudulent messages claiming your KYC (Know Your Customer) details need to be updated.
- Fraudulent requests to update your bank account, payment wallet, or SIM card details.
- Phishing attempts impersonating government officials or relatives asking for money.
- Fraudulent threats of disconnection of your SIM connections.
How Chakshu Aims to Crack Down on Cybercrime and Financial Fraud
Chakshu is a new tool on the Sanchar Saathi platform that invites individuals to report suspected fraudulent communications received by phone, SMS, or WhatsApp. These fraudulent activities may include attempts to deceive individuals through schemes such as KYC expiry or update requests for bank accounts, payment wallets, SIM cards, gas connections, and electricity connections, sextortion, impersonation of government officials or relatives for financial gain, or false claims of mobile number disconnection by the Department of Telecommunications.
The tool is designed to equip investigators with actionable intelligence and insights, enabling LEAs to conduct targeted investigations into financial frauds and cybercrimes. It supports comprehensive data analysis and evidence collection by mapping the connections between individuals, organizations and illicit activities, thereby allowing law enforcement agencies to dismantle criminal operations.
Chakshu’s Impact
India has launched Chakshu, a digital intelligence tool that strengthens the country's cybersecurity policy. Chakshu employs modern technology and real-time data analysis to enhance India's cyber defenses. By taking a proactive approach to threat analysis and prevention, law enforcement can detect and neutralize possible threats before they become significant crises. Chakshu also improves the resilience of critical infrastructure and digital ecosystems, safeguarding them against cyber-attacks. Overall, Chakshu plays an important role in India's cybersecurity posture and the protection of national interests in the digital era.
Where can Chakshu be accessed?
Chakshu can be accessed through the government's Sanchar Saathi web portal: https://sancharsaathi.gov.in
Conclusion
The launch of the Digital Intelligence Platform and Chakshu facility is a step forward in safeguarding citizens from cybercrimes and financial fraud. These initiatives use advanced technology and stakeholder collaboration to empower law enforcement agencies. The Department of Telecommunications' proactive approach demonstrates the government's commitment to cybersecurity defenses and protecting digital assets, ensuring a safer digital environment for citizens and critical infrastructure.
References
- https://telecom.economictimes.indiatimes.com/news/policy/dot-launches-digital-intelligence-portal-chakshu-facility-to-curb-cybercrimes-financial-frauds/108220814
- https://bankingfrontiers.com/digital-intelligence-platform-launched-to-curb-cybercrime-financial-fraud/
- https://www.the420.in/dip-chakshu-government-launches-powerful-weapons-against-cybercrime/
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2011383