Health-Related Social Media Content: A Misinformation Vector in India
Sharisha Sahay
Research Analyst - Policy & Advocacy, CyberPeace
PUBLISHED ON
Jan 20, 2025
Introduction
Health-related misinformation is a sensitive issue that can have far-reaching consequences. These include its effect on personal medical decisions, erosion of trust in conventional medicine, delays in seeking treatment, and even loss of life. The fast pace and sheer volume of information on social media can aggravate the situation further. Recently, a report titled Health Misinformation Vectors in India was presented at the Health of India Summit, 2024. It provides key insights into health-related misinformation circulating online.
The Health Misinformation Vectors in India Report
The analysis was conducted by doctors at First Check, a global health fact-checking initiative, alongside DataLEADS, a Delhi-based digital media and technology company. The report covers health-related social media content posted online from October 2023 to November 2024. It finds that misinformation regarding reproductive health, cancer, vaccines, and lifestyle diseases such as diabetes and obesity is the most prominent type spread through social media. Misinformation regarding reproductive health includes illegal abortion methods that often go unchecked and even tips on conceiving a male child, among other things.
To combat this misinformation, the report encourages stricter regulation of health-related content on digital media, the integration of health literacy and misinformation management into public health curricula, and recommends that tech platforms work on algorithms that prioritise credible information and fact-checks. Doctors state that people affected by life-threatening diseases are particularly vulnerable to such misinformation, as they are desperate to find treatment options that give themselves and their family members a chance at life. In a diverse society, the lack of clear and credible information, limited access to or awareness of tools that cross-check content, and low digital literacy push people towards alternative sources of information, which also fosters a sense of disengagement among the public overall. The diseases mentioned in the report, which are prone to misinformation, are life-altering and require attention from healthcare professionals.
CyberPeace Outlook
Globally, there are cases of medically unqualified social media influencers who spread false or misleading information on various health matters. The topics covered are mostly associated with stigma and are still undergoing research, and this gap allows misinformation to flourish. An example is the misinformation regarding PCOS (Polycystic Ovary Syndrome) currently circulating online.
In the midst of all of this, YouTube has released a new feature aimed at combating health misinformation, trying to bridge the gap between healthcare professionals and Indians who look for trustworthy health-related information online. The initiative allows doctors, nurses, and other healthcare professionals to apply for verification as licensed health information sources, so that their informative videos are clearly labelled as coming from a healthcare professional. Earlier, such features (including the health source information panel and health content shelves) were available only to health organisations, but this step broadens the scope of verification to individual healthcare professionals.
As digital literacy continues to grow, methods of seeking credible information, especially regarding sensitive topics such as health, require a combined effort on the part of all the stakeholders involved. We need a robust strategy for battling health-related misinformation online, including more awareness programmes and proactive participation from the consumers as well as medical professionals regarding such content.
A deepfake is essentially a video in which a person's face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology is a method for manipulating videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term 'deepfake' was first coined in 2017 by an anonymous Reddit user who went by the same name.
Deepfakes work through a combination of AI and ML, which makes the technology hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake technology. In recent times, we have seen a wave of AI-driven tools impacting all industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues originate from deepfakes:
Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine's President Volodymyr Zelenskyy surfaced on the internet and caused mass confusion and propaganda-based misinformation among Ukrainians.
Instigation against the Union of India: Deepfakes pose a massive threat to the integrity of the Union of India, as they are one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of such attacks against the nation.
Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms where children are most active.
Threat to Digital Privacy: Deepfakes are created using existing videos. Bad actors often take photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
Lack of Grievance Redressal Mechanisms: In the contemporary world, the majority of nations lack a concrete policy addressing deepfakes. It is therefore of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as it leaves Indian netizens ill-equipped to understand the technology, which results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress further into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can be kept in mind when differentiating between a real video and a deepfake:
Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video, so check whether the sound is in congruence with the actions or gestures in the video.
Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most cases, it is created using virtual effects, so deepfakes tend to show irregularities and an element of artificiality in the background.
Context and Content: Most instances of deepfakes have focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
Fact-Checking: As a basic cyber safety and digital hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, make sure to fact-check any information or post before sharing it with others.
AI Tools: When in doubt, check it out. Never refrain from using deepfake detection tools such as Sentinel, Intel's real-time deepfake detector FakeCatcher, WeVerify, and Microsoft's Video Authenticator to analyse videos and combat technology with technology.
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a ₹2 lakh fine or three years in prison. With the DPDP Act, 2023 coming into effect, the creation of deepfakes directly affects an individual's right to digital privacy and also violates the Intermediary Guidelines under the IT Rules, as platforms are required to exercise caution regarding the dissemination and publication of misinformation through deepfakes. The only other remedies available are the indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions; deception in the delivery of property; cheating and dishonestly inducing the delivery of property; and forgery with the intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes to stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.
Lost your phone, or has it been stolen? Fear not: say hello to Sanchar Saathi, the newly launched portal by the government that lets you track and block your lost or stolen smartphone. The smartphone has become an essential part of our daily life, and a lot of our personal data is stored on it, so losing a phone can be a frustrating experience. The portal uses a central equipment identity register to help users track their lost or stolen smartphones and block them across all telecom networks. Users should keep a copy of the FIR for their lost/stolen smartphone handy, as it is required for tracking the phone on the website and for using certain other features of the portal.
Preventing Data Leakage and Mobile Phone Theft
When your phone is lost or stolen, you worry because your smartphone holds sensitive personal information such as bank account details, UPI IDs, and social media accounts such as WhatsApp, raising serious concerns about data leakage and misuse. The Sanchar Saathi portal addresses this problem by serving as a platform for blocking a lost or stolen device and the data saved on it. This feature protects users against financial fraud, identity theft, and data leakage by blocking access to the device and ensuring that unauthorised parties cannot access or abuse important information.
How the Sanchar Saathi Portal Works
To file a complaint regarding a lost or stolen smartphone, users are required to provide the device's IMEI (International Mobile Equipment Identity) number. On the portal's official website, https://sancharsaathi.gov.in/, users can access the "Citizen Centric Services" option on the homepage and then click on "Block Your Lost/Stolen Mobile" to fill out the form. Users need to fill in details such as the IMEI number, contact number, model number of the smartphone, the mobile purchase invoice, and information such as the date, time, district, and state where the device was lost or stolen. Users must keep a copy of the FIR handy and fill in their personal information, such as their name, email address, and residence. After completing the form and selecting the 'Complete' tab, the form will be submitted, and access to the lost/stolen smartphone will be blocked.
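Since the IMEI number is the key identifier in this process, it is worth sanity-checking it before submission: a standard IMEI is 15 digits long, and its final digit is a Luhn check digit computed over the first 14. Below is a minimal, illustrative Python sketch of such a check (the portal performs its own validation; this is only a convenience for catching typos):

```python
def luhn_check_digit(digits: str) -> int:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    # Walking right to left over the payload, every second digit
    # (starting with the rightmost one) is doubled; doubled values
    # above 9 have 9 subtracted, per the Luhn algorithm.
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10


def is_valid_imei(imei: str) -> bool:
    """True if `imei` is 15 digits and its last digit is the Luhn
    check digit over the first 14."""
    imei = imei.replace(" ", "").replace("-", "")
    if len(imei) != 15 or not imei.isdigit():
        return False
    return luhn_check_digit(imei[:14]) == int(imei[14])


# Example with a commonly cited test IMEI:
print(is_valid_imei("490154203237518"))  # True
```

A mistyped digit or a swapped pair of adjacent digits will almost always fail this check, which is exactly what the Luhn scheme was designed to catch.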
Enhancing Security with SIM Card Verification
Using this portal, users can also see the SIM card numbers registered in their name and block any unauthorised use. In this way, the portal allows owners to take immediate action if their SIM card is being misused by someone else. The ability to check the status of active SIM cards registered under an individual's name is an extra security feature provided by the portal. This proactive strategy helps users safeguard their personal information against possible abuse and identity theft.
Advantages of the Sanchar Saathi Portal
The Sanchar Saathi platform offers various benefits for reducing mobile phone theft and protecting personal data. The portal offers a simplified and user-friendly platform for making complaints. The online complaint tracking function keeps consumers informed of the status of their complaints, increasing transparency and accountability.
The portal allows users to block access to personal data on lost/stolen smartphones, which reduces the potential risk of data leakage.
The portal SIM card verification feature acts as an extra layer of security, enabling users to monitor any unauthorised use of their personal information. This proactive approach empowers users to take immediate action if they detect any suspicious activity, preventing further damage to their personal data.
Conclusion
Our smartphones store large amounts of sensitive information and data, so it is crucial to protect them from unauthorised access, especially when a smartphone is lost or stolen. The Sanchar Saathi portal is a commendable step by the government: by offering a comprehensive solution to combat mobile phone theft and protect personal data, the portal contributes to a safer digital environment for smartphone users.
The portal provides the option of blocking access to your lost/stolen device as well as verifying the SIM cards registered in your name. These features empower users to take control of their data security, and in this way, the portal helps prevent mobile phone theft and data leakage.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the aim is to help the mind develop, in the present, resistance to influence it may encounter in the future. Just as medical vaccines help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach, because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from the Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means that it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread extensively. This reactionary method may be less successful than proactive strategies such as Prebunking from the perspective of total harm done. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Repeated exposure to fact-checks may be needed to prevent erroneous beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking requires time and resources, and it is not possible to disprove every piece of misinformation that circulates at any particular moment. This constraint may cause certain misinformation to go unchecked, perhaps leading to unexpected effects. Misinformation on social media can spread quickly and go viral faster than Debunking pieces or articles, leading to a situation in which misinformation spreads like a virus while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across all platforms. This would empower users to recognise manipulative messaging through Prebunking and to learn the true accuracy of circulating claims through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can be effective in immunising the receiver against subsequent exposure and can empower people to build the competencies needed to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms may ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver proper corrections. This can help both Prebunking and Debunking methods reach a larger or better-targeted audience.
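To make the algorithmic recommendation above concrete, here is a deliberately simplified Python sketch of a feed-ranking rule that boosts labelled Prebunking and Debunking content and demotes posts flagged by fact-checkers. All field names and weight values are hypothetical illustrations, not a description of any real platform's ranking system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement: float            # baseline ranking signal (e.g. likes/shares)
    is_prebunking: bool = False  # hypothetical label: educational/inoculation content
    is_debunk: bool = False      # hypothetical label: fact-check/correction
    flagged_false: bool = False  # flagged as misinformation by fact-checkers

# Illustrative weights chosen for the sketch only.
PREBUNK_BOOST = 1.5
DEBUNK_BOOST = 2.0
FLAGGED_PENALTY = 0.1

def score(post: Post) -> float:
    """Boost credible material, demote flagged content."""
    s = post.engagement
    if post.flagged_false:
        s *= FLAGGED_PENALTY
    if post.is_prebunking:
        s *= PREBUNK_BOOST
    if post.is_debunk:
        s *= DEBUNK_BOOST
    return s

def rank_feed(posts: list[Post]) -> list[Post]:
    """Return posts ordered by adjusted score, highest first."""
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("viral-claim", 100.0, flagged_false=True),
    Post("fact-check", 20.0, is_debunk=True),
    Post("cat-video", 30.0),
])
print([p.id for p in feed])  # ['fact-check', 'cat-video', 'viral-claim']
```

Even with far lower raw engagement, the fact-check outranks the flagged viral claim, which is the behaviour the recommendation calls for; a production system would of course involve many more signals and careful tuning.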
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can ultimately help fight the rising tide of online misinformation and establish a resilient online information landscape.