#FactCheck: Beware of Fake Emails Distributing Fraudulent e-PAN Cards
Executive Summary:
We have identified a post addressing a scam email that falsely claims to offer a download link for an e-PAN Card. This deceptive email is designed to mislead recipients into disclosing sensitive financial information by impersonating official communication from the Income Tax Department. Our report aims to raise awareness about this fraudulent scheme and emphasise the importance of safeguarding personal data against such cyber threats.

Claim:
Scammers are sending fake emails, asking people to download their e-PAN cards. These emails pretend to be from government authorities like the Income Tax Department and contain harmful links that can steal personal information or infect devices with malware.
Fact Check:
Through our research, we have found that scammers are sending fake emails, posing as the Income Tax Department, to trick users into downloading e-PAN cards from unofficial links. These emails contain malicious links that can lead to phishing attacks or malware infections. Genuine e-PAN services are only available through official platforms such as the Income Tax Department's website (www.incometaxindia.gov.in) and the NSDL/UTIITSL portals. Despite repeated warnings, many individuals still fall victim to such scams. To combat this, the Income Tax Department has a dedicated page for reporting phishing attempts: Report Phishing - Income Tax India. It is crucial for users to stay cautious, verify email authenticity, and avoid clicking on suspicious links to protect their personal information.
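The advice above to verify links before clicking can be illustrated with a minimal sketch that checks whether a link's hostname exactly matches a known official domain. This is only an illustration: the allowlist below contains just the Income Tax Department domain named in this report, and real verification should also confirm HTTPS and watch for look-alike domains.

```python
from urllib.parse import urlparse

# Allowlist of official domains. Only incometaxindia.gov.in is named in this
# report; add the NSDL/UTIITSL portal hostnames only after verifying them yourself.
OFFICIAL_DOMAINS = {
    "www.incometaxindia.gov.in",
    "incometaxindia.gov.in",
}

def is_official_link(url: str) -> bool:
    """Return True only if the URL's hostname exactly matches an official domain."""
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_DOMAINS

# A look-alike phishing domain fails the exact-match test:
print(is_official_link("https://www.incometaxindia.gov.in/Pages/default.aspx"))    # True
print(is_official_link("http://incometaxindia.gov.in.epan-download.example/pan"))  # False
```

Note that the check compares the full hostname, not a substring, which is why the second (hypothetical) phishing URL is rejected even though it embeds the official domain name.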

Conclusion:
The emails currently in circulation claiming to provide e-PAN card downloads are fraudulent and should not be trusted. These deceptive messages often impersonate government authorities and contain malicious links that can result in identity theft or financial fraud. Clicking on such links may compromise sensitive personal information, putting individuals at serious risk. To ensure security, users are strongly advised to verify any such communication directly through official government websites and avoid engaging with unverified sources. Additionally, any phishing attempts should be reported to the Income Tax Department and also to the National Cyber Crime Reporting Portal to help prevent the spread of such scams. Staying vigilant and exercising caution when handling unsolicited emails is crucial in safeguarding personal and financial data.
- Claim: Fake emails claim to offer e-PAN card downloads.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
Personalised advertisements deploy a mechanism that derives from the collection of the user’s data. Although it allows for a more tailored user experience, one cannot ignore the method through which this is achieved. Recently, as per a report by the Indian Express on 13th November 2024, Meta has come up with a less personalised ad option on Facebook and Instagram for its users in the European Union (EU). This was done due to the incompatibility of their previous ad offer with the EU’s Digital Markets Act (DMA).
Relevant Legislation
In October 2023, Meta introduced a “Pay or Consent” model for its users in the EU. It gave users two options: either pay a monthly subscription fee for an ad-free version of Facebook and Instagram, or consent to seeing personalised ads based on their data. This consent model was introduced in an attempt to comply with the EU’s DMA. However, EU regulators found it incompatible with the mandate, as they believed users should not only have the option to consent to ads but should also have access to a less personalised but equivalent alternative. It is this decision that pushed Meta to offer less personalised ads to users in the EU. The less personalised ad option claims to rely on limited data, showing ads based only on the context of what is being viewed during a Facebook or Instagram session and a minimal set of data points such as location, age, gender, and the user’s engagement with ads. However, ads shown under this option are also less skippable.
The EU’s Digital Markets Act came into force on November 1, 2022. Its purpose is to make digital markets fairer by identifying “Gatekeepers” (providers of core platform services such as messenger services, search engines, and app stores) and setting out a list of dos and don’ts for them. One obligation, applicable to the case mentioned above, is that a gatekeeper must obtain the user’s effective consent before targeting advertisements based on tracking the user’s activity outside the gatekeeper’s core platform services.
The Indian Context
Although no such issues have been raised in India yet, it is worth noting that in the Indian context, the DPDP (Digital Personal Data Protection) Act, 2023 governs personal data regulation. It lays down rules for Data Fiduciaries (those who, alone or in partnership with others, determine the purpose and means of processing personal data), Data Principals (the individuals to whom the personal data relates), and Consent Managers, as well as rules for processing the data of children.
CyberPeace Recommendations:
At the level of the user, one can take steps to ensure limited collection of personal data by following the mentioned steps:
- Review Privacy Settings- Reviewing Privacy settings for one’s online accounts and devices is a healthy practice to avoid giving unnecessary information to third-party applications.
- Private Browsing- Browsing through private mode or incognito is encouraged, as it prevents websites from tracking your activity and personal data.
- Using Ad-blockers- Certain websites offer users the option to decline ads on a first visit; using this, along with ad-blocker browser extensions, prevents spam advertisements from the respective websites.
- Using VPN- Using a Virtual Private Network hides the user's IP address and encrypts their data, preventing third-party actors from tracking the user's online activities.
- Other steps include clearing cookies and cache data and using the location-sharing feature with care.
Conclusion
Meta’s compliance with the EU’s DMA signals that social media platforms cannot simply circumvent the rules. Balancing the services provided while respecting user privacy is of the utmost importance. The EU has set a precedent for a system that respects this, one that can serve as an example for how other countries deal with similar issues and set standards accordingly.
References
- https://indianexpress.com/article/technology/tech-news-technology/meta-less-personalised-ads-eu-regulatory-demands-9667266/
- https://rainmaker.co.in/blog/view/the-price-of-personalization-how-targeted-advertising-breaches-data-privacy-and-challenges-the-gdprs-shield
- https://www.infosecurity-magazine.com/magazine-features/fines-data-protection-violations/
- https://www.forbes.com/councils/forbestechcouncil/2023/09/01/the-landscape-of-personalized-advertising-efficiency-versus-privacy/
- https://iapp.org/news/a/pay-or-consent-personalized-ads-the-rules-and-whats-next
- https://economictimes.indiatimes.com/news/how-to/how-to-safeguard-privacy-in-the-era-of-personalised-ads/articleshow/102748711.cms?from=mdr
- https://www.business-standard.com/technology/tech-news/facebook-instagram-users-in-europe-can-opt-for-less-personalised-ads-124111201558_1.html
- https://digital-markets-act.ec.europa.eu/about-dma_en

Introduction
The rise of start-up culture, increasing investment, and technological breakthroughs are being encouraged alongside innovation and the incorporation of generative Artificial Intelligence. Given the growing focus on human-centred AI, its potential to transform industries like education remains undeniable. In enhancing experiences and introducing new ways of learning, there is much to be explored. Recently, a Delhi-based non-profit called Rocket Learning, in collaboration with Google.org, launched Appu, a personalised AI educational tool providing a multilingual and conversational learning experience for children between the ages of 3 and 6.
AI Appu
Developed in six months with the help of dedicated Google.org fellows, the interactive Appu has resonated with those the founders call “super-users,” i.e. parents and caregivers. Instead of redirecting students to standard content and instructional videos, it operates on the idea of conversational learning, which is especially important for children in the targeted age bracket. Designed in the form of an elephant, Appu acts as a personalised tutor, helping both children and parents understand concepts through dialogue. The AI can generate different explanations when a child has a doubt, aiding understanding. If children answer in mixed languages rather than one complete sentence in a single language (e.g. Hindi and English), the AI still accepts the response. The AI lessons are two minutes long and incorporate real-world examples. The emphasis on interactive, fun learning of concepts through innovation enhances the learning experience. Currently available only in Hindi, Appu is being extended to 20 other languages, such as Punjabi and Marathi.
UNESCO, AI, and Education
It is important to note that such innovations also find encouragement in UNESCO’s mandate, as AI in education contributes to achieving the 2030 Agenda for Sustainable Development (specifically SDG 4, which focuses on quality education). Under the ambit of the Beijing Consensus of 2019, UNESCO encourages a human-centred approach to AI and has developed “Artificial Intelligence and Education: Guidance for Policymakers”, which aims to build understanding of AI’s potential and opportunities in education as well as the core competencies it requires. Another publication was launched during one of UNESCO’s flagship events (Digital Learning Week, 2024): AI competency frameworks for students and for teachers. These provide a roadmap for assessing the potential and risks of AI, each covering common aspects such as AI ethics and a human-centred mindset, along with distinct elements such as AI system design for students and AI pedagogy for teachers.
Potential Challenges
While AI holds immense promise in education, innovation in learning is contentious, as several risks must be carefully managed. Depending on the innovation, AI’s struggle with tasks beyond the classroom, such as administrative duties and tedious grading, which require highly detailed role descriptions, could prove to be a challenge. This can become exhausting for developers managing innovative AI systems, as they would have to account for a wide variety of responses, owing to the inherent need for AI to be trained to produce output. Security concerns are another major issue, as data breaches could compromise sensitive student information. Implementation costs also present challenges, as access to AI-driven tools depends on financial resources. Furthermore, AI-driven personalised learning, while beneficial, may inadvertently reduce student motivation and compromise soft skills such as teamwork and communication, which are crucial for real-world success. These risks highlight the need for a balanced approach to AI integration in education.
Conclusion
Innovations in education, especially those that focus on a human-centred AI approach, have immense potential not only to enhance learning experiences but also to reshape how knowledge is accessed, understood, and applied. There is also untapped potential for similar innovation across other services in this sector. However, maintaining a balance between fostering curiosity and ensuring ethical and secure AI remains imperative.
References
- https://www.unesco.org/en/articles/what-you-need-know-about-unescos-new-ai-competency-frameworks-students-and-teachers?hub=32618
- https://www.unesco.org/en/digital-education/artificial-intelligence
- https://www.deccanherald.com/technology/google-backed-rocket-learning-launches-appu-an-ai-powered-tutor-for-kids-3455078
- https://indianexpress.com/article/technology/artificial-intelligence/how-this-google-backed-ai-tool-is-reshaping-education-appu-9896391/
- https://www.thehindu.com/business/ai-appu-to-tutor-children-in-india/article69354145.ece
- https://www.velvetech.com/blog/ai-in-education-risks-and-concerns/

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals rapidly amplifies the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to garnering political influence, and from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation on different topics. A study[1] by a team of social media analysts at Indiana University collected 10 months of data, 2,397,388 tweets posted on Twitter (now X) by 448,103 users, and reviewed it for tweets flagged as containing low-credibility information, along with details of who was sending them. The study found that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an impact the researchers attribute to superspreaders. Approximately a third of the low-credibility tweets had been posted by people using just 10 accounts, and just 1,000 accounts were responsible for posting approximately 70% of such tweets.[2]
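The concentration the study describes can be illustrated with a short sketch. The data below is a toy dataset, not the study's data; the function simply computes the share of flagged tweets posted by the k most active accounts.

```python
from collections import Counter

# Toy list of author IDs for tweets flagged as low-credibility; in the cited
# study this would be ~2.4 million flagged tweets from ~448,000 users.
flagged_tweet_authors = (
    ["spreader_a"] * 50 + ["spreader_b"] * 30 + ["spreader_c"] * 10
    + [f"user_{i}" for i in range(10)]  # long tail: one flagged tweet each
)

def top_account_share(authors, k):
    """Fraction of all flagged tweets posted by the k most active accounts."""
    counts = Counter(authors)
    top_k_total = sum(n for _, n in counts.most_common(k))
    return top_k_total / len(authors)

# In this toy data, 3 of 13 accounts produced 90% of the flagged tweets:
print(top_account_share(flagged_tweet_authors, 3))  # 0.9
```

The same calculation over real flagged-tweet data is what yields findings like "10 accounts posted a third of the low-credibility tweets".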
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of “repeat spreaders” aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important when any grey areas or gaps in information can be manipulated so quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (Now X). Some prominent accounts or popular pages on platforms like Facebook and Twitter(now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the “disinformation dozen,” a group of 12 prominent anti-vaccine accounts[4], were found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly engage in spreading misinformation by forwarding content that is not shared by the original source but merely propagated by amplifiers, using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users are not necessarily the ones creating or deliberately popularising the misinformation, but they expose more people to it because of their broad reach. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investment advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their credibility and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use trending topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can also contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There have been reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
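To make the idea of automated posting concrete, here is a deliberately simple, hypothetical heuristic: a high-volume account whose posts arrive at near-identical intervals may be automated. Real bot-detection systems combine many more signals (account age, network structure, content features), so treat this only as a sketch of one such signal, with made-up thresholds.

```python
import statistics

def looks_automated(post_timestamps, min_posts=20, max_interval_stdev=2.0):
    """
    Crude single-signal heuristic: flag an account if it has posted at least
    `min_posts` times and the standard deviation of the gaps between posts
    (in seconds) is tiny, i.e. the posting rhythm is machine-like.
    """
    if len(post_timestamps) < min_posts:
        return False
    intervals = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    return statistics.stdev(intervals) < max_interval_stdev

# An account posting exactly every 60 seconds, 30 times in a row:
bot_like = [i * 60.0 for i in range(30)]
print(looks_automated(bot_like))  # True
```

A human account with irregular gaps, or simply fewer posts, would not be flagged by this rule, which is exactly why production systems layer many such heuristics together.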
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to engage in sharing only accurate information and do fact-checking to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites, and resources and verify the information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable users to critically analyse information, verify sources, report suspect content, and apply prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
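The repetitive-posting signal recommended above can be sketched in a few lines. Everything here is hypothetical (the domain ratings, the account names, and the threshold of five shares); the point is only to show the shape of the mechanism: count how often each account links to domains that a fact-checking feed has rated low-credibility.

```python
from collections import defaultdict

# Hypothetical set of domains a fact-checking feed has rated low-credibility.
LOW_CREDIBILITY_DOMAINS = {"fakenews.example", "hoax.example"}

def flag_repeat_spreaders(posts, threshold=5):
    """
    posts: iterable of (account_id, linked_domain) pairs.
    Returns the set of accounts that shared links to low-credibility domains
    at least `threshold` times: a simple version of the repetitive-posting
    signal described in the recommendations above.
    """
    counts = defaultdict(int)
    for account, domain in posts:
        if domain in LOW_CREDIBILITY_DOMAINS:
            counts[account] += 1
    return {account for account, n in counts.items() if n >= threshold}

posts = [("acct1", "fakenews.example")] * 6 + [("acct2", "news.example")] * 10
print(flag_repeat_spreaders(posts))  # {'acct1'}
```

Note that volume alone drives the flag here: acct2 posts more often, but only links to an unrated domain, so it is never flagged. A production system would pair this with human review before any enforcement action.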
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html