Domestic UPI Frauds: Finance Ministry Presents Data in Lok Sabha
Introduction
According to the Finance Ministry's data, the incidence of domestic Unified Payments Interface (UPI) fraud rose by 85% in FY 2023-24 compared to FY 2022-23. Further, as of September of FY 2024-25, 6.32 lakh fraud cases had already been reported, amounting to Rs 485 crore. The data was shared on 25th November 2024 by the Finance Ministry in response to a question in the Lok Sabha’s winter session about fraud in UPI transactions during the past three fiscal years.
UPI Frauds and Government's Countermeasures
In response to the query on the measures taken by the government to keep UPI transactions safe and secure and to prevent fraud, the ministry highlighted the following:
- The Reserve Bank of India (RBI) has operated the Central Payment Fraud Information Registry (CPFIR), a web-based tool for reporting payment-related frauds, since March 2020, and requires all Regulated Entities (REs) to report payment-related frauds to the CPFIR.
- The Government, RBI, and National Payments Corporation of India (NPCI) have implemented various measures to prevent payment-related frauds, including UPI transaction frauds. These include device binding, two-factor authentication through PIN, daily transaction limits, and limits on use cases.
- Further, NPCI offers a fraud monitoring solution for banks, enabling them to generate alerts and decline suspicious transactions using AI/ML models (an illustrative sketch follows this list). RBI and banks are also promoting awareness through SMS, radio, and publicity on 'cyber-crime prevention'.
- The Ministry of Home Affairs has launched a National Cybercrime Reporting Portal (NCRP) (www.cybercrime.gov.in) and a National Cybercrime Helpline Number 1930 to help citizens report cyber incidents, including financial fraud. Customers can also report fraud through their bank's official website or at bank branches.
- The Department of Telecommunications has introduced the Digital Intelligence Platform (DIP) and 'Chakshu' facility on the Sanchar Saathi portal, enabling citizens to report suspected fraud messages via call, SMS, or WhatsApp.
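To illustrate the kind of fraud monitoring described in the list above, here is a minimal, hypothetical sketch of a score-based transaction check in Python. The feature names, weights, and thresholds are purely illustrative assumptions and do not represent NPCI's or any bank's actual AI/ML models.

```python
# Hypothetical, illustrative sketch of a score-based fraud check for a UPI
# transaction. Feature names, weights, and thresholds are assumptions made for
# demonstration only; real deployments use far richer AI/ML models.

def risk_score(txn: dict) -> float:
    """Combine a few simple risk signals into a score between 0 and 1."""
    score = 0.0
    if txn.get("new_device"):                    # device not previously bound to this account
        score += 0.35
    if txn.get("new_beneficiary"):               # first-ever payment to this payee
        score += 0.25
    if txn["amount"] > 10 * txn["avg_amount"]:   # unusually large vs. the user's history
        score += 0.30
    if txn.get("odd_hour"):                      # e.g. initiated between 1 am and 5 am
        score += 0.10
    return min(score, 1.0)

def decide(txn: dict) -> str:
    """Map the risk score to an action: allow, alert the customer, or decline."""
    score = risk_score(txn)
    if score >= 0.7:
        return "decline"
    if score >= 0.4:
        return "alert"   # e.g. ask the customer to confirm before proceeding
    return "allow"

if __name__ == "__main__":
    txn = {"amount": 45000, "avg_amount": 1200,
           "new_device": True, "new_beneficiary": True, "odd_hour": False}
    print(decide(txn))   # -> "decline" under these illustrative weights
```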
Conclusion
UPI is India's most popular digital payment method. As of June 2024, there were around 350 million active UPI users in India. The Indian Cyber Crime Coordination Centre (I4C) report indicates that ‘Online Financial Fraud’, a cyber crime category under the NCRP, is the most prevalent category. The rise of financial fraud, particularly UPI fraud, is cause for alarm, as scammers use sophisticated strategies to deceive victims. It is high time for netizens to exercise caution and care with their personal and financial information, stay aware of common tactics used by fraudsters, and adhere to best security practices for secure transactions and the safe use of UPI services.
Related Blogs
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, whether as political propaganda or health-related misinformation. Programs that teach people how to use social media responsibly and how to check facts are essential, but they do not always engage people deeply. Reading stories, attending lectures, and using fact-checking tools are standard passive learning methods used in traditional media literacy programs.
Adding game-like features to non-game settings is called "gamification", and it could be a new and interesting way to address this challenge. Gamification engages people by making them active players instead of passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news safely before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and across diverse cultural contexts. AI-based voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, in which players face realistic choices about whether to ignore, fact-check, or forward viral messages, could be especially helpful in India (a minimal sketch of such a game loop follows this list).
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify stories that are going viral, using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive tasks for spotting fake news make people more aware of it over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by providing rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives, which might be easier to arrange through partnerships with telecom providers. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) find that students are more likely to participate and to retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
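As a concrete illustration of the WhatsApp-style game mentioned above, here is a minimal sketch of a text-based game round in Python. The messages, their truth labels, and the scoring rules are hypothetical examples, not content from any existing game.

```python
# Minimal sketch of a chat-style misinformation game round.
# Messages and their "is_false" labels are made-up examples for illustration.

import random

MESSAGES = [
    {"text": "Forwarded: Drinking hot water cures viral infections!", "is_false": True},
    {"text": "Election dates announced on the official commission website.", "is_false": False},
    {"text": "Shocking! Bank closing all accounts tomorrow, forward to everyone!", "is_false": True},
]

def score_choice(message: dict, choice: str) -> int:
    """Reward fact-checking false claims, penalise forwarding them."""
    if choice == "fact-check":
        return 2 if message["is_false"] else 1
    if choice == "forward":
        return -3 if message["is_false"] else 1
    return 0  # "ignore" is neutral

def play_round() -> None:
    score = 0
    for msg in random.sample(MESSAGES, k=len(MESSAGES)):
        print("\nNew message:", msg["text"])
        choice = input("forward / fact-check / ignore? ").strip().lower()
        points = score_choice(msg, choice)
        score += points
        print(f"You scored {points} point(s).")
    print(f"\nFinal score: {score}")

if __name__ == "__main__":
    play_round()
```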
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could lead participants through situations tailored to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on the language and region.
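The sketch below illustrates one way such adaptation might work: difficulty is adjusted from the player's recent accuracy. The tiers, thresholds, and example items are assumptions for illustration; a production system might instead use NLP models to grade how subtle each misinformation example is.

```python
# Illustrative sketch of adaptive difficulty in a misinformation game.
# Tiers, thresholds, and example items are assumptions for demonstration only.

from collections import deque

EXAMPLES = {
    "easy":   ["Obvious hoax: sensational claim, all caps, no source."],
    "medium": ["Real photo paired with a misleading caption."],
    "hard":   ["Accurate statistic quoted out of context to imply a false trend."],
}

class AdaptiveDifficulty:
    """Nudge difficulty up or down based on the player's recent accuracy."""

    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.level = "easy"

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy > 0.8:
            self.level = {"easy": "medium", "medium": "hard"}.get(self.level, "hard")
        elif accuracy < 0.4:
            self.level = {"hard": "medium", "medium": "easy"}.get(self.level, "easy")

    def next_example(self) -> str:
        return EXAMPLES[self.level][0]

# Example usage: a streak of correct answers moves the player to harder items.
game = AdaptiveDifficulty()
for answer in [True, True, True, True, True]:
    game.record(answer)
print(game.level, "->", game.next_example())   # "hard" after a perfect streak
```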
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation events, and educational misinformation tournaments are all examples of games that can help fight misinformation. By making media literacy fun and interesting, India can help millions of people, especially young people, think critically and combat the spread of false information. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study. AI-powered bots could simulate real-time cases of misinformation and give immediate feedback, helping learners improve faster.
Problems and Moral Consequences
While gamification is an interesting way to fight false information, it also comes with problems that need to be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research approaches are needed to evaluate the efficacy of gamified interventions in changing misinformation-related behaviours, while keeping cultural and socio-economic contexts in the picture.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

A few of us were sitting together, talking shop - which, for moms, inevitably circles back to children, their health and education. Mothers of teenagers were concerned that their children seemed to spend an excessive amount of time online and had significantly reduced verbal communication at home.
Reena shared that she was struggling to understand her two boys, who had suddenly transformed from talkative, lively children into quiet, withdrawn teenagers.
Naaz nodded. “My daughter is glued to her device. I just can’t get her off it! What do I do, girls? Any suggestions?”
Mou sighed, “And what about the rising scams? I keep warning my kids about online threats, but I’m not sure I’m doing enough.”
“Not just scams; those come later. What worries me more are the videos and photos of unsuspecting children being edited and misused on digital platforms,” added Reena.
The Digital Parenting Dilemma
For parents, it’s a constant challenge—allowing children internet access means exposing them to potential risks while restricting it invites criticism for being overly strict.
‘What do I do?’ is a question that troubles many parents, as they know how addictive phones and gaming devices can be. (Fun fact: Even parents sometimes struggle to resist endlessly scrolling through social media!)
‘What should I tell them, and when?’ This becomes a pressing concern when parents hear about cyberbullying, online grooming, or even cyberabduction.
‘How do I ensure they stay cybersafe?’ This remains an ongoing worry, as children grow and their online activities evolve.
Whether it’s a single-child, dual-income household, a two-child, single-income family, or any other combination, parents have their hands full managing work, chores, and home life. Sometimes, children have to be left alone—with grandparents, caregivers, or even by themselves for a few hours—making it difficult to monitor their digital lives. While smartphones help parents stay connected and track their child’s location, they can also expose children to risks if not used responsibly.
Breaking It Down
Start cybersafety discussions early and tailor them to your child’s age.
For simplicity, let’s categorize learning into five key age groups:
- 0 – 2 years
- 3 – 7 years
- 8 – 12 years
- 13 – 16 years
- 16 – 19 years
Let’s explore the key safety messages for each stage.
Reminder:
Children will always test boundaries and may resist rules. The key is to lead by example—practice cybersafety as a family.
0 – 2 Years: Newborns & Infants
Pediatricians recommend avoiding screen exposure for children under two years old. If you occasionally allow screen time (for example, while changing them), keep it to a minimum. Children are easily distracted—use this to your advantage.
What can you do?
- Avoid watching TV or using mobile devices in front of them.
- Keep activity books, empty boxes, pots, and ladles handy to engage them.
3 – 7 Years: Toddlers & Preschoolers
Cybersafety education should ideally begin when a child starts engaging with screens. At this stage, parents have complete control over what their child watches and for how long.
What can you do?
- Keep screen time limited and fully supervised.
- Introduce basic cybersecurity concepts, such as stranger danger and good picture vs. bad picture.
- Encourage offline activities—educational toys, books, and games.
- Restrict your own screen time when your child is awake to set a good example.
- Set up parental controls and create child-specific accounts on devices.
- Secure all devices with comprehensive security software.
8 – 12 Years: Primary & Preteens
Cyber-discipline should start now. Strengthen rules, set clear boundaries, and establish consequences for rule violations.
What can you do?
- Increase screen time gradually to accommodate studies, communication, and entertainment.
- Teach them about privacy and the dangers of oversharing personal information.
- Continue stranger-danger education, including safe/unsafe websites and apps.
- Emphasize reviewing T&Cs before downloading apps.
- Introduce concepts like scams, phishing, deepfakes, and virus attacks using real-life examples.
- Keep banking and credit card credentials private—children may unintentionally share sensitive information.
Cyber Safety Mantras:
- STOP. THINK. ACT.
- Do Not Trust Blindly Online.
13 – 16 Years: The Teenage Phase
Teenagers are likely to resist rules and demand independence, but if cybersecurity has been a part of their upbringing, they will tolerate parental oversight.
What can you do?
- Continue parental controls but allow greater access to previously restricted content.
- Encourage open conversations about digital safety and online threats.
- Respect their need for privacy but remain involved as a silent observer.
- Discuss cyberbullying, harassment, and online reputation management.
- Keep phones out of bedrooms at night and maintain device-free zones during family time.
- Address online relationships and risks like dating scams, sextortion, and trafficking.
16 – 19 Years: The Transition to Adulthood
By this stage, children have developed a sense of responsibility and maturity. It’s time to gradually loosen control while reinforcing good digital habits.
What can you do?
- Monitor their online presence without being intrusive.
- Maintain open discussions—teens still value parental advice.
- Stay updated on digital trends so you can offer relevant guidance.
- Encourage digital balance by planning device-free family outings.
Final Thoughts
As a parent, your role is not just to set rules but to empower your child to navigate the digital world safely. Lead by example, encourage responsible usage, and create an environment where your child feels comfortable discussing online challenges with you.
Wishing you a safe and successful digital parenting journey!
Introduction
The fast-paced development of technology and the widespread use of social media platforms have led to the rapid dissemination of misinformation, which spreads widely, propagates quickly, and has broad influence and deep impact on these platforms. Social media algorithms and their decisions are often perceived as a black box, which makes it difficult for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making it harder to control its spread within vast, interconnected networks. Algorithms rank content primarily on engagement metrics and use them to surface the items a user is most likely to enjoy. This process was originally designed to cut the clutter and provide the best information, but because of the viral nature of information and user interactions, it sometimes ends up spreading misinformation widely.
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation because emotionally charged content tends to trigger strong reactions, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, which leads to the promotion of emotionally charged misinformation. Additionally, the algorithms prioritize content with the potential to go viral, which can spread false or misleading content faster than corrections or factual content.
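To make this mechanism concrete, here is a simplified, hypothetical sketch of engagement-based feed ranking in Python. The weighting scheme and the "emotional" signal are assumptions for illustration only; real platform ranking systems are far more complex and are not public.

```python
# Simplified, hypothetical sketch of engagement-based feed ranking.
# The weights and the "emotional" signal are illustrative assumptions,
# not any platform's actual formula.

posts = [
    {"id": "fact-check",     "likes": 40,  "shares": 5,   "comments": 10,  "emotional": 0.2},
    {"id": "outrage-rumour", "likes": 300, "shares": 180, "comments": 250, "emotional": 0.9},
]

def engagement_score(post: dict) -> float:
    """Score a post by raw interactions, boosted by how emotionally charged it is."""
    interactions = post["likes"] + 2 * post["shares"] + 1.5 * post["comments"]
    return interactions * (1 + post["emotional"])   # emotionally charged content gets a boost

# Ranking purely by predicted engagement pushes the rumour above the correction.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])   # ['outrage-rumour', 'fact-check']
```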
Additionally, popular content is amplified by platforms, which spreads it faster by presenting it to more users. Fact-checking efforts are limited and often delayed: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish between real people and organized networks of troll farms or bots that propagate false information. This creates a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through networks.
Though algorithms primarily aim to enhance user engagement by curating content that aligns with the user's previous behaviour and preferences, this process sometimes leads to "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed into a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences, making platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it—such as by inspecting messages or URLs for false information—can be computationally challenging and inefficient. The extensive amount of content shared daily means that misinformation can propagate far more quickly than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create a challenging environment in which misinformation thrives, highlighting the importance of countering it through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal against algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps that can be taken to curb misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defenses can empower individuals to question the sources of information and report misleading content effectively. Looking ahead, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but also for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
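As a minimal sketch of what optimising for accuracy as well as engagement might look like, the following Python snippet extends the earlier engagement-ranking idea with a credibility penalty for content that fact-checkers have disputed. The "disputed" signal and the demotion rule are illustrative assumptions, not any platform's actual policy.

```python
# Minimal sketch of re-ranking that trades engagement off against credibility.
# The "disputed" signal and the demotion rule are illustrative assumptions.

def adjusted_score(post: dict) -> float:
    """Down-rank posts in proportion to how strongly fact-checkers dispute them."""
    engagement = post["likes"] + 2 * post["shares"] + 1.5 * post["comments"]
    disputed = post.get("disputed", 0.0)      # share of fact-checks marking the post false
    if disputed >= 0.7:                       # strongly disputed content is demoted heavily
        return engagement * 0.05
    return engagement * (1 - disputed)        # otherwise scale down proportionally

posts = [
    {"id": "fact-check",     "likes": 40,  "shares": 5,   "comments": 10,  "disputed": 0.0},
    {"id": "outrage-rumour", "likes": 300, "shares": 180, "comments": 250, "disputed": 0.9},
]

feed = sorted(posts, key=adjusted_score, reverse=True)
print([p["id"] for p in feed])   # ['fact-check', 'outrage-rumour'] under these assumptions
```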
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)