#FactCheck: Viral Fake Post Claims Central Government Offers Unemployment Allowance Under ‘PM Berojgari Bhatta Yojana’
Executive Summary:
A viral thumbnail and numerous social media posts claim that the Government of India is giving unemployed youth ₹4,500 a month under a programme labelled "PM Berojgari Bhatta Yojana". The claim has been shared across multiple online platforms and has given many job seekers hope. However, when we independently researched it, we found no verified source for the scheme and no government notification.

Claim:
The viral post states: "The Central Government is running a scheme called PM Berojgari Bhatta Yojana under which any unemployed youth would be given ₹4,500 each month. Eligible candidates can apply online and get benefits." Several videos and posts display suspicious, unverified website links for registration, attempting to get members of the public to share their personal information.

Fact check:
In the course of our verification, we searched all official government portals: the Ministry of Labour and Employment, PMO India, MyScheme, MyGov, and the Integrated Government Online Directory, which lists every legitimate scheme, programme, mission, and application run by the Government of India. None of them lists any scheme called "PM Berojgari Bhatta Yojana".

Numerous YouTube channels appear to be monetising this false narrative by playing on public sentiment and directing users to misleading websites. Such scams typically aim either to harvest personal data or to earn pay-per-click advertising revenue from outrageous claims.
Our research findings were later corroborated by PIB Fact Check, which shared a clarification on social media stating: “No such scheme called ‘PM Berojgari Bhatta Yojana’ is in existence. The claim that has gone viral is fake.”

For perspective: in 2021-22, the Rajasthan government launched a state-level programme under the Mukhyamantri Udyog Sambal Yojana (MUSY) that provided ₹4,500/month to unemployed women and transgender persons and ₹4,000/month to unemployed men. This was a state initiative, not a Central Government programme, and the current viral claim falsely recasts a past, local initiative as nationwide policy.

Conclusion:
The claim of a ₹4,500 monthly unemployment benefit under the PM Berojgari Bhatta Yojana is incorrect. Neither the Central Government nor any government department has launched such a scheme. Our finding aligns with PIB Fact Check, which classifies this as a case of misinformation. We encourage everyone to stay vigilant, avoid reacting to viral fake news, and verify claims through official sources before sharing or taking action. Let's work together to curb misinformation and protect citizens from false hopes and data fraud.
- Claim: A central policy offers jobless individuals ₹4,500 monthly financial relief
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The United Nations General Assembly (UNGA) has unanimously adopted the first global resolution on Artificial Intelligence (AI), encouraging countries to respect human rights, keep personal data safe, and monitor the risks associated with AI. This non-binding resolution, proposed by the United States and co-sponsored by China and over 120 other nations, advocates strengthening privacy policies. The step is crucial as governments across the world shape how AI grows, given the dangers the technology carries for the protection and promotion of human dignity and fundamental freedoms. The resolution emphasizes the importance of respecting human rights and fundamental freedoms throughout the life cycle of AI systems, highlighting the benefits of digital transformation and safe AI systems.
Key highlights
● This is indeed a landmark move by the UNGA, which adopted the first global resolution on AI. This resolution encourages member countries to safeguard human rights, protect personal data, and monitor AI for risks.
● Global leaders have shown their consensus for safe, secure, and trustworthy AI systems that advance sustainable development and respect fundamental freedoms.
● The resolution is the latest in a series of initiatives by governments around the world to shape AI. AI will therefore have to be created and deployed through the lens of humanity and dignity, safety and security, and human rights and fundamental freedoms throughout the life cycle of AI systems.
● The resolution encourages global cooperation, warns against improper uses of AI, and emphasizes the protection of human rights.
● The resolution aims to protect people from potential harm and ensure that everyone can enjoy AI's benefits. The United States negotiated the text of the adopted resolution with over 120 countries at the United Nations, including Russia, China, and Cuba.
Brief Analysis
AI has become increasingly prevalent in recent years, with chatbots such as ChatGPT taking the world by storm. AI steadily attempts to replicate human-like thinking and problem-solving. Machine learning, a key aspect of AI, involves learning from experience and identifying patterns in order to solve problems autonomously. The contemporary emergence of AI has, however, raised questions about its ethical implications, its potential negative impact on society, and whether it is too late to control it.
While AI can solve problems quickly and perform various tasks with ease, it brings its own set of problems. As AI continues to grow, global leaders have called for regulation to prevent the significant harm an unregulated AI landscape could cause and to encourage the use of trustworthy AI. The European Union (EU) has adopted its own legislation, the EU AI Act, and a Senate bill called the “AI Consent Bill” was recently introduced in the US. Similarly, India is proactively setting the stage for a more regulated AI landscape by fostering dialogue and taking significant measures. Recently, the Ministry of Electronics and Information Technology (MeitY) issued an advisory on AI that requires explicit permission before under-testing or unreliable AI models are deployed on India's Internet. The advisory also sets out measures to combat deepfakes and misinformation.
AI has thus become a powerful tool that raises concerns about its ethical implications and potential negative influence on society. Governments worldwide are taking action to regulate AI and ensure that it remains safe and effective. The UNGA's groundbreaking adoption of the global resolution on AI, with the support of all 193 U.N. member nations, shows what coordinated efforts by countries can achieve in regulating AI and promoting its safe and responsible use globally.
New AI tools have emerged in the public sphere that may threaten humanity in unexpected ways. Through machine learning, AI can improve itself, and developers are often surprised by the emergent abilities and qualities of these tools. The ability to manipulate and generate language, whether in words, images, or sounds, is the most important aspect of the current phase of the ongoing AI revolution. AI will have far-reaching implications, so it is high time to regulate it and promote its safe, secure, and responsible use.
Conclusion
The UNGA has approved its global resolution on AI, marking significant progress towards creating global standards for the responsible development and employment of AI. The resolution underscores the critical need to protect human rights, safeguard personal data, and closely monitor AI technologies for potential hazards. It calls for more robust privacy regulations and recognises the dangers associated with improper AI systems. This profound resolution reflects a unified stance among UN member countries on overseeing AI to prevent possible negative effects and promote safe, secure and trustworthy AI.

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry; it affects everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation on all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation which may harm reputations, disrupt businesses, and reduce consumer trust. MSMEs are particularly susceptible since they have minimal resources at their disposal and cannot afford to invest in the kind of talent, technology and training that is needed for a business to be able to protect itself in today’s digital-first ecosystem. Mis/disinformation for MSMEs can arise from internal communications, supply chain partners, social media, competitors, etc. To address these dangers, MSMEs must take proactive steps such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage which can include financial loss, reputational damage, operational damages, and regulatory noncompliance. Various assessment methodologies can be used to analyze the impact of misinformation, including surveys, interviews, case studies, social media and news data analysis, and risk analysis practices.
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship program of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’, and the Digital Personal Data Protection Act, 2023 is a brand-new law aimed at protecting personal data. Fact-check units (FCUs), both government and independent private bodies, verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific and lack targeted guidelines: their awareness initiatives on misinformation have limited reach, and the support structures that would help MSMEs verify information and protect themselves are insufficient.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation for MSMEs, recommendations include creating a dedicated Misinformation Helpline, promoting awareness campaigns, creating regulatory support and guidelines, and collaborating with tech platforms and expert organisations for the identification and curbing of misinformation.
Organisational recommendations include: information verification protocols so that critical information is verified before being acted upon; regular employee training on identifying and managing misinformation; a crisis management plan for misinformation incidents; and collaboration networks with other MSMEs to share verified information and best practices.
Organisations can also adopt technological solutions such as AI and ML tools to detect and flag potential misinformation, use fact-checking tools, and apply cybersecurity measures to prevent the spread of misinformation via digital channels.
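As a minimal illustration of how such ML-based flagging can work, the sketch below trains a tiny Naive Bayes text classifier on a handful of hand-labelled example messages and scores a new one. All example texts, labels, and the two-class setup here are hypothetical; real detection systems use far larger models, curated datasets, and human review.

```python
# Tiny Naive Bayes text classifier: a toy sketch of ML-based
# misinformation flagging (hypothetical data, illustration only).
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs; returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Return the label whose word distribution best explains the text."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)  # Laplace-smoothing denominator
        score = sum(math.log((c[w] + 1) / total) for w in text.lower().split())
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labelled examples.
examples = [
    ("government gives free money apply now via this link", "suspect"),
    ("claim your unemployment allowance instantly no documents", "suspect"),
    ("ministry publishes annual employment report on official portal", "ok"),
    ("official notification issued for scheme on government website", "ok"),
]
model = train(examples)
print(classify(model, "apply now for free money allowance"))  # -> suspect
```

A flagged message would then be routed to a fact-checking workflow rather than blocked automatically; the classifier only prioritises what humans should verify.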
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing the impact on MSMEs, identifying current policies and gaps, and providing actionable recommendations. Implementation can begin with pilot programs in key MSME clusters and stakeholder engagement involving industry associations, tech companies, and government bodies, followed by a feedback mechanism for continual improvement of the framework and, finally, a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx
Introduction and Brief Analysis
The movie “The Artifice Girl” portrays a law enforcement agency developing an AI-based personification of a 12-year-old girl who appears exactly like a real person. Believing her to be an actual child, perpetrators of child sexual exploitation were caught attempting to seek sexual favours. The movie shows AI aiding law enforcement, but in reality the emergence of Artificial Intelligence has posed numerous challenges in multiple directions. The example illustrates both the promise and the complexity of using AI in sensitive areas like law enforcement, where technological innovation must be carefully balanced against ethical and legal considerations.
Detection and protection tools are in a constant race with technologies that generate content, automate grooming, and challenge legal boundaries. These technological advancements have provided fertile ground for the proliferation of Child Sexual Exploitation and Abuse Material (CSEAM). Referred to as child pornography under Section 2(da) of the Protection of Children from Sexual Offences Act, 2012, it is defined as “any visual depiction of sexually explicit conduct involving a child which includes a photograph, video, digital or computer-generated image indistinguishable from an actual child and image created, adapted, or modified, but appears to depict a child.”
Artificial Intelligence is a category of technologies that attempt to replicate human thought and behaviour using algorithms and datasets. Two primary applications are relevant in the context of CSEAM: classifiers and content generators. Classifiers are programs that learn from large datasets, which may be labelled or unlabelled, and then classify content as restricted or illegal. Generative AI is also trained on large datasets but uses that knowledge to create new content. The majority of current AI research related to CSEAM uses artificial neural networks (ANNs), a type of AI that can be trained to identify connections between items (classification) and to generate novel combinations of items (e.g., elements of a picture) based on its training data.
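A simplified sketch of the detection side is hash-list matching: platforms compare uploaded files against fingerprints of already-catalogued illegal material supplied by clearinghouses. The example below uses exact SHA-256 hashing purely for illustration, with hypothetical byte strings standing in for files; deployed systems such as Microsoft's PhotoDNA instead use perceptual hashes that survive resizing and re-encoding, which exact cryptographic hashing does not.

```python
# Hash-list matching sketch (illustration only; sample data is hypothetical).
# Real systems use perceptual hashing, not exact SHA-256 comparison.
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hash list of known catalogued files (hypothetical entries).
known_hashes = {
    fingerprint(b"known-bad-sample-1"),
    fingerprint(b"known-bad-sample-2"),
}

def check_upload(data: bytes) -> bool:
    """Return True if the upload matches a file on the known-hash list."""
    return fingerprint(data) in known_hashes

print(check_upload(b"known-bad-sample-1"))  # matches the list -> True
print(check_upload(b"holiday-photo"))       # no match -> False
```

Because exact hashes break under the slightest modification, and because generative AI produces wholly new material with no prior hash to match, classifiers trained on labelled data remain necessary alongside hash lists.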
Current Legal Landscape
The legal landscape around AI is still unclear and evolving, with different nations trying to track AI's evolution and develop laws. Some laws, however, directly address CSEAM. The International Centre for Missing and Exploited Children (ICMEC) combats illegal sexual content involving children and publishes “Model Legislation” setting out recommended sanctions and sentencing. According to research conducted in 2018, such content is criminalised in 118 of the 196 Interpol member states, a figure representing countries whose legislation meets 4 or 5 of the 5 criteria defined by the ICMEC.
CSEAM in India can be reported on various portals like the ‘National Cyber Crime Reporting Portal’. Online crimes related to children, including CSEAM, can be reported to this portal by visiting cybercrime.gov.in. This portal allows anonymous reporting, automatic FIR registration and tracking of your complaint. ‘I4C Sahyog Portal’ is another platform managed by the Indian Cyber Crime Coordination Centre (I4C). This portal integrates with social media platforms.
The Indian legal front for AI is evolving, and CSEAM is well addressed in Indian law and through judicial pronouncements. The Supreme Court judgment in Just Rights for Children Alliance and Anr v. S. Harish and Ors is a landmark in this regard. The judgment highlighted the following principles:
- The term “child pornography” should be substituted with “Child Sexual Exploitation and Abuse Material” (CSEAM) and should not be used in any further judicial proceeding, order, or judgment. Parliament should likewise amend POCSO to endorse the term CSEAM.
- Parliament should consider amending Section 15(1) of POCSO to make it easier for the general public to report offences via an online portal.
- Sex education programs should be implemented to give young people a clear understanding of consent and the consequences of exploitation. To help prevent problematic sexual behaviour (PSB), schools should teach students about consent, healthy relationships, and appropriate behaviour.
- Support services to the victims and rehabilitation programs for the offenders are essential.
- Early identification of at-risk individuals and implementation of intervention strategies for youth.
Distinctive Challenges
According to a report by the National Centre for Missing and Exploited Children (NCMEC), a significant number of reports about child sexual exploitation and abuse material (CSEAM) are linked to perpetrators based outside the country, which highlights major challenges of jurisdiction and anonymity in addressing such crimes. Since the issue concerns children, and given the cross-border nature of the internet and the emergence of AI, nations across the globe need to come together to solve this problem. Delays in extradition procedures and inconsistent legal processes across jurisdictions hinder the apprehension of offenders and the delivery of justice to victims.
CyberPeace Recommendations
For effective regulation of AI-generated CSEAM, laws must be strengthened so that AI developers and trainers prevent misuse of their tools. AI should be designed with ethical considerations in mind, ensuring respect for privacy, consent, and child rights. There should also be self-regulation mechanisms through which AI models recognise and restrict red flags related to CSEAM that indicate grooming or potential abuse.
A distinct Indian CSEAM reporting portal is urgently needed as cybercrime increases throughout the nation; relying on the integrated portal alone risks AI-based CSEAM cases being overlooked, whereas a dedicated portal would enable faster response and focused tracking. Since AI-generated content is detectable, the portal should also include an automated AI-content detection system linked directly to law enforcement for swift action.
Furthermore, international cooperation is of the utmost importance in meeting AI-enabled challenges and filling jurisdictional gaps. A united global effort, common technology, and unified international laws are essential to tackle AI-driven child sexual exploitation across borders and protect children everywhere. CSEAM is an extremely serious issue, and children are among the most vulnerable to such harmful content. The threat must be addressed without delay through stronger policies, dedicated reporting mechanisms, and swift action to protect children from exploitation.
References:
- https://www.sciencedirect.com/science/article/pii/S2950193824000433?ref=pdf_download&fr=RR-2&rr=94efffff09e95975
- https://aasc.assam.gov.in/sites/default/files/swf_utility_folder/departments/aasc_webcomindia_org_oid_4/portlet/level_2/pocso_act.pdf
- https://www.manupatracademy.com/assets/pdf/legalpost/just-rights-for-children-alliance-and-anr-vs-sharish-and-ors.pdf
- https://www.icmec.org
- https://www.missingkids.org/theissues/generative-ai