Centre Proposes New Bills for Criminal Law
Introduction
Criminal justice in India is primarily governed by three laws: the Indian Penal Code, the Code of Criminal Procedure and the Indian Evidence Act. On Friday, 11th August 2023, the Centre introduced new bills in Parliament to replace these three major criminal laws.
The following three bills are being proposed to replace major criminal laws in the country:
- The Bharatiya Nyaya Sanhita Bill, 2023, to replace the Indian Penal Code, 1860.
- The Bharatiya Nagrik Suraksha Sanhita Bill, 2023, to replace the Code of Criminal Procedure, 1973.
- The Bharatiya Sakshya Bill, 2023, to replace the Indian Evidence Act, 1872.
Cyber law-oriented view of the new shift in criminal law
Notable changes: the Bharatiya Nyaya Sanhita Bill, 2023 versus the Indian Penal Code, 1860
Way ahead for digitalisation
The new laws aim to enhance the use of digital services in the court system: they provide for online registration of FIRs, online filing of charge sheets, service of summons in electronic mode, and trials and proceedings conducted electronically. The new bills also allow witnesses, accused persons, experts and victims to appear virtually in certain instances. This shift will drive the adoption of technology in courts, with all courts expected to be computerised in the coming years.
Enhanced recognition of electronic records
As daily life increasingly moves into the digital sphere, the bills give electronic records the same recognition as paper records.
Conclusion
The criminal laws of the country play a significant role in establishing law and order and delivering justice. India's criminal laws largely date back to British rule, and although they have been amended several times to deal with growing crime and new kinds of offences, there was a need for well-established criminal laws in accordance with the present era. The legislature's step of consolidating the criminal laws in a new form through these three bills is a good approach that will ultimately strengthen the criminal justice system in India and facilitate the use of technology in the court system.
Related Blogs

Introduction
In today's era of digitalised communities and connections, social media has become an integral part of our lives. A large number of teenagers are active on social media and have their own accounts, which they use to connect with their friends and family. Social media makes it easy to connect and communicate with larger communities and even showcase one's creativity. On the other hand, it also poses challenges such as inappropriate content, online harassment, online stalking, misuse of personal information, and abusive or disheartening content. Such threats, or the overuse of social media itself, can have unintended consequences for teenagers' mental health. Data shows that some teens spend several hours a day on social media, so it has a significant impact on them whether we notice it or not. Social media addiction and its negative repercussions, such as overuse by teens and exposure to online threats and vulnerabilities, are growing concerns that need to be taken seriously by social media platforms, regulators and users themselves. Recently, Colorado and California led a joint lawsuit filed by 33 states in the U.S. District Court for the Northern District of California against Meta over concerns about child safety.
Meta and concerns over child users' safety
Recently, Meta, the company that owns Facebook, Instagram, WhatsApp and Messenger, was sued by more than three dozen states for allegedly using features to hook children to its platforms. The lawsuit claims that Meta violated consumer protection laws and deceived users about the safety of its platforms. The states accuse Meta of designing manipulative features to induce young users' compulsive and extended use, pushing them into harmful content. Meta has responded by stating that it is working to provide a safer environment for teenagers and expressing disappointment in the lawsuit.
According to the complaint filed by the states, Meta “designed psychologically manipulative product features to induce young users’ compulsive and extended use" of platforms like Instagram. The states allege that Meta's algorithms were designed to push children and teenagers into rabbit holes of toxic and harmful content, with features like "infinite scroll" and persistent alerts used to hook young users. Meta responded to the lawsuit with disappointment, stating that it is working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use.
Unplug for some time
Overuse of social media is associated with increased mental health repercussions, along with online threats and risks. Social media's effect on teenagers is driven by factors such as inadequate sleep, exposure to cyberbullying and online threats, and lack of physical activity. Admittedly, social media can help teens feel more connected to their friends and support systems and showcase their creativity to the online world. However, social media overuse by teens is often linked with underlying issues that require attention. To help teenagers, encourage them to use social media responsibly and to unplug from it for some time, to get outside in nature, to do physical activities, and to express themselves creatively.
Understanding the threats & risks
- Psychological effects
- Addiction: Excessive use of social media leads to procrastination and can result in physical and psychological addiction, because it triggers the brain's reward system.
- Associated mental conditions: Excessive use of social media can harm mental well-being and may lead to depression, anxiety, self-consciousness and even social anxiety disorder.
- Eye strain and carpal tunnel syndrome: Spending excessive time in front of a screen can put a real strain on your eyes. Eye problems caused by computer or phone screen use fall under computer vision syndrome (CVS), while carpal tunnel syndrome is caused by pressure on the median nerve in the wrist.
- Cyberbullying: Cyberbullying is one of the major concerns in online interactions on social media. It involves using the internet or other digital communication technologies to bully, harass or intimidate others, and may include spreading rumours or posting hurtful comments. Cyberbullying has become a major form of online harassment on popular social media platforms and has a serious socio-psychological impact on victims.
- Online grooming: Online grooming refers to the tactics abusers deploy through the internet to sexually exploit children. The average time it takes a bad actor to lure a child into their trap is reported to be as little as three minutes, which is alarming.
- Ransomware/malware/spyware: Cybercrooks spread threats such as ransomware, malware and spyware by posting malicious links on social media. These pose serious cyber risks, with consequences such as financial loss, data loss and reputational damage. Ransomware is a type of malware designed to deny a user or organisation access to the files on their computer. Since malicious links on social media can carry malware and spyware, it is important to be cautious before clicking on any suspicious link.
- Sextortion: Sextortion is a crime in which the perpetrator threatens the victim and demands a ransom or sexual favours by threatening to expose the victim's sexual activity. It is a form of sexual blackmail that often takes place on social media, and youngsters are most commonly targeted. Cybercrooks also misuse advanced AI deepfake technology, which can create realistic-looking images or videos that are in fact generated by machine algorithms. Because deepfake technology is easily accessible, fraudsters misuse it to commit crimes such as sextortion and to deceive and scam people with fake images or videos that appear real.
- Child sexual abuse material (CSAM): CSAM is illicit content prohibited by law and regulatory guidelines. While using the internet, a child may encounter age-restricted or inappropriate content that can be harmful to them. Under regulatory guidelines, internet service providers must refrain from hosting CSAM on their websites and must block such inappropriate content.
- In-app purchases: Teen users also make in-app purchases on social media and in online games, where they might fall prey to financial fraud or easy-money scams. Fraudsters lure victims with exciting offers such as part-time jobs, work-from-home jobs, small investments, or earning money by liking content on social media. Such scams are prevalent on social media, where fraudsters target innocent people, ask for their personal and financial information, and commit financial fraud on the pretext of these offers.
Safety tips:
To stay safe while using social media, teens and other users are encouraged to stay aware of online threats and follow best practices such as the following:
- Safe web browsing.
- Utilising privacy settings of your social media accounts.
- Using strong passwords and enabling two-factor authentication.
- Being careful about what you post or share.
- Becoming familiar with the privacy policies of social media platforms.
- Being selective about adding unknown users to your social media network.
- Reporting any suspicious activity to the platform or relevant forum.
Conclusion:
Child safety is a major concern on social media platforms. Social media-related offences such as cyberstalking, hacking, online harassment and threats, sextortion, and financial fraud are among the most frequently occurring cyber crimes on these platforms. Tech giants must ensure the safety of teen users by implementing the best protective mechanisms on their platforms. CyberPeace Foundation is working to advocate for a Child-friendly SIM to protect children from the illicit influence of the internet and social media.
References:
- https://www.scientificamerican.com/article/heres-why-states-are-suing-meta-for-hurting-teens-with-facebook-and-instagram/
- https://www.nytimes.com/2023/10/24/technology/states-lawsuit-children-instagram-facebook.html

Introduction:
The G7 Summit is an international forum comprising France, the United States, the United Kingdom, Germany, Japan, Italy and Canada, with the European Union (EU) also participating. The annual G7 meeting was hosted by Japan in May 2023 and took place in Hiroshima. Artificial Intelligence (AI) was a major theme of this G7 Summit. Key takeaways highlight that the leaders focused on accelerating the adoption of AI for beneficial use cases across the economy and government, and on improving governance structures to mitigate the potential risks of AI.
Need for fair and responsible use of AI:
The G7 recognises the need to work together to ensure the responsible and fair use of AI and to help establish technical standards for it. Members of the G7 countries agreed to foster an open and enabling environment for the development of AI technologies and emphasised that AI regulation should be based on democratic values. The summit called for the responsible use of AI: the ministers discussed the risks involved in AI programs like ChatGPT and came up with an action plan for promoting the responsible use of AI, with human beings leading the efforts.
Further, ministers from the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the UK, the US and the EU) met virtually on 7 September 2023 and committed to creating ‘international guiding principles applicable for all AI actors’ and a code of conduct for organisations developing ‘advanced’ AI systems.
What is the Hiroshima AI Process (HAP)?
The Hiroshima AI Process (HAP) is the G7's effort to determine a way forward for regulating AI and to establish trustworthy AI technical standards at the international level. The G7 agreed to create a ministerial forum to promote the fair use of AI. The HAP provides a forum for international discussions on inclusive AI governance and interoperability, working towards a common vision and goal of trustworthy AI at the global level.
The HAP will be operating in close connection with organisations including the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
The HAP, initiated at the annual G7 Summit held in Hiroshima, Japan, is a significant step towards regulating AI and is expected to conclude by December 2023.
G7 leaders emphasised fostering an environment in which trustworthy AI systems are designed, developed and deployed for the common good worldwide. They advocated international standards and interoperable tools for trustworthy AI that enable innovation, supported by a comprehensive policy framework, including overall guiding principles for all actors in the AI ecosystem.
Stressing upon fair use of advanced technologies:
The G7 leaders also discussed the impact and misuse of generative AI. They stressed the risks of misinformation and disinformation posed by generative AI models, which are capable of creating synthetic content such as deepfakes. In particular, they noted that the next generation of interactive generative media will leverage targeted influence content that is highly personalised, localised and conversational.
In the digital landscape, technologies such as generative Artificial Intelligence (AI), deepfakes and machine learning are advancing rapidly. Such technologies offer convenience to users in performing several tasks and are capable of assisting individuals and business entities. Since these technologies are easily accessible, cybercriminals leverage AI tools and technologies for malicious activities; hence, regulatory mechanisms at the global level are needed to ensure and advocate the ethical, reasonable and fair use of such advanced technologies.
Conclusion:
The G7 Summit held in May 2023 advanced international discussions on inclusive AI governance and interoperability to achieve a common vision and goal of trustworthy AI, in line with shared democratic values. AI governance has become a global issue, and countries around the world are coming forward to advocate the responsible and fair use of AI and to shape global AI governance and standards. It is important to establish a regulatory framework that defines AI capabilities, identifies areas prone to misuse and sets reasonable technical standards while also fostering innovation, prioritising data privacy, integrity and security as these advanced technologies evolve.
References:
- https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf
- https://www.g7hiroshima.go.jp/en/summit/about/
- https://www.drishtiias.com/daily-updates/daily-news-analysis/the-hiroshima-ai-process-for-global-ai-governance
- https://www.businesstoday.in/technology/news/story/hiroshima-ai-process-g7-calls-for-adoption-of-international-technical-standards-for-ai-382121-2023-05-20
Introduction to Grooming
The term grooming is believed to have been first used in the 1970s by a group of investigators to describe an offender's patterns of seduction towards a child. It eventually evolved, began being commonly used by law enforcement agencies, and has now replaced the term seduction for this behavioural pattern. At its core, grooming refers to the conditioning of a child by an adult offender to further the offender's wrong motives. In its most common sense, it refers to the sexual victimisation of children, whereby an adult befriends a minor and builds an emotional connection in order to sexually abuse, exploit and even traffic the victim. The onset of technology has shifted the offline physical proximity of perpetrators to the internet, enabling groomers to integrate themselves completely into the victim’s life by maintaining consistent contact. It is noted that while grooming can occur online and offline, groomers often establish online contact before moving the ‘relationship’ offline to commit sexual offences.
Underreporting and Vulnerability of Teenagers
Given the elusive nature of the crime, cyber grooming remains one of the most underreported crimes by victims, who are often unaware or embarrassed to share their experiences. Teenagers are particularly more susceptible to cyber grooming since they not only have more access to the internet but also engage in more online risk-taking behaviours such as posting sensitive and personal pictures. Studies indicate that individuals aged 18 to 23 often lack awareness regarding the grooming process. They frequently engage in relationships with groomers without recognising the deceptive and manipulative tactics employed, mistakenly perceiving these relationships as consensual rather than abusive.
Rise of Cyber Grooming Incidents after the COVID-19 Pandemic
There has been an uptick in cyber grooming after the COVID-19 pandemic, whereby an adult poses as a teenager or a child and befriends a minor on child-friendly websites or social media outlets and builds an emotional connection with the victim. The main goal is to obtain intimate and personal data of the minor, often in the form of sexual chats, pictures or videos, to threaten and coerce them into continuing such acts. The grooming process usually begins with seemingly harmless inquiries about the minor's age, interests, and family background. Over time, these questions gradually shift to topics concerning sexual experiences and desires. Research and data indicate that online grooming is primarily carried out by males, who frequently choose their victims based on attractiveness, ease of access, and the ability to exploit the minor's vulnerabilities.
Beyond Sexual Exploitation: Ideological and Commercial Grooming
Grooming is not confined to sexual exploitation. The rise of technology has expanded the influence of extremist ideological groups, granting them access to children who can be coerced into adopting their beliefs. This phenomenon, known as ideological grooming, presents significant personal, social, national security, and law enforcement challenges. Additionally, a new trend, termed digital commercial grooming, involves malicious actors manipulating minors into procuring and using drugs. Violent extremists are improving their online recruitment strategies, learning from each other to target and recruit supporters more effectively and are constantly leveraging children’s vulnerabilities to reinforce anti-government ideologies.
Policy Recommendations to Combat Cyber Grooming
To address the pervasive issue of cyber grooming and child recruitment by extremist groups, several policy recommendations can be implemented. Social media and online platforms should enhance their monitoring and reporting systems to swiftly detect and remove grooming behaviours. This includes investing in AI technologies for content moderation and employing dedicated teams to respond to reports promptly. Additionally, collaborative efforts with cybersecurity experts and child psychologists to develop educational campaigns and tools that teach children about online safety and identify grooming tactics should be mandated. Legislation should also be strengthened to include provisions specifically addressing cyber grooming, ensuring strict penalties for offenders and protections for victims. In this regard, international cooperation among law enforcement agencies and tech companies is essential to create a unified approach to tackling cross-border online threats to children's safety and security.
References:
- Lanning, Kenneth “The Evolution of Grooming: Concept and Term”, Journal of Interpersonal Violence, 2018, Vol. 33 (1) 5-16. https://www.nationalcac.org/wp-content/uploads/2019/05/The-evolution-of-grooming-Concept-and-term.pdf
- Jonie Chiu, Ethel Quayle, “Understanding online grooming: An interpretative phenomenological analysis of adolescents' offline meetings with adult perpetrators”, Child Abuse & Neglect, Volume 128, 2022, 105600, ISSN 0145-2134, https://doi.org/10.1016/j.chiabu.2022.105600. https://www.sciencedirect.com/science/article/pii/S014521342200120X
- “Online child sexual exploitation and abuse”, Sharing Electronic Resources and Laws On Crime (SHERLOC), United Nations Office on Drugs and Crime. https://sherloc.unodc.org/cld/en/education/tertiary/cybercrime/module-12/key-issues/online-child-sexual-exploitation-and-abuse.html
- Mehrotra, Karishma, “In the pandemic, more Indian children are falling victim to online grooming for sexual exploitation” The Scroll.in, 18 September 2021. https://scroll.in/magazine/1005389/in-the-pandemic-more-indian-children-are-falling-victim-to-online-grooming-for-sexual-exploitation
- Lorenzo-Dus, Nuria, “Digital Grooming: Discourses of Manipulation and Cyber-Crime”, 18 December 2022 https://academic.oup.com/book/45362
- Strategic orientations on a coordinated EU approach to prevention of radicalisation in 2022-2023 https://home-affairs.ec.europa.eu/system/files/2022-03/2022-2023%20Strategic%20orientations%20on%20a%20coordinated%20EU%20approach%20to%20prevention%20of%20radicalisation_en.pdf
- “Handbook on Children Recruited and Exploited by Terrorist and Violent Extremist Groups: The Role of the Justice System”, United Nations Office on Drugs and Crime, 2017. https://www.unodc.org/documents/justice-and-prison-reform/Child-Victims/Handbook_on_Children_Recruited_and_Exploited_by_Terrorist_and_Violent_Extremist_Groups_the_Role_of_the_Justice_System.E.pdf