Centre Proposes New Bills for Criminal Law
Introduction
Criminal justice in India is principally governed by three laws: the Indian Penal Code, the Code of Criminal Procedure and the Indian Evidence Act. On Friday, 11 August 2023, the Centre introduced bills in Parliament to replace these three major criminal laws.
The following three bills are being proposed to replace major criminal laws in the country:
- The Bharatiya Nyaya Sanhita Bill, 2023, to replace the Indian Penal Code, 1860.
- The Bharatiya Nagrik Suraksha Sanhita Bill, 2023, to replace the Code of Criminal Procedure, 1973.
- The Bharatiya Sakshya Bill, 2023, to replace the Indian Evidence Act, 1872.
Cyber law-oriented view of the new shift in criminal law
Notable changes: the Bharatiya Nyaya Sanhita Bill, 2023 vis-à-vis the Indian Penal Code, 1860
Way ahead for digitalisation
The new laws aim to enhance the use of digital services in the court system: they provide for online registration of FIRs, online filing of charge sheets, service of summons in electronic mode, and trials and proceedings conducted electronically. The new bills also allow witnesses, accused persons, experts and victims to appear virtually in certain instances. This shift will drive the adoption of technology in courts, with all courts expected to be computerised in the coming years.
Enhanced recognition of electronic records
With so much of everyday life now taking place in the digital sphere, the bills give electronic records the same recognition as paper records.
Conclusion
The criminal laws of a country play a significant role in establishing law and order and delivering justice. India's criminal laws date back to British rule, and although they have been amended several times to deal with growing crime and new realities, there remained a need for criminal laws suited to the present era. The legislature's step of recasting the criminal laws and introducing these three bills is a welcome approach that will ultimately strengthen the criminal justice system in India and facilitate the use of technology in the court system.
Related Blogs

Executive Summary:
A viral image circulating on social media is claimed to show a natural optical illusion in Epirus, Greece. On fact-checking, however, the image was found to be an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion tool. The CyberPeace Research Team traced it through a reverse image search and analysis with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon in Epirus, Greece is false, as no evidence of such an optical illusion in the region was found.

Claims:
The viral image circulating on social media is claimed to depict a natural optical illusion in Epirus, Greece. Users have shared it on X (formerly Twitter), YouTube and Facebook, and it is spreading rapidly across social media.

Similar Posts:


Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran a synthetic-media check: the Hive AI detection tool rated the image as 100% AI generated. We then looked for the source of the image through a reverse image search, which led to similar posts linking to an Instagram account, hamidreza.edalatnia, whose creator posts visuals in the same style.

We searched the account for the viral image and confirmed that it was created by this artist.

The photo was posted on 10 December 2023, and the artist mentioned that the image was generated using the Stable Diffusion AI tool. Hence, the claim that the viral image shows a natural optical illusion in Epirus, Greece is misleading.
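For readers curious how such a check can be scripted, below is a hypothetical sketch of submitting an image to an AI-content detection service and reading back a likelihood score. The endpoint URL, API-key handling and the ai_generated_score response field are invented for illustration and do not represent Hive's actual API; the team's check was performed through the tool's own interface.

```python
# Hypothetical sketch: submit an image to an AI-content detection service and
# read back a confidence score. The endpoint, key and response field names are
# invented for illustration; a real service's API will differ.
import requests

DETECTION_URL = "https://api.example-detector.com/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                        # hypothetical credential

def check_ai_generated(image_path: str) -> float:
    """Return the service's reported likelihood (0-1) that the image is AI generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json().get("ai_generated_score", 0.0)  # hypothetical response field

if __name__ == "__main__":
    score = check_ai_generated("viral_image.jpg")
    print(f"Reported likelihood of AI generation: {score:.0%}")
```

A score close to 100%, combined with a reverse image search that traces the picture to its original creator, is the kind of corroborating evidence used in this fact check.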
Conclusion:
The image claimed to show a natural optical illusion in Epirus, Greece is not genuine. It is an artwork created by Hamidreza Edalatnia, an artist from Iran, using the Stable Diffusion AI tool. Hence, the claim is false.
Introduction
Cyber slavery is a form of modern exploitation that begins with online deception and evolves into physical human trafficking. In recent times, it has emerged as a serious threat in which individuals are exploited through digital means under coercive or deceptive conditions. Offenders target innocent individuals and lure them with fake promises of employment and the like. Cyber slavery can occur on a global scale, targeting vulnerable individuals worldwide through the internet; it is a disturbing continuum of online manipulation that leads to real-world abuse and exploitation, with victims entrapped by false promises and subjected to severe human rights violations. It can take many forms, such as coercive involvement in cybercrime, forced employment in online fraud operations, exploitation in the gig economy, or involuntary servitude. The issue has escalated to the point where Indians are being trafficked for jobs in countries like Laos and Cambodia. Recently, over 5,000 Indians were reported to be trapped in Southeast Asia, where they are allegedly being coerced into carrying out cyber fraud. Indian tech workers in particular were reportedly lured to Cambodia with high-paying jobs and later found themselves trapped in cyber fraud schemes, forced to work 16 hours a day under severe conditions. This is the harsh reality for thousands of Indian tech professionals lured under false pretences into employment in Southeast Asia, where they are forced to commit cyber crimes.
Over 5,000 Indians Held in Cyber Slavery and Human Trafficking Rings
India has rescued 250 citizens in Cambodia who were forced to run online scams, with more than 5,000 Indians stuck in Southeast Asia. The victims, mostly young and tech-savvy, are lured into illegal online work ranging from money laundering and crypto fraud to love scams, where they pose as lovers online. It was reported that Indians are being trafficked for jobs in countries like Laos and Cambodia, where they are forced to conduct cybercrime activities. Victims are often deceived about where they would be working, thinking it will be in Thailand or the Philippines. Instead, they are sent to Cambodia, where their travel documents are confiscated and they are forced to carry out a variety of cybercrimes, from stealing life savings to attacking international governmental or non-governmental organizations. The Indian embassy in Phnom Penh has also released an advisory warning Indian nationals of advertisements for fake jobs in the country through which victims are coerced to undertake online financial scams and other illegal activities.
Regulatory Landscape
Trafficking in Human Beings (THB) is prohibited under Article 23(1) of the Constitution of India. The Immoral Traffic (Prevention) Act, 1956 (ITPA) is the premier legislation for the prevention of trafficking for commercial sexual exploitation. Section 111 of the Bharatiya Nyaya Sanhita (BNS), 2023 is a comprehensive legal provision aimed at combating organised crime and will be useful in prosecuting those involved in such large-scale scams. India has also ratified bilateral agreements with several countries to facilitate intelligence sharing and coordinated efforts against transnational organised crime and human trafficking.
CyberPeace Policy Recommendations
● The misuse of technology has given rise to a new genre of cybercrimes in which criminals use social media platforms as a tool to target innocent individuals. Countering them requires collective effort from social media companies and regulatory authorities to address newly emerging cybercrimes as they arise and to develop robust preventive measures.
● Despite the regulatory mechanisms in place, jurisdictional hurdles, the difficulty of detection due to anonymity, and investigative challenges make cyber human trafficking a serious, evolving threat. International collaboration between countries is therefore encouraged to address the issue in today's technologically driven world. Robust legislation that addresses both national and transnational cases of human trafficking and imposes strict penalties on offenders must be enforced.
● Cybercriminals target innocent people by offering fake, high-paying job opportunities to build trust and lure them in. It is high time all netizens became aware of these tactics and learned to recognise their early signs. By staying vigilant and cross-verifying details from authentic sources, netizens can protect themselves from threats that can endanger their lives once they are trafficked and placed under restrictions. The Indian government and its agencies are continuously working to rescue victims of cyber human trafficking or cyber slavery; they must further develop robust mechanisms for specialised agencies to conduct rescue operations in a timely manner.
● Capacity building and support mechanisms must be encouraged by government entities, cybersecurity experts and non-governmental organisations (NGOs) to empower netizens to follow best practices online, to provide helplines or help centres for reporting suspicious activity or behaviour, and to help netizens feel safe on the Internet while building defences against cyber threats.
References:
1. https://www.bbc.com/news/world-asia-india-68705913
2. https://therecord.media/india-rescued-cambodia-scam-centers-citizens
3. https://www.the420.in/rescue-indian-tech-workers-cambodia-cyber-fraud-awareness/
4. https://www.dyami.services/post/intel-brief-250-indian-citizens-rescued-from-cyber-slavery
5. https://www.mea.gov.in/human-trafficking.htm
6. https://www.drishtiias.com/blog/the-vicious-cycle-of-human-trafficking-and-cybercrime

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing how rapidly it has been woven into our lives. This development and integration carry risks. Consider this response from Google's AI chatbot Gemini to a student's homework query: "You are not special, you are not important, and you are not needed…Please die." In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained reliance on AI.
AI’s Rise and Its Limitations
AI's swift rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision and robotics. Generative models such as GPT-3, GPT-4 and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes and improve through iteration. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. Such instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI models sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an "AI hallucination". It can take the form of factual inaccuracies, irrelevant information or contextually inappropriate responses.
A significant source of hallucination in machine learning systems is bias in the input they receive. If a model is trained on biased or unrepresentative data, it may hallucinate and produce results that reflect those biases. Models are also vulnerable to adversarial attacks, in which bad actors manipulate a model's output by subtly tweaking the input data.
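To make the idea of a "subtle tweak" concrete, here is a minimal, illustrative sketch (not drawn from any real system): a toy logistic-regression classifier whose decision flips when each input feature is nudged by a small amount in the direction of the model's weights. The weights, input values and the perturbation size epsilon are all invented for demonstration.

```python
# Illustrative sketch: a tiny, targeted change to an input flips a toy
# classifier's decision. Weights, inputs and epsilon are made up for
# demonstration; real adversarial attacks target far larger models.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "spam filter": fixed weights and bias of a logistic-regression classifier.
w = np.array([1.2, -0.8, 0.5, 2.0])
b = -0.5

def predict(x):
    p = sigmoid(np.dot(w, x) + b)
    return p, "spam" if p >= 0.5 else "not spam"

x = np.array([0.4, 0.9, 0.1, 0.2])   # an ordinary-looking input

# FGSM-style perturbation: move each feature a small step in the direction that
# raises the model's score (the gradient of the logit w.r.t. x is simply w).
epsilon = 0.15
x_adv = x + epsilon * np.sign(w)

p_before, label_before = predict(x)
p_after, label_after = predict(x_adv)
print(f"original : p={p_before:.2f} -> {label_before}")
print(f"perturbed: p={p_after:.2f} -> {label_after}")
print(f"largest feature change: {np.max(np.abs(x_adv - x)):.2f}")
```

Running the sketch shows the score crossing the decision threshold even though no feature moves by more than 0.15; the same mechanism, at far larger scale, underlies adversarial attacks on modern models.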
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI's opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU's AI Act enforces stringent reliability standards through a risk-based, transparency-focused approach. The U.S. emphasises ethical guidelines and industry-driven standards. India's DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, creating an ecosystem where AI develops responsibly while minimising the societal risks it poses. Key measures to achieve this include:
- Ensure users are informed about AI's capabilities and limitations; transparent communication is key to this.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses (a minimal audit sketch follows this list).
- Establish robust liability mechanisms to address harms caused by AI-generated misinformation; this fosters trust and accountability.
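As a hedged illustration of the audit point above, the sketch below runs prompts through a stubbed model and flags responses that break simple, invented policy rules (red-flag phrases, a length limit). The model call, rule list and thresholds are assumptions for demonstration; a real audit pipeline would call an actual model API and add factuality scoring, evaluation datasets and human review.

```python
# Illustrative audit harness (assumptions only, not a production system).
# stub_model stands in for a real AI service call; BANNED_PHRASES and
# MAX_LENGTH are invented policy rules for demonstration.
from dataclasses import dataclass

BANNED_PHRASES = ["please die", "eat rocks"]  # example red-flag strings
MAX_LENGTH = 500                              # example length policy

@dataclass
class AuditResult:
    prompt: str
    response: str
    issues: list

def stub_model(prompt: str) -> str:
    # Placeholder for a real model API call; returns a deliberately bad answer.
    return "Adding glue helps the sauce stick to the pizza."

def audit(prompts):
    results = []
    for prompt in prompts:
        response = stub_model(prompt)
        lowered = response.lower()
        issues = []
        if any(phrase in lowered for phrase in BANNED_PHRASES):
            issues.append("contains red-flag phrase")
        if len(response) > MAX_LENGTH:
            issues.append("exceeds length policy")
        if "glue" in lowered and "pizza" in prompt.lower():
            issues.append("likely unsafe cooking advice")
        results.append(AuditResult(prompt, response, issues))
    return results

if __name__ == "__main__":
    for result in audit(["How do I keep the cheese on my pizza?"]):
        status = "FLAGGED" if result.issues else "OK"
        print(f"{status}: {result.prompt!r} -> {result.issues or 'no issues'}")
```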
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; on the other hand, a lax approach could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia and the private sector is important: together they can establish guidelines, promote transparency and create liability mechanisms. Regular audits and user education can build trust in AI systems. Policymakers, meanwhile, need to prioritise user safety and trust without hindering creativity when framing regulatory policy.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21