Centre Proposes New Bills for Criminal Law
Introduction
Criminal justice in India is governed primarily by three laws: the Indian Penal Code, the Code of Criminal Procedure, and the Indian Evidence Act. On Friday, 11 August 2023, the Centre introduced three new bills in Parliament to replace these major criminal laws.
The following three bills are being proposed to replace major criminal laws in the country:
- The Bharatiya Nyaya Sanhita Bill, 2023, to replace the Indian Penal Code, 1860.
- The Bharatiya Nagrik Suraksha Sanhita Bill, 2023, to replace the Code of Criminal Procedure, 1973.
- The Bharatiya Sakshya Bill, 2023, to replace the Indian Evidence Act, 1872.
Cyber law-oriented view of the new shift in criminal law
Notable changes: the Bharatiya Nyaya Sanhita Bill, 2023, replacing the Indian Penal Code, 1860.
Way ahead for digitalisation
The new laws aim to expand the use of digital services in the court system: online registration of FIRs, online filing of charge sheets, service of summons in electronic mode, and trials and proceedings conducted electronically. The bills also allow the virtual appearance of witnesses, accused persons, experts, and victims in some instances. This shift will drive the adoption of technology in courts, with all courts expected to be computerised in the coming years.
Enhanced recognition of electronic records
As daily life moves into the digital sphere, the bills give electronic records the same legal recognition as paper records.
Conclusion
The criminal laws of the country play a significant role in establishing law and order and providing justice. India's criminal laws date back to British rule, and although they have been amended several times to address growing crime and new circumstances, well-established criminal laws suited to the present era were needed. The legislature's step of recasting the major criminal laws and introducing three new bills is a sound approach that will ultimately strengthen the criminal justice system in India and facilitate the use of technology in the court system.
Related Blogs
Introduction
In the age of advanced technology, cyber threats continue to grow, and so do the hubs from which they operate. A new name has been added to the list: Purnia, a city in India, is now the source of an alarming menace, biometric cloning for financial crimes. This emerging cyber threat involves replicating an individual's biometric data, such as a fingerprint or facial scan, to gain unauthorised access to their bank accounts and carry out fraudulent activities. In this blog, we will look at the methods employed, the impact on individuals and institutions, and the steps necessary to mitigate the risk.
The Backdrop
Purnia, a bustling city in the state of Bihar, India, is known for its rich cultural heritage. However, beneath its bright appearance lies a hidden danger: a rising cyber threat with the potential to devastate its citizens' financial security. In recent years, Purnia has seen the growth of a dangerous trend, biometric cloning for financial crimes, which came to light after several FIRs were registered with the Kasba and Amaur police stations. The police swung into action and started an investigation.
Modus Operandi unveiled
The modus operandi of these cybercriminals includes hacking into databases, intercepting data during transactions, or even physically lifting fingerprints or facial images from objects or surfaces. Let us look at how they gathered all this data and why Bihar itself was not targeted.
These criminals operated across three states. They exploited open access to registry and agreement paperwork on official websites; since such documents are not available online in Bihar, the scam was conducted in other states instead. The fraudsters downloaded the fingerprints, biometrics, and Aadhaar numbers of buyers and sellers from the property registration documents of Andhra Pradesh, Haryana, and Telangana.
After cloning the fingerprints, the fraudsters linked them to the Aadhaar Enabled Payment System (AePS) and withdrew money from various bank accounts. They stamped each fingerprint onto rubber trace paper and used a polymer stamp machine, heated to a specific temperature with a chemical, to produce duplicate fingerprints, which were then used for unlawful financial transactions from several customers' bank accounts.
Investigation Insight
After the breakthrough, the police recovered a large number of smartphones, ATM cards, rubber fingerprint stamps, Aadhaar numbers, scanners, stamp machines, laptops, and chemicals, and arrested 17 people.
During the investigation, it was found that the cybercriminals employ sophisticated money-laundering techniques to obscure the illicit origins of the stolen funds, transferring the money across multiple accounts or converting it into cryptocurrency. These tactics make it far more challenging for the authorities to trace and recover the money.
Impact of biometric Cloning scam
The biometric scam has far-reaching implications for society, individuals, and institutions. Such scams cause financial losses and emotional distress, including anger, anxiety, and a sense of violation, and they erode trust in digital systems.
It also seriously impacts institutions. Biometric cloning frauds may potentially cause severe reputational harm to financial institutions and organisations. When clients fall prey to such frauds, it erodes faith in the institution’s security procedures, potentially leading to customer loss and a tarnished reputation. Institutions may suffer legal and regulatory consequences, and they must invest money in investigating the incident, paying victims, and improving their security systems to prevent similar instances.
Raising Awareness
Empowering Purnia's residents to protect themselves from biometric fraud: as Purnia confronts the growing problem of biometric fraud, it must equip its residents with the knowledge and techniques to protect their personal information. By raising awareness about biometric fraud and encouraging recommended practices, individuals can avoid falling prey to these scams. Here are some tips one can follow:
- Securing Personal Biometric Data: It is crucial to safeguard personal biometric data. Individuals should be urged to secure their fingerprints, face scans, and other biometric information in the same way that they protect their passwords or PINs. It is critical to ensure that biometric data is safely maintained and shared only with trustworthy organisations that have strong security procedures in place.
- Verifying Service providers: Residents should be vigilant while submitting biometric data to service providers, particularly those providing financial services. Before disclosing any sensitive information, it is important to undertake due diligence and establish the validity and reliability of the organisation. Checking for relevant certificates, reading reviews, and getting recommendations can assist people in making educated judgments and avoiding unscrupulous companies.
- Personal Cybersecurity: Individuals should implement robust cybersecurity practices to reduce the danger of biometric fraud. This includes using difficult and unique passwords, activating two-factor authentication, upgrading software and programs on a regular basis, and being wary of phishing efforts. Individuals should also refrain from providing personal information or biometric data via unprotected networks or through untrustworthy sources.
- Educating the Elderly and Vulnerable Groups: Special attention should be given to educating the elderly and other vulnerable groups who may be more prone to scams. Awareness campaigns may be modified to their individual requirements, emphasising the significance of digital identities, recognising possible risks, and seeking help from reliable sources when in doubt. Empowering these populations with knowledge can help keep them safe from biometric fraud.
Measures to Stay Ahead
As biometric fraud is a growing concern, staying a step ahead is essential. By following these simple steps, one can safeguard themselves.
- Multi-Factor Authentication: MFA is one of the most effective security measures. It adds an extra layer of protection against unauthorised access by combining factors such as a biometric scan with a password.
- Biometric Encryption: Biometric encryption securely stores and transmits biometric data. Rather than keeping raw biometric data, encryption methods transform it into mathematical templates that cannot be reverse-engineered. These templates are utilised for authentication, guaranteeing that the original biometric information is not compromised even if the encrypted data is.
- AI and Machine Learning (ML): AI and ML technologies are critical in detecting and combating biometric fraud. These systems can analyse massive volumes of data in real-time, discover trends, and detect abnormalities. Biometric systems may continually adapt and enhance accuracy by employing AI and ML algorithms, boosting their capacity to distinguish between legitimate users and fraudulent efforts.
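The template idea described above, storing a one-way transformation of the biometric rather than the raw data, can be sketched in a few lines. This is a deliberately simplified illustration: real biometric systems must tolerate noisy re-scans (using fuzzy extractors or secure sketches), whereas an exact salted hash, as shown here, only matches byte-identical input. All names and the sample "scan" data are hypothetical.

```python
import hashlib
import hmac
import os

def enroll(biometric_bytes: bytes) -> tuple[bytes, bytes]:
    """Store a salted one-way digest of the biometric, never the raw data."""
    salt = os.urandom(16)
    template = hashlib.pbkdf2_hmac("sha256", biometric_bytes, salt, 100_000)
    return salt, template

def verify(biometric_bytes: bytes, salt: bytes, template: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", biometric_bytes, salt, 100_000)
    return hmac.compare_digest(candidate, template)

# Hypothetical fingerprint scan reduced to a byte string
scan = b"minutiae-vector-001"
salt, template = enroll(scan)
print(verify(scan, salt, template))             # matching scan -> True
print(verify(b"cloned-guess", salt, template))  # non-matching input -> False
```

Even if the stored `salt` and `template` leak, the original biometric bytes cannot be read back out of them, which is the property the "cannot be reverse-engineered" point above relies on.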
Conclusion
Biometric fraud demands immediate attention to protect bank customers from its consequences. By creating awareness we can protect ourselves, and by working together we can create a safer digital environment. Biometric verification was introduced to strengthen multi-factor authentication for bank customers; however, bad actors have already begun to bypass the technology and wreak havoc on netizens by draining their accounts of hard-earned money. Banks and cyber cells nationwide need to work in synergy to increase awareness, strengthen safety mechanisms to prevent such cyber crimes, and create effective and efficient redressal mechanisms for citizens.
Introduction
The Digital Personal Data Protection Act, 2023 (DPDP Act) is an essential step towards protecting, prioritising, and promoting users' privacy and data protection. The Act is designed to prioritise user consent in data processing while assuring uninterrupted services such as online shopping and other intermediary platforms. It specifies that once a user consents, a platform can process the data until the user withdraws that consent. This ensures that users have complete control over their data and that platforms remain accountable for its usage.
A keen Outlook
The Act also emphasises purpose limitation: data processing is restricted to the specific purpose for which the data was obtained. This prevents misuse and ensures that processed data is used only for the purpose for which it was collected from the user in the first place.
- Data Fiduciary and Processing on Online Shopping Platforms: The Act places strong emphasis on users' consent. Once consent is provided, the Data Fiduciary can continue processing the data until it is specifically withdrawn by the Data Principal.
- Detailed Analysis
- Consent as a Foundation: The Act makes user consent the backbone of data processing and sets clear boundaries around it: collecting, processing, and storing data must all comply with the consent the user has given.
- Uninterrupted Data Processing: Once consent is given, intermediaries are not time-restrained; processing may continue for as long as the user does not withdraw consent.
- Consent and Order Fulfilment: Consent, once provided, covers all activities related to the specific purpose for which it was given, including subsequent actions such as order fulfilment.
- Detailed Analysis
- Purpose-Limited Consent: The consent given is purpose-limited; the platform cannot use the obtained data for any other purpose.
- Seamless User Experience: Because a single consent covers the full transaction, users are spared the annoyance of repeated consent requests during ongoing activities.
- Data Retention and Erasure on Online Platforms: Platforms must ensure data minimisation once the data's utilisation period has ended. This obligation extends to any third-party processors they engage.
- Detailed Analysis
- Minimisation and Security Assurance: Mandatory data removal after use reduces the volume of data platforms hold, which in turn minimises the risk to that data.
- Third-Party Accountability and User Privacy Protection: The same obligations extend to third-party processors, protecting user privacy throughout the processing chain.
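The consent lifecycle described above (grant, purpose-limited processing, withdrawal) can be modelled as a small sketch. This is an illustrative toy, not an implementation of the Act; the class and identifiers are hypothetical.

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Toy model of purpose-limited, withdrawable consent."""

    def __init__(self):
        # (user_id, purpose) -> timestamp when consent was granted
        self._consents = {}

    def grant(self, user_id: str, purpose: str) -> None:
        """Record consent for one specific purpose only."""
        self._consents[(user_id, purpose)] = datetime.now(timezone.utc)

    def withdraw(self, user_id: str, purpose: str) -> None:
        """Withdrawal immediately ends the right to process."""
        self._consents.pop((user_id, purpose), None)

    def may_process(self, user_id: str, purpose: str) -> bool:
        # Processing is allowed only for the purpose consented to,
        # and only until that consent is withdrawn.
        return (user_id, purpose) in self._consents

registry = ConsentRegistry()
registry.grant("user-42", "order_fulfilment")
print(registry.may_process("user-42", "order_fulfilment"))  # True
print(registry.may_process("user-42", "advertising"))       # False: purpose-limited
registry.withdraw("user-42", "order_fulfilment")
print(registry.may_process("user-42", "order_fulfilment"))  # False after withdrawal
```

The key design point mirrors the Act: consent is keyed by purpose, so data collected for order fulfilment cannot silently authorise processing for advertising, and withdrawal takes effect immediately.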
Influence from Global frameworks
The Act's impactful changes draw on global trends and similar legislation, such as the European Union's GDPR. Here are some of the notable changes intermediaries and social media platforms have experienced since the implementation of the DPDP Act, 2023:
- Solidified Consent Mechanism: Platforms and intermediaries need to ensure that users' consent is categorically given, informed, and specific to the purpose for which the data is obtained. This is likely to lead to more user-friendly consent forms and prompts.
- Data Minimisation: Platforms must collect only the data necessary for the specific purpose stated and must not retain information beyond its utility.
- Transparency and Accountability: Platforms that collect data need to ensure transparency in their data collection, processing, and sharing practices. This involves more detailed policies and regular audits.
- Data Portability: Users have the right to request a copy of their data in a usable format, allowing them to switch platforms easily.
- Right to Erasure: Users have the right to request deletion of their data, also referred to as the "right to be forgotten".
- Prescribed Reporting: In the event of a data breach, intermediary platforms are required to report the incident to the regulatory authorities within a specified timeline.
- Data Protection Officers: Given the rise in data breaches, large platforms must appoint data protection officers responsible for ensuring compliance with data protection guidelines.
- Disciplined Policies: Non-compliance can lead to substantial fines, making it indispensable to invest in data protection measures.
- Third-Party Audits: Intermediaries must undergo security audits by external auditors to ensure they meet compliance expectations.
- Third-Party Information Sharing Restrictions: Sharing users' personal information with third parties (such as advertisers) is subject to more detailed and disciplined guidelines and requires user consent.
Conclusion
The Digital Personal Data Protection Act, 2023 prioritises user consent, ensuring uninterrupted services and purpose-limited data processing. It aims to prevent data misuse, emphasising seamless user experiences and data minimisation. Drawing inspiration from global frameworks like the EU's GDPR, it introduces solidified consent mechanisms, transparency, and accountability. Users gain rights such as data portability and data deletion requests. Non-compliance results in significant fines. This legislation sets a new standard for user privacy and data protection, empowering users and holding platforms accountable. In an evolving digital landscape, it plays a crucial role in ensuring data security and responsible data handling.
References:
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.mondaq.com/india/privacy-protection/1355068/data-protection-law-in-india-analysis-of-dpdp-act-2023-for-businesses--part-i
- https://www.hindustantimes.com/technology/explained-indias-new-digital-personal-data-protection-framework-101691912775654.html
Introduction
With the rapid development of technology, voice cloning scams are one issue that has recently come to light. As scammers embrace AI, their methods and plans for deceiving people have also changed. Deepfake technology creates realistic imitations of a person's voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself from them.
What is Deepfake?
"Deepfake" refers to artificial intelligence (AI) techniques that can produce fake or altered audio, video, and film that pass for the real thing. The name combines the words "deep learning" and "fake". Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine-learning algorithms. Con artists employ the technology to portray someone saying or doing something they never said or did; a widely cited example is deepfake audio impersonating the American President. Deep voice impersonation can be used maliciously, for instance in voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice frauds use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice, aiming to earn the victim's trust and raise the likelihood they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake speech frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities and mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence could lead to wrongful convictions or acquittals. AI voice deepfake technology thus gives con artists a potent tool for tricking and controlling victims, and every organisation and the general public must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the "deepfake voice scam", has surfaced: the con artist assumes another person's identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot these scams and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deep fake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- If anyone phones you claiming to be someone they may not be, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? Anything that doesn't seem right could be a deepfake voice fraud.
- Verify the caller’s identity
- It's crucial to verify the caller's identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, and then do some research to make sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activities
- Inform the appropriate authorities if you think you have fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it can also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions as the technology develops and deepfake schemes become harder to detect and prevent. Ongoing research into efficient techniques to identify and control the associated risks is also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.