Deepfake Alert: Sachin Tendulkar's Warning Against Technology Misuse
Introduction
Deepfakes have become a source of worry in an age of advanced technology, particularly when they involve the manipulation of public personalities for deceitful ends. A deepfake video of cricket legend Sachin Tendulkar advertising a gaming app recently went viral on social media, prompting the sports icon to issue a warning against the widespread misuse of technology.
The Deepfake Scenario
In the deepfake video, Sachin Tendulkar appears to endorse a gaming app called Skyward Aviator Quest. The video's startling quality led some viewers to believe that the cricket legend was genuinely endorsing it. Tendulkar, however, took to social media to emphasise that the video is fake, highlighting the troubling trend of technology being abused for deceitful ends.
Tendulkar's Reaction
Sachin Tendulkar expressed his concern about the misuse of technology and urged people to report such videos, advertisements, and applications that spread disinformation. The incident underscores the importance of raising awareness and staying vigilant about the authenticity of material circulated on social media platforms.
The Warning Signs
The deepfake video raises questions not just for its lifelike representation of Tendulkar, but also for the material it advocates. Endorsing gaming software that purports to help individuals make money is a significant red flag, especially when such endorsements come from well-known figures. This underscores the possibility of deepfakes being utilised for financial benefit, as well as the significance of examining information that appears to be too good to be true.
How to Protect Yourself Against Deepfakes
As deepfake technology advances, it is critical to be aware of potential signals of manipulation. Here are some pointers to help you spot deepfake videos:
- Facial Movements and Expressions: Look for unnatural facial movements and expressions.
- Body Motions and Posture: Take note of awkward body motions or inconsistencies in the person's posture.
- Lip Sync and Audio Quality: Look for mismatches between the audio and the lip movements.
- Background and Content: Consider the video's context, especially if it shows a well-known figure endorsing something unexpected.
- Source Verification: Verify the video's legitimacy by checking the official channels or accounts of the person featured.
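The verification tip can be made concrete with a simple file-integrity check. The sketch below is illustrative only: it assumes a public figure's verified account publishes a checksum for the genuine video, which is not yet common practice. The point it demonstrates is that any edit to a file, including a deepfake substitution, changes its SHA-256 digest completely.

```python
import hashlib

def sha256_of_bytes(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"frame data of the genuine video"
tampered = b"frame data of the genuine videoX"  # a single-byte edit

# Even a tiny alteration yields a completely different digest, so a
# re-shared copy can be checked against a checksum published on the
# figure's verified account (a hypothetical practice, assumed here).
print(sha256_of_bytes(original) == sha256_of_bytes(tampered))  # False
```

In practice the same comparison would be run over the downloaded video file rather than in-memory bytes, reading it in chunks through `hashlib.sha256().update()`.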
Conclusion
The proliferation of deepfake videos threatens the credibility of social media content. Sachin Tendulkar's response to the deepfake in which he appears serves as a reminder to users to remain vigilant and report questionable material. As technology advances, it is critical that individuals and authorities work together to counter the misuse of AI-generated content and safeguard the integrity of online information.
References
- https://www.news18.com/tech/sachin-tendulkar-disturbed-by-his-new-deepfake-video-wants-swift-action-8740846.html
- https://www.livemint.com/news/india/sachin-tendulkar-becomes-latest-victim-of-deepfake-video-disturbing-to-see-11705308366864.html

Introduction
Significantly, in March 2023, the provisions of the Prevention of Money Laundering Act, 2002 (PMLA) brought Virtual Digital Asset Service Providers (VDA SPs) operating in India under the purview of the Anti-Money Laundering/Counter Financing of Terrorism (AML-CFT) framework. This was an important step toward regulating VDA SP operations and ensuring adherence to AML-CFT regulations.
The significance of AML-CFT procedures
The incorporation of Virtual Digital Asset Service Providers (VDA SPs) into the AML-CFT framework is essential for protecting the financial sector from illegal activities, including money laundering and the financing of terrorism. These regulations become all the more crucial as the market for digital assets grows and gains mainstream acceptance.
Money laundering is the practice of concealing the origin of illegally obtained funds, so strict policies are critical for detecting and stopping such operations. Terrorist financing, likewise, poses a serious threat to international security, and cutting off the flow of money to terrorist organisations is a top priority for authorities worldwide.
By bringing VDA SPs into the AML-CFT architecture, policymakers aim to establish oversight and surveillance procedures that ensure these organisations operate transparently. This involves monitoring transactions, flagging suspicious activity, and conducting thorough customer due diligence. Incorporating such procedures not only reduces the scope for financial crime but also builds confidence and trust in the digital asset market.
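The transaction-monitoring obligation described above can be pictured with a minimal rule-based sketch. The thresholds, account names, and the simple "structuring" heuristic below are invented for illustration only; real AML monitoring systems apply far more sophisticated, regulator-defined rules.

```python
from dataclasses import dataclass
from collections import defaultdict

# Illustrative thresholds only, not regulatory values.
SINGLE_TXN_LIMIT = 1_000_000   # flag any single transfer above this
STRUCTURING_LIMIT = 950_000    # repeated just-under-limit transfers
STRUCTURING_COUNT = 3

@dataclass
class Txn:
    account: str
    amount: int

def flag_suspicious(txns):
    """Flag large transfers, plus possible 'structuring': many
    just-under-threshold transfers from the same account."""
    flags = []
    near_limit = defaultdict(int)
    for t in txns:
        if t.amount > SINGLE_TXN_LIMIT:
            flags.append((t.account, "large transfer"))
        elif t.amount >= STRUCTURING_LIMIT:
            near_limit[t.account] += 1
    for acct, n in near_limit.items():
        if n >= STRUCTURING_COUNT:
            flags.append((acct, "possible structuring"))
    return flags

txns = [Txn("A", 1_200_000), Txn("B", 960_000),
        Txn("B", 980_000), Txn("B", 955_000), Txn("C", 10_000)]
print(flag_suspicious(txns))
# → [('A', 'large transfer'), ('B', 'possible structuring')]
```

In a real reporting pipeline, each flag would feed a suspicious transaction report rather than a console message, and customer due diligence records would accompany it.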
It is important to recognise the significance of AML-CFT procedures and how the legal framework is changing to reflect the evolving nature of digital currencies. These procedures are essential to preserving the reliability and safety of the wider financial system.
Compliance Show Cause Notices
Under Section 13 of the PMLA, 2002, FIU IND issued compliance Show Cause Notices to nine offshore Virtual Digital Asset Service Providers (VDA SPs) as part of its commitment to enforcing regulatory compliance. This decisive step brings these entities under scrutiny and seeks to bring them within the regulatory fold.
Governmental Response
The Director of FIU IND has written to the Secretary of the Ministry of Electronics and Information Technology to take further measures in response to the non-compliance of the offshore firms. According to the notification, URLs belonging to these organisations, which operate in India in violation of the PML Act's requirements, are to be blocked.
Mandatory Registration for VDA SPs
Virtual Digital Asset Service Providers (both onshore and offshore) that perform a range of operations, including the exchange of virtual digital assets for fiat currencies, the transfer of virtual digital assets, and the safekeeping or administration of virtual digital assets, are now required to register with FIU IND.
Range of Statutory Responsibilities
In accordance with the PML Act, VDA SPs are subject to several obligations, including record-keeping and reporting duties, alongside registration with FIU IND. The primary focus is on ensuring that VDA SPs comply with AML-CFT protocols, thereby enhancing the overall reliability of the financial system.
Difficulties with Offshore Compliance
There are many obstacles in guaranteeing that offshore organisations comply with Anti Money Laundering/Counter Financing of Terrorism (AML-CFT), chief amongst them being their unwillingness to undergo registration. Some overseas Virtual Digital Asset Service Providers (VDA SPs) have been reluctant to comply with the existing rules and regulations, even though they cater to a significant number of Indian users. There are several reasons for this hesitation, such as worries about heightened monitoring, the expense of compliance, and the apparent complexity of governmental processes. Regulatory organisations have taken steps to close the discrepancy between offshore businesses' real activities and the regulations they must follow. In addition to maintaining the trustworthiness of the economic system, resolving the issues with offshore adherence is essential for promoting confidence and openness in the market for electronic assets.
Conclusion
FIU IND has demonstrated its dedication to creating an effective regulatory framework for Virtual Digital Asset Service Providers through its recent measures. India hopes to fortify its countermeasures against money laundering and safeguard the financial well-being of its users by expanding the AML-CFT legislation to offshore firms. The continuous efforts to restrict the URLs of non-compliant companies show a proactive approach to stopping illicit activity and fostering a safe and law-abiding virtual asset ecosystem. The safety and soundness of the banking sector will be crucially maintained by laws and regulations as the digital world develops.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=1991372
- https://www.thehindubusinessline.com/books/reviews/business-economy/fiu-ind-issues-compliance-showcause-notices-to-nine-offshore-vda-sps/article67684613.ece
- https://business.outlookindia.com/news/fiu-issues-notice-to-9-offshore-crypto-platforms-writes-to-meity-for-blocking-of-urls

Introduction
In the age of advanced technology, cyber threats continue to grow, and so do the hubs from which they operate. A new name has been added to the list: Purnia, a city in India, is now the source of a new and alarming menace, biometric cloning for financial crime. This emerging cyber threat involves replicating an individual's biometric data, such as fingerprints or facial scans, to gain unauthorised access to bank accounts and carry out fraudulent activities. In this blog, we look at the methods employed, the impact on individuals and institutions, and the steps necessary to mitigate the risk.
The Backdrop
Purnia, a bustling city in the state of Bihar, India, is known for its rich cultural heritage. Underneath this bright exterior, however, lies a hidden danger: a rising cyber threat with the potential to devastate citizens' financial security. In recent years, Purnia has seen the growth of a dangerous trend, biometric cloning for financial crime, which came to light after several FIRs were registered at the Kasba and Amaur police stations. The police swung into action and opened an investigation.
Modus Operandi Unveiled
The modus operandi of these cyber criminals includes hacking into databases, intercepting data during transactions, or even physically lifting fingerprints or facial images from objects and surfaces. Let's look at how they gathered this data and why Bihar itself was not targeted.
The criminals operated across three states. They exploited the open access to registry and agreement paperwork on official state websites; since such documents are not available online in Bihar, the scam was carried out in other states instead. The fraudsters downloaded the fingerprints, biometrics, and Aadhaar numbers of property buyers and sellers from the registration documents of Andhra Pradesh, Haryana, and Telangana.
After cloning the fingerprints, the fraudsters linked them to the Aadhaar Enabled Payment System (AEPS) and withdrew money from various bank accounts. They stamped each fingerprint onto rubber trace paper and used a polymer stamp machine, heated to a specific temperature with a chemical, to produce duplicate fingerprints, which were then used for unlawful financial transactions from several customers' bank accounts.
Investigation Insight
Following the breakthrough, police teams recovered a large number of smartphones, ATM cards, rubber fingerprint stamps, Aadhaar numbers, scanners, stamp machines, laptops, and chemicals, and arrested 17 people.
The investigation also found that the cybercriminals employ sophisticated money laundering techniques to obscure the illicit origins of the stolen funds, transferring money into multiple accounts or converting it into cryptocurrency. These tactics make it far more challenging for authorities to trace and recover the money.
Impact of the Biometric Cloning Scam
The biometric scam has far-reaching implications for society, individuals, and institutions. Such scams cause financial losses and emotional distress, including anger, anxiety, and a sense of violation, and they erode trust in digital systems.
It also seriously impacts institutions. Biometric cloning frauds may potentially cause severe reputational harm to financial institutions and organisations. When clients fall prey to such frauds, it erodes faith in the institution’s security procedures, potentially leading to customer loss and a tarnished reputation. Institutions may suffer legal and regulatory consequences, and they must invest money in investigating the incident, paying victims, and improving their security systems to prevent similar instances.
Raising Awareness
Empowering Purnia residents to protect themselves from biometric fraud: as it confronts the growing problem of biometric fraud, Purnia must equip its inhabitants with the knowledge and techniques to protect their personal information. By raising awareness of biometric fraud and encouraging recommended practices, individuals can avoid falling prey to these scams. Here are some tips to follow:
- Securing Personal Biometric Data: It is crucial to safeguard personal biometric data. Individuals should be urged to secure their fingerprints, face scans, and other biometric information just as they protect their passwords or PINs, ensuring that biometric data is stored safely and shared only with trustworthy organisations that have strong security procedures in place.
- Verifying Service providers: Residents should be vigilant while submitting biometric data to service providers, particularly those providing financial services. Before disclosing any sensitive information, it is important to undertake due diligence and establish the validity and reliability of the organisation. Checking for relevant certificates, reading reviews, and getting recommendations can assist people in making educated judgments and avoiding unscrupulous companies.
- Personal Cybersecurity: Individuals should implement robust cybersecurity practices to reduce the danger of biometric fraud. This includes using difficult and unique passwords, activating two-factor authentication, upgrading software and programs on a regular basis, and being wary of phishing efforts. Individuals should also refrain from providing personal information or biometric data via unprotected networks or through untrustworthy sources.
- Educating the Elderly and Vulnerable Groups: Special attention should be given to educating the elderly and other vulnerable groups who may be more prone to scams. Awareness campaigns may be modified to their individual requirements, emphasising the significance of digital identities, recognising possible risks, and seeking help from reliable sources when in doubt. Empowering these populations with knowledge can help keep them safe from biometric fraud.
Measures to Stay Ahead
As biometric fraud is a growing concern, staying a step ahead is essential. The following measures can help individuals safeguard themselves.
- Multi-Factor Authentication (MFA): MFA is one of the most effective security measures, adding an extra layer of protection against unauthorised access by combining two or more factors, such as a biometric scan and a password.
- Biometric Encryption: Biometric encryption securely stores and transmits biometric data. Rather than keeping raw biometric data, encryption methods transform it into mathematical templates that cannot be reverse-engineered. These templates are utilised for authentication, guaranteeing that the original biometric information is not compromised even if the encrypted data is.
- AI and Machine Learning (ML): AI and ML technologies are critical in detecting and combating biometric fraud. These systems can analyse massive volumes of data in real-time, discover trends, and detect abnormalities. Biometric systems may continually adapt and enhance accuracy by employing AI and ML algorithms, boosting their capacity to distinguish between legitimate users and fraudulent efforts.
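The MFA point above can be illustrated with a minimal time-based one-time password (TOTP) generator in the style of RFC 6238, using only the Python standard library. The shared secret here is a made-up demo value; a real deployment would provision the secret through an authenticator app and verify codes server-side with a tolerance window.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate an RFC 6238-style time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A one-time code like this is checked *in addition to* a password or a
# biometric, so a cloned fingerprint alone cannot authorise a transaction.
secret = base64.b32encode(b"demo-shared-secret").decode()  # demo value only
print(totp(secret))  # a fresh six-digit code every 30 seconds
```

Because the code is derived from both a shared secret and the current 30-second window, a fraudster who has cloned a fingerprint still cannot reproduce it.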
Conclusion
Biometric fraud demands immediate attention to protect banking customers from its potential consequences. By raising awareness we can protect ourselves, and by working together we can create a safer digital environment. Biometric verification was introduced to strengthen multi-factor authentication for banking customers; yet bad actors have already begun to bypass the technology and wreak havoc on netizens by draining their accounts of hard-earned money. Banks and cyber cells nationwide need to work in synergy to raise awareness, strengthen safety mechanisms to prevent such cyber crimes, and create effective and efficient redressal mechanisms for citizens.

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. The aspect of promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. Meanwhile, the market for AI in India is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Such transparency ensures that people know they are interacting with an automated process.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation should be prohibited outright. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI, which include equality, inclusivity and non-discrimination, safety and reliability, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india