Deepfake Alert: Sachin Tendulkar's Warning Against Technology Misuse
Introduction
Deepfakes have become a source of worry in an age of advanced technology, particularly when they involve the manipulation of public figures for deceitful ends. A deepfake video of cricket star Sachin Tendulkar advertising a gaming app recently went viral on social media, prompting the sports icon to issue a warning against the widespread misuse of technology.
Scenario of Deepfake
Sachin Tendulkar appeared in the deepfake video endorsing a gaming app called Skyward Aviator Quest. The video's startling realism led some viewers to believe that the cricket legend was genuinely backing it. Tendulkar, however, took to social media to emphasise that the video is fake, highlighting the troubling trend of technology being abused for deceitful ends.
Tendulkar's Reaction
Sachin Tendulkar expressed his concern about the exploitation of technology and advised people to report such videos, advertisements, and applications that spread disinformation. This incident underscores the importance of raising awareness and staying vigilant about the legitimacy of material circulated on social media platforms.
The Warning Signs
The deepfake video raises concerns not just for its lifelike representation of Tendulkar, but also for the material it promotes. An endorsement of gaming software that purports to help individuals make money is a significant red flag, especially when it appears to come from a well-known figure. This underscores the potential for deepfakes to be exploited for financial gain, and the importance of scrutinising claims that appear too good to be true.
How to Protect Yourself Against Deepfakes
As deepfake technology advances, it is critical to be aware of potential signals of manipulation. Here are some pointers to help you spot deepfake videos:
- Facial Movements and Expressions: Look for unnatural facial movements and expressions.
- Body Motions and Posture: Take note of any awkward body motions or inconsistencies in the individual's posture.
- Lip Sync and Audio Quality: Look for mismatches between the audio and lip movements.
- Background and Content: Consider the video's context, especially if it shows a popular figure endorsing something in an unexpected way.
- Verification: Confirm the video's legitimacy by checking the official channels or accounts of the person featured.
Conclusion
The proliferation of deepfake videos threatens the credibility of social media content. Sachin Tendulkar's response to the deepfake in which he appears serves as a warning to users to remain careful and report questionable material. As technology advances, it is critical that individuals and authorities collaborate to counteract the exploitation of AI-generated material and safeguard the integrity of online information.
References
- https://www.news18.com/tech/sachin-tendulkar-disturbed-by-his-new-deepfake-video-wants-swift-action-8740846.html
- https://www.livemint.com/news/india/sachin-tendulkar-becomes-latest-victim-of-deepfake-video-disturbing-to-see-11705308366864.html

Executive Summary:
A viral video depicting a powerful tsunami wave destroying coastal infrastructure is being falsely associated with the recent tsunami warning in Japan following an earthquake in Russia. Fact-checking through reverse image search reveals that the footage is from a 2017 tsunami in Greenland, triggered by a massive landslide in the Karrat Fjord.

Claim:
A viral video circulating on social media shows a massive tsunami wave crashing into the coastline, destroying boats and surrounding infrastructure. The footage is being falsely linked to the recent tsunami warning issued in Japan following an earthquake in Russia. However, initial verification suggests that the video is unrelated to the current event and may be from a previous incident.

Fact Check:
The video, which shows water forcefully inundating a coastal area, is neither recent nor related to the current tsunami event in Japan. A reverse image search conducted using keyframes extracted from the viral footage confirms that it is being misrepresented. The video actually originates from a tsunami that struck Greenland in 2017. The original footage is available on YouTube and has no connection to the recent earthquake-induced tsunami warning in Japan.
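The keyframe comparison that underpins a reverse image search can be sketched with a simple perceptual "average hash": two visually similar frames produce nearly identical hashes, while unrelated frames differ in many bits. The sketch below is illustrative only; it models frames as plain 8x8 grayscale grids rather than decoded video, and real fact-checking tools use far more robust fingerprints.

```python
# Illustrative sketch: an "average hash" (aHash), the kind of perceptual
# fingerprint reverse-image-search tools compare across keyframes.
# Frames are modelled as 8x8 grayscale grids (lists of 0-255 ints);
# real tools would decode actual video frames first.

def average_hash(pixels):
    """Return a 64-bit perceptual hash of an 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Each pixel contributes one bit: 1 if brighter than the mean.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# A synthetic frame, a re-encoded copy (values nudged as by compression),
# and an unrelated frame (inverted brightness).
frame = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
recompressed = [[min(255, v + 3) for v in row] for row in frame]
unrelated = [[255 - v for v in row] for row in frame]

d_same = hamming_distance(average_hash(frame), average_hash(recompressed))
d_diff = hamming_distance(average_hash(frame), average_hash(unrelated))
```

Because the hash depends only on each pixel's relation to the frame's mean brightness, mild recompression leaves it unchanged, which is why a 2017 clip can be matched even after repeated re-uploads.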

The American Geophysical Union (AGU) confirmed in a blog post on June 19, 2017, that the deadly Greenland tsunami of June 17, 2017, was caused by a massive landslide. Millions of cubic metres of rock were dumped into the Karrat Fjord, creating a wave more than 90 metres high that destroyed the village of Nuugaatsiaq. A news report from The Guardian corroborates this account.

Conclusion:
Videos purporting to depict the effects of a recent tsunami in Japan are deceptive and repurposed from unrelated incidents. Users of social media are urged to confirm the legitimacy of such content before sharing it, particularly during natural disasters when false information can exacerbate public anxiety and confusion.
- Claim: A viral video shows the recent tsunami in Japan following the earthquake in Russia
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
In the labyrinthine world of digital currencies, a new chapter unfolds as India intensifies its scrutiny over the ethereal realm of offshore cryptocurrency exchanges. With nuance and determination that virtually mirror the Byzantine complexities of the very currencies they seek to regulate, Indian authorities embark on a course of stringent oversight, bringing to the fore an ever-evolving narrative of control and compliance in the fintech sector. The government's latest manoeuvre, a directive to Apple Inc. to excise the apps of certain platforms, including the colossus Binance, from its App Store in India, signals a crescendo in the nation's efforts to rein in the unbridled digital bazaar that had hitherto thrived in a semi-autonomous expanse of cyberspace.
The directive, with ramifications as significant and intricate as the cryptographic algorithms that underpin the blockchain, stems from the Ministry of Electronics and Information Technology, which has cast eight exchanges, including Bitfinex, HTX, and Kucoin, into the shadows, rendering their apps as elusive as the Higgs boson in the vast App Store universe. The movement of these exchanges from visibility to obscurity in the digital storefront is cloaked in secrecy, with sources privy to this development remaining cloaked in anonymity, their identities as guarded as the cryptographic keys that secure blockchain transactions.
The Contention
This escalation, however, did not manifest from the vacuum of the ether; it is the culmination of a series of precipitating actions that began unfolding on December 28th, when the Indian authorities unfurled a net over nine exchanges, ensnaring them with suspicions of malfeasance. The spectre of inaccessible funds, a byproduct of this entanglement, has since haunted Indian crypto traders, prompting a migration of deposits to local exchanges that operate within the nation's regulatory framework—a fortress against the uncertainties of the offshore crypto tempest.
The extent of the authorities' reach manifests further, beckoning Alphabet Inc.'s Google to follow in Apple's footsteps. Yet, in a display of the unpredictable nature of enforcement, the Google Play Store in India still played host, as of Wednesday afternoon, to the very apps that Apple's digital Eden had forsaken. The triad of power-brokers, Apple, Google, and India's technology ministry, has maintained a stance as enigmatic as the Sphinx, their communications as impenetrable as the vaults that secure the nation's precious monetary reserves.
Compounding the tightening of this digital noose, the Financial Intelligence Unit of India, a sentinel ever vigilant at the gates of financial propriety, unfurled a compliance show-cause notice to the nine offshore platforms, an ultimatum demanding they justify their elusive presence in Indian cyberspace. The FIU's decree echoed with clarity amidst the cacophony of regulatory overtures: these digital entities were tethered to operations sequestered in the shadows, skirting the reach of India's anti-money laundering edicts, their websites lingering in cyberspace like forbidden fruit, tantalisingly within reach yet potentially laced with the cyanide of non-compliance.
In this chaotic tableau of constraint and control, a glimmer of presence remains—only Bitstamp has managed to brave the regulatory storm, maintaining its presence on the Indian App Store, a lone beacon amid the turbulent sea of regimentation. Kraken, another leviathan of crypto depths, presented only its Pro version to the Indian connoisseurs of the digital marketplace. An aura of silence envelops industry giants such as Binance, Bitfinex, and KuCoin, their absence forming a void as profound as the dark side of the moon in the consciousness of Indian users. HTX, formerly known as Huobi, has announced a departure from Indian operations with the detached finality of a distant celestial body, cold and indifferent to the gravitational pull of India's regulatory orbit.
Compliances
In compliance with the provisions of the Prevention of Money-Laundering Act (PMLA), 2002, and following the recent uproar over crypto asset apps, the Apple App Store finally removed these apps, namely Binance and KuCoin, after the platforms received show-cause notices. Their alleged illegal operations and failure to comply with existing anti-money-laundering laws are the major reasons for the removal.
The Indian Narrative
The overarching narrative of India's embrace of rigid oversight aligns with a broader global paradigm shift, where digital financial assets are increasingly subjected to the same degree of scrutiny as their physical analogues. The persistence in imposing anti-money laundering provisions upon the crypto sector reflects this shift, with India positioning its regulatory lens in alignment with the stars of international accountability. The preceding year bore witness to seismic shifts as Indian authorities imposed a tax upon crypto transactions, a move that precipitated a steep decline in trading volumes, reminiscent of Icarus's fateful flight: hubris personified as his waxen wings succumbed to the unrelenting kiss of the sun.
On a local scale, trading powerhouses lament the imposition of a 1% levy, colloquially known as Tax Deducted at Source. This fiscal shackle drove an exodus of Indian crypto traders into the waiting, seemingly benevolent arms of offshore financial Edens, free of such tax obligations. As Sumit Gupta, CEO of CoinDCX, recounted, this fiscal migration meant a haemorrhaging of revenue. His estimate that a staggering 95% of trading volume abandoned local shores for the tranquil harbours of offshore havens punctuates the magnitude of this phenomenon.
Conclusion
Ultimately, the story of India's proactive clampdown on offshore crypto exchanges resembles a meticulously woven tapestry of regulatory ardour, financial prudence, and the inexorable progression towards a future where digital incarnations mirror the scrutinised tangibility of physical assets. It is a saga delineating a nation's valiant navigation through the tempestuous, cryptic waters of cryptocurrency, helming its ship with unwavering determination, with eyes keenly trained on the farthest reaches of the horizon. Here, amidst the fusion of digital and corporeal realms, India charts its destiny, setting its sails towards an inextricably linked future that promises to shape the contour of the global financial landscape.
References
- https://www.business-standard.com/markets/cryptocurrency/govt-escalates-clampdown-on-offshore-crypto-venues-like-binance-report-124011000586_1.html
- https://www.cnbctv18.com/technology/india-escalates-clampdown-on-offshore-crypto-exchanges-like-binance-18763111.htm
- https://economictimes.indiatimes.com/tech/technology/centre-blocks-web-platforms-of-offshore-crypto-apps-binance-kucoin-and-others/articleshow/106783697.cms?from=mdr

Introduction: Reasons Why These Amendments Have Been Suggested
The suggested changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic content and deepfakes. These reforms stem from the pressing need to govern risks within the digital ecosystem, rather than from routine rule-making.
The Emergence of the Digital Menace
Generative AI tools have made it possible to produce highly realistic images, videos, audio, and text in recent years. Such synthetic media have been abused to depict people in situations they were never in, or making statements they never made. The global generative AI market is expected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400 billion by 2031. Tight regulatory controls are therefore necessary to curb the spread of harm in the Indian digital world.
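As a quick sanity check on the projection cited above, the standard compound-growth formula lets one back out the implied starting market size. The figures below (37.57% CAGR, US$400 bn by 2031) come from the text; the implied 2025 base is our own arithmetic, roughly US$59 bn.

```python
# Worked check of the cited projection: with a 37.57% CAGR over the six
# years from 2025 to 2031, what 2025 market size compounds to US$400 bn?
cagr = 0.3757
years = 6            # 2025 -> 2031
value_2031 = 400.0   # US$ billions, figure cited in the text

# future = base * (1 + cagr) ** years, so base = future / (1 + cagr) ** years
implied_2025 = value_2031 / (1 + cagr) ** years   # roughly 59 (US$ bn)
```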
The Gap in Law and Institution
The IT Rules, 2021, did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation and violation of privacy, intermediaries were under no explicit obligations with respect to synthetic media. This left a loophole in enforcement, particularly since AI-generated content could evade the old moderation regime. These amendments bring India closer to international standards, including the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting such requirements to India's constitutional and digital ecosystem needs.
II. Explanation of the Amendments
The 2025 amendments introduce five distinct changes to the current IT Rules framework, addressing various areas of synthetic media regulation.
A. Definitional Clarification: Introduction of "Synthetically Generated Information"
Rule 2(1)(wa) Amendment:
The amendments provide an all-inclusive definition of "synthetically generated information" as information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a way that such information may reasonably be perceived to be genuine. This definition is intentionally broad: it is not limited to deepfakes in the strict sense, but covers any synthetic media that has undergone algorithmic manipulation so as to appear authentic.
Expansion of Legal Scope:
Rule 2(1A) also makes it clear that any reference to information in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), should be understood to include synthetically generated information. This is a pivotal interpretative safeguard: it prevents intermediaries from claiming that synthetic versions of unlawful material fall outside the regulation merely because they are algorithmic creations rather than depictions of what actually occurred.
B. Safe Harbour Protection and Content Removal Requirements
Rule 3(1)(b) Amendment - Safe Harbour Clarification:
The amendments add a proviso to Rule 3(1)(b) clarifying that removal or disabling of access to synthetically generated information (or any information falling within the specified categories), carried out by an intermediary in good faith as part of reasonable efforts or upon receipt of a complaint, shall not be treated as a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is especially relevant because it shields intermediaries from liability where they act against synthetic content ahead of a court ruling or government notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable Synthetic Content Creation
The amendments establish a new due diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must be:
  - Visibly displayed or made audible in a prominent manner on or within the synthetically generated information.
  - Covering at least 10% of the surface of the visual display or, in the case of audio content, the initial 10% of its duration.
  - Capable of immediately identifying the information as synthetically generated information created, generated, modified, or altered using the computer resource of the intermediary.
- The intermediary referred to in clause (a) shall not enable modification, suppression or removal of such label, permanent unique metadata or identifier, by whatever name called.
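The 10% thresholds above translate directly into simple arithmetic a platform could apply per asset. The helpers below are hypothetical illustrations of that calculation, not text from the Rules: minimum label area for a visual, and the minimum leading span for audio.

```python
# Hypothetical helpers sketching the 10% labelling thresholds described
# above (illustrative only; not drawn from the text of the Rules).

def min_visual_label_area(width_px, height_px, coverage=0.10):
    """Minimum label area in pixels: at least 10% of the display surface."""
    return int(width_px * height_px * coverage)

def min_audio_label_seconds(duration_s, coverage=0.10):
    """The label must be audible during at least the initial 10% of playback."""
    return duration_s * coverage

area = min_visual_label_area(1920, 1080)   # Full-HD frame: 207,360 px
lead = min_audio_label_seconds(90.0)       # 90-second clip: first 9 seconds
```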
D. Significant Social Media Intermediaries - Pre-Publication Verification Obligations
The amendments introduce a three-step verification mechanism under Rule 4(1A) for Significant Social Media Intermediaries (SSMIs): before information is displayed, uploaded or published on their computer resources, three steps must be followed.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically generated. This places the initial burden on users.
Step 2 - Technical Verification: To verify the accuracy of the user's declaration, SSMIs must deploy reasonable technical measures, such as automated tools or other suitable mechanisms. This duty is contextual, based on the nature, format and source of the content; it does not allow intermediaries to escape the obligation, while recognising that not every type of content can be verified against the same standard.
Step 3 - Prominent Labelling: Where the synthetic origin is confirmed by user declaration or technical verification, SSMIs must display a notice or label prominently, visible to users, before publication.
The amendments strengthen accountability by providing that intermediaries will be found to have failed their due diligence where it is established that they knowingly permitted, promoted or failed to act on synthetically generated information in contravention of these requirements. This introduces a knowledge element: liability attaches to knowing failures rather than inadvertent errors.
An explanation clause makes clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations, and must not publish synthetic content without an adequate declaration or label. This removes any ambiguity about the intermediaries' role in verifying declarations.
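The three steps above can be sketched as a single pre-publication gate. This is a hypothetical illustration of the flow, not an implementation mandated by the Rules: the `looks_synthetic` heuristic and the label text are stand-ins for the "reasonable technical measures" and prominent label each platform would choose.

```python
# Hypothetical sketch of the Rule 4(1A) three-step flow for an SSMI:
# user declaration, a technical check, then prominent labelling before
# publication. Detector and label text are illustrative assumptions.

def looks_synthetic(content: dict) -> bool:
    # Stand-in detector: real platforms would run provenance, metadata and
    # model-based checks suited to the content's nature, format and source.
    return content.get("metadata", {}).get("generator") is not None

def pre_publication_check(content: dict, user_declared_synthetic: bool) -> dict:
    # Step 1: user declaration; Step 2: technical verification.
    is_synthetic = user_declared_synthetic or looks_synthetic(content)
    # Step 3: attach a prominent label before display, upload or publication.
    if is_synthetic:
        content = {**content, "label": "Synthetically generated information"}
    return content

# A clip whose metadata reveals a generator is labelled even though the
# user did not declare it; the knowledge standard makes this check matter.
post = pre_publication_check(
    {"body": "clip.mp4", "metadata": {"generator": "gen-model-x"}},
    user_declared_synthetic=False,
)
```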
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability
The amendments commendably balance two extreme regulatory postures, neither banning synthetic media outright nor allowing it to go unregulated. They recognise legitimate uses of synthetic media creation in entertainment, education, research and artistic expression, adopting transparency and traceability mandates that preserve innovation while ensuring accountability.
- Explicit Intermediary Liability and a Knowledge-Based (Scienter) Standard
Rule 4(1A) introduces a highly significant deeming provision: where an intermediary knowingly permits, promotes or fails to act on synthetic content in violation of the rules, it is deemed to have failed its due diligence obligations. This closes the wilful-blindness loophole under which intermediaries might otherwise claim ignorance of violations. The scienter standard encourages material investment in detection tools and moderation mechanisms, while protecting platforms that have sound systems in place even when those tools occasionally fail to catch violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of "synthetically generated information" and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the ambiguity of the previous regulatory framework. Instead of having to navigate conflicting case law or regulatory direction, stakeholders now have specific definitional limits. The deliberately broad formulation (artificially or algorithmically created, generated, modified or altered) ensures the framework cannot be evaded through semantic games over what counts as genuinely synthetic content versus a slight algorithmic alteration.
- Safe Harbour Protection that Encourages Proactive Moderation
The safe harbour clarification in the Rule 3(1)(b) amendment expressly protects intermediaries who voluntarily remove synthetic content without a court order or government notification. This is an important incentive scheme that prompts platforms to implement sound self-regulation measures. Without such protection, platforms might rationally adopt a passive compliance posture, deleting content only under pressure from an external authority; the safeguard thus makes them more effective at keeping users safe from dangerous synthetic media.
IV. Conclusion
The proposed 2025 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, offer a structured, transparent, and accountable approach to curbing the rising challenges of synthetic media and deepfakes. They address long-standing regulatory and interpretative gaps in determining what counts as synthetically generated information, the scope of intermediary liability, and the mandatory labelling and metadata requirements. The safe-harbour protection encourages proactive moderation, while the scienter-based liability rule prevents intermediaries from escaping liability where they are aware of non-compliance yet tolerate it. The pre-publication verification requirement for Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, bring clarity through precise definitions, promote responsible platform conduct, and position India at the forefront of synthetic media regulation. Together, they strengthen the authenticity, user protection, and transparency of India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide