#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni circulating widely on social media, in which he appears to promote an IPL betting Telegram channel, has been found to be fake. The audio in the video was produced through AI voice cloning, and the manipulation was identified using AI detection and deepfake analysis tools. In the original footage, Mr Soni narrates a crime case for the popular show Crime Patrol, which is unrelated to betting. Anup Soni is therefore in no way associated with the betting channel.

Claims:
A Facebook post claims that actor Anup Soni is promoting an IPL betting Telegram channel belonging to Rohit Khattar.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found major discrepancies of the kind typically seen in AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we analyzed the video with a deepfake detection tool by TrueMedia, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and checked it with an audio deepfake detection tool, Hive Moderation, which found the audio to be 99.9% AI-generated.
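For readers who want to replicate the audio-extraction step, below is a minimal sketch using ffmpeg, assuming ffmpeg is installed and the clip has been saved locally as viral_clip.mp4 (a hypothetical filename). The extracted WAV file can then be submitted to an audio deepfake detector such as Hive Moderation through its interface.

```python
# Minimal sketch: extract the audio track from a downloaded video so it can be
# submitted to an audio deepfake detector. Assumes ffmpeg is installed and the
# clip is saved locally as "viral_clip.mp4" (hypothetical filename).
import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    """Strip the video stream and save the audio as a 16 kHz mono WAV file."""
    subprocess.run(
        [
            "ffmpeg",
            "-y",               # overwrite output if it already exists
            "-i", video_path,   # input video
            "-vn",              # drop the video stream
            "-ac", "1",         # mono audio
            "-ar", "16000",     # 16 kHz sample rate, common for speech analysis
            audio_path,
        ],
        check=True,
    )

if __name__ == "__main__":
    extract_audio("viral_clip.mp4", "viral_clip_audio.wav")
```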

We then divided the video into keyframes and reverse-searched one of them, which led us to the original video uploaded by the YouTube channel LIV Crime.
Upon analysis, we found that the clip was taken from around the 3:18 mark of that video and overlaid with an AI-generated voice.
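A rough way to reproduce the keyframe step is to sample frames from the clip at a fixed interval with OpenCV and save them as images, which can then be uploaded to a reverse image search engine. This is a simplified sketch (it samples every Nth frame rather than detecting true codec keyframes), again using the hypothetical filename viral_clip.mp4.

```python
# Simplified sketch: sample frames from the video at a fixed interval and save
# them as JPEGs for manual reverse image search. This samples every Nth frame
# rather than detecting true codec keyframes, which is enough for this purpose.
import cv2

def sample_frames(video_path: str, every_n: int = 60) -> int:
    """Save every Nth frame as frame_XXXXX.jpg and return how many were saved."""
    cap = cv2.VideoCapture(video_path)
    saved = 0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if index % every_n == 0:
            cv2.imwrite(f"frame_{index:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    count = sample_frames("viral_clip.mp4")
    print(f"Saved {count} frames for reverse image search")
```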

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring various celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promotes an IPL betting Telegram channel is false. The video has been manipulated using AI voice cloning, as confirmed by both the Hive Moderation AI detector and the TrueMedia AI detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading
Related Blogs

Executive Summary:
A viral post currently circulating on various social media platforms claims that Reliance Jio is offering a ₹700 Holi gift to its users, accompanied by a link for individuals to claim the offer. This post has gained significant traction, with many users engaging with it in good faith, believing it to be a legitimate promotional offer. However, after careful investigation, it has been confirmed that the post is a phishing scam designed to steal personal and financial information from unsuspecting users. This report examines the facts surrounding the viral claim, confirms its fraudulent nature, and provides recommendations to minimize the risk of falling victim to such scams.
Claim:
Reliance Jio is offering a ₹700 reward as part of a Holi promotional campaign, accessible through a shared link.

Fact Check:
Upon review, it has been verified that this claim is misleading. Reliance Jio is not running any such Holi promotion at this time. The link being forwarded is a phishing scam designed to steal users' personal and financial details. There is no mention of this offer on Jio's official website or verified social media accounts. The URL in the message does not end in the official Jio domain, indicating a fake website, and the site asks visitors for personal information that can then be misused for cybercrime. Additionally, we checked the link with the ScamAdviser website, which flagged it as suspicious and unsafe.
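One quick check anyone can perform before clicking such a link is to confirm that the hostname actually belongs to the official domain (jio.com for Reliance Jio). A minimal sketch is shown below; the second test URL is a hypothetical lookalike, not the actual scam link.

```python
# Minimal sketch: check whether a link's hostname really belongs to an
# official domain. Lookalike hosts such as "jio.com.example.net" or
# "jio-holi-gift.example.xyz" fail this test even though they contain "jio".
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "jio.com"  # official Reliance Jio domain

def is_official_link(url: str) -> bool:
    """Return True only if the URL's host is the official domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

if __name__ == "__main__":
    for link in [
        "https://www.jio.com/offers",
        "http://jio-holi-gift.example.xyz/claim",  # hypothetical lookalike
    ]:
        print(link, "->", "official" if is_official_link(link) else "NOT official")
```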


Conclusion:
The viral post claiming that Reliance Jio is offering a ₹700 Holi gift is a phishing scam. There is no legitimate offer from Jio, and the link provided leads to a fraudulent website designed to steal personal and financial information. Users are advised not to click on the link and to report any suspicious content. Always verify promotions through official channels to protect personal data from cybercriminal activities.
- Claim: Users can claim ₹700 by participating in Jio's Holi offer.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In an era defined by rapid communication and live coverage of global affairs, users encounter misinformation continuously, and it has emerged as a huge challenge. Misinformation is false or inaccurate information that is believed to be true and shared without any intention to deceive. Disinformation, on the other hand, is false information intended to mislead, often in service of a set propaganda. It steadily affects all aspects of life and can have a profound impact on geopolitics, international relations, wars, and more. When modern media announces "breaking news," it captures attention and keeps viewers engaged. In the rush for television rating points, information may be circulated without proper fact-checking. This urgency can result in the spread of unverified claims and the elevation of irrelevant details, while truly important issues are overlooked. Such practices can distort public understanding and affect strategic political decisions.
Misinformation and Fake News in Recent History
The phenomenon of misinformation is not limited to isolated incidents; it has become a recurring feature of political events around the globe. It has grown increasingly visible in recent political history, where it has not only inflamed public sentiment but also affected international relations and democratic outcomes. For example, during Slovakia's elections in 2023, the country experienced a major surge of online misinformation: over 365,000 misleading posts appeared on social media platforms, significantly influencing public opinion and creating challenges for voters. Much of this content was amplified by political leaders. The media's rush to deliver content can make it easier for false narratives to dominate the public sphere, shaping voter opinions and undermining informed political discourse.
Current Geopolitical Interference by Misinformation
In the recent Hamas-Israel conflict, manipulated images and unverified reports complicated diplomacy. Such campaigns distort facts, complicate humanitarian responses, and escalate conflicts. This growing trend shows how misinformation now acts as a weapon of war, exploiting media urgency and undermining international stability.
Indo–Pak Conflict Exaggeration
The India-Pakistan conflict is a long-running and complex issue in South Asia, and it has been carried from traditional media into contemporary media. During the recent tensions, however, media coverage raised serious concerns about misinformation. Live coverage can mislead the public with speculative information: framing every update as breaking news escalated excitement and fear and distorted the reality on the ground. Moreover, real-time reporting of sensitive military activities such as mock drills, blackouts, troop movements and air strikes interfered with strategic operations; such reporting can obstruct decision-making and place operational missions at risk. The Defence Ministry later called this out in one of its posts on X. Media-driven exaggeration of this kind causes mass hysteria and stokes emotional and patriotic sentiment.
Legal and Political Recommendations
The intersection of media urgency and national security may have serious geopolitical repercussions if not managed with legal and ethical restraints. International frameworks such as UNESCO's Guidelines for Regulating Digital Platforms (2023) and the EU's Digital Services Act (2022) already set out rules for governing digital platforms.
Despite the existence of international and national guidelines, there remains an urgent need to strengthen cyber laws by imposing strict penalties and compensation mechanisms for the dissemination of unverified information. Media outlets must also refrain from indiscriminately labelling every report as "breaking news." Since modern media operates on digital data, coverage of strategic state movements should be subject to appropriate checks and balances.
Ethical considerations should be maintained during the publication or streaming of any information. Media should have self-regulations to fact-check and publish only authorised and double-verified information.
Given the borderless nature of the internet and the rapid, global spread of misinformation, international cooperation is imperative. Addressing the challenges posed by cross-border mis/disinformation requires a shared understanding and coordinated response among states at the global level.
References
- https://pam.int/wp-content/uploads/2024/10/EN-Background-paper-on-disinformation-and-fake-news-Jan-2024.pdf
- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3172349
- https://www.unesco.org/sites/default/files/medias/fichiers/2023/04/draft2_guidelines_for_regulating_digital_platforms_en.pdf
- https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, especially in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have built huge user bases by hosting user-generated content, which includes anything a visitor posts on a website or social media page.
The legal and ethical landscape surrounding misinformation depends on striking a fine balance between freedom of speech and expression and the protection of public interests such as truthfulness and social stability. This blog examines the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The intent to spread false information is closely interwoven with the analysis of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news benefit from social networks that let them reach more people, and from technology that enables faster distribution and makes it harder to distinguish fake news from hard news.
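As a toy illustration of why engagement-driven ranking can amplify sensational content, consider a feed that orders posts purely by predicted engagement. This is a simplified sketch with made-up posts and weights, not any real platform's algorithm.

```python
# Toy illustration (not any real platform's algorithm): a feed that ranks posts
# purely by engagement will surface sensational claims above sober reporting,
# because outrage and novelty tend to drive clicks, shares and comments.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments weighted higher than clicks (illustrative weights only).
    return post.clicks + 3 * post.shares + 2 * post.comments

feed = [
    Post("Official clarification issued by the regulator", clicks=120, shares=10, comments=5),
    Post("SHOCKING: celebrity endorses get-rich-quick scheme!!", clicks=900, shares=400, comments=250),
]

# The sensational post floats to the top of the feed purely on engagement.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.text}")
```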
Social media platforms face challenges unique to regulating misinformation while balancing freedom of speech and expression with user engagement. The scale at which content is created and published, differing regulatory standards across jurisdictions, and the need to moderate misinformation without infringing on freedom of expression all complicate moderation policies and practices.
The social, political, and economic impacts of misinformation, which influence public opinion, electoral outcomes, and market behaviour, underscore the urgent need for effective regulation; the consequences of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. The principle is embodied in Section 230 of the US Communications Decency Act and Section 79 of India's Information Technology Act, and it plays a pivotal role in facilitating the growth and development of the internet. The legal framework governing misinformation around the world is still in its nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It also allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulation has been observed in recent times. An example is the enactment of the Digital Services Act of 2022 in the European Union. The Act requires platforms with at least 45 million monthly users to create systems to control the spread of misinformation, hate speech and terrorist propaganda, among other things. Non-compliant platforms risk penalties of up to 6% of their global annual revenue, or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a major challenge, primarily because of the sheer volume of data involved and the speed at which it is generated. This exposes platforms to legal liability, operational costs and reputational risk.
- Platforms face potential backlash for both over-moderation and under-moderation. Over-moderation can be seen as censorship and is often burdensome, while under-moderation can be seen as insufficient governance where moderation fails to protect the rights of users, including privacy.
- Another challenge lies in the technical realm: the limitations of AI and algorithmic moderation in detecting nuanced misinformation. This points to the need for human oversight to sift through misinformation, including AI-generated content, as sketched in the example after this list.
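A common way to combine automated detection with human oversight is confidence-threshold routing: the classifier acts only on clear-cut cases, and everything uncertain is queued for a human reviewer. The sketch below uses a hypothetical classify() function as a stand-in for any misinformation classifier; it illustrates the routing pattern under those assumptions, not any platform's actual pipeline.

```python
# Sketch of confidence-threshold routing: automated moderation acts only on
# high-confidence predictions; uncertain or nuanced cases go to human review.
# classify() is a hypothetical stand-in for any misinformation classifier.
from typing import Tuple

AUTO_ACTION_THRESHOLD = 0.95   # act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.60  # above this but below auto-action goes to humans

def classify(text: str) -> Tuple[str, float]:
    """Hypothetical classifier: returns (label, confidence). Placeholder logic only."""
    suspicious = "miracle cure" in text.lower()
    return ("misinformation", 0.97) if suspicious else ("uncertain", 0.70)

def route(text: str) -> str:
    """Decide what happens to a post based on the classifier's confidence."""
    label, confidence = classify(text)
    if label == "misinformation" and confidence >= AUTO_ACTION_THRESHOLD:
        return "auto-label and downrank"
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return "queue for human review"
    return "no action"

if __name__ == "__main__":
    for post in [
        "This miracle cure ends all disease overnight",
        "Election results announced today",
    ]:
        print(route(post), "<-", post)
```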
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
A balance between protecting free speech and safeguarding the public interest is needed to navigate the legal risks that user-generated misinformation poses. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm