#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni circulating on social media, in which he appears to promote an IPL betting Telegram channel, has been found to be fake. The audio in the video was produced through AI voice cloning, and the manipulation was confirmed by AI detection and deepfake analysis tools. In the original footage, Mr Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. Anup Soni is therefore in no way associated with the betting channel.

Claims:
The Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found major discrepancies of the kind typically seen in AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we ran the clip through True Media's deepfake detection tool, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and ran it through an audio deepfake detection tool, Hive Moderation, which found the audio to be 99.9% AI-generated.
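For readers who want to reproduce this step, the sketch below shows one common way to pull the audio track out of a saved copy of a clip before submitting it to an audio deepfake detector. It assumes ffmpeg is installed; the file names are placeholders, not the actual files used in this check.

```python
# Illustrative only: extract a mono 16 kHz WAV from a locally saved video using ffmpeg,
# so the audio can be uploaded to a detector such as Hive Moderation.
import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    """Strip the video stream and write the audio as a 16 kHz mono WAV."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,   # input video
         "-vn",                              # drop the video stream
         "-ac", "1", "-ar", "16000",         # mono, 16 kHz
         audio_path],
        check=True,
    )

if __name__ == "__main__":
    extract_audio("viral_clip.mp4", "viral_clip_audio.wav")  # placeholder file names
```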

We then divided the video into keyframes, reverse-searched one of them, and found the original video uploaded by the YouTube channel LIV Crime.
On analysis, we found that the video was edited at around the 3:18 mark and overlaid with an AI-generated voice.
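The keyframe step described above can also be reproduced with a short script. The sketch below samples one frame every couple of seconds so that individual frames can be fed to a reverse image search; it assumes OpenCV is installed and uses a placeholder file name.

```python
# Illustrative only: sample frames from a saved copy of the clip for reverse image search.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the saved file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back to 25 fps if metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"frame_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"))  # placeholder file name
```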

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring various celebrities and politicians, used to misrepresent the actual context. Netizens must be cautious before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promotes an IPL betting Telegram channel is false. The video has been manipulated using AI voice cloning, as confirmed by both the Hive Moderation AI detector and the True Media detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to notice. Thanks to rapid advances in artificial intelligence, the internet as we know it is turning into a trove of hyper-optimised material over which vast bot armies battle for attention. That progress, however, carries a price, and much of it is paid in human terms: releasing highly personalised chatbots on a population already struggling with economic stagnation, loneliness, and the ongoing degradation of our planet is hardly a formula for better mental health. This is the reality for the roughly 75% of children and teenagers who have chatted with fictitious chatbot-generated characters. AI chatbots are becoming ever more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical conduct matter more than ever. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: its AI chatbots were permitted to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to protect children, guard against misinformation, and uphold ethical standards without suppressing innovation?
The Guidelines and Their Gaps
Tech giants such as Meta and Google are often reprimanded for overlooking child safety and the broader rise in mental health issues among children and adolescents. According to reports, Google has introduced Gemini AI Kids, a child-friendly version of its Gemini chatbot, marking a major step in bringing generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access it through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children's personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces specific safeguards. Under Section 9, before processing the data of children, defined as persons under the age of 18, Data Fiduciaries (the entities that determine the purposes and means of processing personal data) must obtain verifiable consent from a parent or legal guardian. The Act also expressly forbids processing that could harm a child's well-being, such as behavioural monitoring and child-targeted advertising. Courts have interpreted a child's well-being to encompass not just physical health but also moral, ethical, and emotional development.
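To make the Section 9 obligations concrete, below is a minimal, purely illustrative sketch (not legal advice and not any real compliance system) of how a Data Fiduciary's service might gate processing on verified guardian consent and block the activities the Act prohibits for children. All names and fields are hypothetical.

```python
# Hypothetical sketch only: gating data processing on the DPDP Act Section 9
# requirements described above. Not a real compliance implementation.
from dataclasses import dataclass

@dataclass
class User:
    age: int
    has_verified_guardian_consent: bool  # verifiable parent/guardian consent on record

# Activities the Act bars outright for children, per the text above.
PROHIBITED_FOR_CHILDREN = {"behavioural_monitoring", "targeted_advertising"}

def may_process(user: User, purpose: str) -> bool:
    """Return True only if processing for this purpose is permissible for the user."""
    if user.age >= 18:
        return True                               # not a child under the Act
    if purpose in PROHIBITED_FOR_CHILDREN:
        return False                              # barred regardless of consent
    return user.has_verified_guardian_consent     # otherwise verified consent is required

# Example: a 14-year-old without verified guardian consent cannot have data processed.
print(may_process(User(age=14, has_verified_guardian_consent=False), "service_delivery"))  # False
```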
While the DPDP Act is a significant step in the right direction, important lacunae remain in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and restrictions specific to AI-driven platforms are absent from the Act, which concentrates largely on consent and harm prevention in data protection. It also overlooks threats to children's emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are largely self-regulatory and dispersed across several instruments, such as the Bharatiya Nyaya Sanhita, 2023; they include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Allowing chatbots to engage in romantic roleplay with minors is among the most concerning findings. Even when not explicitly sexual, such interactions can lead to grooming, psychological harm, and desensitisation to inappropriate behaviour. Child protection experts hold that illicit or sexual conversations with children online are unacceptable, and permitting even "flirtatious" exchanges risks normalising unsafe boundaries.
- International Standards and Best Practices - Child online safety frameworks around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Bill, place a premium on "safety by design": platforms and developers must proactively eliminate risks rather than respond to harms after the fact. Any AI guidelines that leave loopholes for child-directed roleplay fall short of this bare minimum standard.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The guidelines also allowed AI to create false narratives so long as they carried disclaimers. Chatbots could, for example, write articles promoting false health claims or smearing public officials, provided the content was labelled "untrue." Disclaimers may offer thin legal cover, but they do little to stop the spread of misleading information; misinformation travels widely precisely because users ignore caveat labels in favour of provocative claims.
- Ethical Lines and Discriminatory Content - Allowing AI systems to generate racist arguments, even on request, is ethically questionable. Scholarly research into prejudice and bias may require such examples, but unregulated generation risks normalising damaging stereotypes. Researchers warn that the practice shifts platforms from being passive hosts of offensive speech to active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training choices, policy decisions, and system engineering. This demands a higher level of accountability. Companies may update their guidelines after public criticism, but that such allowances existed in the first place points to weak ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently fragmented. The EU AI Act, the OECD AI Principles, and various national policies all emphasise human rights, transparency, and accountability, yet few specify clear rules for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
A proactive way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory narratives, not merely flag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
Contentious guidelines such as these are more than the internal folly of a single firm; they point to a deeper systemic issue in AI governance. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are serious problems requiring prompt resolution. Corporate self-regulation is only one part of the answer; multi-stakeholder participation, stronger global frameworks, and clear ethical standards are also needed. In the end, trust in AI systems will rest not on corporate interests but on their ability to preserve the truth, protect the vulnerable, and reflect universal human values.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Introduction:
CDR stands for Call Detail Record. Telecom operators hold call detail data for their users; because this amounts to a very large volume of data, they retain it for a period of six months. CDRs play a significant role in investigations and court cases and can serve as pivotal evidence to prove or disprove particular facts and circumstances. The power to access call detail records may be exercised only on reasonable grounds and only by an authorised authority, as provided under the law.
Admissibility of CDRs in Courts:
Call Detail Records (CDRs) can be effective evidence that assists the court in ascertaining the facts of a case and inquiring into the commission of an offence. Judicial pronouncements have made it clear that CDRs may be used as supporting or secondary evidence, but they cannot be the sole basis of a conviction. Section 92 of the Code of Criminal Procedure, 1973 lays down the procedure and empowers certain authorities to seek CDRs through the intervention of a court or other competent authority.
Legal provisions to obtain CDR:
A CDR can be obtained under Section 92 of the Code of Criminal Procedure, 1973, or under Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007. Guidelines for seeking Call Detail Records were also issued by the Ministry of Home Affairs in 2016.
How long are CDRs stored with telecom companies (data retention)?
Telecom companies retain call data for a period of six months. Because this data runs into several petabytes per year, records older than six months are archived to tape.
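To give a sense of what such a record contains, the sketch below models a typical CDR and the six-month retention check described above. The field names are representative assumptions, not any operator's actual schema.

```python
# Hypothetical sketch: a representative call detail record and a retention check.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CallDetailRecord:
    calling_number: str
    called_number: str
    start_time: datetime
    duration_seconds: int
    cell_tower_id: str   # serving cell at call setup (useful for location analysis)
    imei: str            # handset identifier

RETENTION_PERIOD = timedelta(days=182)   # roughly six months

def is_within_retention(record: CallDetailRecord, now: datetime) -> bool:
    """Records older than the retention window are archived to tape, per the text above."""
    return now - record.start_time <= RETENTION_PERIOD

cdr = CallDetailRecord("98XXXXXX01", "98XXXXXX02",
                       datetime(2023, 9, 25, 14, 30), 120, "DEL-1042", "356938035643809")
print(is_within_retention(cdr, datetime(2024, 1, 10)))   # True: still within ~6 months
```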
The New Delhi ₹25-crore jewellery heist
Recently, a ₹25-crore jewellery theft was carried out at a jewellery shop in Delhi, planned and executed by a man from Chhattisgarh who returned home after committing the crime. The police began their search and investigation using technology: they analysed the mobile numbers that were active around the crime scene. From roughly 5,000 numbers active in the area, Delhi Police used advanced data-analysis software to narrow the field and found one number registered outside Delhi. Surveillance on that number revealed that the suspect had moved from Delhi to Madhya Pradesh and then on to Bhilai in Chhattisgarh, where he was arrested. The incident highlights how technology and call data can help law enforcement agencies investigate and trace the real culprits.
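The core filtering step in that investigation can be illustrated with a small sketch: from the numbers seen in a tower dump around the crime scene, keep only those registered outside the local telecom circle. The data and field names here are hypothetical; real analysis weighs many more signals.

```python
# Hypothetical sketch: filter a tower dump for numbers registered outside the local circle.
from dataclasses import dataclass

@dataclass
class ActiveNumber:
    msisdn: str        # subscriber number seen near the crime scene
    home_circle: str   # telecom circle where the number is registered

def numbers_registered_outside(dump: list[ActiveNumber], local_circle: str) -> list[str]:
    """Return subscriber numbers whose home circle is not the local one."""
    return [n.msisdn for n in dump if n.home_circle != local_circle]

tower_dump = [
    ActiveNumber("98XXXXXX11", "Delhi"),
    ActiveNumber("98XXXXXX12", "Chhattisgarh"),   # the kind of outlier investigators flag
    ActiveNumber("98XXXXXX13", "Delhi"),
]
print(numbers_registered_outside(tower_dump, "Delhi"))   # ['98XXXXXX12']
```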
Conclusion:
CDRs are call detail records retained by telecom companies for six months; they can be obtained only through lawful procedure and by competent authorities. They can assist courts and law enforcement agencies in ascertaining the facts of a case and in proving or disproving particular claims. It bears reiterating that unauthorised access to CDRs is not permitted: a directive from a court or other competent authority is required to obtain them from a telecom company.
References:
- https://indianlegalsystem.org/cdr-the-wonder-word/#:~:text=CDR%20is%20admissible%20as%20secondary,the%20Indian%20Evidence%20Act%2C%201872.
- https://timesofindia.indiatimes.com/city/delhi/needle-in-a-haystack-how-cops-scanned-5k-mobile-numbers-to-crack-rs-25cr-heist/articleshow/104055687.cms?from=mdr
- https://www.ndtv.com/delhi-news/just-one-man-planned-executed-rs-25-crore-delhi-heist-another-thief-did-him-in-4436494

In the long history of humanity, the advent of artificial intelligence (AI) has added a new and delicate strand. This promising technology has the potential either to enrich our society or to unravel it entirely. The latest thread in this complex weave is generative AI, a frontier teeming with both potential and peril, and a realm where cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that revealed the extent to which personal data could be harvested and used to influence electoral outcomes. Yet, despite the public indignation, the scandal prompted only meagre changes to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. Even the limited use of generative AI in disinformation campaigns so far has raised concerns about whether policies against generating targeted political material, such as content designed to sway specific demographic groups towards a particular candidate, can be enforced.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
The Root of the Problem
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the EU's AI Act comes into effect. For a sense of the scale involved, the market size of AI in India alone is projected to reach US$4.11 billion in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to establish rules and safeguards to manage the potential threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candour and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users that they are automated and what they are for. Transparency ensures that people know they are interacting with an automated process.
Second, obtaining user consent is crucial. Before collecting user data for any purpose, including advertising or political profiling, users should be asked for their informed consent, and easy ways to opt in and opt out give them control over their data.
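As a purely illustrative sketch of these first two "dos", the snippet below shows a hypothetical chatbot handler that discloses its automated nature up front and stores messages only after an explicit opt-in; all names and behaviour are assumptions, not any vendor's implementation.

```python
# Hypothetical sketch: upfront disclosure plus opt-in consent in a chatbot handler.
from dataclasses import dataclass, field

DISCLOSURE = ("You are chatting with an automated assistant. "
              "Reply CONSENT if you agree to your messages being stored to improve the service; "
              "otherwise nothing you write will be retained.")

@dataclass
class ChatSession:
    disclosed: bool = False
    data_consent: bool = False
    stored_messages: list[str] = field(default_factory=list)

def handle_message(session: ChatSession, message: str) -> str:
    if not session.disclosed:
        session.disclosed = True
        return DISCLOSURE                          # transparency comes before anything else
    if message.strip().upper() == "CONSENT":
        session.data_consent = True
        return "Thank you, your preference is recorded. You can opt out at any time."
    if session.data_consent:
        session.stored_messages.append(message)   # stored only after explicit opt-in
    return "How can I help you today?"
```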
Furthermore, ethical use is essential. An ethics code for chatbot interactions should forbid manipulation, the spread of false information, and attempts to sway users' political opinions, ensuring that chatbots operate within moral guidelines.
To preserve transparency and accountability, independent audits should be carried out. Users can feel more confident knowing that chatbot behaviour and data collection practices are regularly reviewed by impartial third parties for compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation should be prohibited outright: chatbots must not use misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch for is unlawful data collection. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share that information for political purposes.
Fake identities must be avoided at all costs. Chatbots should never impersonate real people or political figures, as this can fuel manipulation and misinformation.
Impartiality is essential. Bots should not advocate for or take part in political activities that favour one party over another; fairness and neutrality are crucial in every interaction.
Finally, invasive advertising techniques should be avoided. Chatbots should not display political advertisements or messaging without explicit user consent, and their advertising practices must comply with legal norms.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific law governing AI. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has set out seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
Reference
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india