#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni, circulating on social media and promoting an IPL betting Telegram channel, has been found to be fake. The audio in the morphed video was produced through AI voice cloning, and the manipulation was identified by AI detection and deepfake analysis tools. In the original footage, Mr. Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. It can therefore be concluded that Anup Soni is in no way associated with the betting channel.

Claims:
A Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analysed the video and found major discrepancies typical of AI-manipulated videos: the lip sync does not match the audio. Taking a cue from this, we analysed the video using TrueMedia's deepfake detection tool, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and ran it through the audio deepfake detection tool Hive Moderation, which found the audio to be 99.9% AI-generated.

We then divided the video into keyframes and reverse-searched one of them, which led us to the original video uploaded by the YouTube channel LIV Crime.
On analysis, we found that the video was edited at the 3:18 mark and overlaid with an AI-generated voice.
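The keyframe step described above boils down to simple arithmetic: pick frame indices at a fixed interval, and convert a timestamp such as 3:18 into a frame index. A minimal sketch, assuming a hypothetical frame rate (actual extraction would be done with a tool such as ffmpeg or OpenCV):

```python
def timestamp_to_frame(timestamp: str, fps: float) -> int:
    """Convert an 'M:SS' timestamp (e.g. '3:18') to a frame index at `fps`."""
    minutes, seconds = timestamp.split(":")
    total_sec = int(minutes) * 60 + int(seconds)
    return int(total_sec * fps)

def keyframe_indices(fps: float, duration_sec: float, every_n_sec: float = 1.0):
    """Frame indices to extract, one per `every_n_sec` seconds, for reverse search."""
    step = max(1, round(fps * every_n_sec))
    return list(range(0, int(fps * duration_sec), step))

# Hypothetical 25 fps video: the 3:18 edit point falls on frame 4950.
frame_at_edit = timestamp_to_frame("3:18", 25.0)
```

Each sampled frame can then be saved as an image and submitted to a reverse image search engine.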

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations involving various celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promotes an IPL betting Telegram channel is false. The video has been manipulated using AI voice cloning technology, as confirmed by both the Hive Moderation AI detector and the TrueMedia AI detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading

Introduction
With the rapid development of technology, voice cloning scams are one such issue that has recently come to light. Scammers have embraced AI, and their methods and plans for deceiving and scamming people have changed accordingly. Deepfake technology creates realistic imitations of a person's voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
"Deepfake" refers to artificial intelligence (AI) techniques that can produce fake or altered audio, video, and images that pass for the real thing. The name combines the words "deep learning" and "fake". Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing something they never did in audio or visual form; a well-known example involved deep voice impersonation of the American President. Deep voice impersonation can be used maliciously, such as in deep voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Artificial intelligence (AI) is used in deepfake voice frauds to create synthetic audio recordings that sound like real people. Using this technology, con artists can impersonate someone over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative, aiming to earn the victim's trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or entities in order to mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people such as CEOs, government officials, or bank employees, and use these to trick victims into taking actions that benefit the criminals: sending money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; such recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support false claims or accusations, which is particularly risky in legal proceedings, where fabricated audio could lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims. Every organisation and the general public must be informed of this technology's risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the "deepfake voice scam", has emerged: the con artist assumes another person's identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and protect yourself:
- Steer clear of telemarketing calls
- One of the most common tactics used by deepfake voice con artists is making unsolicited phone calls while pretending to be bank personnel or government officials.
- Listen closely to the voice
- Pay special attention to the voice of anyone who phones you claiming to be someone else. Are there any peculiar pauses or inflections in their speech? Anything that doesn't seem right can be a sign of deep voice fraud.
- Verify the caller’s identity
- It's crucial to verify the caller's identity in order to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, and then do some research to confirm they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activities
- Inform the appropriate authorities if you think you have fallen victim to deep voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, the field of AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it can also be used maliciously, for deepfake voice frauds and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research and the development of efficient techniques to identify and control the risks related to this technology are also necessary. We must deploy AI responsibly and ethically to ensure that AI voice deepfake technology benefits society rather than harming or deceiving it.
Introduction
Efforts to counter the spread of misinformation can involve various methods and differing degrees of engagement by different stakeholders. Artificial Intelligence, user awareness, larger public-level initiatives, and a focus on innovation that facilitates clear communication can all be part of the fight against misinformation. This becomes even more important in spaces that deal with matters of national security, such as the Indian Army.
IIT Indore’s Intelligent Communication System
As per a report in the Hindustan Times on 14th November 2024, IIT Indore has achieved a breakthrough in its project on Intelligent Communication Systems. The project is supported by the Department of Telecommunications (DoT), the Ministry of Electronics and Information Technology (MeitY), and the Council of Scientific and Industrial Research (CSIR), as part of a specialised 6G research initiative (the Bharat 6G Alliance) for innovation in 6G technology.
Professors at IIT Indore claim that the system they are working on has features different from those currently in use. They state that the receiver can jointly recognise coding, interleaving (a technique used to enhance existing error-correcting codes), and modulation schemes even in difficult environments, making it useful for transmitting information efficiently and securely; it could therefore serve not only telecommunications but the army as well. They also note that previously, different receivers were required for different scenarios, whereas they aim to build a single receiver that can adapt to any situation.
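The interleaving technique mentioned above can be illustrated with a simple block interleaver (this is a textbook sketch, not IIT Indore's actual scheme): data is written into a matrix row by row and read out column by column, so that a burst of channel errors is spread across many codewords and appears to the error-correcting code as isolated, fixable errors.

```python
def interleave(bits, rows: int, cols: int):
    """Block interleaver: write row-by-row, read column-by-column."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows: int, cols: int):
    """Inverse operation: write column-by-column, read row-by-row."""
    assert len(bits) == rows * cols
    matrix = [[None] * cols for _ in range(rows)]
    i = 0
    for c in range(cols):
        for r in range(rows):
            matrix[r][c] = bits[i]
            i += 1
    return [b for row in matrix for b in row]

# A burst of adjacent errors on the channel lands in different rows
# (codewords) after deinterleaving, where the code can correct them.
scrambled = interleave(list(range(12)), rows=3, cols=4)
restored = deinterleave(scrambled, rows=3, cols=4)
```

The receiver described in the report would additionally have to detect which interleaving pattern (and code and modulation) was used, rather than having it fixed in advance.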
Previously, in another move that addressed the issue of misinformation in the army, the Ministry of Defence designated the Additional Directorate General of Strategic Communication in the Indian Army as the authorised officer to issue take-down notices regarding instances of posts consisting of illegal content and misinformation concerning the Army.
Recommendations
Here are a few policy implications and deliberations one can explore with respect to innovations geared toward tackling misinformation within the army:
- Research and Development: Investment in communications research through academic institutes has enabled a system that ensures encrypted and secure communication, which helps the army combat misinformation.
- Strategic Deployment: Relevant innovations can be assessed through separate pilot studies in military settings, handling sensitive data, before wider deployment.
- Standardisation: Once tested, a set of standards for the intelligent communication systems used can be encouraged.
- Cybersecurity Integration: As misinformation largely spreads online, such innovations can be further explored for integration with cybersecurity measures.
Conclusion
The spread of misinformation during modern warfare can have severe repercussions. Clear and secure handling of sensitive data is crucial for safe and efficient communication, as a lot is at stake. Innovations geared toward combating such issues must be encouraged, for they not only ensure efficiency and security in matters related to defence but also combat misinformation as a whole.
References
- https://timesofindia.indiatimes.com/city/indore/iit-indore-unveils-groundbreaking-intelligent-receivers-for-enhanced-6g-and-military-communication-security/articleshow/115265902.cms
- https://www.hindustantimes.com/technology/6g-technology-and-intelligent-receivers-will-ease-way-for-army-intelligence-operations-iit-official-101731574418660.html
Introduction
The Digital Personal Data Protection (DPDP) Act, 2023, introduces a framework for the protection of personal data in India. Data fiduciaries are the entities that determine the purpose and means of processing personal data, and small-scale industries also fall within the ambit of the term. Startups, small companies, and Micro, Small, and Medium Enterprises (MSMEs), when they determine the purpose and means of processing personal data in the capacity of 'data fiduciary', are likewise required to comply with the DPDP Act's provisions. The obligations set for data fiduciaries apply to them uniformly, though compliance can be challenging due to resource constraints and limited expertise in data protection.
Section 17(3) of the DPDP Act, 2023 empowers the Central Government to exempt startups from certain obligations under the Act, taking into account the volume and nature of personal data processed. The Act is the nation's first standalone law on data protection and privacy, setting forth strict rules on how data fiduciaries can collect and process personal data, with a focus on consent-based mechanisms and personal data protection. Small-scale industries have been given more time to comply with the DPDP Act; detailed provisions are to be notified through further rulemaking in the form of the 'DPDP Rules'.
Obligations on Data Fiduciary under the DPDP Act, 2023
The DPDP Act focuses on processing digital personal data in a manner that recognizes both the right of individuals to protect their personal data and the need to process such personal data for lawful purposes and for matters connected therewith or incidental thereto. Hence, small-scale industries also need to comply with provisions aimed at protecting digital personal data.
The key requirements to be considered:
- Data Processing Principles: Ensuring that data processing is done lawfully, fairly, and transparently. Further, the collection and processing of personal data is only for specific, clear, and legitimate purposes and only the data necessary for the stated purpose. Ensuring that the data is accurate and up to date is also necessary. An important part is that the data is not retained longer than necessary and appropriate security measures are taken to protect the said data.
- Consent Management: Clear and informed consent should be obtained from individuals before collecting their personal data, and individuals must be able to withdraw their consent easily.
- Rights of Data Principals: Data principals (the individuals whose data is collected) have the right to information, the right to correction and erasure of data, the right to grievance redressal, and the right to nominate. Data fiduciaries need to put in place mechanisms to handle requests from data principals regarding these rights.
- Data Breach Notifications: Data fiduciaries are required to notify the Data Protection Board and the affected individuals in the event of a data breach.
- Appropriate Technical and Organisational Measures: A data fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of the Act and the rules made thereunder.
- Cross-border Data Transfers: Compliance with regulations on the transfer of personal data outside India should be ensured.
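As one illustrative example of such a technical measure (the Act does not prescribe specific techniques), a data fiduciary might pseudonymise direct identifiers before storing or sharing records, using a keyed hash whose secret key is kept separately:

```python
import hashlib
import hmac
import os

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier (e.g. a phone number or email) with a
    keyed SHA-256 hash. Records can still be linked via the token, but the
    raw identifier cannot be recomputed without the separately stored key."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# In practice the key would come from a secrets manager, not be generated inline.
key = os.urandom(32)
token = pseudonymise("+91-9876543210", key)  # hypothetical phone number
```

The same input always maps to the same token under a given key, so analytics and deduplication still work, while a breach of the pseudonymised data alone does not expose the identifiers.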
Challenges for Small Scale Industries for the DPDP Act Compliance
Small-scale industries have high aims for organisational growth, and in the digital age they must also rely on online security measures and responsible handling of personal data; with the DPDP Act in the picture, this becomes an obligation to consider and comply with. Small-scale industries, including MSMEs, may face certain challenges in fulfilling these obligations, but strong digital data protection will also boost their competitiveness and customer growth. Reforming methods for better data governance is significant in today's digital era.
One of the major challenges for small-scale industries is building a skilled workforce that understands the DPDP Act and can educate internal stakeholders about its compliance requirements. This can become an additional burden.
Further, limited resources, whether financial or human, can make implementing data protection, which is often complex for a layperson, difficult for a small-scale industry.
Cybersecurity, cyber awareness, and protection from cyber threats require expertise that small enterprises often lack. The decision to outsource such expertise is sometimes taken too late, and harm can occur in the interim before an incident is handled.
Investment in the core business often does not extend to technology beyond the basic requirements of running the enterprise, nor to ensuring that data is secure and all compliances are met. However, in a fast-moving digital world, all industries need to be mindful of protecting personal data and maintaining proper data governance.
Recommendations
To ensure proper and effective personal data handling practices as per the provisions of the Act, small companies and startups need to work on both backend and frontend measures and take adequate steps to comply. While such industries have been given more time to ensure compliance, here are some suggestions for complying with the new law.
Small companies can ensure compliance with the DPDP Act by implementing robust data protection policies, investing in employee training on data privacy, using age-verification mechanisms, and adopting privacy-by-design principles. They should conduct a gap analysis to identify areas where current practices fall short of DPDP Act requirements. Regular audits, secure data storage solutions, and transparent communication with users about data practices are also essential, as are cost-effective tools and technologies for data protection and management.
Conclusion
Small-scale industries must take proactive steps to align with the DPDP Act, 2023 provisions. By understanding the requirements, leveraging external expertise, and adopting best practices, small-scale industries can ensure compliance and protect personal data effectively. In the long run, complying with the new law would lead to greater trust and better business for the enterprises, resulting in a larger revenue share for them.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1959161
- https://www.financialexpress.com/business/digital-transformation-dpdp-act-managing-data-protection-compliance-in-businesses-3305293/
- https://economictimes.indiatimes.com/tech/technology/big-tech-coalition-seeks-12-18-month-extension-to-comply-with-indias-dpdp-act/articleshow/104726843.cms?from=mdr