#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni, circulating widely on social media and appearing to promote an IPL betting Telegram channel, has been found to be fake. The audio in the morphed video was produced through AI voice cloning, and the manipulation was identified using AI detection and deepfake analysis tools. In the original footage, Mr Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. It can therefore be concluded that Anup Soni is in no way associated with the betting channel.

Claims:
The Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is being promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analyzed the video and found major discrepancies of the kind commonly seen in AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we analyzed the video using a deepfake detection tool by True Media, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and checked it with an audio deepfake detection tool named Hive Moderation, which found the audio to be 99.9% AI-generated.

We then divided the video into keyframes and reverse-searched one of them, which led us to the original video uploaded by the YouTube channel LIV Crime.
Upon analysis, we found that at the 3:18 mark the video had been edited and overlaid with an AI-generated voice.
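Reverse image search over keyframes of this kind typically relies on perceptual hashing, which maps visually similar frames to similar bit strings so near-duplicates can be matched. Below is a minimal, illustrative average-hash sketch in plain Python over a synthetic 8x8 grayscale frame; a real pipeline would first decode and downscale actual video frames with a media library, which is not assumed here.

```python
# Minimal average-hash sketch for matching video keyframes.
# A frame is modelled as an 8x8 grayscale grid (nested lists of 0-255 ints);
# the frames below are synthetic illustration data, not real video.

def average_hash(frame):
    """Return a 64-bit perceptual hash: 1 where a pixel is >= the mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; small distances indicate similar frames."""
    return bin(h1 ^ h2).count("1")

# Two nearly identical synthetic frames:
frame_a = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
frame_b = [row[:] for row in frame_a]
frame_b[0][0] = 255  # a single-pixel difference

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # small distance -> likely the same underlying keyframe
```

Search engines index such hashes for billions of images, so a query frame can be matched to its source video even after recompression or minor edits.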

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations featuring other celebrities and politicians, used to misrepresent the original context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promoted an IPL betting Telegram channel is false. The video was manipulated using AI voice cloning technology, as confirmed by both the Hive Moderation AI detector and the True Media deepfake detection tool. The claim is therefore baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading
Introduction
India's broadcasting sector has undergone significant changes in recent years with technological advancements such as the introduction of new platforms like Direct-to-Home (DTH), Internet Protocol television (IPTV), Over-The-Top (OTT), and integrated models. Platform changes, emerging technologies and advancements in the advertising space have all created the need for new governing laws that take these developments into account.
The Union Government and the concerned ministry have recognised the pressing need to develop a robust regulatory framework for India's broadcasting sector. Consequently, a draft Broadcasting Services (Regulation) Bill, 2023, was released in November 2023, and the Union Ministry of Information and Broadcasting (MIB) invited feedback and comments from different stakeholders. The draft Bill aims to establish a unified framework for regulating broadcasting services in the country, replacing the current Cable Television Networks (Regulation) Act, 1995 and other policy guidelines governing broadcasting.
Recently, a new draft of an updated ‘Broadcasting Services (Regulation) Bill, 2024’ was shared with select broadcasters, associations, streaming services, and tech firms, each copy marked with a unique identifier to prevent leaks.
Key Highlights of the Updated Broadcasting Bill
As per the recent draft of the Broadcasting Services (Regulation) Bill, 2024, social media accounts could be identified as ‘Digital News Broadcasters’ and brought within the ambit of the regulation. Some of the major aspects of the new bill were first reported by Hindustan Times.
The new draft of the Broadcasting Services (Regulation) Bill, 2024, proposes that individuals who regularly upload videos to social media, make podcasts, or write about current affairs online could be classified as Digital News Broadcasters. This means that YouTubers and Instagrammers who receive a share of advertising revenue or monetise their social media presence through affiliate activities would be regulated as Digital News Broadcasters. This includes channels, podcasts, and blogs that cover news and use Google AdSense. All of these would have to comply with a Programme Code and an Advertising Code.
Online content creators who do not cover news or current affairs but offer programming and curated programmes beyond a certain threshold would be treated as OTT broadcasters if they provide content, whether licensed or live, through a website or social media platform.
The new version also introduces obligations for intermediaries and social media intermediaries in relation to streaming services and digital news broadcasters and, in contrast to the version circulated in 2023, carries provisions targeting online advertising. In the context of streaming services, OTT broadcasting services are no longer part of the definition of "internet broadcasting services". The definition of an OTT broadcasting service has also been revised, allowing content creators who regularly upload their content to social media to be considered OTT broadcasting services.
The new definition of an 'intermediary' includes social media intermediaries, advertisement intermediaries, internet service providers, online search engines, and online marketplaces.
The new Bill allows the government to prescribe different due diligence guidelines for social media platforms and online advertisement intermediaries, and requires all intermediaries to provide the central government with appropriate information, including information pertaining to the OTT Broadcasters and Digital News Broadcasters on their platforms, to ensure compliance with the Act. It also introduces liability provisions for social media intermediaries that fail to provide such information. This suggests that when information is sought about a YouTube, Instagram or X/Twitter user, the platform will need to provide it to the Indian government.
The new draft bill contains specific provisions governing ‘Online Advertising’ and, to that end, creates the category of 'advertising intermediaries': intermediaries that enable the buying or selling of advertisement space on the internet, or the placing of advertisements on online platforms, without endorsing the advertisement.
Final Words
The Indian Ministry of Information and Broadcasting (MIB) is working to propose robust regulatory changes for the country's new-age broadcast sector, covering specific provisions for Digital News Broadcasters, OTT Broadcasters and intermediaries, with the proposed bill defining the scope and obligations of each.
However, these changes will have significant implications for press and creative freedom. The new version of the bill expands its applicability to a much larger set of key actors, bringing ‘content creators’ under the definitions of OTT and digital news broadcasters. This raises concerns about overly rigid provisions and is likely to draw criticism from media representatives.
According to recent media reports, the 2024 version of the Broadcasting Services (Regulation) Bill has been withdrawn by the I&B Ministry in the face of criticism from relevant stakeholders.
The ministry must give due consideration to feedback from concerned stakeholders and strike a balance between individual rights and a healthy, regulated landscape suited to the needs of the new-age broadcasting sector.
References:
- https://www.medianama.com/2024/07/223-india-broadcast-bill-online-creators/#:~:text=Online%20content%20creators%20that%20do,or%20a%20social%20media%20platform.
- https://www.hindustantimes.com/india-news/new-draft-of-broadcasting-bill-news-influencers-may-be-classified-as-broadcasters-101721961764666.html
- https://www.hindustantimes.com/india-news/broadcasting-bill-still-in-drafting-stage-mib-tells-rs-101722058753083.html
- https://www.newslaundry.com/2024/07/29/indias-new-broadcast-bill-now-has-compliance-requirements-for-youtubers-and-instagrammers
- https://m.thewire.in/article/media/social-media-videos-text-digital-news-broadcasting-bill
- https://mib.gov.in/sites/default/files/Public%20Notice_07.12.2023.pdf
- https://news.abplive.com/news/india/centre-withdraws-draft-of-broadcasting-services-regulation-bill-1709770

Executive Summary:
With AI technologies evolving at a fast pace, a 2024 AI-driven phishing attack on a large Indian financial institution illustrates the threat. This case study documents the specifics of the attack, including the techniques used, the impact on the institution, the response mounted, and the eventual outcome. It also examines the challenges of building better defences and raising awareness of automated threats.
Introduction
With advances in AI technology, its use in cybercrime against financial institutions worldwide has grown significantly. This report analyses a serious incident from early 2024 in which a leading Indian bank was hit by a highly sophisticated, AI-supported phishing operation. The attack exploited AI's capacity for data analysis and persuasion, leading to a severe compromise of the bank's internal systems.
Background
The financial institution in question, one of the largest banks in India, had a strong record on cybersecurity. However, AI-based methods introduced new threats that earlier forms of security could not counter effectively. The attackers concentrated on the bank's top managers, since compromising such individuals offers access to internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that were near-exact replicas of internal communications between employees. Drawing on Facebook and Twitter content, blog entries, LinkedIn connection history, and the email style of the bank's executives, the AI generated highly specific emails. Many featured official formatting, internal terminology, and the CEO's writing style, making them very convincing.
The phishing emails contained links leading to a fake internal portal designed to harvest login credentials. Because of the emails' sophistication, the targeted individuals believed them to be genuine and readily entered their login details, granting the attackers access to the bank's network.
Impact
The attack had a significant impact on the bank in every respect. Several executives surrendered their passwords to the fake emails, compromising financial databases containing customer account and transaction information. The break-in allowed the criminals to take down a number of the bank's internet services, disrupting operations for the bank and its customers for several days.
The bank also suffered a serious blow to customer trust, because the breach revealed its weakness against contemporary cyber threats. Apart from the immediate work of mitigating the breach, the institution also faced a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used in Generating the Phishing Emails
- The attack used powerful natural language processing (NLP) technology, most probably built on a large-scale transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large volumes of data, the attackers could feed them conversation samples from social networks, emails, and corporate language to create highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI was able to take into account the nature of prior interactions and thus write follow up emails that were perfectly in line with prior discourse.
- Style Mimicry: The AI replicated the writing of the CEO given the emails of the CEO and then extrapolated from the data given such elements as the tone, the language, and the format of the signature line.
- Adaptive Learning: The AI actively learned from mistakes and feedback, tweaking the generated emails on subsequent attempts, which made detection more difficult.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this attack used spear-phishing, with the attackers directly targeting specific people via tailored emails. Machine-learning-driven social engineering significantly increased the chances that particular individuals would respond to particular emails.
Key Technical Features:
- Targeted Data Harvesting: The attackers identified the organisation's employees and scraped their public profiles and messaging activity to tailor messages to them.
- Behavioural Analysis: The AI used the targets' behaviour patterns on social networking sites and other online platforms to forecast the actions end users were likely to take, such as clicking links or opening attachments.
- Real-Time Adjustments: When a response to a phishing email was detected, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers pulled off this attack by leveraging AI to evade standard email filters. The techniques involved modifying the content of the emails so they would not be easily flagged by spam filters while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI slightly varied different aspects of each message to generate multiple versions of the phishing email, each designed to slip past different filtering algorithms.
- Polymorphic Attacks: The phishing campaign used polymorphic code, meaning the actual payloads behind the links changed frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: The attackers also used AI to generate and deploy phantom domains: short-lived websites that appear legitimate but were created specifically for the phishing attack, adding to the difficulty of detection.
4. Exploitation of Human Vulnerabilities
This attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI applied specific psychological principles, chiefly urgency and familiarity, to maximise the chance that targeted recipients would open the phishing emails.
- Multi-Layered Deception: The AI employed a two-tiered approach: once a target opened the first email, a second followed under the pretext of being a follow-up from a genuine company or individual.
Response
On detecting the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to identify the origin of the attack and block further intrusion. The bank also moved immediately to strengthen its security, for instance by improving email filtering and tightening authentication procedures.
Recognising the risks, the bank concluded that further action was needed to raise its cybersecurity posture and rolled out a new organisation-wide cybersecurity awareness programme. The programme focused on raising employee awareness of AI-driven phishing and on the need to verify a sender's identity before responding.
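Email-filtering improvements of the kind described above often begin with checking the authentication results (SPF, DKIM, DMARC) that the receiving mail server records. The standard-library Python sketch below flags messages whose Authentication-Results header lacks an SPF pass; the message content and domains are hypothetical examples, not artefacts from the incident.

```python
# Sketch: flag inbound mail whose Authentication-Results header lacks "spf=pass".
# The message, addresses, and domains below are hypothetical examples.
from email import message_from_string

RAW_MESSAGE = """\
From: ceo@examplebank.test
To: executive@examplebank.test
Subject: Urgent: credential verification
Authentication-Results: mx.examplebank.test; spf=fail smtp.mailfrom=attacker.test

Please verify your credentials at the internal portal.
"""

def spf_passed(raw):
    """Return True only if the Authentication-Results header records spf=pass."""
    msg = message_from_string(raw)
    auth_results = msg.get("Authentication-Results", "")
    return "spf=pass" in auth_results.lower()

print(spf_passed(RAW_MESSAGE))  # False -> quarantine or warn the recipient
```

Real gateways combine such checks with DKIM signature verification and DMARC policy lookups, but even this single check catches a large class of sender-spoofing attempts.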
Outcome
Although the bank regained normal operations after the attack without critical long-term damage, serious issues were raised. The institution's reported losses included compensation for affected customers and the cost of implementing stronger cybersecurity measures. More significantly, the incident led customers and shareholders to question the organisation's capacity to safeguard information in an era of advanced AI-enabled cyber threats.
This case highlights the importance for financial firms of aligning their security plans to fight emerging threats. The attack is also a message to other organisations that they are not immune to such AI-assisted attacks and should take appropriate countermeasures.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of modern attackers' capabilities. As AI technology progresses, so do the cyberattacks it enables. Financial institutions and other organisations will only be as safe as the AI-aware cybersecurity solutions they adopt for their systems and data.
Moreover, this case underscores the importance of training employees to prevent successful cyberattacks. An organisation's cybersecurity awareness, secure employee behaviour, and practices that enable staff to recognise and report likely AI-driven offences all help minimise the risk from such attacks.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response products capable of mitigating AI-enabled cyber threats in real time.
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require tighter identity verification and additional security procedures, such as multi-factor authentication.
- Collaboration with CERT-In: Maintain engagement and coordination with authorities such as the Indian Computer Emergency Response Team (CERT-In) and equivalent bodies to monitor new threats and receive validated recommendations.
- Public Communication Strategies: Establish effective communication plans to keep customers informed and maintain their trust even when the organisation is facing a cyber threat.
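One concrete form of stricter authentication is a time-based one-time password (TOTP, RFC 6238), which many institutions layer on top of passwords as a second factor. The minimal standard-library sketch below shows the core derivation; the shared secret used is the RFC's published test value, not a real credential.

```python
# Minimal TOTP (RFC 6238) sketch using only the Python standard library.
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password from a shared secret."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step           # 30-second time window
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: this secret at Unix time 59 yields "287082".
print(totp(b"12345678901234567890", for_time=59))
```

In practice, the server stores the shared secret per user and accepts a login only when the submitted code matches the one derived for the current time window, so a phished password alone is no longer enough.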
By implementing these measures, financial institutions can be better prepared for the new threats that AI-enabled cybercrime poses to essential financial assets in today's complex IT environments.

Introduction
Recently, in April 2025, security researchers at Oligo Security exposed a substantial and wide-ranging threat impacting Apple's AirPlay protocol and its use via third-party Software Development Kit (SDK). According to the research, the recently discovered set of vulnerabilities titled "AirBorne" had the potential to enable remote code execution, escape permissions, and leak private data across many different Apple and third-party AirPlay-compatible devices. With well over 2.35 billion active Apple devices globally and tens of millions of third-party products that incorporate the AirPlay SDK, the scope of the problem is enormous. Those wireless-based vulnerabilities pose not only a technical threat but also increasingly an enterprise- and consumer-level security concern.
Understanding AirBorne: What’s at Stake?
AirBorne is the title given to a set of 23 vulnerabilities identified in the AirPlay communication protocol and its related SDK utilised by third-party vendors. Seventeen have been given official CVE designations. The most severe among them permit Remote Code Execution (RCE) with zero or limited user interaction. This provides hackers the ability to penetrate home networks, business environments, and even cars with CarPlay technology onboard.
Types of Vulnerabilities Identified
AirBorne vulnerabilities support a range of attack types, including:
- Zero-Click and One-Click RCE
- Access Control List (ACL) bypass
- User interaction bypass
- Local arbitrary file read
- Sensitive data disclosure
- Man-in-the-middle (MITM) attacks
- Denial of Service (DoS)
Each vulnerability can be used individually or chained together to escalate access and broaden the attack surface.
Remote Code Execution (RCE): Key Attack Scenarios
- macOS – Zero-Click RCE (CVE-2025-24252 & CVE-2025-24206): These weaknesses enable attackers to run code on a macOS system without any user action, as long as the AirPlay receiver is enabled and configured to accept connections from anyone on the same network. The threat of wormable malware propagating via corporate or public Wi-Fi networks is especially concerning.
- macOS – One-Click RCE (CVE-2025-24271 & CVE-2025-24137): If AirPlay is set to "Current User," attackers can exploit these CVEs to deploy malicious code with a single click by the user. This raises the level of threat on shared office or home networks.
- AirPlay SDK Devices – Zero-Click RCE (CVE-2025-24132): Third-party speakers and receivers built on the AirPlay SDK are particularly susceptible, as exploitation requires no user intervention. Once a device is compromised, attackers can play unauthorised media, turn on microphones, or monitor intimate spaces.
- CarPlay Devices – RCE Over Wi-Fi, Bluetooth, or USB: CVE-2025-24132 also affects CarPlay-enabled systems. Under certain circumstances, nearby attackers can take advantage of predictable Wi-Fi credentials, intercept Bluetooth PINs, or use USB connections to take over dashboard features, which may distract drivers or allow eavesdropping on in-car conversations.
Other Exploits Beyond RCE
AirBorne also opens the door for:
- Sensitive Information Disclosure: Exposing private logs or user metadata over local networks (CVE-2025-24270).
- Local Arbitrary File Access: Letting attackers read restricted files on a device (CVE-2025-24270 group).
- DoS Attacks: Exploiting NULL pointer dereferences or misformatted data to crash processes like the AirPlay receiver or WindowServer, forcing user logouts or system instability (CVE-2025-24129, CVE-2025-24177, etc.).
How the Attack Works: A Technical Breakdown
AirPlay communicates over port 7000 using HTTP and RTSP, with payloads typically encoded in Apple's plist (property list) format. The exploits stem from incorrect handling of these plists, especially when type checks are skipped or untrusted data is assumed to be well-formed. For instance, CVE-2025-24129 illustrates how a malformed plist can cause type confusion, leading to a crash or code execution depending on the configuration.
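The mishandling described above amounts to trusting whatever type a plist field happens to hold. The Python sketch below uses the standard plistlib module to illustrate the kind of defensive type checking whose absence enables such bugs; the field names are hypothetical and do not reproduce actual AirPlay protocol keys.

```python
# Sketch: validate types in an untrusted property list before using it.
# Field names here are hypothetical, not actual AirPlay protocol keys.
import plistlib

EXPECTED_TYPES = {"deviceName": str, "port": int, "features": list}

def parse_untrusted_plist(data):
    """Parse a plist and reject unexpected or wrongly-typed fields."""
    try:
        obj = plistlib.loads(data)
    except plistlib.InvalidFileException as exc:
        raise ValueError("malformed plist") from exc
    if not isinstance(obj, dict):
        raise ValueError("top-level plist value must be a dict")
    for key, value in obj.items():
        expected = EXPECTED_TYPES.get(key)
        if expected is None:
            raise ValueError(f"unexpected key: {key}")
        if not isinstance(value, expected):
            # This is the check whose absence enables type confusion.
            raise ValueError(f"{key}: expected {expected.__name__}")
    return obj

good = plistlib.dumps({"deviceName": "Speaker", "port": 7000, "features": []})
bad = plistlib.dumps({"deviceName": ["not", "a", "string"],
                      "port": 7000, "features": []})

print(parse_untrusted_plist(good)["port"])  # 7000
try:
    parse_untrusted_plist(bad)
except ValueError as err:
    print(err)  # deviceName: expected str
```

Rejecting the message outright, rather than coercing or ignoring the bad field, keeps a malformed plist from ever reaching code that assumes a specific type.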
An attacker must be on the same Wi-Fi network as the targeted device, whether through a compromised laptop, public wireless with shared access, or an insecure corporate connection. Once in proximity, the attacker can use the AirBorne bugs to hijack AirPlay-enabled devices. From there, malicious code can be deployed to spy on users, gain long-term network access, or spread control to other devices on the network, potentially building a botnet or stealing critical data.
The Espionage Angle
Most third-party AirPlay-compatible devices, including smart speakers, contain built-in microphones. In theory, this leaves the door open for such devices to become eavesdropping tools. While Oligo did not demonstrate a functional espionage exploit, the possibility underscores the gravity of the situation.
The CarPlay Risk Factor
Besides smart home appliances, Oligo also found AirBorne vulnerabilities affecting Apple CarPlay. If exploited, they could enable attackers to take over a car's entertainment system. Fortunately, these attacks require direct pairing via USB or Bluetooth, making them much less practical. Even so, this illustrates how connected components remain at risk in a wide range of settings, from homes to automobiles.
How to Protect Yourself and Your Organisation
- Immediate Actions:
- Update Devices: Ensure all Apple devices and third-party gadgets are upgraded to the latest software version.
- Disable AirPlay Receiver: If AirPlay is not in use, disable it in system settings.
- Restrict AirPlay Access: Use firewalls to block port 7000 from untrusted IPs.
- Set AirPlay to “Current User” to limit network-based attacks.
- Organisational Recommendations:
- Communicate the patch urgency to employees and stakeholders.
- Inventory all AirPlay-enabled hardware, including in meeting rooms and vehicles.
- Isolate vulnerable devices on segmented networks until updated.
Conclusion
The AirBorne vulnerabilities illustrate that even mature systems such as Apple's are not immune from foundational security weaknesses. The extensive deployment of AirPlay across devices, industries, and ecosystems makes these vulnerabilities a systemic threat. Oligo's discovery has served to catalyse immediate response from Apple, but since third-party devices remain vulnerable, responsibility falls to users and organisations to install patches, implement robust configurations, and compartmentalise possible attack surfaces. Effective proactive cybersecurity hygiene, network segmentation, and timely patches are the strongest defences to avoid these kinds of wormable, scalable attacks from becoming large-scale breaches.
References
- https://www.oligo.security/blog/airborne
- https://www.wired.com/story/airborne-airplay-flaws/
- https://thehackernews.com/2025/05/wormable-airplay-flaws-enable-zero.html
- https://www.securityweek.com/airplay-vulnerabilities-expose-apple-devices-to-zero-click-takeover/
- https://www.pcmag.com/news/airborne-flaw-exposes-airplay-devices-to-hacking-how-to-protect-yourself
- https://cyberguy.com/security/hackers-breaking-into-apple-devices-through-airplay/