#FactCheck - Deepfake Alert: Virat Kohli's Alleged Betting App Endorsement Exposed
Executive Summary
A viral video allegedly featuring cricketer Virat Kohli endorsing a betting app named ‘Aviator’ is being shared widely across social media platforms. The CyberPeace Research Team’s investigation revealed that the video was made using deepfake technology. We found several anomalies in the viral video that indicate it was created with synthetic media, and no genuine celebrity endorsement for the app exists. We have previously debunked similar deepfake videos of cricketer Virat Kohli that misuse this technology. The spread of such content underscores the need for social media platforms to implement robust measures to combat online scams and misinformation.

Claims:
The claim is that a video circulating on social media shows Indian cricketer Virat Kohli endorsing a betting app called "Aviator." The video mimics a broadcast by the Indian news channel India TV, in which the journalist appears to endorse the betting app, followed by Virat Kohli describing his experience with it.

Fact Check:
Upon receiving the claim, we watched the video closely and found anomalies typical of deepfake videos: the journalist's lip movements are out of sync and, on careful viewing, do not match the audio heard in the video. The same mismatch is visible when Virat Kohli speaks.

We then divided the video into keyframes and ran a reverse image search on one of the frames from Kohli's segment. This led us to a similar video, uploaded on his verified Instagram handle, in which he wears the same brown jacket; it is an ad promotion made in collaboration with American Tourister.

After going through the entire original video, it is evident that Virat Kohli is not endorsing any betting app; rather, he is promoting a collaboration with American Tourister.
We then ran keyword searches to check whether India TV had published any report matching the claims in the viral video, but found no credible source.
Therefore, given the major anomalies in the video and our further analysis, we conclude that the video was created using synthetic media and is fake and misleading.
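The keyframe step described above can be reproduced with a short script. Below is a minimal sketch, assuming OpenCV (`opencv-python`) is installed and a local copy of the clip is available; the file name `viral_clip.mp4` and the sampling interval are illustrative. Each saved frame can then be reverse-searched manually (for example via Google Lens or TinEye).

```python
# Minimal keyframe-extraction sketch: save one frame every few seconds so
# each image can be reverse-searched by hand. The file name and interval
# below are illustrative assumptions.
import cv2

def extract_keyframes(video_path: str, out_prefix: str, every_n_seconds: float = 2.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0          # fall back if FPS is missing
    step = max(1, int(round(fps * every_n_seconds))) # frames between samples
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                                   # end of video or read failure
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4", "keyframe"), "frames saved")
```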
Conclusion:
The video of Virat Kohli promoting a betting app is fake and does not actually feature the cricketer endorsing the app. This raises serious concerns about how artificial intelligence is being used for fraudulent activities. Social media platforms need to take action against the spread of fake videos like these.
Claim: A video surfacing on social media shows Indian cricket star Virat Kohli promoting a betting application known as "Aviator."
Claimed on: Facebook
Fact Check: Fake & Misleading

Introduction
The Australian Parliament has passed the world’s first legislation banning social media for children under 16, citing risks to children’s mental and physical well-being and the need to contain misogynistic influences on them. The debate surrounding the legislation is intense, as it is the first proposal of its kind and would set a precedent for how other countries assess their laws regarding children, social media platforms, and their priorities.
The Legislation
Currently trialling an age-verification system (such as biometrics or government identification), the legislation mandates a complete ban on social media use by children under 16. Further, the law provides no exemptions of any kind, whether for pre-existing accounts or parental consent. With federal elections approaching, the law seeks to address parental concerns about protecting children from threats lurking on social media platforms, and every step in this regard is being watched with keen interest.
The Australian Prime Minister, Anthony Albanese, emphasised that the onus of taking responsible steps toward preventing access falls on the social media platforms, absolving parents and their children of the same. Social media platforms like TikTok, X, and Meta Platforms’ Facebook and Instagram all come under the purview of this legislation.
CyberPeace Overview
The issue of a complete age-based ban raises a few concerns:
- It is challenging to enforce digitally, as children might find ways to circumvent such restrictions. An example is the Cinderella Law, formally known as the Shutdown Law, which the Government of South Korea implemented in 2011 to reduce online gaming and promote healthy sleeping habits among children. The law prohibited children under the age of 16 from accessing online games between 12 A.M. and 6 A.M. However, a few drawbacks rendered it less effective over time: children were able to use the login IDs of adults, switch to VPNs, or simply move to offline gaming. In addition, parents felt the government was infringing on the right to privacy, and the restrictions applied only to online PC games, not to mobile phones. Consequently, the law lost relevance and was repealed in 2021.
- The concept of age verification inherently requires collecting more personal data and inadvertently opens up concerns regarding individual privacy.
- A ban is likely to reduce the pressure on tech and social media companies to develop features and safeguards that would make their services a safe, child-friendly environment.
Conclusion
Social media platforms can opt for an approach that focuses on creating a safe online environment for children as they continue to deliberate on restrictions. An example of an impactful yet balanced step towards protecting children on social media while respecting privacy is the U.K.'s Age-Appropriate Design Code (UK AADC). It was prepared by the ICO (Information Commissioner's Office), the U.K. data protection regulator, under the U.K.'s data protection framework derived from the European Union's General Data Protection Regulation (GDPR), and it follows a safety-by-design approach for children. As we move towards a future that is predominantly online, we must continue to strive to create a safe space for children and address issues in innovative ways.
References
- https://indianexpress.com/article/technology/social/australia-proposes-ban-on-social-media-for-children-under-16-9657544/
- https://www.thehindu.com/opinion/op-ed/should-children-be-barred-from-social-media/article68661342.ece
- https://forumias.com/blog/debates-on-whether-children-should-be-banned-from-social-media/
- https://timesofindia.indiatimes.com/education/news/why-banning-kids-from-social-media-wont-solve-the-youth-mental-health-crisis/articleshow/113328111.cms
- https://iapp.org/news/a/childrens-privacy-laws-and-freedom-of-expression-lessons-from-the-uk-age-appropriate-design-code
- https://www.techinasia.com/s-koreas-cinderella-law-finally-growing-up-teens-may-soon-be-able-to-play-online-after-midnight-again
- https://wp.towson.edu/iajournal/2021/12/13/video-gaming-addiction-a-case-study-of-china-and-south-korea/
- https://www.dailysabah.com/world/asia-pacific/australia-passes-worlds-1st-total-social-media-ban-for-children

Introduction
Meta is a leader in social media platforms and has built a widespread network of users and services across global cyberspace. The corporate house has been revolutionising messaging and connectivity since 2004. Its platforms have brought people closer together, but being among the most popular platforms is also a liability: popular platforms are frequently used by cybercriminals to gain unauthorised access to data or to create chatrooms that preserve anonymity and prevent tracking. These bad actors often operate under fake names or accounts so that they are not caught, and platforms like Facebook and Instagram have often been in the headlines as portals where cybercriminals operate and commit crimes.
Meanwhile, to keep netizens' data and money safe, Paytm, in a first-of-its-kind service, is offering customers protection against cyber fraud through an insurance policy; the ‘Paytm Payment Protect’ cover, issued by HDFC Ergo, is discussed in detail below.
Meta’s Cybersecurity
Meta has some of the best cybersecurity in the world, but that doesn't mean it cannot be breached. The social media giant is particularly exposed to data breaches because various third parties are also involved; as seen in the Cambridge Analytica case, a huge chunk of user data became available to influence users during elections. Meta needs to stay ahead of the curve to keep its platforms safe and secure. To this end, it has deployed various AI- and ML-driven crawlers and tools that work on keeping the platform safe for users, identify accounts that may be operated by bad actors, and remove those criminal accounts. This is supported by keen participation from users through the reporting mechanism. Meta-Cyber provides visibility of all OT activities, continuously monitors PLCs and SCADA for changes and configuration, and checks authorisation and its levels. Meta also runs various penetration-testing and bug bounty programmes to reduce vulnerabilities in its systems and applications; these testers are paid handsomely depending upon the severity and scope of the vulnerabilities they find.
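To make the account-detection idea concrete, here is a purely illustrative sketch of how a few behavioural signals and user reports could be combined into an account risk score. The signal names, weights, and threshold are assumptions chosen for illustration only and do not describe Meta's actual systems.

```python
# Illustrative rule-based account risk scorer; all signals and weights are
# hypothetical and are not a description of any platform's real pipeline.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int      # newly created accounts are higher risk
    user_reports: int          # reports filed by other users
    messages_per_hour: float   # unusually high send rates suggest spam or malware
    links_per_message: float   # link-heavy messaging is a common lure pattern

def risk_score(s: AccountSignals) -> float:
    score = 0.0
    if s.account_age_days < 30:
        score += 0.3
    score += min(s.user_reports, 10) * 0.05
    if s.messages_per_hour > 50:
        score += 0.3
    if s.links_per_message > 0.5:
        score += 0.2
    return min(score, 1.0)

def should_review(s: AccountSignals, threshold: float = 0.6) -> bool:
    # Flag the account for review rather than removing it outright.
    return risk_score(s) >= threshold

if __name__ == "__main__":
    suspicious = AccountSignals(account_age_days=5, user_reports=7,
                                messages_per_hour=120, links_per_message=0.9)
    print(risk_score(suspicious), should_review(suspicious))
```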
CyberRoot Risk Investigation
Social media giant Meta has taken down over 40 accounts operated by an Indian firm, CyberRoot Risk Analysis, allegedly involved in hack-for-hire services. Alongside this, Meta has taken down 900 fraudulently run accounts said to be operated from China by an unknown entity. CyberRoot Risk Analysis shared malware over the platform while impersonating people like its targets, i.e., lawyers, doctors, entrepreneurs, and people from industries such as cosmetic surgery, real estate, investment firms, pharmaceuticals, and private equity, as well as environmental and anti-corruption activists. The operators would get in touch with such individuals and then share malware hidden in files, which would often lead to data breaches and, subsequently, to different types of cybercrimes.
Meta and its teams are working tirelessly to eradicate the influence of such bad actors from its platforms, and its use of AI- and ML-based tools has increased exponentially.
Paytm CyberFraud Cover
Paytm is offering customers protection against cyber fraud through an insurance policy available for fraudulent mobile transactions up to Rs 10,000 for a premium of Rs 30. The cover ‘Paytm Payment Protect’ is provided through a group insurance policy issued by HDFC Ergo. The company said that the plan is being offered to increase the trust in digital payments, which will push up adoption. The insurance cover protects transactions made through UPI across all apps and wallets. The insurance coverage has been obtained by One97 Communications, which operates under the Paytm brand.
The exponential increase in the use of digital payments during the pandemic has made more people susceptible to cyber fraud. While UPI has digital safeguards in place, most UPI-related frauds are carried out by confidence tricksters who get their victims to authorise a transaction by passing off collect requests as payments. There are also many fraudsters collecting payments by pretending to be merchants. These types of fraud resulted in losses of more than Rs 63 crore in the previous financial year. The concept of data and fraud insurance is new to India but is indeed the need of the hour: the majority of netizens are unaware of the value of their data and hence remain ignorant of data protection. Steps like these will result in safer data management and protection mechanisms, thus safeguarding Indian cyberspace.
Conclusion
Cyberspace is at a critical juncture in terms of data protection and privacy, and with new legislation coming out on the subject, we can expect newer and stronger policies to prevent cybercrimes and cyber-attacks. Efforts by tech giants like Meta need to gain more speed in improving the cyber safety of both the platform and the user, so that the future of these platforms remains secure. The concept of data insurance also needs to be shared with netizens to increase awareness about the subject. Paytm's initiative is a monumental one, as it will encourage more platforms and banks to commit to coverage for cybercrimes. With the increasing number of cybercrimes, such financial coverage has come as a ray of hope and security for netizens.

Introduction
Deepfake technology, whose name combines "deep learning" and "fake," uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can produce credible false information, there are concerns about its misuse, including identity theft and the spread of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and for committing various cyber frauds; through the misuse of advanced technologies such as AI, deepfakes, and voice cloning, new cyber threats have emerged.
India Among the Topmost Destinations for Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud landscape, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most impacted by the use of deepfake technology. The report notes that deepfake technology is being used in a significant number of cybercrimes and that this trend is expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards, as identity fraud poses a growing concern in the region.
How Deepfake Technology Works
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become increasingly common in recent months and are now ingrained in the very fabric of our digital civilisation. The consequences are enormous and the attraction is irresistible.
Deep Learning Algorithms
Deepfake systems examine large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially Generative Adversarial Networks. By learning from and mimicking the target's gestures, speech patterns, and facial expressions, these algorithms extract the information needed to reproduce them, and generative models then create material that blends seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a worry, and sophisticated detection techniques are becoming increasingly necessary to separate real content from manipulated content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks participate in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated content is. This continuous loop of creating and evaluating steadily improves the whole deepfake production process over time, as the discriminator becomes more perceptive and the generator adapts to produce ever more convincing content.
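To make the generator-discriminator loop concrete, here is a minimal, self-contained sketch of a toy GAN in Python using only NumPy. It learns to mimic a one-dimensional Gaussian rather than faces or voices, and the architecture (a single affine generator and a logistic-regression discriminator), learning rate, and step count are simplifying assumptions; real deepfake systems use deep networks trained on large image and audio datasets, but the adversarial training pattern is the same.

```python
# Toy 1-D GAN sketch: the generator learns to mimic samples from a target
# Gaussian while the discriminator learns to tell real from fake.
import numpy as np

rng = np.random.default_rng(0)

def sample_real(n):
    # "Real" data: samples from the target Gaussian the generator must mimic.
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise z -> g_w * z + g_b (a single affine transform).
g_w, g_b = rng.normal(size=(1, 1)), np.zeros((1, 1))
# Discriminator: logistic regression d(x) = sigmoid(d_w * x + d_b).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros((1, 1))

lr, batch = 0.05, 64
for step in range(3000):
    # 1) Discriminator update: push d(real) towards 1 and d(fake) towards 0.
    z = rng.normal(size=(batch, 1))
    for x, label in ((sample_real(batch), 1.0), (z @ g_w + g_b, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                          # d(binary cross-entropy)/d(logit)
        d_w -= lr * (x.T @ grad) / batch
        d_b -= lr * grad.mean(keepdims=True)

    # 2) Generator update: adjust weights so the discriminator labels fakes as real.
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    grad_logit = sigmoid(fake @ d_w + d_b) - 1.0  # generator "wants" the label 1
    grad_fake = grad_logit @ d_w.T                # backpropagate through the discriminator
    g_w -= lr * (z.T @ grad_fake) / batch
    g_b -= lr * grad_fake.mean(keepdims=True)

generated = rng.normal(size=(1000, 1)) @ g_w + g_b
print("mean of generated samples:", float(generated.mean()), "| target mean: 4.0")
```

The same adversarial structure scales up: swap the affine generator and logistic discriminator for deep networks and the 1-D samples for images or audio, and the training loop keeps the same create-and-evaluate shape.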
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote the ethical use of such technologies, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they have the potential to spread instability and make it difficult for the public to understand the true nature of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. And as it gets harder and harder to tell fake content from authentic content, it becomes simpler for hackers to trick people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, and Priyanka Chopra is the most recent victim of this unsettling trend. Priyanka's deepfake adopts a different strategy from other examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance the same but modifies her voice and replaces real interview quotes with made-up commercial phrases. The deceptive video shows Priyanka promoting a product and talking about her yearly salary, highlighting the worrying development of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A PIL was filed before the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for the misuse of deepfake and AI technology. He also proposed that websites be required to label information produced through AI as such, and that they be prevented from producing such content unlawfully. A division bench highlighted how complicated the problem is and suggested that the government (Centre) arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that new laws and guidelines would be introduced by the government to curb the dissemination of deepfake content, and he presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon we are going to have a fresh set of regulations for deepfakes. This might come by way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking abilities, carefully examining visual and auditory cues for discrepancies, making use of tools like reverse image searches, keeping up with the latest developments in deepfake trends, and rigorously fact-checking claims against reputable media sources. Important steps for improving resistance to deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to a constantly changing landscape, we can collectively manage the problems presented by deepfake technology effectively and mindfully.
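As a small illustration of the reverse-image-search and visual-comparison aids mentioned above, the sketch below computes a simple perceptual "average hash" for a frame from a suspect clip and compares it with a frame from known genuine footage; a small Hamming distance suggests the suspect clip reuses or only lightly alters the original frames. It assumes Pillow is installed, and the file names are hypothetical.

```python
# Perceptual average-hash comparison sketch; the file names are illustrative.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale to hash_size x hash_size greyscale and set one bit per pixel
    above the mean brightness; visually similar images give similar hashes."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    suspect = average_hash("suspect_frame.jpg")    # frame from the viral clip
    genuine = average_hash("original_frame.jpg")   # frame from verified footage
    # A small distance (e.g. <= 10 of 64 bits) suggests reused original footage.
    print("Hamming distance:", hamming_distance(suspect, genuine))
```

This is only a coarse aid; it helps show that a suspect clip was built on top of genuine footage, but it cannot by itself prove or disprove manipulation.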
Conclusion
Advanced, AI-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and ethical questions. Its misuse presents major difficulties, such as identity theft and the propagation of misleading information, as demonstrated by examples in India like the recent deepfake video involving Priyanka Chopra. To counter this danger, it is important to develop critical thinking abilities, use detection strategies such as analysing audio quality and facial expressions, and keep up with current trends. A thorough strategy that incorporates fact-checking, preventive tactics, and awareness-raising, together with the security policies, detection technologies, ethical AI development, and open cooperation outlined above, is necessary to protect against the negative effects of deepfake technology and to create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/