#FactCheck-AI-Generated Video Falsely Shows Samay Raina Making a Joke on Rekha
Executive Summary:
A video circulating widely on social media appears to show comedian Samay Raina casually cracking a lighthearted joke about actress Rekha in front of host Amitabh Bachchan, leaving him visibly unsettled, during the shoot of a Kaun Banega Crorepati (KBC) "Influencer Special" episode. The joke played on gossip and rumours of unspoken tension between the two Bollywood legends. Our research establishes that the video is artificially manipulated and does not reflect genuine content: the specific joke shown in the clip does not appear anywhere in the original KBC episode. This incident highlights the growing misuse of AI technology to create and spread misinformation, and underscores the need for greater public vigilance and awareness in verifying online information.

Claim:
The claim in the video suggests that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you has a Rekha (line)," playing on the Hindi word "rekha", which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is also available on the Sony SET India YouTube channel. Our analysis found no part of the episode in which comedian Samay Raina cracks a joke about actress Rekha. A further technical analysis using the Hive Moderation AI-detection tool found that the viral clip is AI-generated.
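For readers who want to replicate a first-pass check of this kind, the sketch below searches a locally saved transcript or subtitle file of the episode for keywords from the alleged joke. It is a minimal illustration only, not the exact workflow used here; the file name and keyword list are assumptions for demonstration.

```python
# Minimal sketch: scan an exported episode transcript/subtitle file for
# keywords from the alleged joke. "kbc_influencer_special_transcript.txt"
# and the keyword list are hypothetical placeholders.
from pathlib import Path

KEYWORDS = ["rekha", "circle", "line"]  # terms from the alleged punchline

def find_keyword_hits(transcript_path: str, keywords: list[str]) -> list[tuple[int, str]]:
    """Return (line_number, line_text) pairs containing any keyword."""
    hits = []
    text = Path(transcript_path).read_text(encoding="utf-8")
    for idx, line in enumerate(text.splitlines(), start=1):
        lowered = line.lower()
        if any(word in lowered for word in keywords):
            hits.append((idx, line.strip()))
    return hits

if __name__ == "__main__":
    matches = find_keyword_hits("kbc_influencer_special_transcript.txt", KEYWORDS)
    if matches:
        for line_no, text in matches:
            print(f"line {line_no}: {text}")
    else:
        print("No occurrence of the alleged joke found in the transcript.")
```

A keyword pass like this only narrows down where to look; the flagged segments still need to be watched to confirm context.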

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. Such content poses a serious threat of online manipulation, which makes it all the more important to fact-check any news against credible sources before sharing it. Promoting media literacy will be key to combating misinformation at a time when AI-generated content can so easily be misused.
- Claim: Fake AI Video: Samay Raina’s Rekha Joke Goes Viral
- Claimed On: X (Formerly known as Twitter)
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
A viral online video claims that Elon Musk, billionaire and founder of Tesla and SpaceX, is promoting a cryptocurrency giveaway. The CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Mr. Musk's facial expressions and voice; this conclusion was reached using relevant, reputed and well-verified AI tools and applications. The original footage has no connection to any cryptocurrency or to any apportioning of BTC or ETH to crypto-trading followers. The claim that Mr. Musk endorses such a giveaway is therefore false and misleading.

Claims:
A viral video falsely claims that Elon Musk, billionaire and founder of Tesla, is endorsing a crypto giveaway project for his crypto-enthusiast followers by giving away a portion of his Bitcoin and Ethereum holdings.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Mr. Elon Musk but none of them included any promotion of any cryptocurrency giveaway. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis confirmed with 99.0% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.
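As an illustration of the kind of first step involved in such checks, extracting keyframes from a clip so they can be submitted to reverse-image search or detection tools, here is a minimal sketch using OpenCV. The file name and sampling interval are assumptions, and this is not the research team's actual pipeline.

```python
# Minimal sketch: sample keyframes from a video at a fixed interval so they can
# be fed to reverse-image search or AI-content detection tools.
# "viral_clip.mp4" and the 2-second interval are illustrative assumptions.
import cv2

def extract_keyframes(video_path: str, every_seconds: float = 2.0, out_prefix: str = "frame") -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(int(fps * every_seconds), 1)
    saved = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"), "keyframes saved")
```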



Additionally, an extensive review of official statements and interviews with Mr. Musk revealed no mention of any such giveaway. No credible reports were found linking Elon Musk to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Elon Musk is promoting a crypto giveaway is a deepfake. Research using tools such as Google Lens and AI detection tools confirms that the video was manipulated using AI technology, and no official source contains any mention of such a giveaway. The CyberPeace Research Team therefore confirms that the claim is false and misleading.
- Claim: Elon Musk conducting a cryptocurrency giveaway, viral on social media.
- Claimed on: X (Formerly Twitter)
- Fact Check: False & Misleading

The Rise of Tech Use Amongst Children
Technology today has become an invaluable resource for children: a means to research issues, stay informed about events, gather data, and share views and experiences with others. It is no longer limited to certain age groups or professions; children use it for learning, entertainment, connecting with friends, online games and much more. With increased digital access, children are also exposed to online mis/disinformation and other forms of cybercrime, far more than their parents, caregivers and educators ever were, or are even today. Children are particularly vulnerable to mis/disinformation because their maturity and cognitive capacities are still evolving. This innocence is a major cause for concern when it comes to digital access, because children simply do not possess the discernment and caution required to navigate the Internet safely. They are active users of online resources, and their presence on social media is an important avenue for social, political and civic engagement, but young people and children often lack the cognitive and emotional capacity needed to distinguish between reliable and unreliable information. As a result, they can become targets of mis/disinformation. A UNICEF survey in 10 countries [1] reveals that up to three-quarters of children reported feeling unable to judge the veracity of the information they encounter online.
Social media has become a crucial part of children's lives, with children spending significant time on digital platforms such as YouTube, Facebook, Instagram and more. These platforms act as sources of news, educational content, entertainment, peer communication and more. They host a wide variety of content across a diverse range of subjects, and each platform's content and privacy policies differ. Despite age restrictions under the Children's Online Privacy Protection Act (COPPA) and other applicable laws, it is easy for children to falsify their date of birth or use a parent's account to access content that might not be age-appropriate.
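As an illustration of why a self-declared date of birth is such a weak gate, here is a minimal sketch of the kind of check a sign-up flow performs. The 13-year threshold follows COPPA; the function and field names are illustrative assumptions, not any platform's actual code.

```python
# Minimal sketch of a self-declared age gate at sign-up. COPPA treats users
# under 13 as children; everything here depends on the date of birth the user
# types in, which is exactly why such checks are easy to falsify.
from datetime import date

COPPA_MIN_AGE = 13  # COPPA threshold for children's data

def age_on(dob: date, today: date | None = None) -> int:
    """Whole years between dob and today."""
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def can_register(self_declared_dob: date) -> bool:
    """Allow account creation only if the declared age meets the threshold."""
    return age_on(self_declared_dob) >= COPPA_MIN_AGE

# The gate only sees what is typed in: an earlier (false) birth year passes it.
print(can_register(date(2018, 6, 1)))  # a truthful young child's entry -> False
print(can_register(date(1990, 6, 1)))  # the same child typing 1990 -> True
```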
The Impact of Misinformation on Children
In virtual settings, inaccurate information can come in the form of text, images or videos shared through traditional and social media channels. Online misinformation is a significant cause for concern, especially for children, because it can cause anxiety, damage self-esteem, shape beliefs, and skew their worldviews. It can distort children's understanding of reality, hinder their critical thinking skills, and cause confusion and cognitive dissonance; the growing infodemic can even amount to an overdose of information. Misinformation can also influence children's social interactions, leading to misunderstandings, conflicts and mistrust among peers. Children from low-literacy backgrounds are more susceptible to fabricated content, and mis/disinformation can exacerbate social divisions amongst peers and lead to unwanted behavioural patterns. Sometimes children themselves unwittingly spread or share misinformation. It is therefore important to educate and empower children to build cognitive defenses against online misinformation risks, promote media literacy skills, and equip them with the tools needed to critically evaluate online information.
CyberPeace Policy Wing Recommendations
- Role of Parents & Educators to Build Cognitive Defenses
One way parents shape their children's values, beliefs and actions is through modelling. Children observe how their parents use technology, handle challenging situations and make decisions. For example, parents who demonstrate honesty, encourage healthy use of social media, and show kindness and empathy are more likely to raise children who hold these qualities in high regard. Parents and educators therefore play an important role in shaping the minds and behaviours of their young charges, whether in offline or online settings, and must pay close attention to how online content consumption is affecting their child's cognitive skills. They should educate children about authentic sources of information: instructing them on the importance of relying on reliable, credible sources when researching any topic, and on using verification mechanisms to test suspected information. This may sound like a challenging ideal to meet, but the earlier we teach children prebunking and debunking strategies and the ability to differentiate fact from misleading information, the sooner we can help them build the cognitive defenses they need to use the Internet safely. It is therefore paramount that parents and educators encourage children to question the validity of information, verify sources, and critically analyse content. Developing these skills is essential for navigating the digital world effectively and making informed decisions.
- The Role of Tech & Social Media Companies to Fortify their Steps in Countering Misinformation
It is worth noting that all major tech and social media companies have policies in place to discourage the spread of harmful content and misinformation. Platforms have already initiated efforts to counter misinformation by introducing features such as adding context to content, labelling content, AI watermarks, and collaborations with civil society organisations. Building on this, social media platforms must prioritise both the design and the practical implementation of policies to counter misinformation, and these strategies can be further strengthened through government support and regulatory controls. It is recommended that platforms increase their efforts against the growing spread of online mis/disinformation and apply advanced techniques, including filtering, automated removal, detection and prevention, watermarking, stronger reporting mechanisms, providing context to suspected content, and promoting authenticated and reliable sources of information.
Social media platforms should consider developing children-specific help centres that host educational content in attractive, easy-to-understand formats so that children can learn about misinformation risks and tactics, how to spot red flags and how to increase their information literacy and protect themselves and their peers. Age-appropriate, attractive and simple content can go a long way towards fortifying young minds and making them aware and alert without creating fear.
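To make the "adding context to content" idea concrete, here is a minimal, purely illustrative sketch of how a platform might attach a context label to a post whose claim matches a fact-check database. The data structures, example URL and matching rule are assumptions, not any real platform's moderation system.

```python
# Illustrative sketch: attach a context label to posts that match entries in a
# fact-check database. The database contents and keyword matching are
# simplified assumptions, not any real platform's moderation logic.
from dataclasses import dataclass

@dataclass
class FactCheck:
    keywords: set[str]       # terms that identify the debunked claim
    verdict: str             # e.g. "False and Misleading"
    source_url: str          # link to the published fact-check

FACT_CHECKS = [
    FactCheck({"samay", "raina", "rekha", "kbc"},
              "False and Misleading",
              "https://example.org/factcheck/samay-raina-rekha"),  # hypothetical URL
]

def context_label(post_text: str) -> str | None:
    """Return a context label if the post matches a known fact-check."""
    words = set(post_text.lower().split())
    for check in FACT_CHECKS:
        # Simple rule: most of the claim's keywords appear in the post.
        if len(check.keywords & words) >= len(check.keywords) - 1:
            return (f"Context: independent fact-checkers rated this claim "
                    f"'{check.verdict}'. See {check.source_url}")
    return None

print(context_label("Samay Raina mocks Rekha on KBC, Big B stunned!"))
```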
- Laws and Regulations
It is important that the government and the social media platforms work in sync to counteract misinformation. The government must consult with the concerned platforms and enact rules and regulations which strengthen the platform’s age verification mechanisms at the sign up/ account creation stage whilst also respecting user privacy. Content moderation, removal of harmful content, and strengthening reporting mechanisms all are important factors which must be prioritised at both the regulatory level and the platform operational level. Additionally, in order to promote healthy and responsible use of technology by children, the government should collaborate with other institutions to design information literacy programs at the school level. The government must make it a key priority to work with civil society organisations and expert groups that run programs to fight misinformation and co-create a safe cyberspace for everyone, including children.
- Expert Organisations and Civil Societies
Cybersecurity experts and civil society organisations possess the unique blend of large scale impact potential and technical expertise. We have the ability to educate and empower huge numbers, along with the skills and policy acumen needed to be able to not just make people aware of the problem but also teach them how to solve it for themselves. True, sustainable solutions to any social concern only come about when capacity-building and empowerment are at the heart of the initiative. Programs that prioritise resilience, teach Prebunking and Debunking and are able to understand the unique concerns, needs and abilities of children and design solutions accordingly are the best suited to implement the administration’s mission to create a safe digital society.
Final Words
Online misinformation significantly impacts child development and can hinder their cognitive abilities, color their viewpoints, and cause confusion and mistrust. It is important that children are taught not just how to use technology but how to use it responsibly and positively. This education can begin at a very young age and parents, guardians and educators can connect with CyberPeace and other similar initiatives on how to define age-appropriate learning milestones. Together, we can not only empower children to be safe today, but also help them develop into netizens who make the world even safer for others tomorrow.
References:
- [1] Digital misinformation / disinformation and children
- [2] Children's Privacy | Federal Trade Commission

Biological data includes biometric information such as fingerprints, facial recognition, DNA sequences, and behavioral traits. Genetic data can be extracted from an individual’s remains long after their death and can continue to identify both that individual and an expanding pool of their living relatives. This persistent identification can significantly reduce privacy over time, revealing genetic characteristics and familial relationships across successive generations.
Key Developments in Privacy Protection for Biological Data:
Legal frameworks for personal data protection and privacy have generally been drafted broadly, and can prove to be poor fits for 'biometric data' specifically and its safety. Some examples are discussed below, followed by a brief illustrative sketch.
- EU and UK- GDPR
Within biological data, the General Data Protection Regulation (GDPR) focuses primarily on biometrics, while recognising the technology's immense potential. The EU defines "personal data" under the GDPR as including any identifiable information about a particular person, for example names, identification numbers, location data, and other structured and unstructured data. In addition, the GDPR imposes more specific requirements on processing sensitive or "special categories of personal data", which include genetic and biometric data. For biometric security to work well, citizens' rights must be protected appropriately, and the data collected by private and public bodies must be managed carefully and sensibly.
- USA
The California Consumer Privacy Act (CCPA) grants Californian consumers the right to protect their personal information and biometric data, including the right to disclosure or access, the right to be forgotten, and data portability. Consumers are also given the option to opt out of the sale of their personal information. Additionally, it provides the right to take legal action, with penalties imposed for violations.
The California Privacy Rights Act was passed on November 3, 2020, and took effect on January 1, 2023, with a lookback period starting January 1, 2022. It introduces the category of "sensitive personal information", which includes biometric data and other sensitive details.
Virginia's Consumer Data Protection Act, effective from January 1, 2023, designates genetic and biometric data as sensitive data that must be protected.
Illinois' Biometric Information Privacy Act is recognised as the most robust biometric privacy law in the United States. The significance of the Rosenbach v. Six Flags case lies in the Illinois Supreme Court's ruling that a plaintiff does not need to demonstrate additional harm to impose penalties on a BIPA violator. A mere loss of statutory biometric privacy rights is sufficient to warrant penalties.
- India
As per Rule 2(1)(b) of the SPDI Rules, 'Sensitive Personal Data or Information' includes biometric data. The term 'biometric data' has not been defined in the Digital Personal Data Protection Act, 2023; the DPDP Act's protections, including notice and consent requirements and penalties, apply only once such data is digitised.
The Biotech-PRIDE (Promotion of Research and Innovation through Data Exchange) Guidelines of 2021 are aimed at fostering an exchange of information and thereby enhancing research and innovation among research groups nationwide. These guidelines do not deal with the generation of biological data but provide a mechanism to share and exchange information and knowledge generated in accordance with the existing laws, rules, regulations and norms of the country. They are intended to ensure the benefits of data sharing, maximise use, avoid duplication, maximise integration, clarify ownership of information, and support better decision-making and equity of access.
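As a purely illustrative sketch of the common thread across these regimes, a distinct "sensitive" or "special" category that triggers stricter handling, the snippet below flags fields in a user record that fall into that category. The field list and category mapping are assumptions for illustration, not a legal determination.

```python
# Illustrative sketch: flag which fields in a user record fall into the
# stricter "special category" / "sensitive personal data" bucket that laws
# like the GDPR, CPRA and SPDI Rules carve out. The mapping below is a
# simplified assumption, not legal advice.
SPECIAL_CATEGORY_FIELDS = {
    "fingerprint_template",
    "face_embedding",
    "dna_sequence",
    "health_record",
}
ORDINARY_PERSONAL_FIELDS = {"name", "email", "location"}

def partition_record(record: dict) -> tuple[dict, dict]:
    """Split a record into (special_category, ordinary) parts."""
    special = {k: v for k, v in record.items() if k in SPECIAL_CATEGORY_FIELDS}
    ordinary = {k: v for k, v in record.items() if k in ORDINARY_PERSONAL_FIELDS}
    return special, ordinary

special, ordinary = partition_record({
    "name": "A. User",
    "email": "a.user@example.com",
    "face_embedding": [0.12, -0.87, 0.44],
})
print("needs stricter handling:", list(special))   # ['face_embedding']
print("ordinary personal data:", list(ordinary))   # ['name', 'email']
```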
How is Biological Data vulnerable?
- Biological data is often immutable, meaning it cannot be altered once compromised. Unlike passwords or other credentials that can be changed, compromised biometric data poses a permanent risk, making its protection paramount (see the sketch after this list).
- The use of facial recognition technology by law enforcement agencies and the creation of databases by the same also highlights the urgent need for stringent privacy protections.
- Advances in technology, particularly AI and ML, make it easier to collect, analyse and manipulate biometric data. This in turn is enabling new forms of identity theft and fraud, making enhanced security measures and ethical safeguards necessary to prevent abuse.
- Cross-border data transfers raise serious privacy concerns, especially as countries have varying levels and standards of data protection.
- Wearable health-related biometric devices often lack the required privacy protections, leaving the data they collect vulnerable to misuse and breaches.
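Because biometric identifiers cannot be re-issued once leaked, a basic hygiene step is to avoid storing raw templates. The minimal sketch below encrypts a template before storage using the `cryptography` package's Fernet recipe; it is a hedged illustration only, since production systems rely on dedicated approaches such as cancelable biometrics or secure enclaves.

```python
# Minimal sketch: encrypt a biometric template before it is written to storage,
# so a database leak does not directly expose the raw identifier. This is an
# illustration only; real deployments use cancelable biometrics, secure
# enclaves or dedicated template-protection schemes.
from cryptography.fernet import Fernet

def make_cipher() -> Fernet:
    # In practice the key would live in a key-management service, not in code.
    return Fernet(Fernet.generate_key())

def protect_template(cipher: Fernet, template: bytes) -> bytes:
    """Return an encrypted blob safe to persist instead of the raw template."""
    return cipher.encrypt(template)

def recover_template(cipher: Fernet, blob: bytes) -> bytes:
    """Decrypt only at the moment of matching, inside a trusted boundary."""
    return cipher.decrypt(blob)

cipher = make_cipher()
raw = b"\x01\x02\x03\x04"          # placeholder for a fingerprint/face template
stored = protect_template(cipher, raw)
assert recover_template(cipher, stored) == raw
print(len(stored), "bytes stored instead of the raw template")
```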
Future Outlook
With the growing use of biological data, there is likely to be increased pressure on regulatory bodies to strengthen privacy protections. This calls for enhanced security measures to protect users' identities and prevent unauthorised access. Future developments should include strict consent requirements and stronger data security measures, especially for wearable devices. A new legal framework specifically designed to address the challenges posed by biometric data would be welcome. Biological data protection is an emerging need in the digital environment we live in today.
References
- https://www.cnbc.com/2024/08/17/new-privacy-battle-is-underway-as-tech-gadgets-capture-our-brain-waves.html
- https://www.snrlaw.in/sense-and-sensitivity-sensitive-information-under-indias-new-data-regime/
- https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/biometrics/biometric-data
- https://www.business-standard.com/article/economy-policy/govt-releases-guideline-to-provide-framework-for-sharing-of-biological-data-121073001467_1.html