#FactCheck-AI-Generated Video Falsely Shows Samay Raina Making a Joke on Rekha
Executive Summary:
A viral video circulating on social media that appears to be deliberately misleading and manipulative is shown to have been done by comedian Samay Raina casually making a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan which left him visibly unsettled while shooting for an episode of Kaun Banega Crorepati (KBC) Influencer Special. The joke pointed to the gossip and rumors of unspoken tensions between the two Bollywood Legends. Our research has ruled out that the video is artificially manipulated and reflects a non genuine content. However, the specific joke in the video does not appear in the original KBC episode. This incident highlights the growing misuse of AI technology in creating and spreading misinformation, emphasizing the need for increased public vigilance and awareness in verifying online information.

Claim:
The claim in the video suggests that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you and circle have Rekha (line)," playing on the Hindi word "rekha," which means 'line'.ervicing routes between Amritsar, Chandigarh, Delhi, and Jaipur. This assertion is accompanied by images of a futuristic aircraft, implying that such technology is currently being used to transport commercial passengers.

Fact Check:
To check the genuineness of the claim, the whole Influencer Special episode of Kaun Banega Crorepati (KBC) which can also be found on the Sony Set India YouTube channel was carefully reviewed. Our analysis proved that no part of the episode had comedian Samay Raina cracking a joke on actress Rekha. The technical analysis using Hive moderator further found that the viral clip is AI-made.

Conclusion:
A viral video on the Internet that shows Samay Raina making a joke about Rekha during KBC was released and completely AI-generated and false. This poses a serious threat to manipulation online and that makes it all the more important to place a fact-check for any news from credible sources before putting it out. Promoting media literacy is going to be key to combating misinformation at this time, with the danger of misuse of AI-generated content.
- Claim: Fake AI Video: Samay Raina’s Rekha Joke Goes Viral
- Claimed On: X (Formally known as Twitter)
- Fact Check: False and Misleading
Related Blogs

Introduction
In the past few decades, technology has rapidly advanced, significantly impacting various aspects of life. Today, we live in a world shaped by technology, which continues to influence human progress and culture. While technology offers many benefits, it also presents certain challenges. It has increased dependence on machines, reduced physical activity, and encouraged more sedentary lifestyles. The excessive use of gadgets has contributed to social isolation. Different age groups experience the negative aspects of the digital world in distinct ways. For example, older adults often face difficulties with digital literacy and accessing information. This makes them more vulnerable to cyber fraud. A major concern is that many older individuals may not be familiar with identifying authentic versus fraudulent online transactions. The consequences of such cybercrimes go beyond financial loss. Victims may also experience emotional distress, reputational harm, and a loss of trust in digital platforms.
Why Senior Citizens Are A Vulnerable Target
Digital exploitation involves a variety of influencing tactics, such as coercion, undue influence, manipulation, and frequently some sort of deception, which makes senior citizens easy targets for scammers. Senior citizens have been largely neglected in research on this burgeoning type of digital crime. Many of our parents and grandparents grew up in an era when politeness and trust were very common, making it difficult for them to say “no” or recognise when someone was attempting to scam them. Seniors who struggle with financial stability may be more likely to fall for scams promising financial relief or security. They might encounter obstacles in learning to use new technologies, mainly due to unfamiliarity. It is important to note that these factors do not make seniors weak or incapable. Rather, it is the responsibility of the community to recognise and address the unique vulnerabilities of our senior population and work to prevent them from falling victim to scams.
Senior citizens are the most susceptible to social engineering attacks. Scammers may impersonate people, such as family members in distress, government officials, and deceive seniors into sending money or sharing personal information. Some of them are:
- The grandparent scam
- Tech support scam
- Government impersonation scams
- Romance scams
- Digital arrest
Protecting Senior Citizens from Digital Scams
As a society, we must focus on educating seniors about common cyber fraud techniques such as impersonation of family members or government officials, the use of fake emergencies, or offers that seem too good to be true. It is important to guide them on how to verify suspicious calls and emails, caution them against sharing personal information online, and use real-life examples to enhance their understanding.
Larger organisations and NGOs can play a key role in protecting senior citizens from digital scams by conducting fraud awareness training, engaging in one-on-one conversations, inviting seniors to share their experiences through podcasts, and organising seminars and workshops specifically for individuals aged 60 and above.
Safety Tips
In today's digital age, safeguarding oneself from cyber threats is crucial for people of all ages. Here are some essential steps everyone should take at a personal level to remain cyber secure:
- Ensuring that software and operating systems are regularly updated allows users to benefit from the latest security fixes, reducing their vulnerability to cyber threats.
- Avoiding the sharing of personal information online is also essential. Monitoring bank statements is equally important, as it helps in quickly identifying signs of potential cybercrime. Reviewing financial transactions and reporting any unusual activity to the bank can assist in detecting and preventing fraud.
- If suspicious activity is suspected, it is advisable to contact the company directly using a different phone line. This is because cybercriminals can sometimes keep the original line open, leading individuals to believe they are speaking with a legitimate representative. In such cases, attackers may impersonate trusted organisations to deceive users and gain sensitive information.
- If an individual becomes a victim of cybercrime, they should take immediate action to protect their personal information and seek professional guidance.
- Stay calm and respond swiftly and wisely. Begin by collecting and preserving all evidence—this includes screenshots, suspicious messages, emails, or any unusual activity. Report the incident immediately to the police or through an official platform like www.cybercrime.gov.in and the helpline number 1930.
- If financial information is compromised, the affected individual must alert their bank or financial institution without delay to secure their accounts. They should also update passwords and implement two-factor authentication as additional safeguards.
Conclusion: Collective Action for Cyber Dignity and Inclusion
Elder abuse in the digital age is an invisible crisis. It’s time we bring it into the spotlight and confront it with education, empathy, and collective action. Safeguarding senior citizens from cybercrime necessitates a comprehensive approach that combines education, vigilance, and technological safeguards. By fostering awareness and providing the necessary tools and support, we can empower senior citizens to navigate the digital world safely and confidently. Let us stand together to support these initiatives, to be the guardians our elders deserve, and to ensure that the digital world remains a place of opportunity, not exploitation.
REFERENCES -
- https://portal.ct.gov/ag/consumer-issues/hot-scams/the-grandparents-scam
- https://www.fbi.gov/how-we-can-help-you/scams-and-safety/common-frauds-and-scams/tech-support-scams
- https://consumer.ftc.gov/articles/how-avoid-government-impersonation-scam
- https://www.jpmorgan.com/insights/fraud/fraud-mitigation/helping-your-elderly-and-vulnerable-loved-ones-avoid-the-scammers
- https://www.fbi.gov/how-we-can-help-you/scams-and-safety/common-frauds-and-scams/romance-scams
- https://www.jpmorgan.com/insights/fraud/fraud-mitigation/helping-your-elderly-and-vulnerable-loved-ones-avoid-the-scammers

Introduction
Google is committed to supporting the upcoming elections in India by providing high-quality information to voters, safeguarding platforms from abuse, and helping people navigate AI-generated content. Google will connect voters to helpful information through enhanced features, collaborating with the Election Commission of India (ECI) to provide voting information in both English and Hindi. Emphasis is also placed on showcasing authoritative information on YouTube. YouTube will highlight authoritative news sources and offer context on topics prone to misinformation. YouTube also appends information panels directing viewers to the Election Commission of India's FAQs. This support will help millions of eligible voters navigate the electoral process and ensure a fair and transparent election process.
Key Highlights of Google’s Approach
The step taken by Google will support the democratic process during the upcoming General Election in India. The initiative focuses on three main pillars: disseminating information, tackling misinformation, and navigating AI-generated content. Google is enhancing its Search and YouTube features to provide essential election-related information, including voter registration, polling guidelines, and candidate profiles. Google is also addressing the challenges posed by AI-generated content by offering clarity on content origins, particularly for election-related ads and YouTube videos. Google has strict policies and restrictions regarding who can run election-related advertising on its platforms, including identity verification, pre-certificates, and in-ad disclosures. Additionally, Google is utilising tools and policies like Ads disclosures, content labels on YouTube, and digital watermarking to help users to identify AI-generated content.
Google has joined hands with ECI
The tech giant Google is partnering with the Election Commission of India (ECI) to provide voting information on Google Search in both English and Hindi. YouTube will feature election information panels, including candidate profiles and registration guidelines, ensuring users have access to authoritative sources. Google's recommendation system will display content from trusted publishers on election-related topics. Protecting the integrity of elections is a top priority, and the company is employing advanced AI models and machine learning techniques to identify and remove content that violates its policies at scale. A dedicated team of local experts across major Indian languages is assigned to provide relevant context and ensure swift action against emerging threats. Google is also tightening up who can advertise on its platforms, requiring advertisers to undergo an identity verification process and obtain a pre-certificate from the ECI or authorised entities for each election ad they wish to run.
Tackling Electoral Misinformation
Google is enhancing its platform security measures to prevent misinformation. It is using AI models and human expertise to identify and address policy violations, while stringent verification processes and disclosures are being implemented to maintain user trust.
Collaborations to promote reliable information
Google is supporting the Shakti, India Election Fact-Checking Collective, a consortium of news publishers and fact checkers to detect online misinformation, including deepfakes. The project will provide news entities and fact checkers with essential training in fact-checking methodologies, deepfake detection, and the latest Google tools to streamline verification processes, as stated in Google’s blog post.
Conclusion
Google has taken proactive steps to ensure a secure electoral process during the upcoming general elections in India. These include preventing the misuse of false information by helping voters navigate AI-generated content and safeguarding its platforms from abuse. Google India has built faster and more adaptable enforcement systems with recent advances in its Large Language Models (LLMs), enabling the company to remain nimble and take action quickly when new threats emerge. Google is dedicated to collaborating with government, industry, and civil society to provide voters with reliable and trustworthy online information. Google is implementing a comprehensive strategy to empower voters, safeguard its platforms, and combat misinformation in India's upcoming general elections. Google’s step is commendable and aims to ensure a secure electoral process, empowering millions of citizens to exercise their democratic rights.
References:
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://inc42.com/buzz/following-gemini-row-google-strengthens-checks-on-ai-generated-content-before-elections/#:~:text=In%20an%20effort%20to%20ensure,safeguarding%20its%20platforms%20from%20abuse
- https://www.indiatvnews.com/technology/news/google-introduces-enhanced-tools-for-supporting-elections-in-india-2024-03-12-921096
- https://economictimes.indiatimes.com/news/elections/lok-sabha/india/google-ties-up-with-eci-to-prevent-spread-of-false-information/articleshow/108431021.cms?from=mdr
- https://www.businesstoday.in/technology/news/story/google-joins-hands-with-election-commission-of-india-to-help-voters-via-search-youtube-421112-2024-03-12
- https://indianexpress.com/article/technology/tech-news-technology/google-2024-general-elections-support-9209588/

Introduction
Artificial Intelligence (AI) is fast transforming our future in the digital world, transforming healthcare, finance, education, and cybersecurity. But alongside this technology, bad actors are also weaponising it. More and more, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and speed up social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of yesteryears, these AI-based messages are tailored according to the victim's location, language, and professional background, increasing the attack success rate considerably. Example: It has recently been reported by OpenAI and Microsoft that Russian and North Korean APTs have employed LLMs to create customised phishing baits and malware obfuscation notes.
- Malware Obfuscation and Script Generation- Big Language Models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While the majority of AI instruments contain safety mechanisms to guard against abuse, threat actors often exploit "jailbreaking" to evade these protections. Once such constraints are lifted, the model can be utilised to develop polymorphic malware that alters its code composition to avoid detection. It can also be used to obfuscate PowerShell or Python scripts to render them difficult for conventional antivirus software to identify. Also, LLMs have been employed to propose techniques for backdoor installation, additional facilitating stealthy access to hijacked systems.
- Disinformation and Narrative Manipulation
State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially on election, protest, and geopolitical dispute days. With LLMs' assistance, these actors can create massive amounts of ersatz news stories, deepfake interview transcripts, imitation social media posts, and bogus public remarks on online forums and petitions. The localisation of content makes this strategy especially perilous, as messages are written with cultural and linguistic specificity, making them credible and more difficult to detect. The ultimate aim is to seed societal unrest, manipulate public sentiments, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report called "Disrupting Malicious Uses of AI" and the “Staying ahead of threat actors in the age of AI”, which outlined how state-affiliated actors had been testing and misusing its language models for malicious intent. The report named few advanced persistent threat (APT) groups, each attributed to particular nation-states. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and expanding operations. Significantly, the report mentioned that the tools were not utilized to produce malware, but rather to support preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the largest worries is how malicious users can "jailbreak" AI models, misleading them into generating banned content using adversarial input. Some methods employed are:
- Roleplay: Simulating the AI being a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Proposing sensitive inquiries in less frequently moderated languages
- Prompt Injection: Lacing dangerous requests within innocent-appearing questions
These methods have enabled attackers to bypass moderation tools, transforming otherwise moral tools into cybercrime instruments.
Conclusion
As AI generations evolve and become more accessible, its application by state-sponsored cyber actors is unprecedentedly threatening global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a multiplier of adversarial campaigns. AI tools such as ChatGPT, which were created for benevolent purposes, can be targeted to multiply phishing, propaganda, and social engineering attacks. The cross-border governance, ethical development practices, and cyber hygiene practices need to be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf