#FactCheck - AI-Generated Video Falsely Shows Samay Raina Making a Joke on Rekha
Executive Summary:
A viral video circulating on social media appears to show comedian Samay Raina casually making a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan, leaving him visibly unsettled, while shooting an episode of the Kaun Banega Crorepati (KBC) Influencer Special. The joke alluded to long-standing gossip and rumours of unspoken tension between the two Bollywood legends. Our research found that the video is artificially manipulated and does not reflect genuine content; the specific joke shown in the clip does not appear in the original KBC episode. This incident highlights the growing misuse of AI technology in creating and spreading misinformation, emphasizing the need for increased public vigilance and awareness in verifying online information.

Claim:
The claim in the video suggests that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you has a Rekha (line)," playing on the Hindi word "rekha," which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is available on the Sony SET India YouTube channel. Our analysis confirmed that at no point in the episode does comedian Samay Raina crack a joke about actress Rekha. Technical analysis using Hive Moderation further found that the viral clip is AI-generated.

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. It demonstrates how easily manipulated content can spread online, which makes it all the more important to fact-check any news against credible sources before sharing it. With the danger of AI-generated content being misused, promoting media literacy is going to be key to combating misinformation.
- Claim: Fake AI Video: Samay Raina’s Rekha Joke Goes Viral
- Claimed On: X (Formerly known as Twitter)
- Fact Check: False and Misleading
Introduction
Artificial Intelligence (AI)-driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern warfare while also reshaping many other spheres of a technology-driven world. Nations often prioritise defence for significant investment, supporting its growth and modernisation, and AI has become a prime area of investment and development in the pursuit of technological superiority for defence forces. India’s focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is “autonomy”: the ability to perform their functions in the absence of direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Many states, international organisations, civil society groups, and distinguished figures have raised ethical concerns as the most prominent of these issues.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes at simulating human emotions like compassion, empathy, or altruism; the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is using a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system. This would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a “human-in-the-loop” system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples would include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
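The distinction can be made concrete with a minimal sketch. The following Python snippet is purely illustrative and rests on assumptions: the function names, the veto window, and the stubbed operator decisions are all made up, and it models only the oversight pattern, not any real weapon system.

```python
import time

def human_in_the_loop(candidate_target, operator_approves):
    """In-the-loop: the system may only engage a target the human
    operator has explicitly approved beforehand."""
    if operator_approves(candidate_target):
        return f"engage {candidate_target}"   # action requires prior human approval
    return "hold fire"

def human_on_the_loop(candidate_target, operator_vetoes, window_seconds=5):
    """On-the-loop: the system selects and prepares to engage on its own,
    but a supervising human can override within a limited time window."""
    deadline = time.time() + window_seconds
    while time.time() < deadline:
        if operator_vetoes(candidate_target):  # human intervenes in time
            return "engagement aborted by operator"
        time.sleep(0.1)
    return f"engage {candidate_target}"        # no veto received before the deadline

# Illustrative use with stubbed operator decisions
print(human_in_the_loop("target A", operator_approves=lambda t: False))
print(human_on_the_loop("target B", operator_vetoes=lambda t: True, window_seconds=1))
```

The key difference is where the human decision sits: before the action is taken (in the loop), or as a time-limited veto over an action already under way (on the loop).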
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue because of the ethical, legal, and security concerns it raises. Several efforts are under way at the international level to regulate such weapons. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international law, such as the Geneva Conventions, offers some legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare also grows, with lethal decisions potentially being made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law. Setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India’s defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) to enable AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants, while encouraging innovation within a framework prioritising human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.

Introduction:
Digital Forensics, as the term suggests, is "the process of collecting, preserving, identifying, analyzing, and presenting digital evidence in a way that the evidence is legally admissible."
It is detective work in the digital realm, where investigators use specialised methods to find deleted files and reveal destroyed messages.
Digital Forensics is an important field because, with the advancement of technology and the widespread use of digital devices, its role in preserving evidence and protecting our data from cybercrime is becoming more and more crucial.
Digital Forensics is used in various situations such as:
- Criminal Investigations: Digital Forensics enables investigators to trace cyber threat actors, identify victims of the crime, and gather the evidence needed to prosecute criminals.
- Legal issues: Digital Forensics can aid in legal matters involving intellectual property infringement, data breaches, and similar disputes.
Types of Digital Data in Digital Forensics:
1. Persistent (non-volatile) data
- This type of data remains intact when the computer is turned off.
- e.g. hard disks, flash drives
2. Volatile data
- This type of data is lost when the computer is turned off.
- e.g. temporary files, unsaved open files
The Digital Forensics Process
The process typically involves the following steps:

- Evidence Acquisition: This step involves making an exact copy (forensic image) of storage devices such as hard drives, SSDs, or mobile devices. The goal is to preserve the original data without changing it (see the hash-verification sketch after this list).
- Data Recovery: After acquiring the forensic image, analysts use specialised tools to recover deleted, hidden, or encrypted data from it.
- Timeline Analysis: Analysts use timestamp information from files and system logs to reconstruct the timeline of activities on a device. This helps in understanding how an incident played out and who was involved in it (see the timeline sketch after this list).
- Malware Analysis: In cases involving security breaches, analysts examine malware samples to understand their behavior, impact, and origins. Various reverse-engineering techniques are used to analyze the malicious code.
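As a minimal illustration of the acquisition step above, the Python sketch below hashes both the source medium and the acquired image and compares the digests. The device path, image path, and the choice of SHA-256 are assumptions made for the example, not a prescribed workflow.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Hash a file in chunks so large disk images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the original evidence drive (mounted read-only) and its forensic image.
source_hash = sha256_of("/dev/sdb")            # original media
image_hash = sha256_of("evidence/disk01.img")  # acquired forensic image

# Matching hashes indicate the image is a bit-for-bit copy of the source.
print("verified" if source_hash == image_hash else "MISMATCH - image not identical")
```

In practice, acquisition tools record these hashes in the case documentation so the image's integrity can be re-verified at any later stage.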
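Similarly, for the timeline-analysis step, the sketch below orders files under a directory by their last-modified timestamps. The mount point is a placeholder, and a real investigation would combine file metadata with logs, registry entries, browser history, and other artefacts.

```python
import os
from datetime import datetime, timezone

def file_timeline(root: str):
    """Collect (timestamp, path) pairs for every file under `root`, oldest first."""
    events = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime  # last-modified time (epoch seconds)
            events.append((datetime.fromtimestamp(mtime, tz=timezone.utc), path))
    return sorted(events)

# Hypothetical mount point of the (read-only) forensic image.
for when, path in file_timeline("/mnt/evidence"):
    print(when.isoformat(), path)
```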
Types of tools:
- Faraday bags: Often the first step in digital evidence capture. These bags are made of conductive materials that shield electronic devices from external signals such as Wi-Fi, Bluetooth, and mobile cellular networks, protecting the digital evidence from remote tampering.
- Data recovery: Software used to recover deleted files and their associated data, e.g. Magnet Forensics, AccessData, X-Ways.
- Disk imaging and analysis: Software used to replicate data storage devices and then perform further analysis on the copies, e.g. FTK Imager, Autopsy, and The Sleuth Kit.
- File carving tools: Used to extract files embedded in an acquired image by scanning for known file signatures, e.g. Foremost, Binwalk, Scalpel (a simple carving sketch follows this list).
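To illustrate the basic idea behind file carving, the sketch below scans a raw image for JPEG header and footer signatures and writes out each candidate span. This is a deliberately naive approach; the image path and output naming are assumptions for the example and do not reflect how Foremost, Binwalk, or Scalpel actually work internally.

```python
# Naive JPEG carver: look for the JPEG start-of-image and end-of-image markers
# in a raw disk image and save each candidate span as a separate file.
JPEG_SOI = b"\xff\xd8\xff"   # start-of-image signature
JPEG_EOI = b"\xff\xd9"       # end-of-image signature

def carve_jpegs(image_path: str, out_prefix: str = "carved") -> int:
    with open(image_path, "rb") as f:
        data = f.read()                      # fine for small demo images only
    count, pos = 0, 0
    while (start := data.find(JPEG_SOI, pos)) != -1:
        end = data.find(JPEG_EOI, start)
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as out:
            out.write(data[start:end + 2])   # include the EOI marker
        count, pos = count + 1, end + 2
    return count

# Hypothetical image file produced during acquisition.
print(carve_jpegs("evidence/disk01.img"), "candidate JPEGs carved")
```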
Some common tools:
- EnCase: It is a tool for acquiring, analyzing, and reporting digital evidence.
- Autopsy: It is an open-source platform generally used for analyzing hard drives and smartphones.
- Volatility: It is a framework generally used for memory forensics, to analyze volatile memory dumps and extract information from them.
- Sleuth Kit: It is a package of CLI tools for investigating disk images and their associated file systems.
- Cellebrite UFED: It is a tool generally used for mobile forensics.
Challenges in the Field:
- Encryption: Encryption poses a major challenge, as encrypted data requires specialized techniques and tools for decryption.
- Anti-Forensic Techniques: Criminals often use anti-forensic methods to cover their tracks, making it challenging to obtain digital evidence.
- Data Volume and Complexity: The large volume of digital data and the diversity of various devices create challenges in evidence collection and analysis.
The Future of Digital Forensics: A Perspective
With the growth of technology and the vast presence of digital data, the challenges and opportunities in Digital Forensics keep evolving. With the rise of cloud storage, mobile devices, and the Internet of Things (IoT), investigators will have to develop new strategies and remain ready to adapt and learn as the technology landscape keeps changing.
Conclusion:
Digital Forensics is an essential field for ensuring fairness in the digital era. By collecting, inspecting, and analyzing digital data, Digital Forensics investigators can lawfully support the prosecution of criminals and the settlement of civil disputes. With technology progressing continuously, the discipline of Digital Forensics will only become more pivotal to investigations in the years to come.

Introduction
Betting has long been associated with sporting activities and has found a growing presence in online gaming and esports globally. As the esports industry continues to expand, Statista has projected that it will reach a market value of $5.9 billion by 2029. Associated betting markets have also seen significant growth, accounting for an estimated $2.5 billion globally in 2024. While such engagement avenues are popular among international audiences, they also bring attention to concerns around regulation, integrity, and user protection. As esports builds its credibility and reach, especially among younger demographics, these aspects become increasingly important to address in policy and practice.
What Does Esports Betting Involve?
Much like traditional sports, esports engagement in some regions includes the practice of wagering on teams, players, or match outcomes. But it is inherently more complex: the accurate valuation of odds in online gaming and esports is complicated by frequently updated game titles, changing team rosters, and shifting game mechanics (the 'meta', or most effective strategies of the moment). Bets can be placed using real money, virtual in-game items such as skins, or, increasingly, cryptocurrency.
Esports and Wagering: Emerging Issues and Implications
- Legal Grey Areas: While countries like South Korea and some US states have dedicated regulations for esports betting and licensed bookmaking, most jurisdictions do not. This creates legal grey areas that allow betting service providers to operate in unregulated markets, increasing the risk of fraud, money laundering, and exploitation of bettors in those regions.
- The Skill v/s Chance Dilemma: Most gambling laws across the world regulate betting based on the distinction between ‘games of skill’ and ‘games of chance’. Betting on the latter is typically illegal, since winning depends on chance. But the definitions of ‘skill’ and ‘chance’ may vary by jurisdiction. Also, esports betting often blurs into gambling. Outcomes may depend on player skill, but in-game economies like skin betting and unpredictable gameplay introduce elements of chance, complicating regulation and making enforcement difficult.
- Underage Gambling and Addiction Risks: Players are often minors and are exposed to the gambling ecosystem due to gamified betting through reward systems like loot boxes. These often mimic the mechanics of betting, normalising gambling behaviours among young users before they fully understand the risks. This can lead to the development of addictive behaviours.
- Match-Fixing and Loss of Integrity: Esports are particularly susceptible to match-fixing because of weak regulation, financial pressures, and the anonymity of online betting. Instances like the Dota 2 Southeast Asia Scandals (2023) and Valorant match-fixing in North America (2021) can jeopardise audience trust and sponsorships. This affects the trustworthiness of minor tournaments, where talent is discovered.
- Cybersecurity and Data Risks: Esports betting apps collect sensitive user data, making them an attractive target for cybercrime. Bettors are susceptible to identity theft, financial fraud, and data breaches, especially on unlicensed platforms.
Way Forward
To strengthen trust, ensure user safety, and protect privacy within the esports ecosystem, responsible management of betting practices can be achieved through targeted interventions focused on:
- National-Level Regulations: India, for example, has a large online gaming and esports market; it will need to create a regulatory authority along the lines of the UK’s Gambling Commission and update its gambling laws to protect consumers.
- Protection of Minors: Setting guardrails such as age verification, responsible advertising, anti-fraud mechanisms, self-exclusion tools, and spending caps can help to keep a check on gambling by minors.
- Harmonizing Global Standards: Since esports is inherently global, aligning core regulatory principles across jurisdictions (such as through multi-country agreements or voluntary industry codes of conduct) can help create consistency while avoiding overregulation.
- Co-Regulation: Governments, esports organisers, betting platforms, and player associations should work closely to design effective, well-informed policies. This can help uphold the interests of all stakeholders in the industry.
Conclusion
Betting in esports is inevitable. But the industry faces a double dilemma: overregulation on the one hand, and unchecked gambling on the other. Both can be detrimental to its growth. This is why industry actors like policymakers, platforms, and organisers need to work together to harmonise legal inconsistencies, protect vulnerable users, and invest in data security. Forming industry-wide ethics boards, promoting regional regulatory dialogue, and instituting transparency measures for betting operators would be steps in this direction, ensuring that esports evolves into a mature, trusted global industry.