#FactCheck - Viral Video of US President Biden Dozing Off during Television Interview is Digitally Manipulated and Inauthentic
Executive Summary:
The claim that a video shows US President Joe Biden dozing off during a television interview is false; the video is digitally manipulated. The original footage is from a 2011 incident involving the American singer and actor Harry Belafonte, who appeared to fall asleep during a live satellite interview with KBAK - KBFX Eyewitness News. A thorough analysis of keyframes from the viral video reveals that President Biden’s likeness was superimposed onto the Belafonte footage. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.

Claims:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.


Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames.
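For illustration, the frame-sampling step of this workflow can be sketched in Python. This is a hypothetical stand-in for the InVID tool, which performs keyframe extraction in the browser; the function below only computes which frame indices to sample at a fixed interval, each of which could then be reverse-searched.

```python
# Illustrative sketch (not the InVID tool itself): pick frame indices at a
# fixed time interval so each sampled frame can be reverse-image-searched.
def keyframe_indices(total_frames: int, fps: float, interval_s: float = 1.0) -> list[int]:
    """Return frame indices to sample, one every `interval_s` seconds."""
    step = max(1, round(fps * interval_s))  # frames between samples, at least 1
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled once per second -> 10 keyframes.
print(keyframe_indices(300, 30.0))  # [0, 30, 60, ..., 270]
```

Real tools typically refine this by picking the sharpest frame near each index or detecting shot boundaries, but uniform sampling is enough to obtain searchable stills.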
We found another video uploaded on Oct 18, 2011 by the official channel of KBAK - KBFX Eyewitness News. The title of the video reads, “Official Station Video: Is Harry Belafonte asleep during live TV interview?”

The video closely resembles the recent viral one; the TV anchor can be heard saying the same thing as in the viral clip. Taking a cue from this, we also ran keyword searches for credible sources and found a news article by Yahoo Entertainment featuring the same video uploaded by KBAK - KBFX Eyewitness News.

A thorough investigation combining reverse image search and keyword search reveals that the viral video of US President Joe Biden dozing off during a TV interview was digitally altered to misrepresent its context. The original video dates back to 2011, and the person in the TV interview was the American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. It originates from a 2011 incident involving the American singer and actor Harry Belafonte and has been altered to falsely depict President Biden. This case is a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (formerly Twitter)
- Fact Check: Fake & Misleading

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-synced videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, and Murf, which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the sector most targeted by deepfake-induced fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: The usage of AI will allow cybercriminals to deploy malware that can adapt in real-time to carry out phishing attacks and inundate targets with increased speed and variations. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can overwhelm defences with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will have to adopt AI-based cybersecurity for detection and response, since generative AI is becoming increasingly essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image and voice-generation/manipulation technologies, enhanced authentication measures such as token-based authentication or other hardware-based measures, abnormal behaviour detection, multi-device push notifications, geolocation verifications, etc. can be used to improve prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented.
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
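As a purely illustrative sketch of the abnormal-behaviour and geolocation checks recommended above, a layered step-up check might combine simple signals into a risk score. The signals, weights, and threshold here are hypothetical, not drawn from any cited framework; production systems would use trained behavioural models rather than hand-set rules.

```python
# Hypothetical illustration: a login attempt accumulates risk from simple
# signals (new device, unusual geolocation, odd hour); high-risk attempts
# trigger step-up MFA such as a hardware token or multi-device push.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool
    country: str        # country of the current attempt
    usual_country: str  # country the account usually logs in from
    hour: int           # local hour of day, 0-23

def risk_score(attempt: LoginAttempt) -> int:
    """Toy additive score; weights here are illustrative only."""
    score = 0
    if not attempt.known_device:
        score += 2
    if attempt.country != attempt.usual_country:
        score += 3
    if attempt.hour < 6 or attempt.hour > 22:
        score += 1
    return score

def requires_step_up_mfa(attempt: LoginAttempt, threshold: int = 3) -> bool:
    """Escalate to stronger authentication when the score crosses a threshold."""
    return risk_score(attempt) >= threshold

# A login from a new device in an unusual country triggers step-up MFA.
print(requires_step_up_mfa(LoginAttempt(False, "SG", "IN", 14)))  # True (score 5)
```

The design point is layering: no single signal blocks a login, but unusual combinations escalate to stronger verification, which is harder for deepfake-driven impersonation to defeat than a single voice or image check.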
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue embracing AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr

Introduction
Misinformation regarding health is sensitive and can have far-reaching consequences. These include its effect on personal medical decisions taken by individuals, lack of trust in conventional medicine, delay in seeking treatments, and even loss of life. The fast-paced nature and influx of information on social media can aggravate the situation further. Recently, a report titled Health Misinformation Vectors in India was presented at the Health of India Summit, 2024. It provided certain key insights into health-related misinformation circulating online.
The Health Misinformation Vectors in India Report
The analysis was conducted by the doctors at First Check, a global health fact-checking initiative alongside DataLEADS, a Delhi-based digital media and technology company. The report covers health-related social media content that was posted online from October 2023 to November 2024. It mentions that among all the health scares, misinformation regarding reproductive health, cancer, vaccines, and lifestyle diseases such as diabetes and obesity is the most prominent type that is spread through social media. Misinformation regarding reproductive health includes illegal abortion methods that often go unchecked and even tips on conceiving a male child, among other things.
In order to combat this misinformation, the report encourages stricter regulation of health-related content on digital media, incorporating technology for health literacy and misinformation management into public health curricula, and recommends that tech platforms work on algorithms that prioritise credible information and fact-checks. Doctors state that people affected by life-threatening diseases are particularly vulnerable to such misinformation, as they are desperate to find treatment options that give themselves and their family members a chance at life. In a diverse society, the lack of clear and credible information, limited access to or awareness of tools that cross-check content, and low digital literacy push people towards alternate sources of information, which also fosters a sense of disengagement among the public. The diseases mentioned in the report, which are prone to misinformation, are life-altering and require attention from healthcare professionals.
CyberPeace Outlook
Globally, there are cases of medically unqualified social media influencers who spread false or misleading information on various health matters. The topics covered are mostly associated with stigma and are still undergoing research; this gap allows misinformation to take root. An example is the misinformation about PCOS (Polycystic Ovary Syndrome) circulating online.
In the midst of all this, YouTube has released a new feature aimed at combating health misinformation, trying to bridge the gap between healthcare professionals and Indians who look for trustworthy health-related information online. The initiative allows doctors, nurses, and other healthcare professionals to apply for a health information source licence, which labels all their informative videos as coming from a healthcare professional. Earlier, this feature was available only to health organisations, through health source information panels and health content shelves; this step broadens the scope to verifying the licences of individual healthcare professionals.
As digital literacy continues to grow, methods of seeking credible information, especially regarding sensitive topics such as health, require a combined effort on the part of all the stakeholders involved. We need a robust strategy for battling health-related misinformation online, including more awareness programmes and proactive participation from the consumers as well as medical professionals regarding such content.
References
- https://timesofindia.indiatimes.com/india/misinformation-about-cancer-reproductive-health-is-widespread-in-india-impacting-medical-decisions-says-report/articleshow/115931612.cms
- https://www.ndtv.com/india-news/cancer-misinformation-prevalent-in-india-trust-in-medicine-crucial-report-7165458
- https://www.newindian.in/ai-driven-health-misinformation-poses-threat-to-indias-public-health-report/
- https://www.etvbharat.com/en/!health/youtube-latest-initiative-combat-health-misinformation-india-enn24121002361
- https://blog.google/intl/en-in/products/platforms/new-ways-for-registered-healthcare-professionals-in-india-to-reach-people-on-youtube/
- https://www.bbc.com/news/articles/ckgz2p0999yo

Introduction
China is on the verge of unveiling a new policy addressing how Artificial Intelligence (AI) influences employment. On January 27, 2026, the Ministry of Human Resources and Social Security (MOHRSS) announced it would publish a paper on AI's contribution to the labour and employment markets. The policy will include provisions to help impacted industries, expand assistance to young workers and graduates, and establish interdisciplinary training programmes to equip individuals for jobs in an AI-enabled economy. The authorities have stressed that AI does not kill jobs but changes them, and that education will be needed to help employees adjust.
This announcement reflects a more proactive approach to AI-driven changes in labour, showing that China intends to sustain economic modernisation through AI while maintaining social stability. It also reflects wider international concerns about the pace of automation and the need to reconsider labour and training policy.
AI and the Changing Nature of Work
AI is transforming the content and nature of work across industries. AI systems enhance productivity in functions such as data processing, logistics, and customer service, while altering the tasks carried out by humans. Existing studies indicate that although AI can automate routine activities, it may also generate new occupations requiring complex thinking, the management of AI systems, and people skills such as empathy, creativity, and problem-solving.
This is the key nuance in China's policy framing. Authorities point out that AI does not necessarily result in mass unemployment; instead, it transforms jobs and requires workers to adapt to new task profiles. This perspective is in line with recent reports from global research organisations, which describe the effects of AI as transformational rather than necessarily destructive. For example, the World Economic Forum's Future of Jobs Report 2023 observes that technological change will introduce jobs that did not exist a decade ago, and that retraining and upskilling will be instrumental in accessing those opportunities.
Key Components of China’s Policy Response
China’s forthcoming policy is expected to focus on three main areas that address both current workforce needs and future readiness.
Support for Key Industries
The policy will offer targeted assistance to sectors where artificial intelligence adoption is gathering pace. Industries like advanced manufacturing, high-tech services, and online logistics will receive specialised support to help companies use AI to complement human labour rather than simply replace it. By channelling resources to these growth areas, the Chinese government aims to balance industrial upgrading with employment.
Assistance for Youth and Graduates
Young people and recent graduates are entering a rapidly changing labour market. The policy aims to expand support services for this group through career counselling, internships, and training programmes aligned with evolving employer demands. According to a study by the McKinsey Global Institute, young workers worldwide may face disproportionate disruption if training opportunities are scarce, making early-career support imperative.
Interdisciplinary Talent Development
The Chinese strategy focuses on interdisciplinary training that blends domain knowledge with AI and digital literacy. This reflects the recognition that hybrid skills will be required in the future. The Organisation for Economic Co-operation and Development suggests that workers who can navigate both the technical and non-technical elements of work will be better positioned in the AI age.
These components show that China’s strategy is not simply to protect existing jobs but to help workers transition to roles that leverage AI’s strengths.
Economy, Stability and Strategic Modernisation
The policy is an attempt to manage technological transition as part of wider economic planning. It indicates that the government regards AI as a structural change that can be anticipated and shaped by policy, rather than as an external shock.
This contrasts with labour-market responses in some other countries, where measures have been largely reactive, introduced only after job losses had already materialised. China's initiative suggests anticipating change rather than merely reacting to it.
Global Comparisons and Shared Challenges
Governments worldwide are testing options for adapting to AI's effects on work. The European Union is considering individual learning accounts and portable training benefits, which would help workers access reskilling opportunities throughout their careers. In the US, public-private partnerships are making a concerted effort to align workforce development with technological adoption.
China's strategy shares some of these components, but it stands out for its incorporation into national planning processes. By connecting workforce policy to its overall innovation and economic goals, China wants AI adoption to serve the common good rather than deepen divisions.
Meanwhile, balancing labour supply with technological demand is a distinct challenge for countries with older populations and relatively smaller workforces. The timing and design of policy are particularly significant in China, given its large labour force and ongoing demographic change.
Practical Challenges and Risks
The success of China’s emerging policy will depend on effective implementation. Several practical issues will require careful attention:
Ensuring Equitable Access to Training
China's labour force is diverse, spanning urban technology hubs and rural areas. Ensuring that upskilling opportunities reach workers across this spectrum will be paramount to preventing regional inequalities from worsening. Global research on reskilling shows that rural and low-income groups often lack access to training even where programmes exist.
Aligning Training with Labour Demand
Upskilling programmes must be aligned with market requirements. Disconnected training risks producing skills that are obsolete or inapplicable in real work settings. Experience in emerging economies indicates that involving employers in training design improves learners' placement outcomes.
Private Sector Participation
Private companies will be essential in translating the policy into employment outcomes. Incentives for firms to invest in worker training, internships, and apprenticeships will help workers transition smoothly to AI-augmented jobs.
A Model for AI Workforce Policy
China's policy can serve as an example for other countries seeking to balance technological advancement with labour market security. It acknowledges that AI's effect on employment is not only a technical or economic problem but also a social challenge. By foregrounding training, support, and coordinated action, China aims to create a future in which workers are prepared for change rather than displaced by it.
This strategy aligns with the recommendations of international organisations like the World Bank and the OECD, which emphasise lifelong learning, flexible labour markets, and proactive investment in human capital as central pillars of future labour policy.
Conclusion
Artificial intelligence will continue to reshape work around the world. China’s forthcoming policy, which emphasises support, training and strategic integration of AI into labour markets, reflects a proactive and holistic view of technological transition. Other countries could benefit from studying this approach, especially in terms of linking workforce development with innovation goals.
By anticipating disruption and investing in people as well as technology, policymakers can help ensure that AI becomes a driver of shared economic opportunity rather than a source of exclusion. The balance between innovation and employment will shape not only economic outcomes but also social cohesion in the years ahead.
References