#FactCheck - Viral Video of US President Biden Dozing Off during Television Interview is Digitally Manipulated and Inauthentic
Executive Summary:
The claim that a video shows US President Joe Biden dozing off during a television interview is false; the video is digitally manipulated. The original footage is from a 2011 incident in which actor and singer Harry Belafonte appeared to fall asleep during a live satellite interview with KBAK - KBFX Eyewitness News. A thorough analysis of keyframes from the viral video reveals that US President Joe Biden's image was superimposed onto Harry Belafonte's video. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.

Claim:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.


Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames.
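As an illustration, the keyframe-extraction step can be reproduced with a short script. This is a minimal sketch assuming the clip has been saved locally (the filename and sampling interval are hypothetical); InVID performs the equivalent step in the browser.

```python
# Minimal keyframe-extraction sketch using OpenCV (pip install opencv-python).
# "viral_clip.mp4" is a hypothetical local filename, not the actual source file.
import cv2

def extract_keyframes(path: str, every_n_seconds: float = 1.0) -> list[str]:
    """Save one frame per interval; each saved frame can then be reverse-image-searched."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % step == 0:
            name = f"frame_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

# Example: extract_keyframes("viral_clip.mp4") yields JPEGs ready for reverse search.
```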
We found a video uploaded on October 18, 2011 by the official channel of KBAK - KBFX Eyewitness News, titled "Official Station Video: Is Harry Belafonte asleep during live TV interview?"

The video closely resembles the recent viral one, and the TV anchor can be heard saying the same things as in the viral video. Taking a cue from this, we also ran keyword searches for credible sources and found a news article by Yahoo Entertainment featuring the same video uploaded by KBAK - KBFX Eyewitness News.

A thorough investigation through reverse image search and keyword search reveals that the recent viral video of US President Joe Biden dozing off during a TV interview has been digitally altered to misrepresent its context. The original video dates back to 2011, and the person in the TV interview is American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. It originates from a 2011 incident involving American singer and actor Harry Belafonte and has been altered to falsely depict President Biden. This serves as a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Related Blogs

Introduction
On March 12, 2024, the Ministry of Corporate Affairs (MCA) proposed the draft Digital Competition Bill to curb anti-competitive practices of tech giants through ex-ante regulation. The Bill is to apply to 'Core Digital Services', with the Central Government having the authority to update the list periodically. The proposed list in the Bill encompasses online search engines, online social networking services, video-sharing platforms, interpersonal communications services, operating systems, web browsers, cloud services, advertising services, and online intermediation services.
The primary highlight of the Digital Competition Law Report, prepared by the Committee on Digital Competition Law and presented to Parliament in the second week of March 2024, is the recommendation to introduce new legislation called the 'Digital Competition Act', intended to strike a balance between certainty and flexibility. The report identifies ten anti-competitive practices relevant to digital enterprises in India: anti-steering, platform neutrality/self-preferencing, bundling and tying, data usage (use of non-public data), pricing/deep discounting, exclusive tie-ups, search and ranking preferencing, restricting third-party applications, and advertising policies.
Key Takeaways: Digital Competition Bill, 2024
- Qualitative and quantitative criteria are laid down for identifying Systemically Significant Digital Enterprises (SSDEs); an enterprise is designated if it meets any of the specified thresholds.
- Financial thresholds are assessed over each of the immediately preceding three financial years, covering turnover in India, global turnover, gross merchandise value in India, and global market capitalisation.
- User thresholds are assessed over each of the immediately preceding three financial years in India: the core digital service provided by the enterprise has at least 1 crore end users, or at least 10,000 business users (a sketch of this test appears after this list).
- The Commission may make the designation based on other factors such as the size and resources of an enterprise, number of business or end users, market structure and size, scale and scope of activities of an enterprise and any other relevant factor.
- A period of 90 days is provided to notify the CCI of qualification as an SSDE. Additionally, the enterprise must notify the Commission of other enterprises within its group that are directly or indirectly involved in the provision of Core Digital Services, as Associate Digital Enterprises (ADEs); the designation holds for three years.
- It prescribes obligations for SSDEs and their ADEs upon designation. The enterprise must comply with certain obligations regarding Core Digital Services, and non-compliance with the same shall result in penalties. Enterprises must not directly or indirectly prevent or restrict business users or end users from raising any issue of non-compliance with the enterprise’s obligations under the Act.
- SSDEs must avoid favouring their own product offerings, or those of related parties or third parties, for the manufacture and sale of products or provision of services over those offered by third-party business users on the Core Digital Service in any manner.
- When trying a suit, the Commission will have the same powers as are vested in a civil court under the Code of Civil Procedure, 1908.
- The penalty for non-compliance without reasonable cause may extend to Rs 1 lakh for each day of continued non-compliance, subject to a maximum of Rs 10 crore. Further contravention may be punishable with imprisonment for a term that may extend to three years, with a fine that may extend to Rs 25 crore, or with both. The Commission may also impose a penalty on an enterprise (not exceeding 1% of its global turnover) if it provides incorrect, incomplete, or misleading information, or fails to provide information.
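To make the designation test above concrete, here is a minimal sketch of the SSDE self-assessment. Only the user thresholds (1 crore end users, 10,000 business users) are taken from the summary above; the financial test is left as a caller-supplied placeholder, and the exact statutory combination of financial and user tests should be verified against the Bill text.

```python
# Hedged sketch of the SSDE qualification test. User thresholds come from the
# summary above; the financial test and the AND-combination of the two tests
# are assumptions to be checked against the Bill's actual wording.
from dataclasses import dataclass
from typing import Callable

CRORE = 10_000_000  # 1 crore = 10 million

@dataclass
class YearlyFigures:
    end_users_india: int
    business_users_india: int
    # Financial figures (turnover, GMV, market cap) would also be recorded here;
    # the Bill's actual monetary thresholds are not reproduced in this sketch.

def meets_user_threshold(y: YearlyFigures) -> bool:
    # At least 1 crore end users OR at least 10,000 business users in India.
    return y.end_users_india >= 1 * CRORE or y.business_users_india >= 10_000

def qualifies_as_ssde(last_three_years: list[YearlyFigures],
                      financial_test: Callable[[YearlyFigures], bool]) -> bool:
    """Thresholds are assessed over each of the immediately preceding three financial years."""
    assert len(last_three_years) == 3
    return all(meets_user_threshold(y) and financial_test(y)
               for y in last_three_years)
```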
Suggestions and Recommendations
- The ex-ante model of regulation needs to be examined for the Indian scenario, and studies need to be conducted on how it has worked in other jurisdictions such as the EU.
- The Bill should prioritise fostering fair competition by preventing monopolistic practices in digital markets exclusively. A clear functional distinction from the existing Competition Act, 2002 needs to be drawn so that the regulations do not overlap and enterprises do not face double jeopardy.
- Restrictions on tying and bundling and data usage have been shown to negatively impact MSMEs that rely significantly on big tech to reduce operational costs and enhance customer outreach.
- Clear definitions of "dominant position" and "anti-competitive behaviour" in the digital context are essential for effective enforcement.
- Encouraging innovation while safeguarding consumer data privacy in consonance with the DPDP Act should be the aim. Promoting interoperability and transparency in algorithms can prevent discriminatory practices.
- Regular reviews and stakeholder consultations will ensure the law adapts to rapidly evolving technologies.
- Collaboration with global antitrust bodies should be pursued to enhance cross-border regulatory coherence and effectiveness.
Conclusion
A competition law focused exclusively on digital enterprises is the need of the hour, and hence the Committee recommended enacting the Digital Competition Act to enable the CCI to selectively regulate large digital enterprises. The proposed legislation should regulate only those enterprises that have a significant presence in, and the ability to influence, the Indian digital market. The law's impact should be restricted to digital enterprises and should not encroach upon matters outside the digital arena. India's proposed Digital Competition Bill aims to promote competition and fairness in the digital market by addressing the anti-competitive practices and abuses of dominant position prevalent in the digital business space. The Ministry of Corporate Affairs has received public feedback on the 41-page draft, which is expected to be tabled in Parliament next year.
References
- https://www.medianama.com/wp-content/uploads/2024/03/DRAFT-DIGITAL-COMPETITION-BILL-2024.pdf
- https://prsindia.org/files/policy/policy_committee_reports/Report_Summary-Digital_Competition_Law.pdf
- https://economictimes.indiatimes.com/tech/startups/meity-meets-india-inc-to-hear-out-digital-competition-law-concerns/articleshow/111091837.cms?from=mdr
- https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open
- https://www.barandbench.com/law-firms/view-point/digital-competition-laws-beginning-of-a-new-era
- https://www.linkedin.com/pulse/policy-explainer-digital-competition-bill-nimisha-srivastava-lhltc/
- https://www.lexology.com/library/detail.aspx?g=5722a078-1839-4ece-aec9-49336ff53b6c

Introduction
The Ministry of Electronics and Information Technology (MeitY) issued an advisory on March 1, 2024, urging platforms to prevent bias, discrimination, and threats to electoral integrity in their use of AI, generative AI, LLMs, and other algorithms. The advisory requires that AI models deemed unreliable or under-tested in India obtain explicit government permission before deployment. While leveraging Artificial Intelligence models, generative AI, software, or algorithms in their computer resources, intermediaries and platforms need to ensure that they prevent bias, discrimination, and threats to electoral integrity. Intermediaries are required to follow the due diligence obligations outlined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as updated on 06.04.2023, and this advisory urges them to abide by those rules and ensure compliance.
Key Highlights of the Advisory
- Intermediaries and platforms must ensure that their Artificial Intelligence models/LLMs/generative AI, software, or algorithms do not allow users to host, display, upload, modify, publish, transmit, store, update, or share unlawful content, as per Rule 3(1)(b) of the IT Rules.
- The government urges intermediaries and platforms to prevent bias or discrimination in their use of Artificial Intelligence models, LLMs, generative AI, software, or algorithms, and to ensure these do not threaten the integrity of the electoral process.
- AI models, LLMs, or algorithms deemed under-tested or unreliable require explicit government permission before deployment on the Indian internet. They must be deployed with proper labelling of their potential fallibility or unreliability, and users can be informed through a consent popup mechanism.
- The advisory specifies that all users should be clearly informed, through terms of service and user agreements, about the consequences of dealing with unlawful information on platforms, including disabling of access, removal of non-compliant information, suspension or termination of the user's access or usage rights to their account, and punishment under applicable law.
- The advisory also prescribes measures to combat deepfakes and misinformation. It necessitates identifying synthetically created content across various formats, advising platforms to employ labels, unique identifiers, or metadata to ensure transparency (a sketch of such labelling follows this list). Furthermore, it mandates the disclosure of software details and the tracing of the first originator of such synthetically created content.
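As an illustration of the last point, the sketch below attaches a user-visible label, a unique identifier, and first-originator metadata to a piece of synthetic content. The field names and schema are illustrative assumptions, not a format prescribed by the advisory.

```python
# Hedged sketch: tag synthetic content with a label, a unique identifier, and
# first-originator metadata. Field names are illustrative, not a mandated schema.
import json
import uuid
from datetime import datetime, timezone

def label_synthetic_content(content_id: str, first_originator: str) -> str:
    record = {
        "content_id": content_id,
        "label": "synthetically-generated",    # user-visible transparency label
        "unique_id": str(uuid.uuid4()),        # permanent identifier for traceability
        "first_originator": first_originator,  # supports originator tracing
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Example: label_synthetic_content("video-123", "user-456")
```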
Rajeev Chandrasekhar, Union Minister of State for IT, stated:
“Advisory is aimed at the Significant platforms, and permission seeking from Meity is only for large platforms and will not apply to startups. Advisory is aimed at untested AI platforms from deploying on the Indian Internet. Process of seeking permission, labelling & consent based disclosure to user about untested platforms is insurance policy to platforms who can otherwise be sued by consumers. Safety & Trust of India's Internet is a shared and common goal for Govt, users and Platforms.”
Conclusion
MeitY's advisory sets the stage for a more regulated AI landscape. The Indian government requires explicit permission for the deployment of under-tested or unreliable Artificial Intelligence models on the Indian internet. Alongside intermediaries, the advisory also applies to digital platforms that incorporate AI elements. The advisory is aimed at significant platforms and will not apply to startups. This move safeguards users and fosters innovation by promoting responsible AI practices, paving the way for a more secure and inclusive digital environment.
References
- https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf
- https://economictimes.indiatimes.com/tech/technology/govts-ai-advisory-will-not-apply-to-startups-mos-it-rajeev-chandrasekhar/articleshow/108197797.cms?from=mdr
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
Misinformation spread has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or citizens. The current push to combat misinformation is rooted in the growing awareness that misinformation exploits sentiment and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. The spread of misinformation online poses a risk not only to the everyday content consumer but also to the sharer and to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised both for the content they allow to be published and for the content they do not. It is important to understand not only how misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example is the EU's Digital Services Act, which regulates digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations, or governments for damages caused by misinformation; defamation suits are standard practice when dealing with sources of misinformation. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms employ a trust model where the user trusts it and its content. If a user loses trust in the platform because of misinformation, it can reduce engagement. This might even lead to negative coverage that affects the public opinion of the brand, its value and viability in the long run.
- Financial Consequences: Businesses may end their engagement with platforms accused of spreading misinformation, which can lead to a drop in revenue. This can also have major long-term consequences for the platform's financial health, such as a decline in stock price.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is in question, its users can migrate to other platforms, causing a loss of market share in favour of platforms that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: A balance is needed between freedom of expression and the prevention of misinformation. Stricter content moderation can expose a platform to accusations of censorship if users feel their opinions are unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, as they host content that affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, such as health misinformation or incitement to violence, which means platforms bear a social responsibility as well.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- Implementing more robust content moderation systems that combine AI and human oversight to identify and remove misinformation effectively (a sketch of this flow follows this list).
- Enhancing transparency in platform policies for content moderation and decision-making, which would build user trust and reduce the backlash associated with perceived censorship.
- Collaborating with fact-checkers through partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engaging with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Investing in media literacy initiatives that help users critically evaluate the content available to them.
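As an illustration of the first measure above, the sketch below routes content by an automated classifier score: near-certain cases are removed, uncertain cases go to a human review queue, and the rest are published. The thresholds and the upstream classifier are illustrative assumptions, not a recommended production configuration.

```python
# Hedged sketch of an AI-plus-human-oversight moderation flow. Thresholds are
# placeholders; the misinformation score is assumed to come from some classifier.
from collections import deque

REMOVE_THRESHOLD = 0.90   # assumed cutoff: near-certain misinformation
REVIEW_THRESHOLD = 0.50   # assumed cutoff: uncertain, escalate to humans

human_review_queue: deque = deque()

def moderate(post_id: str, text: str, score: float) -> str:
    """Route a post given a misinformation score in [0, 1]."""
    if score >= REMOVE_THRESHOLD:
        return "removed"                                   # automated removal
    if score >= REVIEW_THRESHOLD:
        human_review_queue.append((post_id, text, score))  # human makes the final call
        return "pending-human-review"
    return "published"                                     # below both thresholds

# Example: moderate("post-1", "some claim", 0.72) -> "pending-human-review"
```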
Final Takeaways
The accumulation of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. As a result, the interlinked but seemingly exclusive priorities of preventing misinformation and upholding freedom of expression must be balanced. Platforms must invest in robust, transparent content moderation systems, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds