#FactCheck - Viral Video of US President Biden Dozing Off during Television Interview is Digitally Manipulated and Inauthentic
Executive Summary:
The claim that a video shows US President Joe Biden dozing off during a television interview is false; the video is digitally manipulated. The original footage is from a 2011 incident in which actor and singer Harry Belafonte appears to fall asleep during a live satellite interview with KBAK - KBFX Eyewitness News. Thorough analysis of keyframes from the viral video reveals that President Biden's likeness was superimposed onto the Belafonte footage. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.

Claims:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.


Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames.
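For readers who want to reproduce this step without a browser extension, the sketch below shows one way to pull stills from a clip so they can be fed into a reverse image search. This is a minimal illustration in Python with OpenCV, not the InVID tool itself; the file name and the sampling interval are assumptions made for the example.

```python
# Minimal keyframe-extraction sketch (an illustrative stand-in for what InVID automates).
# Assumptions: OpenCV (cv2) is installed and "viral_clip.mp4" is a local copy of the video.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Save one frame every `every_n_seconds` so the stills can be reverse-image-searched."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            name = f"keyframe_{index:06d}.jpg"
            cv2.imwrite(name, frame)  # write the still to disk for manual searching
            saved.append(name)
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"))
```

Each saved still can then be uploaded to a reverse image search engine, which is how the 2011 broadcast surfaced in this case.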
We found another video uploaded on October 18, 2011, by the official channel of KBAK - KBFX Eyewitness News. The title of the video reads, “Official Station Video: Is Harry Belafonte asleep during live TV interview?”

The video closely matches the recent viral clip, and the TV anchor can be heard saying the same words as in the viral video. Taking a cue from this, we also ran keyword searches for credible sources and found a Yahoo Entertainment news article featuring the same video uploaded by KBAK - KBFX Eyewitness News.

The reverse image search and keyword search reveal that the recent viral video of US President Joe Biden dozing off during a TV interview has been digitally altered to misrepresent its context. The original video dates back to 2011, and the person in the TV interview was American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. The footage originally comes from a 2011 incident involving American singer and actor Harry Belafonte and has been altered to falsely show President Biden. This is a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
The growth of online interaction and the popularity of social media platforms have created a breeding ground for the generation and spread of misinformation. Misinformation propagates more easily and quickly on online social media platforms than through traditional news media such as newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse and indefinitely store massive volumes of data. Constant surveillance of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a great deal of misinformation spread on big platforms like X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top two spots, with Instagram in third and TikTok and X in fourth and fifth. Social media platforms give users instant connectivity, allowing them to share information quickly with other users without needing the permission of a gatekeeper such as an editor, as is the case with traditional media channels.
Between the elections held in 2024 in more than 100 countries, the COVID-19 public health crisis, and the conflicts in the West Bank and the Gaza Strip, the sheer volume of information generated, both true and false, has been immense. Identifying accurate information amid real-time misinformation is challenging, and traditional content moderation techniques may not be sufficient to curb it. A dedicated, real-time misinformation surveillance system backed by AI, with a degree of human oversight and due regard for the privacy of users' data, could prove to be a good mechanism for countering misinformation on the larger platforms; a rough sketch of such a workflow follows below. Concerns regarding data privacy need to be prioritized before deploying such technologies on platforms with large user bases.
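To make the "AI flags, humans decide" idea concrete, here is a minimal, hypothetical sketch of a triage queue in Python. The class names, thresholds, and the notion of an upstream classifier score are all assumptions for illustration; no specific platform, model, or library is implied.

```python
# Hypothetical triage sketch: an upstream model assigns each post a misinformation
# score between 0 and 1, and only the uncertain middle band goes to human reviewers.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class ReviewQueue:
    auto_threshold: float = 0.95    # above this, flag automatically for rapid response
    review_threshold: float = 0.60  # between the thresholds, route to a human reviewer
    flagged: List[str] = field(default_factory=list)
    needs_review: List[str] = field(default_factory=list)

    def triage(self, post: Post, score: float) -> str:
        """Route a post based on its model score; everything below both thresholds is left alone."""
        if score >= self.auto_threshold:
            self.flagged.append(post.post_id)
            return "auto-flagged"
        if score >= self.review_threshold:
            self.needs_review.append(post.post_id)
            return "human-review"
        return "allowed"

if __name__ == "__main__":
    queue = ReviewQueue()
    demo = Post(post_id="p1", text="Example claim circulating online")
    print(queue.triage(demo, score=0.72))  # -> "human-review"
```

The design point is that automation only narrows the funnel: content in the uncertain band is escalated to a person rather than being silently removed, which is also where privacy safeguards and appeal mechanisms belong.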
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance could pose significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternative perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collect only the data that is necessary and adopt a consent-based approach, which protects user privacy and enhances transparency and trust. It also protects users from the stifling of dissent and from profiling for targeted ads.
- An independent oversight body can be created to monitor surveillance activities, ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The examples of the EU's Digital Services Act and Singapore's POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible, and ethical online ecosystem. Policy-driven AI solutions for real-time misinformation monitoring that balance ethics and privacy are the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL

Executive Summary:
A misleading video has been widely shared online, falsely portraying Pandit Jawaharlal Nehru as stating that he was not involved in the Indian independence struggle and even opposed it. The video is a manipulated excerpt from Pandit Nehru's final major interview, given in 1964 to American TV host Arnold Michaelis. The original footage, available on the YouTube channel of India's state broadcaster Prasar Bharati, shows Pandit Nehru discussing Muhammad Ali Jinnah and stating that it was Jinnah who did not participate in the independence movement and opposed it. The viral video edits Pandit Nehru's comments to create a false narrative, which is debunked upon reviewing the full, unedited interview.

Claims:
In the viral video, Pandit Jawaharlal Nehru states that he was not involved in the fight for Indian independence and even opposed it.




Fact check:
Upon receiving the posts, we thoroughly checked the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames. We found a video uploaded by the official Prasar Bharati Archives YouTube channel on 14 May 2019.

The description of the video reads, “Full video recording of what was perhaps Pandit Jawaharlal Nehru's last significant interview to American TV Host Arnold Michaelis before his death. Another book by Chandrika Prasad provides a date of 18th May 1964 when the interview was aired in New York, this is barely a few days before the death of Pandit Nehru on 27th May 1964.”
On reviewing the full video, we found that the viral clip of Pandit Nehru runs from 14:50 to 15:45. In this portion, Pandit Nehru is speaking about Muhammad Ali Jinnah, a key leader of the Muslim League.
At the timestamp 14:34, the American TV interviewer Arnold Michaelis says, “You and Mr. Gandhi and Mr. Jinnah, you were all involved at that point of Independence and then partition in the fight for Independence of India from the British domination.” Pandit Nehru replied, “Mr. Jinnah was not involved in the fight for independence at all. In fact, he opposed it. Muslim League was started in about 1911 I think. It was started really by the British encouraged by them so as to create factions, they did succeed to some extent. And ultimately there came the partition.”
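For those who want to verify the comparison themselves, the snippet below shows one way to cut the 14:50 to 15:45 segment out of a local copy of the archived interview for a side-by-side check against the viral clip. It is only an illustration: it assumes ffmpeg is installed, and the file names are placeholders.

```python
# Cut the quoted segment from a local copy of the full interview using ffmpeg,
# so it can be compared frame by frame with the viral clip.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "full_interview.mp4",    # assumed local copy of the archived video
        "-ss", "00:14:50",             # start of the segment used in the viral clip
        "-to", "00:15:45",             # end of the segment
        "segment_for_comparison.mp4",
    ],
    check=True,
)
```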
Upon thorough analysis, we found that the viral video is an edited version of the original, cut so as to misrepresent its actual context.
We also found the same interview uploaded on a Facebook page named Nehru Centre for Social Research on 1 December 2021.

Hence, the viral video is fake and misleading, and netizens should be careful before believing such edited videos.
Conclusion:
In conclusion, the viral video claiming that Pandit Jawaharlal Nehru stated he was not involved in the Indian independence struggle is a deceptively edited clip. The original footage reveals that Pandit Nehru was referring to Muhammad Ali Jinnah's non-participation in the struggle, not his own. This debunks the false narrative conveyed by the manipulated video.
- Claim: Pandit Jawaharlal Nehru stated that he was not involved in the struggle for Indian independence and even opposed it.
- Claimed on: YouTube, LinkedIn, Facebook, X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
As we delve deeper into the intricate, almost esoteric digital landscape of the 21st century, we are confronted by a new and troubling phenomenon that threatens the very bastions of our personal security. This is not a mere subplot in some dystopian novel but a harsh and palpable reality firmly rooted in today's technologically driven society. We must grapple with the consequences of the alarming evolution of cyber threats, particularly the sophisticated use of artificial intelligence in creating face swaps—a technique now cleverly harnessed by nefarious actors to undermine the bedrock of biometric security systems.
What is GoldPickaxe?
It was amidst the hum of countless servers and data centers that the term 'GoldPickaxe' began to echo, sending shivers down the spines of cybersecurity experts. Originating from the intricate web spun by a group of Chinese hackers, as reported in Dark Reading, GoldPickaxe represents the latest in a long lineage of digital predators. It is an astute embodiment of disguise, blending into the digital environment as a seemingly harmless government service app. But behind its innocuous facade, it bears the intent to ensnare and deceive, with the elderly demographic being especially susceptible to its trap.
Victims, unassuming and trustful, are cajoled into revealing their most sensitive information: phone numbers, private details, and, most alarmingly, their facial data. These virtual reflections, intended to be the safeguard of one's digital persona, are snatched away and misused in a perilous transformation. The attackers harness such biometric data, feeding it into the arcane furnaces of deepfake technology, wherein AI face-swapping crafts eerily accurate and deceptive facsimiles. These digital doppelgängers become the master keys, effortlessly bypassing the sentinel eyes of facial recognition systems that lock the vaults of Southeast Asia's financial institutions.
Through the diligent and unyielding work of the research team at Group-IB, the trajectory of one victim's harrowing ordeal—a Vietnamese individual defrauded of a life-altering $40,000—sheds light on the severity of this technological betrayal. The advancements in deepfake face-swapping technology, once seen as a marvel of AI, now present a clear and present danger, outpacing the mechanisms meant to deter unauthorized access and leaving the unenlightened multitude unaware and exposed.
Adding weight to the discussion, an expert in biometric technology commented in a somber tone: 'This is why we see face swaps as a tool of choice for hackers. It gives the threat actor this incredible level of power and control.' This chilling testament to the potency of digital fraudulence further emphasizes that even seemingly impregnable ecosystems, such as Apple's, are not beyond the reach of these relentless invaders.
New Threat
Emerging from this landscape is a strain of GoldPickaxe specifically tailored for iOS. GoldDigger's mutation into GoldPickaxe for Apple's hallowed platform is nothing short of a wake-up call. It engenders not just a single threat but an evolving suite of menaces, including its uncanny offspring, 'GoldDiggerPlus,' which wields the terrifying power to piggyback on real-time communications of affected devices. Continuously refined and updated, these threats become chimeras, each iteration more elusive and more formidable than its predecessor.
One ingenious and insidious tactic exploited by these cyber adversaries is the diversionary use of Apple's TestFlight, a trusted beta-testing platform, as a Trojan horse for their malware. When Apple clamped down, the hackers, exhibiting an unsettling level of adaptability, inveigled users into installing Mobile Device Management (MDM) profiles, hitherto reserved for corporate device administration, thereby chaining these unknowing participants to their will.
How To Protect
Against this stark backdrop, the question of how one might armor oneself against such predation looms large. It is a question with no simple answer, demanding vigilance and proactive measures.
- General Vigilance: Aware of the Trojan's advance, Apple is striving to devise countermeasures, yet individuals can take concrete steps to safeguard their digital lives.
- Consider Lockdown Mode: It is imperative to exhibit discernment with TestFlight installations, to warily examine MDM profiles, and to seriously consider the protective embrace of Lockdown Mode. Activating Lockdown Mode on an iPhone is akin to drawing the portcullis and manning the battlements of one's digital stronghold. The process is straightforward: a journey to the Settings menu, a descent into Privacy & Security, and finally the enabling of Lockdown Mode, followed by a device restart. It is a curtailment of convenience, yes, but a potent defense against the malevolence lurking in the unseen digital thicket.
As 'GoldPickaxe' insidiously carves its path into the iOS realm—a rare and unsettling occurrence—it flags the possible twilight of the iPhone's vaunted reputation for tight security. Should these shadow operators set their sights beyond Southeast Asia, angling their digital scalpels towards the U.S., Canada, and other English-speaking enclaves, the consequences could be dire.
Conclusion
Thus, it is imperative that as digital citizens, we fortify ourselves with best practices in cybersecurity. Our journey through cyberspace must be cautious, our digital trails deliberate and sparse. Let the specter of iPhone malware serve as a compelling reason to arm ourselves with knowledge and prudence, the twin guardians that will let us navigate the murky waters of the internet with assurance, outwitting those who weave webs of deceit. In heeding these words, we preserve not only our financial assets but the sanctity of our digital identities against the underhanded schemes of those who would see them usurped.
References
- https://www.timesnownews.com/technology-science/new-ios-malware-stealing-face-id-data-bank-infos-on-iphones-how-to-protect-yourself-article-107761568
- https://www.darkreading.com/application-security/ios-malware-steals-faces-defeat-biometrics-ai-swaps
- https://www.tomsguide.com/computing/malware-adware/first-ever-ios-trojan-discovered-and-its-stealing-face-id-data-to-break-into-bank-accounts