#FactCheck - "Deepfake Video Falsely Claims Elon Musk Is Conducting a Cryptocurrency Giveaway"
Executive Summary:
A viral online video claims that billionaire and Tesla & SpaceX founder Elon Musk is promoting a cryptocurrency giveaway. The CyberPeace Research Team analysed the video using relevant, reputed and well-verified AI tools and applications and confirmed that it is a deepfake, created with AI technology to manipulate Mr. Musk's facial expressions and voice. The original footage has no connection to any cryptocurrency giveaway of BTC or ETH to ardent followers of crypto-trading. The claim that Mr. Musk endorses such a giveaway is therefore false and misleading.

Claims:
A viral video claims that billionaire and Tesla founder Elon Musk is endorsing a crypto giveaway project for his crypto-enthusiast followers by giving away a portion of his valuable Bitcoin and Ethereum holdings.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Mr. Elon Musk, but none of them included any promotion of a cryptocurrency giveaway. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis confirmed with 99.0% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.
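For readers who want to reproduce this kind of keyframe check, a minimal sketch is shown below. It samples frames from a clip at a fixed interval so they can be uploaded to a reverse image search such as Google Lens. It assumes Python with OpenCV; the file names and the sampling interval are illustrative placeholders, not the tooling actually used by the research team.

```python
# Minimal sketch (assumption: Python with OpenCV, `pip install opencv-python`).
# Samples one frame every few seconds and writes it to disk so it can be
# uploaded manually to a reverse image search. Paths and interval are illustrative.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0, out_prefix: str = "frame") -> list[str]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back to 25 fps if metadata is missing
    step = max(int(fps * every_n_seconds), 1)    # number of frames between samples
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:                               # end of video or read error
            break
        if index % step == 0:
            name = f"{out_prefix}_{index:06d}.jpg"
            cv2.imwrite(name, frame)             # save the sampled frame to disk
            saved.append(name)
        index += 1
    cap.release()
    return saved

# Example (hypothetical file name): extract_keyframes("viral_clip.mp4")
```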



Additionally, an extensive review of official statements and interviews with Mr. Musk revealed no mention of any such giveaway. No credible reports were found linking Elon Musk to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Elon Musk is promoting a crypto giveaway is a deepfake. Research using tools such as Google Lens and AI detection services confirms that the video was manipulated using AI technology, and no official source mentions any such giveaway. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A viral video on social media shows Elon Musk conducting a cryptocurrency giveaway.
- Claimed on: X (Formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

Executive Summary:
A viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated. The original video is from a 2011 incident involving actor and singer Harry Belafonte, who appears to fall asleep during a live satellite interview with KBAK - KBFX Eyewitness News. A thorough analysis of keyframes from the viral video reveals that US President Joe Biden's image was inserted into Harry Belafonte's footage. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.

Claims:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.


Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames.
We found another video uploaded on Oct 18, 2011, by the official channel of KBAK - KBFX Eyewitness News. The title of the video reads, “Official Station Video: Is Harry Belafonte asleep during live TV interview?”

The video closely resembles the recent viral one, and the TV anchor can be heard saying the same thing as in the viral clip. Taking a cue from this, we also ran keyword searches for credible sources and found a Yahoo Entertainment news article featuring the same video uploaded by KBAK - KBFX Eyewitness News.
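When two clips appear to share the same underlying footage, individual keyframes can also be compared programmatically. A minimal sketch is shown below, assuming Python with the Pillow and ImageHash libraries; the file names and the distance threshold are illustrative assumptions, not the method the team used.

```python
# Minimal sketch (assumptions: `pip install pillow imagehash`; file names are hypothetical).
# Compares two frames with a perceptual hash; a small Hamming distance indicates
# that the frames very likely come from the same footage.
from PIL import Image
import imagehash

def frames_match(frame_a: str, frame_b: str, threshold: int = 8) -> bool:
    hash_a = imagehash.phash(Image.open(frame_a))  # perceptual hash of each frame
    hash_b = imagehash.phash(Image.open(frame_b))
    distance = hash_a - hash_b                     # Hamming distance between the two hashes
    return distance <= threshold                   # near-identical frames score well below the threshold

# Example (hypothetical file names): frames_match("viral_frame.jpg", "kbak_2011_frame.jpg")
```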

The reverse image search and keyword search reveal that the recent viral video of US President Joe Biden dozing off during a TV interview has been digitally altered to misrepresent the context. The original video dates back to 2011, and the person in the TV interview was American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. It originates from a 2011 incident involving American singer and actor Harry Belafonte and has been altered to falsely show US President Joe Biden. This is a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Misinformation spreads faster than a pimple before your best friend's wedding, and viral skincare hacks on social media can do more harm than good if smeared on without a second thought. Unverified skincare tips, exaggerated results, and product endorsements lacking proper dermatological backing can often lead to breakouts and serious skin damage.
The Allure and Risks of Online Skincare Trends
In the age of social media, beauty advice is easily accessible, but not all trending skincare hacks are beneficial. Influencers lacking professional dermatological knowledge often endorse "medical grade" skincare products, which may not be suitable for all skin types. Viral DIY skincare hacks, such as natural remedies like multani mitti (Fuller's earth), have found a new audience online. However, if such tips are followed without due care regarding their suitability for different skin types, or without properly formulated ingredients, they can result in skin problems. It is crucial to approach online skincare advice with a critical eye, as not all trends are backed by scientific research.
CyberPeace Recommendations
- Influencer Responsibility and Ethical Endorsements in Skincare
Influencers play a crucial role in shaping public perception in the skincare and lifestyle industries. However, they must exercise due diligence before endorsing skincare products or practices, as misinformation can lead to financial loss and health consequences. Influencers should only promote products they have personally tested or vetted by dermatologists or skincare professionals. They should also research the brand's credibility, check ingredients for safety, and understand the product's target audience.
- Strengthening Digital Literacy in Skincare Spaces
CyberPeace highlights that improving digital literacy is one of the best strategies to stop the spread of false information about skincare. Users nowadays, particularly young people, are continuously exposed to a deluge of wellness and beauty-related content. Without the necessary digital literacy, many people are duped by overstated claims, pseudoscientific cures, and influencer-driven marketing masquerading as sound advice. We recommend supporting digital literacy initiatives that teach users how to evaluate sources, think critically, and comprehend how algorithms promote content. Influencer partnerships, gamified learning modules, and community workshops that promote media literacy can help achieve long-term impact.
- Recommendation for Users to Prioritise Research and Critical Thinking
Users should prioritise research and critical thinking when engaging with skincare content online. It's crucial to distinguish between valid advice and misinformation. Thorough research, including expert reviews, ingredient checks, and scientific sources, is essential. Questioning endorsements and relying on trusted platforms and dermatologists can help ensure a skincare routine based on sound practices.
- Mandating Transparency from Influencers and Brands
Enforcing stronger transparency rules for influencers and skincare companies is a key recommendation. Social media influencers frequently neglect to reveal sponsored collaborations or paid advertisements, giving followers the impression that the skincare advice is based on the creators' own experience and objective judgment. This dishonest practice frequently promotes goods with little to no scientific support and feeds false information. Social media companies need to be proactive in identifying and removing content that violates disclosure and advertising guidelines.
- Creating a Verified Registry for Skincare Professionals
Increasing the voices of real experts is one of the most important strategies to build credibility and trust online. The establishment of a publicly available, validated registry of certified dermatologists, cosmetologists, and skincare scientists is suggested by cybersecurity experts and medical professionals. These experts could then receive a "verified expert" badge from social media companies, making it easier for users to discern between content created by unqualified people and genuine, evidence-based advice. Algorithms that promote such verified content would inevitably limit the dissemination of false information.
- Enforcing Platform Accountability and Reporting System
There needs to be platform-level accountability and safeguard mechanisms in case of any false information about skincare. Platforms should monitor repeat offenders and implement a tiered penalty system that includes content removal and temporary or permanent bans on such malicious user profiles.
References

Introduction
In 2022, Oxfam’s India Inequality Report revealed a worsening digital divide, highlighting that only 38% of households in the country are digitally literate. Further, only 31% of the rural population uses the internet, compared to 67% of the urban population. Over time, with increasing global awareness of the importance of digital privacy, the digital divide has evolved into a digital privacy divide, whereby different levels of privacy are afforded to different sections of society. This further deepens social inequalities and impedes access to fundamental rights.
Digital Privacy Divide: A by-product of the digital divide
The digital divide has evolved from its earlier interpretations into a multi-level issue: level I refers to the lack of physical access to technologies, level II to the lack of digital literacy and skills, and, more recently, level III to the unequal impacts of digital access. The Digital Privacy Divide (DPD) refers to the gaps in digital privacy protection afforded to users based on their socio-demographic patterns. It forms a subset of the digital divide, which involves the uneven distribution, access and usage of information and communication technologies (ICTs). Typically, DPD exists when ICT users receive distinct levels of digital privacy protection; as such, it forms a part of the conversation on digital inequality.
Contrary to popular perception, DPD, which is rooted in notions of privacy, is not always based on ideas of individualism and collectivism and may be shaped by both internal and external factors at the national level. A study on the impacts of DPD conducted in the U.S., India, Bangladesh and Germany highlighted that respondents in Germany and Bangladesh expressed more concern about their privacy than respondents in the U.S. and India. This suggests that despite the U.S. having a strong tradition of individualistic rights, reflected in domestic regulatory frameworks such as the Fourth Amendment, the topic of data privacy has not garnered enough interest from the population. Most individuals consider forgoing the right to privacy a necessary evil in order to access services and schemes and to stay abreast of technological advances. Research shows that 62%-63% of Americans believe that data collection by companies and the government has become an inescapable, necessary evil of modern life. Additionally, 81% believe they have very little control over what data companies collect, and about 81% of Americans believe that the risks of data collection outweigh the benefits. Similarly, in Japan, data privacy is seen as an adopted concept emerging from international pressure to regulate, rather than an ascribed right: since collectivism and collective decision-making are more valued in Japan, privacy is positioned as subjective, timeserving and an idea imported from the West.
Regardless, inequality in privacy preservation often reinforces social inequality. Surveillance practices geared towards specific groups illustrate that marginalised communities are more likely to have less data privacy. For example, migrants, labourers, persons with a conviction history and marginalised racial groups are often subjected to extremely invasive surveillance on suspicion of posing threats and are thus forced to flee their place of birth or residence. This also highlights that the focus of DPD is not limited to those who lack data privacy but extends to those who have (either by design or by force) excess privacy. At one end, excessive surveillance, carried out by both governments and private entities, forces immigrants to wait in deportation centres while their cases are pending; at the other end of the privacy extreme lies a vast number of undocumented individuals who avoid government contact for fear of deportation, despite experiencing high rates of crime victimisation.
DPD is also noted among groups with differing knowledge and skills in cybersecurity. For example, in India, data privacy laws mandate that information be provided on the order of a court or an enforcement agency. However, individuals with knowledge of advanced encryption are adopting communication channels whose encryption protocols the provider cannot control (and are consequently able to exercise their right to privacy more effectively), in contrast with individuals who have little knowledge of encryption, implying a security as well as an intellectual divide. While several options for secure communication exist, such as Pretty Good Privacy (PGP), which enables encrypted emailing, they are complex and not easy to use, in addition to carrying negative reputations, as in the case of the Tor Browser. Cost is also a major factor propelling DPD, since users who cannot afford devices like Apple's, which offer privacy by default, are forced to opt for devices with relatively poor in-built encryption.
Children remain the most vulnerable group. During the pandemic, it was noted that only 24% of Indian households had internet facilities to access e-education, and several reported needing to access free internet outside their homes. These public networks are known for their lack of security and privacy, as traffic can be monitored by the hotspot operator or by others on the network if proper encryption measures are not in place. Elsewhere, students without access to devices for remote learning had limited alternatives and were often forced to rely on Chromebooks and associated Google services. In response, Google provided free Chromebooks and mobile hotspots to students in need during the pandemic, aiming to address the digital divide. However, in 2024, New Mexico was reported to be suing Google for allegedly collecting children’s data through the educational products provided to the state's schools, claiming that Google tracks students' activities on their personal devices outside the classroom. This signals the difficulty of ensuring the privacy of lower-income students while they access basic education.
Policy Recommendations
Digital literacy is one of the critical components in bridging the DPD. It enables individuals to gain the skills needed to recognise and address privacy violations. Studies show that low-income users remain less confident in their ability to manage their privacy settings than high-income individuals. Thus, emphasis should be placed not only on educating people about technology usage but also on privacy practices, so that they improve their internet skills and take informed control of their digital identities.
In the U.S., scholars have noted the role of libraries and librarians in safeguarding intellectual privacy. The Library Freedom Project, for example, has sought to ensure that the skills and knowledge required to ensure internet freedoms are available to all. The Project channels the core values of the library profession: intellectual freedom, literacy, equity of access to recorded knowledge and information, privacy and democracy. As a result, the Project has successfully conducted workshops on internet privacy for the public and openly objected to the Department of Homeland Security’s attempts to shut down the use of encryption technologies in libraries. The International Federation of Library Associations adopted a Statement on Privacy in the Library Environment in 2015, which specified that “when libraries and information services provide access to resources, services or technologies that may compromise users’ privacy, libraries should encourage users to be aware of the implications and provide guidance in data protection and privacy.” This should serve as an indicative case study for setting up similar protocols in inclusive public institutions in India, such as Anganwadis, local libraries, skill development centres and non-government/non-profit organisations where free education is disseminated. The workshops conducted must address two critical aspects: first, enhancing the know-how of using public digital infrastructure and popular technologies (thereby de-alienating technology); and second, shifting the viewpoint so that privacy is seen as a right an individual has and not something that they own.
However, digital literacy should not be wholly relied upon, since it shifts the responsibility of privacy protection onto the individual, who may either be unaware of the risks or unable to control them. Data literacy also does not address the larger issues of data brokers, consumer profiling, surveillance and so on. Consequently, companies should be obligated to provide simplified privacy summaries, in addition to creating accessible, easy-to-use technical products and privacy tools. Most notable legislations address this problem by mandating notices and consent for the collection of users' personal data, despite slow enforcement. India's Digital Personal Data Protection Act, 2023 goes a step further, aiming to address DPD by not only mandating valid consent but also ensuring that privacy policies remain accessible in local languages, given the diversity of the population.
References
- https://idronline.org/article/inequality/indias-digital-divide-from-bad-to-worse/
- https://arxiv.org/pdf/2110.02669
- https://arxiv.org/pdf/2201.07936#:~:text=The%20DPD%20index%20is%20a,(33%20years%20and%20over).
- https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
- https://eprints.lse.ac.uk/67203/1/Internet%20freedom%20for%20all%20Public%20libraries%20have%20to%20get%20serious%20about%20tackling%20the%20digital%20privacy%20divi.pdf
- https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=6265&context=law_lawreview
- https://bosniaca.nub.ba/index.php/bosniaca/article/view/488/pdf
- https://www.hindustantimes.com/education/just-24-of-indian-households-have-internet-facility-to-access-e-education-unicef/story-a1g7DqjP6lJRSh6D6yLJjL.html
- https://www.forbes.com/councils/forbestechcouncil/2021/05/05/the-pandemic-has-unmasked-the-digital-privacy-divide/
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.isc.meiji.ac.jp/~ethicj/Privacy%20protection%20in%20Japan.pdf
- https://socialchangenyu.com/review/the-surveillance-gap-the-harms-of-extreme-privacy-and-data-marginalization/