#FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project"
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claims:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
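For readers who want to replicate this first step, the sketch below shows one assumed way (not necessarily the team's exact tooling) to extract keyframes from a suspect clip with OpenCV so that each frame can be run through a reverse image search such as Google Lens. The video file name is a placeholder.

```python
# Keyframe extraction sketch for reverse image search; the file name is a placeholder.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0, out_prefix: str = "frame") -> int:
    """Save one frame every `every_n_seconds` seconds as a JPEG and return the count."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0    # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))  # frames to skip between captures
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

print(extract_keyframes("viral_video.mp4"))  # each saved JPEG can then be searched with Google Lens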

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. Research using tools such as Google Lens and AI detection tools confirms that the video was manipulated using AI technology, and no official source mentions any such endorsement. The CyberPeace Research Team therefore confirms that the claim is false and misleading.
- Claim: A video viral on social media claims that Justin Trudeau promotes an investment project.
- Claimed on: Facebook
- Fact Check: False & Misleading

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry; it affects everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation about all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation, which may harm reputations, disrupt business, and erode consumer trust. MSMEs are particularly susceptible because they have minimal resources at their disposal and cannot afford to invest in the talent, technology, and training a business needs to protect itself in today’s digital-first ecosystem. Mis/disinformation affecting MSMEs can arise from internal communications, supply chain partners, social media, competitors, and more. To address these dangers, MSMEs must take proactive steps such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols, and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage which can include financial loss, reputational damage, operational damages, and regulatory noncompliance. Various assessment methodologies can be used to analyze the impact of misinformation, including surveys, interviews, case studies, social media and news data analysis, and risk analysis practices.
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship program of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’. The Digital Personal Data Protection Act, 2023 is a new law aimed at protecting personal data. Fact-check units (FCUs) are government and independent private bodies that verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific: they lack tailored guidelines, their awareness initiatives on misinformation have had limited impact, and they offer an insufficient support structure for MSMEs to verify information and protect themselves.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation in the MSME sector, recommendations include creating a dedicated Misinformation Helpline, promoting awareness campaigns, providing regulatory support and guidelines, and collaborating with tech platforms and expert organisations to identify and curb misinformation.
Organisational recommendations include adopting information verification protocols so that consumers of information verify critical information before acting on it, conducting regular employee training on identifying and managing misinformation, creating a crisis management plan for misinformation incidents, and forming collaboration networks with other MSMEs to share verified information and best practices.
MSMEs should also engage with technological solutions: AI and ML tools for detecting and flagging potential misinformation, fact-checking tools, and cybersecurity measures that prevent the spread of misinformation through digital channels. A minimal illustration of such a flagging tool is sketched below.
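As an illustration only, the sketch below shows one way a small business could use a simple TF-IDF text classifier to flag incoming posts for human review. The training texts and labels are invented placeholders, not real data or a recommended production setup.

```python
# Minimal sketch of an AI/ML misinformation-flagging aid for an MSME.
# All training examples and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Company X is shutting down all its stores tomorrow",        # rumour
    "Company X announces quarterly results in a press release",  # verified
    "Miracle product cures every disease, regulators silent",    # rumour
    "Regulator publishes updated compliance guidelines",         # verified
]
train_labels = [1, 0, 1, 0]  # 1 = likely misinformation, 0 = likely legitimate

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Flag new posts whose predicted probability of being misinformation is high.
# Anything flagged should still be checked by a human before acting on it.
new_posts = ["Company X has recalled every product it ever sold"]
for post, prob in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    print(f"{prob:.2f}  {post}")
```

In practice, such a classifier only prioritises content for review; the verification protocols described above remain the deciding step.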
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing their impact on MSMEs, identifying current policies and gaps, and providing actionable recommendations. The implementation strategy for policies to counter misinformation in the MSME sector can begin with pilot programs in key MSME clusters and stakeholder engagement involving industry associations, tech companies, and government bodies, followed by a feedback mechanism for continuous improvement of the framework and, finally, a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx

Introduction:
CDR stands for Call Detail Record. Telecom companies hold the call detail data of their users; because it amounts to a large volume of data, they retain it for a period of six months. CDRs play a significant role in investigations and court cases and can be used as pivotal evidence in court proceedings to prove or disprove certain facts and circumstances. The power to intercept call detail records may be exercised only on reasonable grounds and only by the authority authorised under the law.
Admissibility of CDRs in Courts:
Call Detail Records (CDRs) can be effective pieces of evidence that assist the court in ascertaining the facts of a particular case and inquiring into the commission of an offence. Judicial pronouncements have made it clear that CDRs can be used as supporting or secondary evidence in court; however, they cannot be the sole basis of a conviction. Section 92 of the Criminal Procedure Code, 1973 provides the procedure and empowers certain authorities to apply for the intervention of a court or competent authority to seek CDRs.
Legal provisions to obtain CDR:
CDRs can be obtained under the statutory provisions contained in Section 92 of the Criminal Procedure Code, 1973, or under Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007. In 2016, the Ministry of Home Affairs also issued guidelines for seeking call detail records (CDRs).
How long are CDRs stored with telecom companies? (Data Retention)
Telecom companies retain call data for a period of six months. Because this data requires substantial storage, amounting to several petabytes per year, records older than six months are archived to tape.
New Delhi ₹25-crore jewellery heist
Recently, a ₹25-crore jewellery theft was carried out at a jewellery shop in Delhi. It was planned and executed by a man from Chhattisgarh, who returned home after committing the crime. The Delhi Police began their investigation by analysing the mobile numbers that were active at the crime scene. Using software capable of processing large volumes of call data, they narrowed down roughly 5,000 mobile numbers active around the scene and identified one registered outside Delhi. Surveillance of that number revealed that the suspect had moved from Delhi to Madhya Pradesh and then on to Bhilai in Chhattisgarh, where he was arrested. This incident highlights how technology and call data can assist law enforcement agencies in investigating and finding the real culprits.
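To make the workflow concrete, the hypothetical sketch below shows the kind of filtering such analysis software performs on a tower dump: keep the numbers active during the relevant window, then isolate those registered outside Delhi. The file name, column names, and time window are assumptions for illustration; real CDR analysis is carried out by law enforcement under lawful authorisation only.

```python
# Hypothetical tower-dump filtering sketch; all names and values are placeholders.
import pandas as pd

# Assumed columns: msisdn, registration_circle, first_seen, last_seen
dump = pd.read_csv("tower_dump.csv", parse_dates=["first_seen", "last_seen"])

# Placeholder window of the offence (not the actual case data).
window_start = pd.Timestamp("2023-01-01 01:00")
window_end = pd.Timestamp("2023-01-01 03:00")

# Numbers whose activity overlaps the window of the offence...
active = dump[(dump["first_seen"] <= window_end) & (dump["last_seen"] >= window_start)]

# ...and which are registered outside the Delhi telecom circle, shrinking
# thousands of numbers to a short list for further lawful verification.
outsiders = active[active["registration_circle"] != "Delhi"]
print(outsiders[["msisdn", "registration_circle"]])
```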
Conclusion:
CDRs are call detail records retained by telecom companies for a period of six months; they can be obtained only through lawful procedure and by competent authorities. CDRs can help courts and law enforcement agencies ascertain the facts of a case and prove or disprove particular claims. It is important to reiterate that unauthorised access to CDRs is not permitted: a directive from a court or competent authority is required to obtain CDRs from telecom companies.
References:
- https://indianlegalsystem.org/cdr-the-wonder-word/#:~:text=CDR%20is%20admissible%20as%20secondary,the%20Indian%20Evidence%20Act%2C%201872.
- https://timesofindia.indiatimes.com/city/delhi/needle-in-a-haystack-how-cops-scanned-5k-mobile-numbers-to-crack-rs-25cr-heist/articleshow/104055687.cms?from=mdr
- https://www.ndtv.com/delhi-news/just-one-man-planned-executed-rs-25-crore-delhi-heist-another-thief-did-him-in-4436494

Introduction
The United Nations General Assembly (UNGA) has unanimously adopted the first global resolution on Artificial Intelligence (AI), encouraging countries to take human rights into consideration, keep personal data safe, and monitor the threats associated with AI. This non-binding resolution, proposed by the United States and co-sponsored by China and over 120 other nations, advocates strengthening privacy policies. The step is crucial for how governments across the world shape AI's growth, given the dangers AI carries that could undermine the protection and promotion of human dignity, human rights, and fundamental freedoms. The resolution emphasizes the importance of respecting human rights and fundamental freedoms throughout the life cycle of AI systems, highlighting the benefits of digital transformation and safe AI systems.
Key highlights
● This is indeed a landmark move by the UNGA, which adopted the first global resolution on AI. This resolution encourages member countries to safeguard human rights, protect personal data, and monitor AI for risks.
● Global leaders have shown their consensus for safe, secure, and trustworthy AI systems that advance sustainable development and respect fundamental freedoms.
● The resolution is the latest in a series of initiatives by governments around the world to shape AI. AI will therefore have to be created and deployed with humanity and dignity, safety and security, and human rights and fundamental freedoms in view throughout the life cycle of AI systems.
● The UN resolution encourages global cooperation, warns against improper use of AI, and emphasizes human rights.
● The resolution aims to protect against potential harm and to ensure that everyone can enjoy the benefits of AI. The United States worked with over 120 countries at the United Nations, including Russia, China, and Cuba, to negotiate the text of the adopted resolution.
Brief Analysis
AI has become increasingly prevalent in recent years, with chatbots such as ChatGPT taking the world by storm. AI has been steadily attempting to replicate human-like thinking and problem-solving. Machine learning, a key aspect of AI, involves learning from experience and identifying patterns in order to solve problems autonomously. The contemporary emergence of AI has, however, raised questions about its ethical implications, its potential negative impact on society, and whether it is already too late to control it.
While AI can solve problems quickly and perform various tasks with ease, it also brings its own set of problems. As AI continues to grow, global leaders have called for regulation to prevent the significant harm an unregulated AI landscape could cause and to encourage the use of trustworthy AI. The European Union (EU) has adopted its own legislation, the EU AI Act. Recently, a Senate bill called the AI Consent Bill was introduced in the US. Similarly, India is proactively working towards a more regulated AI landscape by fostering dialogue and taking significant measures. The Ministry of Electronics and Information Technology (MeitY) recently issued an advisory on AI which requires explicit permission before under-testing or unreliable AI models are deployed on the Indian internet. The advisory also outlines measures to combat deepfakes and misinformation.
AI has thus become a powerful tool that has raised concerns about its ethical implications and the potential negative influence on society. Governments worldwide are taking action to regulate AI and ensure that it remains safe and effective. Now, the groundbreaking move of the UNGA, which adopted the global resolution on AI, with the support of all 193 U.N. member nations, shows the true potential of efforts by countries to regulate AI and promote safe and responsible use globally.
New AI tools have emerged in the public sphere and may threaten humanity in unexpected ways. Through machine learning, AI systems can improve themselves, and developers are often surprised by the emergent abilities and qualities of these tools. The ability to manipulate and generate language, whether in words, images, or sounds, is the most important aspect of the current phase of the ongoing AI revolution. AI may have far-reaching implications in the future; hence, it is high time to regulate it and promote its safe, secure, and responsible use.
Conclusion
The UNGA has approved its global resolution on AI, marking significant progress towards creating global standards for the responsible development and employment of AI. The resolution underscores the critical need to protect human rights, safeguard personal data, and closely monitor AI technologies for potential hazards. It calls for more robust privacy regulations and recognises the dangers associated with improper AI systems. This profound resolution reflects a unified stance among UN member countries on overseeing AI to prevent possible negative effects and promote safe, secure and trustworthy AI.
References