#FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project”
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claims:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. The research using various tools such as Google Lens, AI detection tool confirms that the video is manipulated using AI technology. Additionally, there is no information in any official sources. Thus, the CyberPeace Research Team confirms that the video was manipulated using AI technology, making the claim false and misleading.
- Claim: Justin Trudeau promotes an investment project viral on social media.
- Claimed on: Facebook
- Fact Check: False & Misleading
Related Blogs

Introduction
Artificial Intelligence (AI) is fast transforming our future in the digital world, transforming healthcare, finance, education, and cybersecurity. But alongside this technology, bad actors are also weaponising it. More and more, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and speed up social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of yesteryears, these AI-based messages are tailored according to the victim's location, language, and professional background, increasing the attack success rate considerably. Example: It has recently been reported by OpenAI and Microsoft that Russian and North Korean APTs have employed LLMs to create customised phishing baits and malware obfuscation notes.
- Malware Obfuscation and Script Generation- Big Language Models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While the majority of AI instruments contain safety mechanisms to guard against abuse, threat actors often exploit "jailbreaking" to evade these protections. Once such constraints are lifted, the model can be utilised to develop polymorphic malware that alters its code composition to avoid detection. It can also be used to obfuscate PowerShell or Python scripts to render them difficult for conventional antivirus software to identify. Also, LLMs have been employed to propose techniques for backdoor installation, additional facilitating stealthy access to hijacked systems.
- Disinformation and Narrative Manipulation
State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially on election, protest, and geopolitical dispute days. With LLMs' assistance, these actors can create massive amounts of ersatz news stories, deepfake interview transcripts, imitation social media posts, and bogus public remarks on online forums and petitions. The localisation of content makes this strategy especially perilous, as messages are written with cultural and linguistic specificity, making them credible and more difficult to detect. The ultimate aim is to seed societal unrest, manipulate public sentiments, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report called "Disrupting Malicious Uses of AI" and the “Staying ahead of threat actors in the age of AI”, which outlined how state-affiliated actors had been testing and misusing its language models for malicious intent. The report named few advanced persistent threat (APT) groups, each attributed to particular nation-states. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and expanding operations. Significantly, the report mentioned that the tools were not utilized to produce malware, but rather to support preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the largest worries is how malicious users can "jailbreak" AI models, misleading them into generating banned content using adversarial input. Some methods employed are:
- Roleplay: Simulating the AI being a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Proposing sensitive inquiries in less frequently moderated languages
- Prompt Injection: Lacing dangerous requests within innocent-appearing questions
These methods have enabled attackers to bypass moderation tools, transforming otherwise moral tools into cybercrime instruments.
Conclusion
As AI generations evolve and become more accessible, its application by state-sponsored cyber actors is unprecedentedly threatening global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a multiplier of adversarial campaigns. AI tools such as ChatGPT, which were created for benevolent purposes, can be targeted to multiply phishing, propaganda, and social engineering attacks. The cross-border governance, ethical development practices, and cyber hygiene practices need to be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

Executive Summary:
The internet is a nest of scams and there's much need to be careful with predatory ideas that prey on the naïve people. Within the recent days, a malicious campaign has emerged falsely alleging 28 day free recharge by courtesy of the Prime Minister Narendra Modi. This blog seeks to analyze the tactics used by this scam in luring the victims and give an overview on how one can identify and keep away from such fraudulent activities.
Claim:
In view of the increasing support for the BJP 2024 election, a rumor has allegedly claimed that the Prime Minister Narendra Modi offering a free recharge with a validity period of up-to twenty eight days at cost of ₹239 to all Indian users. The message encourages the users to click on a given link in order to redeem the free recharge, pointing out that this offer is valid until January 26th of 2024.
The Deceptive Journey:
- Insecure Links:The research begins with a suspicious link (http://offerintro[.]com/BJP2024), without any credibility that honest sites use to protect the user information. We should keep in mind that the links which aren’t secure may easily lead to phishing and other cyber threats.
- Multiple Redirects:When users click the link, they are immediately directed through a series of links. This common tactic used by scammers is designed to hide the true origin of their fraudulent scheme, making it difficult for users' efforts to identify the malicious activity.
- False Promises and Fake Comments:The landing page has a banner of the Prime Minister Narendra Modi that makes it look like this is an official channel and hence authentic. Further, false comments can be also included to compliment the alleged initiative. But remember that genuine government announcements are made through legal channels, not by the shady websites.
- Mobile Number Request:As the next step, the users enter their mobile numbers in the specified field. True initiatives never really need the personal information to pass through unofficial lines. This is actually a trick that scammers use to acquire the important information.
- Share to Activate:Once a user has entered the mobile number, he/she is prompted to share the link with others in order to “activate” promised free recharge. This method is most often used by scammers for spreading their fraudulent message beyond the targeted victim.
- Fake Progress Display:When the users have done their part by sharing the link, a false recharge in progress bar is shown to make them believe that it has started. But the consumers are unwittingly playing a part in the fraud.
- Recharge Completion Pop-up:The last stage of fraud includes a pop-up saying that the recharge is done; leaving users with the false belief that they have benefited from a legitimate government initiative.
What we Analyze :
- It is important to note that at this particular point, there has not been any official declaration or a proper confirmation of an offer made by the Prime Minister or from their government. So, people must be very careful when encountering such messages because they are often employed as lures in phishing attacks or misinformation campaigns. Before engaging or transmitting such claims, it is always advisable to authenticate the information from trustworthy sources in order to protect oneself online and prevent the spread of wrongful information.
- The campaign is hosted on a third party domain instead of any official Government Website, this raised suspicion. Also the domain has been registered in very recent times.

- Domain Name: offerintro[.]com
- Registry Domain ID: 2791466714_DOMAIN_COM-VRSN
- Registrar WHOIS Server: whois.godaddy[.]com
- Registrar URL: https://www.godaddy[.]com
- Registrar: GoDaddy[.]com, LLC
- Registrar IANA ID: 146
- Updated Date: 2023-06-18T20:37:20Z
- Creation Date: 2023-06-18T20:37:20Z
- Registrar Registration Expiration Date: 2024-06-18T20:37:20Z
- Name Server: ANAHI.NS.CLOUDFLARE.COM
- Name Server: GARRETT.NS.CLOUDFLARE.COM
CyberPeace Advisory:
- Stay Informed: Beware of the scams and keep yourself updated through authentic government platforms.
- Verify Website Security: Do not get engaged with any insecure HTTP links but focus on URLs that have secure encryption (HTTPS).
- Protect Personal Information: However, be cautious when sharing personal information – especially in a non-official channel.
- Report Suspicious Activity: If you discover any scams or fraudulent activities, report it and the relevant sites to help avoid others from being defrauded of their hard earned money.
Conclusion:
Summing up, Prime Minister Narendra Modi Free Recharge fraud is an excellent illustration that there is always some danger within cyberspace. The way of the method, from insecure links and also multiple redirects to false promises and really data collection make it clear that internet users should be more careful. The importance of staying up-to-date with what is happening in this new digital world, verifying credibility and also privacy are paramount. By being cautiously aware, the people can keep themselves safe from such fraudulent acts and also play a role in ensuring security even for an online world. Remember that an offer which is in a perfect world should be illegal. Therefore, after doing a thorough research we found this campaign to be fake.

Introduction
The Ministry of Electronics and Information Technology recently released the IT Intermediary Guidelines 2023 Amendment for social media and online gaming. The notification is crucial when the Digital India Bill’s drafting is underway. There is no denying that this bill, part of a series of bills focused on amendments and adding new provisions, will significantly improve the dynamics of Cyberspace in India in terms of reporting, grievance redressal, accountability and protection of digital rights and duties.
What is the Amendment?
The amendment comes as a key feature of cyberspace as the bill introduces fact-checking, a crucial aspect of relating information on various platforms prevailing in cyberspace. Misformation and disinformation were seen rising significantly during the Covid-19 pandemic, and fact-checking was more important than ever. This has been taken into consideration by the policymakers and hence has been incorporated as part of the Intermediary guidelines. The key features of the guidelines are as follows –
- The phrase “online game,” which is now defined as “a game that is offered on the Internet and is accessible by a user through a computer resource or an intermediary,” has been added.
- A clause has been added that emphasises that if an online game poses a risk of harm to the user, intermediaries and complaint-handling systems must advise the user not to host, display, upload, modify, publish, transmit, store, update, or share any data related to that risky online game.
- A proviso to Rule 3(1)(f) has been added, which states that if an online gaming intermediary has provided users access to any legal online real money game, it must promptly notify its users of the change, within 24 hours.
- Sub-rules have been added to Rule 4 that focus on any legal online real money game and require large social media intermediaries to exercise further due diligence. In certain situations, online gaming intermediaries:
- Are required to display a demonstrable and obvious mark of verification of such online game by an online gaming self-regulatory organisation on such permitted online real money game
- Will not offer to finance themselves or allow financing to be provided by a third party.
- Verification of real money online gaming has been added to Rule 4-A.
- The Ministry may name as many self-regulatory organisations for online gaming as it deems necessary for confirming an online real-money game.
- Each online gaming self-regulatory body will prominently publish on its website/mobile application the procedure for filing complaints and the appropriate contact information.
- After reviewing an application, the self-regulatory authority may declare a real money online game to be a legal game if it is satisfied that:
- There is no wagering on the outcome of the game.
- Complies with the regulations governing the legal age at which a person can engage into a contract.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have a new rule 4-B (Applicability of certain obligations after an initial period) that states that the obligations of the rule under rules 3 and 4 will only apply to online games after a three-month period has passed.
- According to Rule 4-C (Obligations in Relation to Online Games Other Than Online Real Money Games), the Central Government may direct the intermediary to make necessary modifications without affecting the main idea if it deems it necessary in the interest of India’s sovereignty and integrity, the security of the State, or friendship with foreign States.
- Intermediaries, such as social media companies or internet service providers, will have to take action against such content identified by this unit or risk losing their “safe harbour” protections under Section 79 of the IT Act, which let intermediaries escape liability for what third parties post on their websites. This is problematic and unacceptable. Additionally, these notified revisions can circumvent the takedown order process described in Section 69A of the IT Act, 2000. They also violated the ruling in Shreya Singhal v. Union of India (2015), which established precise rules for content banning.
- The government cannot decide if any material is “fake” or “false” without a right of appeal or the ability for judicial monitoring since the power to do so could be abused to thwart examination or investigation by media groups. Government takedown orders have been issued for critical remarks or opinions posted on social media sites; most of the platforms have to abide by them, and just a few, like Twitter, have challenged them in court.
Conclusion
The new rules briefly cover the aspects of fact-checking, content takedown by Govt, and the relevance and scope of sections 69A and 79 of the Information Technology Act, 2000. Hence, it is pertinent that the intermediaries maintain compliance with rules to ensure that the regulations are sustainable and efficient for the future. Despite these rules, the responsibility of the netizens cannot be neglected, and hence active civic participation coupled with such efficient regulations will go a long way in safeguarding the Indian cyber ecosystem.