#FactCheck - Deepfake Video Falsely Claims to Show a Massive Rally Held in Manipur
Executive Summary:
A viral online video claims to show a massive rally organised in Manipur to call for an end to the ongoing violence in the state. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to fabricate the crowd. No original footage of any such protest exists. The claim is therefore false and misleading.
Claims:
A viral post falsely claims to show a massive rally held in Manipur.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. We could not locate any authentic source mentioning such an event, recent or past. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia and Hive AI Detection, to analyze the video. The analysis confirmed with 99.7% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation", particularly in the crowd and the colour gradients, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Manipur State officials revealed no mention of any such rally. No credible reports of any such protest were found, further confirming the video's inauthenticity.
Conclusion:
The viral video claims to show a massive rally held in Manipur. Analysis using AI detection tools such as truemedia.org confirms that the video was manipulated using AI technology, and no official source corroborates any such rally. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A video of a massive rally held in Manipur against the ongoing violence is viral on social media.
- Claimed on: Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs
Introduction
The link between social media and misinformation is undeniable. Misinformation, particularly the kind that evokes emotion, spreads like wildfire on social media and has serious consequences, like undermining democratic processes, discrediting science, and promulgating hateful discourses which may incite physical violence. If left unchecked, misinformation propagated through social media has the potential to incite social disorder, as seen in countless ethnic clashes worldwide. This is why social media platforms have been under growing pressure to combat misinformation and have been developing models such as fact-checking services and community notes to check its spread. This article explores the pros and cons of the models and evaluates their broader implications for online information integrity.
How the Models Work
- Third-Party Fact-Checking Model (formerly used by Meta): Meta initiated this program in 2016 after claims of extraterritorial election tampering through dis/misinformation on its platforms. It entered partnerships with third-party organizations like AFP and specialist sites like Lead Stories and PolitiFact, which are certified by the International Fact-Checking Network (IFCN) for meeting neutrality, independence, and editorial quality standards. These fact-checkers identify misleading claims that go viral on the platforms and publish verified articles on their websites providing the correct information. They also submit these articles to Meta through an interface, which may link a fact-checked article to the social media post containing the factually incorrect claim. The post then gets flagged for false or misleading content, and a link to the article appears under the post for users to refer to. Such content is demoted by the platform algorithm, though not removed entirely unless it violates Community Standards. However, in January 2025, Meta announced it was scrapping this program and would begin testing X's Community Notes model in the USA before rolling it out to the rest of the world. Meta alleges that the independent fact-checking model is riddled with personal biases, lacks transparency in decision-making, and has evolved into a censorship tool.
- Community Notes Model (used by X and being tested by Meta): This model relies on crowdsourced contributors who can sign up for the program, write contextual notes on posts, and rate notes written by other users on X. The platform uses a bridging algorithm to publicly display only those notes that receive cross-ideological consensus from raters across the political spectrum. It does this by boosting notes that receive support regardless of the political leaning of the raters, which it infers from their engagement with previous notes. The benefit of this system is that biases are less likely to creep into the flagging mechanism. Further, the process is more transparent than an independent fact-checking mechanism, since all Community Notes contributions are publicly available for inspection and the ranking algorithm can be accessed by anyone, allowing for external evaluation of the system.
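The bridging idea described above can be sketched with a small matrix-factorisation model: each rating is explained by a general "helpfulness" intercept per note plus a one-dimensional factor that absorbs ideological alignment, so a note's intercept reflects support across viewpoints rather than within one camp. This is a minimal illustrative sketch of the concept, not X's production algorithm; all parameter values and the function name are assumptions for illustration.

```python
import numpy as np

def bridging_scores(ratings, n_iter=2000, lr=0.05, reg=0.1, seed=0):
    """Estimate note 'helpfulness' intercepts from a user x note rating
    matrix (1 = helpful, 0 = not helpful, np.nan = unrated).

    Model (simplified version of the bridging idea):
        rating[u, n] ~ mu + b_user[u] + b_note[n] + f_user[u] * f_note[n]

    The 1-D factor f absorbs ideological alignment between raters and
    notes, so b_note captures support that *crosses* viewpoints.
    Illustrative sketch only -- not X's production model.
    """
    rng = np.random.default_rng(seed)
    n_users, n_notes = ratings.shape
    mask = ~np.isnan(ratings)          # which cells were actually rated
    r = np.nan_to_num(ratings)

    mu = 0.0
    bu = np.zeros(n_users)             # per-user leniency
    bn = np.zeros(n_notes)             # per-note helpfulness intercept
    fu = rng.normal(0, 0.1, n_users)   # user "viewpoint" factor
    fn = rng.normal(0, 0.1, n_notes)   # note "viewpoint" factor

    for _ in range(n_iter):            # plain gradient descent
        pred = mu + bu[:, None] + bn[None, :] + np.outer(fu, fn)
        err = (r - pred) * mask        # only rated cells contribute
        mu += lr * err.mean()
        bu += lr * (err.sum(axis=1) - reg * bu)
        bn += lr * (err.sum(axis=0) - reg * bn)
        fu += lr * (err @ fn - reg * fu)
        fn += lr * (err.T @ fu - reg * fn)

    return bn  # higher intercept = broader cross-ideology support
```

For example, with two rater camps, a note rated helpful by both camps ends up with a higher intercept than a note rated helpful by only one camp, even if both have many "helpful" votes; the factor term explains away one-sided support.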
CyberPeace Insights
Meta’s uptake of a crowdsourced model signals social media’s shift toward decentralized content moderation, giving users more influence over what gets flagged and why. However, the model’s reliance on cross-ideological agreement can be time-consuming. A study (by Wirtschafter & Majumder, 2023) shows that only about 12.5 per cent of all submitted notes are ever shown to the public, leaving most misleading content unchecked. Further, many notes on divisive issues like politics and elections may never see the light of day, since reaching consensus on such topics is hard. This means that many misleading posts may not be publicly flagged at all, hindering risk mitigation efforts. It casts doubt on the model’s ability to check the virality of posts that can have adverse societal impacts, especially on vulnerable communities. On the other hand, the fact-checking model suffers from a lack of transparency, which has damaged user trust and led to allegations of bias.
Since both models have their advantages and disadvantages, the future of misinformation control will require a hybrid approach. Data accuracy and polarization through social media are issues bigger than an exclusive tool or model can effectively handle. Thus, platforms can combine expert validation with crowdsourced input to allow for accuracy, transparency, and scalability.
Conclusion
Meta’s shift to a crowdsourced model of fact-checking is likely to have bigger implications on public discourse since social media platforms hold immense power in terms of how their policies affect politics, the economy, and societal relations at large. This change comes against the background of sweeping cost-cutting in the tech industry, political changes in the USA and abroad, and increasing attempts to make Big Tech platforms more accountable in jurisdictions like the EU and Australia, which are known for their welfare-oriented policies. These co-occurring contestations are likely to inform the direction the development of misinformation-countering tactics will take. Until then, the crowdsourcing model is still in development, and its efficacy is yet to be seen, especially regarding polarizing topics.
References
- https://www.cyberpeace.org/resources/blogs/new-youtube-notes-feature-to-help-users-add-context-to-videos
- https://en-gb.facebook.com/business/help/315131736305613?id=673052479947730
- http://techxplore.com/news/2025-01-meta-fact.html
- https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
- https://communitynotes.x.com/guide/en/about/introduction
- https://blogs.lse.ac.uk/impactofsocialsciences/2025/01/14/do-community-notes-work/?utm_source=chatgpt.com
- https://www.techpolicy.press/community-notes-and-its-narrow-understanding-of-disinformation/
- https://www.rstreet.org/commentary/metas-shift-to-community-notes-model-proves-that-we-can-fix-big-problems-without-big-government/
- https://tsjournal.org/index.php/jots/article/view/139/57

Introduction
Misinformation spreads faster than a pimple before your best friend's wedding, and viral skincare hacks on social media can do more harm than good if applied without a second thought. Unverified skincare tips, exaggerated results, and product endorsements lacking proper dermatological backing can often lead to breakouts and serious skin damage.
The Allure and Risks of Online Skincare Trends
In the age of social media, beauty advice is easily accessible, but not all trending skincare hacks are beneficial. Influencers lacking professional dermatological knowledge often endorse "medical grade" skincare products, which may not be suitable for all skin types. Viral DIY skincare hacks, such as natural remedies like multani mitti (Fuller's earth), have found a new audience online. However, if such skincare tips are followed without due care regarding their suitability for different skin types, or without properly formulated ingredients, they can result in skin problems. It is crucial to approach online skincare advice with a critical eye, as not all trends are backed by scientific research.
CyberPeace Recommendations
- Influencer Responsibility and Ethical Endorsements in Skincare
Influencers play a crucial role in shaping public perception in the skincare and lifestyle industries. However, they must exercise due diligence before endorsing skincare products or practices, as misinformation can lead to financial loss and health consequences. Influencers should only promote products they have personally tested or vetted by dermatologists or skincare professionals. They should also research the brand's credibility, check ingredients for safety, and understand the product's target audience.
- Strengthening Digital Literacy in Skincare Spaces
CyberPeace highlights that improving digital literacy is one of the best strategies to stop the spread of false information about skincare. Users nowadays, particularly young people, are continuously exposed to a deluge of wellness and beauty-related content. Many people are duped by overstated claims, pseudoscientific cures, and influencer-driven marketing masquerading as sound advice if they lack the necessary digital literacy. We recommend supporting digital literacy initiatives that teach users how to evaluate sources, think critically, and comprehend how algorithms promote content. Long-term impact is thought to be achieved through influencer partnerships, gamified learning modules, and community workshops that promote media literacy.
- Recommendation for Users to Prioritise Research and Critical Thinking
Users should prioritise research and critical thinking when engaging with skincare content online. It's crucial to distinguish between valid advice and misinformation. Thorough research, including expert reviews, ingredient checks, and scientific sources, is essential. Questioning endorsements and relying on trusted platforms and dermatologists can help ensure a skincare routine based on sound practices.
- Mandating Transparency from Influencers and Brands
Enforcing stronger transparency laws for influencers and skincare companies is a key suggestion. Social media influencers frequently neglect to reveal sponsored collaborations or paid advertisements, giving followers the impression that the skincare advice is based on the creators' own experience and objective judgment. This dishonest practice frequently promotes products with little to no scientific support and feeds false information. Social media companies need to be proactive in identifying and removing content that violates disclosure and advertising guidelines.
- Creating a Verified Registry for Skincare Professionals
Amplifying the voices of real experts is one of the most important strategies to build credibility and trust online. Cybersecurity experts and medical professionals suggest establishing a publicly available, validated registry of certified dermatologists, cosmetologists, and skincare scientists. These experts could then receive a "verified expert" badge from social media companies, making it easier for users to distinguish genuine, evidence-based advice from content created by unqualified people. Algorithms that promote such verified content would naturally limit the spread of false information.
- Enforcing Platform Accountability and Reporting System
Platforms need accountability and safeguard mechanisms for handling false skincare information. They should monitor repeat offenders and implement a tiered penalty system that includes content removal and temporary or permanent bans on such malicious user profiles.

On June 5th, the world comes together to reflect on how the way we live impacts the environment. We discuss conserving water, cutting back on plastic, and planting trees, but how often do we think about the environmental impact of our digital lives?
The internet is ubiquitous but invisible in a world that is becoming more interconnected by the day. It powers our communications, meetings, and memories. However, there is a price for this digital convenience: carbon emissions.
A Digital Carbon Footprint: What Is It?
Electricity is necessary for every video we stream, email we send, and file we store on the cloud, and almost 60% of the electricity produced today is generated from burning fossil fuels. The digital world consumes an enormous amount of energy, from the energy-hungry data centres that house our information to the networks that transmit it. The greenhouse gas emissions produced by our use of digital tools and services are therefore referred to as our "digital carbon footprint."
To put it in perspective:
- Up to 150–200 grams of CO₂ can be produced by streaming an hour-long HD video on your phone.
- A typical email sent can release about 4 grams of CO₂, and more if it contains attachments.
- Comparable to the airline industry, the internet as a whole accounts for 1.5% to 4% of global greenhouse gas emissions.
Why It Matters
Ironically, although digital life frequently feels "clean" and weightless, it is backed by enormous, power-hungry infrastructure. Our online activity is growing at a rapid pace as digital penetration increases, and with the advent of AI and big data, the demand for energy is only going to rise. The harms of air, water, and soil degradation and biodiversity loss are already upon us. This World Environment Day is the right time to reconsider how we use technology.
What Can You Do?
The good news is that even minor adjustments to our online conduct can have an impact.
🗑️ Clear out your digital clutter by getting rid of unnecessary emails, apps, and files.
📥 Unsubscribe from mailing lists that you no longer use.
📉 When HD is not required, stream videos with lower quality.
⚡ Make use of energy-saving gadgets and disconnect them when not in use.
🌐 Make the move to renewable energy-powered, environmentally friendly cloud providers.
🗳️ Support informed policy by engaging with your elected representatives and advocating for greener tech policies. Knowing your digital rights and responsibilities can help shape smarter policies and a healthier planet.
We at the CyberPeace Foundation think that cyberspace needs to be sustainable. An eco-friendly digital world is also a safer one, where all communities can thrive in harmony. We must promote digital responsibility, including its environmental component, as we work towards digital equity and resilience.
On this World Environment Day, let's go one step further and work towards a greener internet as well as a greener planet.