#FactCheck - Deepfake Video Falsely Claims to Show a Massive Rally Held in Manipur
Executive Summary:
A video circulating online claims to show a massive rally organised in Manipur calling for an end to the ongoing violence in the state. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created with AI technology to fabricate a crowd that never existed. No original footage of any such protest could be found. The claim is therefore false and misleading.
Claims:
A viral post falsely claims that a massive rally was held in Manipur.


Fact Check:
Upon receiving the viral post, we ran a Google Lens search on keyframes extracted from the video. We could not locate any authentic source mentioning such an event being held recently or in the past. The video also exhibited signs of digital manipulation, prompting a deeper investigation.
We then analysed the video with AI detection tools such as TrueMedia and the Hive AI Detection tool. The analysis confirmed with 99.7% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation", particularly in the crowd and the colour gradients, which were found to be artificially generated.
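The keyframe step mentioned above can be sketched as follows. This is an illustrative approach, not the team's actual tooling: evenly spaced frame indices are chosen so that each sampled still can be fed to a reverse-image search such as Google Lens (a decoder like OpenCV's `VideoCapture` would supply the total frame count and extract the frames at these indices).

```python
def keyframe_indices(total_frames, num_keyframes=8):
    """Pick evenly spaced frame indices to sample as keyframes
    for a reverse-image search."""
    if total_frames <= 0 or num_keyframes <= 0:
        return []
    # Spacing between sampled frames; at least 1 for very short clips.
    step = max(total_frames // num_keyframes, 1)
    return list(range(0, total_frames, step))[:num_keyframes]
```

For an 800-frame clip this yields eight indices 100 frames apart, giving a spread of stills across the whole video rather than a burst from one scene.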



Additionally, an extensive review of official statements and interviews with Manipur State officials revealed no mention of any such rally. No credible reports of such a protest were found, further confirming the video's inauthenticity.
Conclusion:
The viral video claims to show a massive rally held in Manipur. Analysis with AI detection tools such as TrueMedia confirms that the video was manipulated using AI technology, and no official source corroborates any such event. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A video of a massive rally held in Manipur against the ongoing violence is viral on social media.
- Claimed on: Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs
Introduction
In India, children's rights regarding the protection of their personal data are enshrined in the Digital Personal Data Protection Act, 2023 (DPDP Act), the country's newly enacted digital personal data protection law. The DPDP Act requires verifiable consent from parents or legal guardians before a child's personal data may be processed; processing without such consent constitutes a violation of the Act. Under Section 2(f) of the DPDP Act, a "child" means an individual who has not completed the age of eighteen years.
Section 9 under the DPDP Act, 2023
With reference to the collection of children's data, Section 9 of the DPDP Act, 2023 provides that for children below 18 years of age, consent from parents or legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or the lawful guardian. Section 9 aims to create a safer online environment for children by limiting the exploitation of their data for commercial or other purposes. By virtue of this section, parents and guardians gain greater control over their children's data and privacy, and are empowered to decide how they manage their children's online activities and the permissions they grant to various online services.
Section 9(3) specifies that a Data Fiduciary shall not undertake tracking or behavioural monitoring of children, or targeted advertising directed at children. However, Section 9(5) provides room for exemption from this prohibition by empowering the Central Government to exempt specific Data Fiduciaries or data processors from the prohibition on behavioural tracking or targeted advertising under the future DPDP Rules, which are yet to be announced or released.
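The two obligations described above can be illustrated with a minimal sketch. The data model and function names here are hypothetical, not drawn from the Act or any platform: processing a child's data is conditional on verifiable guardian consent, while tracking and targeted advertising are barred outright for children, consent notwithstanding (absent a government exemption).

```python
from dataclasses import dataclass

CHILD_AGE_LIMIT = 18  # Section 2(f): a "child" is under eighteen

@dataclass
class UserProfile:
    age: int
    parental_consent_verified: bool = False

def may_process_personal_data(user: UserProfile) -> bool:
    # Section 9(1): a child's data needs verifiable guardian consent.
    if user.age < CHILD_AGE_LIMIT:
        return user.parental_consent_verified
    return True

def may_track_or_target(user: UserProfile) -> bool:
    # Section 9(3): tracking and targeted ads for children are
    # prohibited outright, regardless of consent.
    return user.age >= CHILD_AGE_LIMIT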
Impact on social media platforms
Social media companies have raised concerns about Section 9 of the DPDP Act and the upcoming DPDP Rules. Section 9 prohibits behavioural tracking of children and targeted advertising directed at them on digital platforms. By prohibiting intermediaries from tracking a child's internet activities and serving targeted advertising, the law aims to preserve children's privacy. However, social media corporations contend that this limitation undermines the efficacy of safety measures intended to safeguard young users, arguing that monitoring specific user signals, including from minors, is necessary for those measures to work.
Social media companies assert that tracking teenagers' behaviour is essential for safeguarding them from predators and harmful interactions. They believe that a complete ban on behavioural tracking is counterproductive to the government's objectives of protecting children. The scope to grant exemption leaves the door open for further advocacy on this issue. Hence it necessitates coordination with the concerned ministry and relevant stakeholders to find a balanced approach that maintains both privacy and safety for young users.
Furthermore, the impact on social media platforms extends to the user experience and to the operational costs of implementing the changes the regulations require, including significant changes to their algorithms and data-handling processes. Implementing robust age-verification systems to identify young users and protect their data will be technically challenging for platforms of all sizes. Ensuring that children's data is not used for targeted advertising or behavioural monitoring also requires sophisticated data-management systems. The blanket ban on targeted advertising and behavioural tracking may additionally affect the personalisation of content for young users, which may reduce their engagement with the platform.
For globally operating platforms, aligning their practices with the DPDP Act in India while also complying with data protection laws in other countries (such as GDPR in Europe or COPPA in the US) can be complex and resource-intensive. Platforms might choose to implement uniform global policies for simplicity, which could impact their operations in regions not governed by similar laws. On the same page, competitive dynamics such as market shifts where smaller or niche platforms that cater specifically to children and comply with these regulations may gain a competitive edge. There may be a drive towards developing new, compliant ways of monetizing user interactions that do not rely on behavioural tracking.
CyberPeace Policy Recommendations
A balanced strategy should be adopted, one that gives weight both to the contentions of social media companies and to the protection of children's personal information. Instead of a blanket ban, platforms could be obliged to follow transparent advertising practices, ensuring that children are not exposed to misleading or manipulative marketing techniques. Self-regulation can be used to promote ethical behaviour, accountability, and the safety of young users' personal information through the platform's own practices. Additionally, verifiable consent should be designed in a manner that is practical, with platforms having a say in how the verification works. Ultimately, behavioural tracking and targeted advertising must not be allowed to affect children's well-being, safety, or data protection in any way.
Final Words
Under Section 9 of the DPDP Act, the prohibition of behavioural tracking and targeted advertising in the processing of children's personal data will compel social media platforms to overhaul their data collection and advertising practices, ensuring compliance with stricter privacy regulations. The legislative intent behind this provision is to strengthen the protection of children's digital personal data and privacy. As children are particularly vulnerable to digital threats owing to their still-developing maturity and cognitive capacities, the protection of their privacy stands as a priority: children simply do not yet possess the discernment and caution required to navigate the Internet safely. At the same time, a balanced approach needs to be adopted, one that maintains both 'privacy' and 'safety' for young users.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.firstpost.com/tech/as-govt-of-india-starts-preparing-rules-for-dpdp-act-social-media-platforms-worried-13789134.html#google_vignette
- https://www.business-standard.com/industry/news/social-media-platforms-worry-new-data-law-could-affect-child-safety-ads-124070400673_1.html

As Generative AI continues to make strides by creating content from user prompts, the increasing sophistication of language models widens the scope of the services they can deliver. However, they have their own limitations. Recently, alerts generated by Apple Intelligence on the latest iPhone version have come under fire for misrepresenting news from news agencies.
The feature was introduced with the aim of grouping and summarising app notifications in a single alert on a user's lock screen, making it easier to scan for important details among a large number of notifications without overwhelming the user with updates. This, however, resulted in the misrepresentation of news outlets and the reporting of fake news, such as the arrest of Israeli Prime Minister Benjamin Netanyahu, Luke Littler winning the PDC World Darts Championship before the final had even been played, and tennis player Rafael Nadal coming out as gay, among other alerts. Following the false alerts, the BBC complained about its journalism being misrepresented. In response, Apple proposed to clarify that when a text summary is displayed in notifications, it is clearly identified as a product of Apple Intelligence and not of the news agency. Apple also cited the difficulty of compressing content into short summaries as the cause of the erroneous alerts, and noted that the feature was in beta and being continuously refined based on user feedback. Owing to the backlash, Apple has suspended the service and announced that an improved version of the feature will be released in the near future, though no dates have been set.
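The grouping-and-summarising mechanism described above can be sketched roughly as follows. All names here are illustrative, not Apple's API, and the AI summarisation step, which is where the feature went wrong, is stubbed out with simple text joining:

```python
from collections import defaultdict

def group_notifications(notifications):
    """Bundle pending notifications into one combined alert per app.

    notifications: list of (app_name, text) tuples.
    """
    grouped = defaultdict(list)
    for app, text in notifications:
        grouped[app].append(text)
    # A real system would pass each group to a language model here;
    # joining with "; " stands in for that summarisation step, and it
    # is precisely this compression step that can distort the source.
    return {app: "; ".join(texts) for app, texts in grouped.items()}
```

The failure mode is easy to see in this shape: whatever replaces the join is attributed to the app whose notifications it summarises, so any summarisation error reads as a claim by that news outlet.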
CyberPeace Insights
The rush to release new features often exacerbates the problem, especially when AI-generated alerts are responsible for summarising news reports. This can significantly damage the credibility and trust that brands have worked hard to build. The premature release of features that affect the dissemination, content, and public comprehension of information carries substantial risks, particularly in an environment where misinformation is already widespread. Timely action and software updates, which typically require weeks to implement, are crucial in mitigating these risks. The desire to stay ahead of the competition must not absolve a company of the responsibility to provide services that are secure and reliable. The incident also highlights the inherent nature of generative AI, which works by analysing the data it was trained on to deliver the most plausible response to a user prompt. These responses are not always accurate or reliable, and when faced with prompts beyond its scope, an AI system often produces untrustworthy information, underlining the need for careful oversight and verification. A question worth deliberating is whether we need such services at all: in practice they save time, but at the risk of spreading falsehoods.
References
- https://www.theguardian.com/technology/2025/jan/07/apple-update-ai-inaccurate-news-alerts-bbc-apple-intelligence-iphone
- https://www.firstpost.com/tech/apple-intelligence-hallucinates-falsely-credits-bbc-for-fake-news-broadcaster-lodges-complaint-13845214.html
- https://www.cnbc.com/2025/01/08/apple-ai-fake-news-alerts-highlight-the-techs-misinformation-problem.html
- https://news.sky.com/story/apple-ai-feature-must-be-revoked-over-notifications-misleading-users-say-journalists-13288716
- https://www.hindustantimes.com/world-news/apple-to-pay-95-million-in-user-privacy-violation-lawsuit-on-siri-101735835058198.html
- https://www.hindustantimes.com/business/apple-denies-claims-of-siri-violating-user-privacy-after-95-million-class-action-suit-settlement-101736445941497.html
- https://www.fastcompany.com/91261727/apple-intelligence-news-summaries-mistakes
- https://timesofindia.indiatimes.com/technology/tech-news/siris-secret-listening-costs-apple-95m/articleshow/116906209.cms
- https://www.theguardian.com/technology/2025/jan/17/apple-suspends-ai-generated-news-alert-service-after-bbc-complaint

Introduction
We consume news from various sources, such as news channels, social media platforms and the wider Internet. In the age of the Internet and social media, misinformation has become a common concern, as fake news and misleading content spread widely across online platforms.
Misinformation on social media platforms
The wide availability of user-generated content on social media platforms facilitates the spread of misinformation. With the vast population on these platforms, information goes viral and spreads across the Internet. This has become a serious concern, as misinformation, including rumours, morphed images, unverified information, fake news and planted stories, spreads easily online, leading to severe consequences such as public riots, lynchings, communal tensions, misconceptions about facts and defamation.
Platform-centric measures to mitigate the spread of misinformation
- Google introduced the ‘About this result’ feature, which helps users better understand search results and websites at a glance.
- During the Covid-19 pandemic, misinformation was shared on a huge scale. In April 2020, Google invested $6.5 million in funding for fact-checkers and non-profits fighting misinformation around the world, including checks on information related to the coronavirus and on issues related to the treatment, prevention and transmission of Covid-19.
- YouTube also has a Medical Misinformation Policy, which prohibits content that contradicts guidance from the World Health Organization (WHO) or local health authorities.
- During the Covid-19 pandemic, major social media platforms such as Facebook and Instagram showed awareness pop-ups that connected people directly to information from the WHO and regional authorities.
- WhatsApp limits the number of times a message can be forwarded to curb the spread of fake news, and labels heavily forwarded messages as ‘Forwarded many times’. WhatsApp has also partnered with fact-checking organisations to give users access to accurate information.
- On Instagram, when content has been rated false or partly false, Instagram either removes it or reduces its distribution by lowering its visibility in Feeds.
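The forward-limit mechanism described in the list above can be sketched as follows. The cap and labelling threshold here are assumptions for illustration; WhatsApp's actual limits have varied over time and this is not its implementation:

```python
FORWARD_CAP = 5           # assumed hard cap on onward forwarding
MANY_TIMES_THRESHOLD = 5  # assumed threshold for the warning label

class Message:
    def __init__(self, text, forward_count=0):
        self.text = text
        self.forward_count = forward_count

    def forward(self):
        # Refuse to forward once the cap is reached, slowing viral spread.
        if self.forward_count >= FORWARD_CAP:
            raise PermissionError("forward limit reached")
        return Message(self.text, self.forward_count + 1)

    @property
    def label(self):
        # Surface provenance to the reader, as WhatsApp does.
        if self.forward_count >= MANY_TIMES_THRESHOLD:
            return "Forwarded many times"
        return "Forwarded" if self.forward_count else ""
```

The design point is that the counter travels with the message rather than with the user, so heavily recirculated content is both throttled and visibly flagged wherever it lands.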
Fight Against Misinformation
Misinformation is rampant across the world and needs to be addressed at the earliest. Multiple developed nations have partnered with tech companies to tackle this issue, and with the increasing penetration of social media and the Internet, it remains a global challenge. Big tech companies such as Meta and Google have undertaken various initiatives globally to address it. In India, Google, in collaboration with civil society organisations, has piloted multiple avenues for mass-scale awareness and upskilling campaigns to make an impact on the ground.
Conclusion
In the digital media space, misinformation and misleading content are widespread. Platforms like Google and other social media companies have taken proactive steps to prevent their spread. Users should also act responsibly when sharing information, helping to create a safe digital environment for everyone.