#FactCheck - AI-Generated Video of Peacock ‘Rescue’ Falsely Shared as Real
Executive Summary:
A video showing a peacock allegedly trapped in ice has gone viral on social media. In the clip, the peacock appears frozen in a snow-covered area; moments later, a man approaches with a hammer and breaks the ice to rescue the bird. Social media users are sharing the video as a real-life incident, praising the peacock’s resilience and describing the scene as inspiring. However, CyberPeace research found the claim to be misleading: the video was created using Artificial Intelligence (AI) and is being falsely circulated as a real incident.
Claim:
Facebook user ‘Ras Bihari Pathak’ shared the viral video on January 25, 2026, with the caption: “This peacock is not standing on ice, but on courage. It reminds us that no matter how harsh the circumstances are, hope always returns in colours.” The archived version of the post can be accessed here.

Fact Check:
To verify the claim, we first conducted a keyword search on Google to check whether any such real incident involving a peacock trapped in ice had been reported. However, no credible or verified media reports were found. Next, we closely examined the viral video. Upon observation, the peacock’s movements and reactions appeared unnatural and artificial. The motion lacked realistic physical behaviour, raising suspicion that the video might have been digitally generated. To confirm this, we analysed the clip using the AI video detection tool Hive Moderation, which indicated a 99 per cent or higher likelihood that the video was AI-generated.
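As a rough illustration of how such a detection score can be interpreted, the sketch below assumes a simplified response shape (a mapping from class labels to confidence scores). This is an illustrative assumption for explanation only, not Hive Moderation's actual API schema.

```python
# Minimal sketch of interpreting an AI-content detection score.
# The response shape below (label -> confidence) is an illustrative
# assumption, not Hive Moderation's documented API schema.

def is_likely_ai_generated(scores: dict, threshold: float = 0.99) -> bool:
    """Return True when the 'ai_generated' confidence meets the threshold."""
    return scores.get("ai_generated", 0.0) >= threshold

# A hypothetical result resembling the viral-video analysis:
mock_scores = {"ai_generated": 0.997, "not_ai_generated": 0.003}
print(is_likely_ai_generated(mock_scores))  # True at the 99% threshold
```

A high score alone is corroborating evidence, not proof; it is combined here with the keyword search and the manual observation of unnatural motion.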

Conclusion:
CyberPeace research confirms that the viral video showing a peacock allegedly trapped in ice is not real. The clip has been created using Artificial Intelligence and is being shared on social media with a false and misleading claim.

Introduction
In an age of rapid digital advancement, crime evolves alongside technology. The rise of cybercrime poses varied threats to individuals, businesses, and government agencies, and law enforcement agencies are seeking innovative solutions to combat it. One such initiative comes from the Surat Police in Gujarat, who have embraced the power of Artificial Intelligence (AI) to bolster their efforts to reduce cybercrime.
Key Highlights
Surat, India, has launched an AI-based WhatsApp chatbot called "Surat Police Cyber Mitra Chatbot" to tackle growing cybercrime. The chatbot provides quick assistance to individuals dealing with various cyber issues, ranging from reporting cyber crimes to receiving safety tips. The initiative is the first of its kind in the country, showcasing Surat Police's dedication to using advanced technology for public safety. Surat Police Commissioner-in-Charge commended the use of AI in crime control as a positive step forward, while also stressing the need for continuous improvements in various areas, including technological advancements, data acquisition related to cybercrime, and training for police personnel.
The Surat Cyber Mitra Chatbot, available on WhatsApp number 9328523417, offers round-the-clock assistance to citizens, allowing them to access crucial information on cyber fraud and legal matters.
The Growing Cybercrime Threat
With the advancement of technology, cybercrime has grown more complex, exploiting the interconnectivity of digital devices and the internet. Criminals exploit vulnerabilities in software, networks, and human behaviour to carry out a wide range of malicious activities for illicit gain. Individuals and organisations alike face cyber risks that can cause significant financial, reputational, and emotional harm.
Surat Police’s Strategic Initiative
The Surat Police Cyber Mitra Chatbot is an AI-powered tool for instant problem resolution. It allows citizens to raise any issue or query from their doorstep and receive immediate, accurate responses. The chatbot is accessible 24/7 and serves as a reliable resource for legal information related to cyber fraud.
The use of AI in police initiatives has been a topic of discussion for some time, and the Surat City Police has taken this step to leverage technology for the betterment of society. The chatbot promises to boost public trust towards law enforcement and improve the legal system by addressing citizen issues within seconds, ranging from financial disputes to cyber fraud incidents.
This accessibility extends to inquiries such as how to report financial crimes or cyber-fraud incidents and how to understand legal procedures. Ready access to accurate information will not only enhance citizens' trust in the police but also contribute to the efficiency of law enforcement operations, leading to more informed interactions between citizens and the police and fostering a stronger sense of community security and collaboration.
The utilisation of this chatbot will facilitate access to information and empower citizens to engage more actively with the legal system. As trust in the police grows and legal processes become more transparent and accessible, the overall integrity and effectiveness of the legal system are expected to improve significantly.
Conclusion
The Surat Police Cyber Mitra Chatbot is an AI-powered tool that provides round-the-clock assistance to citizens, enhancing public trust in law enforcement and streamlining access to legal information. This initiative bridges the gap between law enforcement and the community, fostering a stronger sense of security and collaboration, and driving improvements in the efficiency and integrity of the legal process.
References:
- https://www.ahmedabadmirror.com/surat-first-city-in-india-to-launch-ai-chatbot-to-tackle-cybercrime/81861788.html
- https://government.economictimes.indiatimes.com/news/secure-india/gujarat-surat-police-adopts-ai-to-check-cyber-crimes/107410981
- https://www.timesnownews.com/india/chatbot-and-advanced-analytics-surat-police-utilising-ai-technology-to-reduce-cybercrime-article-107397157
- https://www.grownxtdigital.in/technology/surat-police-ai-cyber-mitra-chatbot-gujarat/

Introduction
Artificial Intelligence (AI) has transcended its role as a futuristic tool; it is already an integral part of decision-making in sectors worldwide, including governance, medicine, education, security, and the economy. At the same time, there are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose, as well as doubts about the technology’s capacity to provide effective solutions, especially as threats such as misinformation, cybercrime, and deepfakes become more common.
Recently, global leaders have reiterated that the use of AI should remain human-centric, transparent, and responsibly governed. Balancing unbridled access for innovators against the prevention of harm is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, guarding them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, making access far more distributed than before. The use of AI in one country brings its pros and cons not only to that place but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only people in the country where they are created but also everyone elsewhere who does business or communicates with them.
The Growing Threat of AI Misuse
- Deepfakes, Crime, and Digital Terrorism
The misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate falsehoods, and create false narratives. Cybercriminals now use AI to make phishing attacks faster and more efficient, break into security systems, and devise elaborate social-engineering tactics. In the hands of extremist groups, AI can enhance propaganda, recruitment, and coordination.
- Solution - Human Oversight and Safety-by-Design
To counter these dangers, a global AI system must be built on the principle of safety-by-design: incorporating ethical safeguards from the development phase rather than reacting after the damage is done. Human control is just as vital. AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or the possibility of auditing leads to black-box systems in which responsibility is unclear.
Three Pillars of a Responsible AI Framework
- Equitable Access to AI Technologies
One of the major hindrances to global AI development is unequal access. High-end computing capability, data infrastructure, and AI research resources remain concentrated in a few regions. A sustainable framework is needed so that smaller countries, rural areas, and speakers of different languages can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will spur new ideas and improvements in local markets, narrowing the digital divide and ensuring that the AI future is not determined exclusively by wealthy economies.
- Population-Level Skilling and Talent Readiness
AI will reshape workplaces worldwide. Societies must therefore equip their people not only with existing job skills but also with future technology-based skills. Massive AI literacy programmes, enhanced digital competencies, and cross-disciplinary education are essential. Preparing human resources for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting genuinely inclusive growth.
- Responsible and Human-Centric Deployment
Responsible AI adoption ensures that technology is used for social good, not just profit. Human-centred AI directs applications to sectors such as healthcare, agriculture, education, disaster management, and public services, especially in underserved regions of the world most in need of these innovations. This strategy ensures that technological progress improves human life rather than worsening conditions for the poor or removing responsibility from humans.
Need for a Global AI Governance Framework
- Why International Cooperation Matters
AI governance cannot be fragmented. Divergent national regulations create loopholes that allow bad actors to operate across borders. Hence, global coordination and harmonisation of safety frameworks is of utmost importance. A unified AI governance framework should stipulate:
- Clear prohibitions on AI misuse in terrorism, deepfakes, and cybercrime.
- Mandatory transparency and algorithm audits.
- Independent global oversight bodies.
- Ethical codes of conduct in harmony with humanitarian laws.
A framework like this would ensure that AI is shaped by common values rather than subject to the influence of competing interest groups.
- Talent Mobility and Open Innovation
If AI is to be universally accepted, global mobility of talent must be made easier. Innovation flows when interaction between researchers, engineers, and policymakers is not limited by borders.
- AI, Equity, and Global Development
The rapid concentration of technology in a few hands risks widening inequality among countries. Many developing countries face poor infrastructure and a lack of education and digital resources. Treating them only as technology markets, rather than as partners in innovation, isolates them further from the mainstream of development. A human-centred, technology-driven approach to AI development must recognise that global progress depends on participation from the whole world. The COVID-19 pandemic, for example, demonstrated how technology can be a major factor in building healthcare and crisis resilience. Used fairly, AI has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial juncture: it can either enhance human progress or amplify digital risks. Making AI a global good requires more than sophisticated technology; it demands moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through openness, human supervision, and responsible policy will be vital to keeping public trust. Properly guided, AI can make society more resilient, speed up development, and empower future generations. The future we choose is determined by how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse.’ CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
References
- https://www.hindustantimes.com/india-news/ai-a-global-good-but-must-guard-against-misuse-pm-101763922179359.html
- https://www.deccanherald.com/india/g20-summit-pm-modi-goes-against-donald-trumps-stand-seeks-global-governance-for-ai-3807928
- https://timesofindia.indiatimes.com/india/need-global-compact-to-prevent-ai-misuse-pm-modi/articleshow/125525379.cms

Introduction
“an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not [host, store or publish any unlawful information], which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force”
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can supply the “actual knowledge” that initiates a takedown, and they require senior-level scrutiny of those directives. In doing so, they preserve genuine security requirements while guiding India’s content governance system towards more transparent due process.
When Regulation Learns Restraint
To understand the jurisprudence of revision, one must recognise that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media sites, hosting companies, and other digital intermediaries are still required to act within 36 hours of receiving “actual knowledge” that a piece of content is illegal (e.g. poses a threat to public order, sovereignty, decency, or morality). However, “actual knowledge” now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
The authorised authority in matters involving the police “must not be below the rank of Deputy Inspector General of Police (DIG)”. This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
The amendment adds two further structural guardrails. First, the Rules establish a monthly review of all takedown notifications by a Secretary-level officer of the relevant government, testing necessity, proportionality, and compliance with India’s safe-harbour provision under Section 79(3) of the IT Act. Second, so that platforms act precisely rather than expansively, takedown requests must be accompanied by legal justification, a description of the illegal act, and precise URLs or identifiers. The cumulative result is that each removal carries a proportionality check and a paper trail.
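The trigger conditions above can be summarised as a small decision procedure. The sketch below is only a toy model of the amended Rule 3(1)(d) as described in this article, with illustrative field names; it is not legal advice or a compliance tool.

```python
from dataclasses import dataclass

# Toy sketch of the amended Rule 3(1)(d) "actual knowledge" triggers as
# described above. Field names are illustrative assumptions; this is not
# legal advice or a compliance implementation.

@dataclass
class TakedownNotice:
    from_court: bool        # order of a court of competent jurisdiction
    officer_rank_ok: bool   # Joint Secretary (or DIG for police) or above
    reasoned: bool          # written legal justification included
    act_described: bool     # the allegedly unlawful act is specified
    urls_listed: bool       # precise URLs or identifiers supplied

def constitutes_actual_knowledge(n: TakedownNotice) -> bool:
    """Only a court order, or a reasoned and specific intimation from a
    sufficiently senior authorised officer, starts the 36-hour clock."""
    if n.from_court:
        return True
    return (n.officer_rank_ok and n.reasoned
            and n.act_described and n.urls_listed)
```

The point the model makes concrete is that a vague or junior-level "please remove" communication fails every branch and therefore triggers no obligation at all.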
Due Process as the Law’s Conscience
Indian jurisprudence has debated what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) tied an intermediary’s removal obligation to notifications from official channels or court orders, rather than vague notice. Over time, that line became hazy through enforcement practices and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even as more recent decisions questioned intermediaries’ “reasonable efforts”, the 2025 amendment pays institutional homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal, reviewable communications from senior state actors or judges.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, which are criteria that Indian courts are increasingly requesting for speech restrictions. Clearer triggers, better logs, and less vague “please remove” communications that previously left compliance teams in legal limbo are the results for intermediaries.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court’s ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. In September, the HC rejected X’s (formerly Twitter’s) petition contesting the legitimacy of the portal. The company had claimed that, by giving nodal officers the authority to issue takedown orders without court review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers’ actions were in accordance with Section 79(3)(b) and that they were “not dropping from the air but emanating from statutes.” By conforming to the Sahyog Portal verdict, the amendment turns compliance into conscience, reiterating that due process is the moral grammar of governance rather than a mere formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination; it is a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India’s path to content governance has been combative and iterative. The stays, split judgments, and strike-downs produced by strategic litigation over the IT Rules, safe harbour, government fact-checking, and blocking orders have sharpened the next rule-making cycle. The 2025 amendment reflects the lessons learnt: review triumphs over opacity, specificity over vagueness, and due process over discretion. That is how a digital republic balances freedom and force.
Sources
- https://pressnews.in/law-and-justice/government-notifies-amendments-to-it-rules-2025-strengthening-intermediary-obligations/
- https://www.meity.gov.in/static/uploads/2025/10/90dedea70a3fdfe6d58efb55b95b4109.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719
- https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/
- https://www.medianama.com/2025/10/223-content-takedown-rules-online-platforms-36-hr-deadline-officer-rank/#:~:text=It%20specifies%20that%20government%20officers,Deputy%20Inspector%20General%20of%20Police%E2%80%9D.