#FactCheck - Viral Videos of Mutated Animals Debunked as AI-Generated
Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. A thorough investigation found these claims to be false. No credible source for such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and spectators' shoes appearing merged together. AI-content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated videos. The viral videos are therefore conclusively identified as AI-generated and not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.



Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos closely and noticed anomalies that are typical of AI-manipulated footage.



Taking a cue from this, we ran all the videos through the AI detection tool TrueMedia. The tool found the audio of the first video to be AI-generated. We then divided the video into keyframes, and the tool found the extracted frames to be AI-generated as well.
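For readers who wish to reproduce the keyframe step themselves, the sketch below shows one possible way to sample stills from a suspect video with OpenCV before uploading them to a detection tool such as TrueMedia. The sampling interval, file names, and helper function are illustrative assumptions on our part, not part of TrueMedia's own tooling.

```python
# Minimal sketch (assumption: OpenCV installed as `opencv-python`).
# Samples roughly one frame per second from a suspect video so the stills
# can be submitted to an AI-content detector for frame-level analysis.
import cv2

def extract_keyframes(video_path: str, out_prefix: str = "frame", every_n_seconds: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)    # frames to skip between samples
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example: extract_keyframes("suspect_video.mp4") returns the number of stills written.
```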


We investigated the second video in the same way, dividing it into keyframes and analyzing them with TrueMedia.

The tool flagged the video as suspicious, so we analyzed its individual frames.

The detection tool found the frames to be AI-generated, so we are certain that the video is AI-manipulated. We then analyzed the third and final video, which the detection tool also flagged as suspicious.


The detection tool found the frames of this video to be AI-manipulated as well, confirming that it, too, is fabricated. Hence, the claims made in all three videos are misleading and fake.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated and not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI-content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Related Blogs

Introduction
Deepfake technology, whose name combines "deep learning" and "fake," uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can make false information look credible, there are concerns about its misuse, including identity theft and the spread of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and various cyber frauds; through such misuse of advanced technologies such as AI, deepfakes, and voice clones, new cyber threats have emerged.
India Among the Top Targets of Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become important participants in the Asia-Pacific identity fraud scene, with India's fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most impacted by the use of deepfake technology. The report also finds that deepfake technology is being used in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud becomes a growing concern in the region.
How Deepfake Technology Works
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and have woven themselves into the very fabric of our digital civilisation. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfakes are built by analyzing large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially generative adversarial networks. By learning from and mimicking gestures, speech patterns, and facial expressions, these algorithms extract the information needed to reproduce a person convincingly. Using sophisticated approaches, generative models then create material that blends seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a serious worry, and sophisticated detection techniques are increasingly necessary to separate real content from modified content as deepfake capabilities improve.
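As a rough illustration of the data-collection stage described above, the sketch below crops faces from a folder of images using OpenCV's bundled Haar-cascade face detector, the kind of basic preprocessing a face-analysis dataset might start from. The folder names and output size are hypothetical, and real pipelines typically use far more sophisticated detectors and alignment steps.

```python
# Illustrative sketch of dataset preparation for face-analysis models
# (assumptions: OpenCV installed; folder names and crop size are hypothetical).
import os
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_faces(src_dir: str = "raw_images", dst_dir: str = "face_crops", size: int = 256) -> None:
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue  # skip non-image files
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        for i, (x, y, w, h) in enumerate(CASCADE.detectMultiScale(gray, 1.1, 5)):
            face = cv2.resize(img[y:y + h, x:x + w], (size, size))
            cv2.imwrite(os.path.join(dst_dir, f"{name}_{i}.jpg"), face)
```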
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks engage in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated content is. This continuous cycle of creation and evaluation steadily improves the output, as the discriminator becomes more perceptive and the generator adapts to produce ever more convincing content.
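The adversarial loop can be made concrete with a deliberately simplified sketch: a tiny generator and discriminator trained against each other, with random vectors standing in for real images. This is a toy illustration of the GAN idea under assumed layer sizes and hyperparameters, not a deepfake system.

```python
# Toy GAN training loop (assumption: PyTorch installed). The "real" data here
# is just random vectors standing in for images; the point is the alternating
# generator/discriminator updates described in the text.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)               # placeholder for real training samples
    fake = G(torch.randn(32, latent_dim))          # generator output from random noise

    # Discriminator update: learn to score real samples high and fakes low.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator score fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```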
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote ethical use, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they can spread instability and make it difficult for the public to understand the true nature of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. As it becomes harder and harder to tell fake content from authentic content, it becomes simpler for attackers to trick people and businesses.
Ongoing Deepfake Attacks in India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Her deepfake adopts a different strategy from other examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance the same but modifies her voice and replaces real interview quotes with made-up commercial phrases. The deceptive video shows Priyanka promoting a product and talking about her yearly income, highlighting the worrying development of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A PIL was filed in the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's attorney argued that the government should at the very least establish some guidelines to hold individuals accountable for misuse of deepfake and AI technology. He also proposed that websites be asked to label information produced through AI as such and be prevented from producing unlawful content. A division bench highlighted how complicated the problem is and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that the government would implement new laws and guidelines to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon, we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should place a high priority on developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools such as reverse image searches, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Important steps to improve resilience against deepfake threats include putting in place strong security policies, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging candid communication and cooperation. By combining these tactics and adapting to the constantly changing terrain, we can manage the problems presented by deepfake technology effectively and mindfully.
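One of the simpler checks mentioned above, comparing a suspect still against a known original in the spirit of a reverse image search, can be approximated with perceptual hashing. The sketch below uses the Pillow and imagehash libraries; the file names and distance threshold are illustrative assumptions, and a mismatch is a prompt for closer manual review rather than proof of manipulation.

```python
# Rough consistency check (assumptions: `Pillow` and `imagehash` installed;
# file names and the distance threshold are illustrative only).
from PIL import Image
import imagehash

def looks_like_same_source(original_path: str, suspect_path: str, max_distance: int = 8) -> bool:
    """Return True if the two images are perceptually similar.

    A large Hamming distance between perceptual hashes suggests the suspect
    image differs substantially from the claimed original, which warrants
    closer manual review rather than proving manipulation.
    """
    h1 = imagehash.phash(Image.open(original_path))
    h2 = imagehash.phash(Image.open(suspect_path))
    return (h1 - h2) <= max_distance

# Example: looks_like_same_source("original_interview_frame.jpg", "viral_frame.jpg")
```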
Conclusion
Advanced, artificial intelligence-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and moral questions. Misuse of deepfakes presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India, including the recent deepfake video involving Priyanka Chopra. Developing critical thinking skills, using detection strategies such as analyzing audio quality and facial expressions, and keeping up with current trends are all important in countering this danger. A thorough strategy that incorporates fact-checking, preventative tactics, and awareness-raising, together with strong security policies, cutting-edge detection technologies, ethical AI development, and open communication and cooperation, is necessary to protect against the negative effects of deepfake technology and to create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/

Introduction
Twitter Inc.'s appeal against blocking orders for specific accounts issued by the Ministry of Electronics and Information Technology was denied by a single judge of the Karnataka High Court. Twitter Inc. was also fined Rs. 50 lakh by Justice Krishna Dixit, who said the social media corporation had approached the court while defying government directives.
As a foreign corporation, Twitter's locus standi had been called into doubt by the government, which said the company was ineligible to invoke Articles 19 and 21. The government also claimed that because Twitter was designed only to serve as an intermediary, there was no "jural relationship" between Twitter and its users.
The Issue
The Ministry issued the directives under Section 69A of the Information Technology Act. Twitter, however, argued in its appeal that the orders "fall foul of Section 69A both substantially and procedurally." Twitter contended that, in accordance with Section 69A, account holders were to be notified before their tweets and accounts were removed, but the Ministry failed to provide these account holders with any notice.
On June 4, 2022, and again on June 6, 2022, the government sent letters to Twitter's compliance officer requesting that the officer appear before it and explain why the blocking orders were not followed and why no action should be taken against the company.
Twitter replied on June 9 that the content against which it had not followed the blocking orders did not appear to violate Section 69A. On June 27, 2022, the Government issued another notice stating that Twitter was violating its directions. On June 29, Twitter replied, asking the Government to reconsider the directions on the basis of the doctrine of proportionality. On June 30, 2022, the Government withdrew blocking orders on ten account-level URLs but provided an additional list of 27 URLs to be blocked. On July 10, more accounts were blocked. Complying with the orders "under protest," Twitter approached the High Court with a petition challenging them.
Legality
Government attorney Additional Solicitor General R Sankaranarayanan argued that tweets mentioning “Indian Occupied Kashmir” and the survival of LTTE commander Velupillai Prabhakaran were serious enough to undermine the integrity of the nation.
Twitter, on the other hand, claimed that it was asserting these rights on behalf of its users. It also maintained that, even as a foreign company, it was entitled to certain rights under Article 14 of the Constitution, such as the right to equality. Twitter further argued that the reason for blocking each account was not stated, and that Section 69A's blocking provision should apply only to the offending URL rather than the entire account, because blocking an entire account prevents the creation of future information, whereas blocking the offending tweet applies only to information already created.
Conclusion
The evolution of cyberspace has been shaped by big tech companies like Facebook, Google, Twitter, Amazon, and many more. These companies have been instrumental in leading the spectrum of emerging technologies and creating ease and accessibility for users. Compliance with laws and policies is of utmost priority for the government, and new bills and policies are empowering Indian cyberspace. Non-compliance will be taken very seriously, as legalised under the Intermediary Guidelines 2021 and 2022 issued by MeitY. Referring to Section 79 of the Information Technology Act, which provides an exemption from liability for intermediaries in some instances, it was observed that an intermediary is bound to obey the orders of the designated authority or agency that the government fixes from time to time.

Introduction
The Indian Computer Emergency Response Team (CERT-In) is the national statutory agency that responds to cybersecurity incidents, operating under the Ministry of Electronics and Information Technology (MeitY) of the Government of India. CERT-In and the Information Sharing and Analysis Center (ISAC) have joined hands to develop a focused pool of cybersecurity leaders through the National Cyber Security Scholar Program (NCSSP). The program aims to create a pool of credible and ethical cybersecurity leaders in the country who prioritise national cyber security in their professional endeavours. It also allows both organisations to issue joint certifications for Cohort 6 of the NCSSP, awarded to cybersecurity professionals who complete one of the world's leading cybersecurity management programs.
About the Program
The National Cybersecurity Scholar (NCSS) program is a comprehensive 18-week, 160-hour instructor-led program for emerging cybersecurity leaders. ISAC will conduct the program with CERT-In and KDEM as knowledge partners. The program aims to give scholars hands-on experience of real-world scenarios through activities such as war games. Scholars take on the roles of stakeholders, including attackers, Security Operations Centre (SOC) teams, forensicators, Chief Information Security Officers (CISOs), and CEOs, and engage in tabletop exercises that simulate a cyber crisis. The program helps scholars understand how responses to a cyber crisis affect the financial performance of an organisation, including stock prices and sales, offering insights into the economic impact of cybersecurity decisions and the importance of proactive risk management.
The program invites applications from a wide range of candidates, including mid- to senior-level leaders, diplomats and diplomatic corps officers, mid- to senior-level government officials involved in homeland and cybersecurity operations, experienced executives from Managed Security Services Providers (MSSPs), faculty members who specialise in new and emerging technologies, cybersecurity professionals in CII sectors, and post-doctoral or research scholars in cybersecurity.
CyberPeace Outlook
The National Cyber Security Scholar Program encompasses several key dimensions that work towards building a resilient cybersecurity ecosystem for India.
- The program focuses on skill development and enhancing scholars' knowledge in the domains of network security, ethical hacking, cyber forensics, incident response, malware analysis, and threat intelligence.
- The partnership between CERT-In and ISAC, a government agency and an industry body respectively, ensures that scholars are exposed to both policy-level frameworks and technical expertise, offering a unique blend of perspectives that serves the country's national security goals and industry best practices.
- Research and innovation in cybersecurity are central to the program: it encourages the development of new methodologies, tools, and frameworks that could be instrumental in tackling future cyber challenges and in advancing India's position as a global leader in cybersecurity research and development.
- The program also supports career development by providing networking platforms with professionals, researchers, and thought leaders in the cybersecurity field, giving scholars exposure to internships, job placements, and further academic pursuits.
This program aims to support India's broader cyber defence strategy through the upskilling and creation of highly skilled professionals. The scholars are expected to contribute actively to national cybersecurity efforts, whether through roles in government, the private sector, or academia, helping to create a more secure and resilient cyberspace. The National Cyber Security Scholar Program is a major step in strengthening India's cybersecurity resilience; in a digital world where cyber threats cross boundaries, such programs are essential to maintaining national security and economic stability.
References
- https://theprint.in/ani-press-releases/cert-in-and-isac-collaborate-to-develop-focussed-pool-of-cybersecurity-leaders-through-the-national-cyber-security-scholar-program-ncssp/2318021/
- https://isacfoundation.org/national-cyber-security-scholar/
- https://cyberversefoundation.org/national-cyber-security-scholar/