#FactCheck - Viral Videos of Mutated Animals Debunked as AI-Generated
Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. A thorough investigation found these claims to be false. No credible source for such creatures exists, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg movements, unnatural head movements, and the merged shoes of spectators. AI-content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated clips. These viral videos are therefore conclusively identified as AI-generated, not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.



Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos closely and noticed anomalies typical of AI-manipulated footage.



Taking a cue from this, we ran all the videos through the AI video detection tool TrueMedia. The tool found the audio of the first video to be AI-generated. We then divided the video into keyframes, and the tool identified the depicted imagery as AI-generated as well.
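The keyframe step above (splitting a video into individual frames before running each one through a detector) can be sketched as uniform sampling at a fixed interval. This is a minimal illustration only: the function name and sampling interval are assumptions for the example, and real detection tools may instead pick keyframes via scene-change detection.

```python
def keyframe_indices(total_frames: int, fps: float, every_seconds: float = 1.0) -> list:
    """Return the indices of frames to extract, sampled at a fixed interval.

    total_frames:  number of frames in the video
    fps:           frames per second of the video
    every_seconds: how often to grab a keyframe (illustrative default: 1s)
    """
    step = max(1, int(fps * every_seconds))  # frames between successive keyframes
    return list(range(0, total_frames, step))


# A 4-second clip at 25 fps yields one keyframe per second:
print(keyframe_indices(100, 25.0))  # [0, 25, 50, 75]
```

Each selected frame would then be exported as an image and submitted to the detection tool for analysis.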


In the same way, we investigated the second video: we divided it into keyframes and analyzed them with TrueMedia.

The tool flagged the video as suspicious, so we analyzed its individual frames.

The detection tool found them to be AI-generated, confirming that the video is AI-manipulated. We then analyzed the third and final video, which the detection tool also flagged as suspicious.


The detection tool found the frames of this video to be AI-manipulated as well, from which it is clear that the video is fabricated. Hence, the claims made in all three videos are misleading and fake.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated, not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI-content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In today's digital age, protecting your personal information is of utmost importance, as bad actors are constantly on the lookout for ways to misuse sensitive or personal data. The Aadhaar card is a crucial document: it serves as your official government-verified ID and is used for verification, KYC, and even financial transactions. It also features in everyday situations such as flight bookings made by travel agents, hotel check-ins, and verification at educational institutions. Bad actors who gain unauthorized access to your Aadhaar data can commit cyber frauds such as identity theft and financial fraud. It is therefore vital to protect your personal information and Aadhaar card details and prevent their misuse.
What is fingerprint cloning?
Cybercrooks have been exploiting the Aadhaar Enabled Payment System (AePS). These scams entail cloning individuals' Aadhaar-linked biometrics using silicone fingerprints and unauthorized biometric devices, then siphoning money from their bank accounts. Fingerprint cloning, also known as fingerprint spoofing, is a technique in which someone replicates another person's fingerprint for unauthorized use, for example to gain unauthorized access to systems, unlock devices, or commit identity theft. The process of fingerprint cloning involves two stages: collecting a fingerprint and creating a replica.
The recent case of Aadhaar Card fingerprint cloning in Nawada
The Nawada Cyber Police unit has arrested two perpetrators engaged in fingerprint cloning fraud, accused of siphoning money from consumers' bank accounts by cloning their fingerprints. One of the two runs a Common Service Centre (CSC), while the other is a sweeper at the DBGB branch bank. According to the police, an organized gang of cyber criminals had been defrauding consumers for the last two years with the help of the CSC operator, embezzling money from consumers' accounts by cloning their fingerprints and collecting their Aadhaar numbers. The operator collected Aadhaar numbers by having consumers put their thumb impressions in a register. One perpetrator is accused of withdrawing more money from consumers' accounts than he paid out, and sometimes paying nothing at all after a withdrawal, while the other stole consumer data from the DBGB branch bank and prepared fingerprint clones. During the investigation of a related fraud case, the Special Investigation Team (SIT) of the Cyber Police, acting on technical surveillance and available evidence, conducted raids in the Govindpur and Roh police station areas and arrested them.
Safety measures for the security of your Aadhaar Card data
- Locking your biometrics: One way to protect your Aadhaar card and prevent unauthorized access is to lock your biometrics. To lock and unlock your Aadhaar biometrics, visit the official UIDAI website and select “Lock/Unlock Biometrics” in the Aadhaar services section. Enter your 12-digit Aadhaar number and the security code, then request an OTP; a one-time password will be sent to the mobile number registered with Aadhaar. Enter the OTP and click the login button, then enter the 4-digit security code shown on the screen and click “Enable”. Your biometrics will now be locked, and you will have to unlock them if you want to use them again. The official UIDAI website is “https://uidai.gov.in/”, and there is a dedicated Aadhaar helpline at 1947.
- Use a masked Aadhaar card: A masked Aadhaar card is a variant of the Aadhaar card designed to improve the privacy and security of an individual's Aadhaar number. The first eight digits of the twelve-digit number are replaced by XXXX-XXXX, and only the last four digits remain visible, adding an extra layer of protection. To download one, visit the official UIDAI website and click the "Download Aadhaar" option on the homepage. Enter your 12-digit Aadhaar number along with the security code displayed on the screen, then click "Send OTP"; you will receive an OTP on your registered phone number. Enter the OTP in the field provided and click "Submit". When asked to select the format of your Aadhaar card, choose the masked option, which replaces the first eight digits of your Aadhaar number with "XXXX-XXXX" on the downloaded card, then click "Download Aadhaar". If an organisation requires your Aadhaar for verification, you can share the masked card, which reveals only the last four digits. Just as you keep your bank details safe, keep your Aadhaar number secure; otherwise it can be misused for identity fraud.
- Monitoring your bank account transactions: Regularly monitor your bank account statements for any suspicious activity and you can also configure transaction alerts with your bank account transactions.
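The masking rule behind the masked Aadhaar card (first eight digits hidden, last four visible) can be illustrated with a short sketch. The function name and formatting are hypothetical, chosen for the example; this merely mimics the masking UIDAI applies and is not an official API.

```python
def mask_aadhaar(aadhaar: str) -> str:
    """Hide the first eight digits of a 12-digit Aadhaar number.

    Accepts numbers with or without spaces/hyphens; only the last
    four digits remain visible, as on a masked Aadhaar card.
    """
    digits = "".join(ch for ch in aadhaar if ch.isdigit())
    if len(digits) != 12:
        raise ValueError("An Aadhaar number has exactly 12 digits")
    return "XXXX-XXXX-" + digits[-4:]


print(mask_aadhaar("1234 5678 9012"))  # XXXX-XXXX-9012
```

The same principle applies to any identifier you share for verification: expose only the minimum needed to confirm your identity.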
Conclusion:
It is important to secure your Aadhaar card data effectively. Locking your biometrics is a valuable security measure that provides an additional layer of protection, safeguarding your identity from potential scammers and preventing unauthorized access to and misuse of your Aadhaar data. In today's evolving digital landscape, protecting personal information is of utmost importance, and cyber hygiene practices and security measures must be adopted by all of us, helping to establish cyber peace and harmonize cyberspace.
References:
- https://www.livehindustan.com/bihar/story-cyber-crime-csc-operator-and-bank-sweeper-arrested-in-nawada-cheating-by-cloning-finger-prints-8913667.html
- https://www.indiatoday.in/news-analysis/story/cloning-fingerprints-fake-shell-entities-is-your-aadhaar-as-safe-as-you-may-think-2398596-2023-06-27
Introduction
The global food industry is vast and complex, influencing consumer behaviour, policy, and health outcomes worldwide. Misinformation within this sector is pervasive, however, with significant consequences for public health and market dynamics. It can arise from various sources, including misleading marketing campaigns, unsubstantiated health claims, and misrepresentation of food production practices, whether through public endorsements or otherwise. Nutrition misinformation is one such example: the promotion of false or unproven products for profit can mislead consumers and harm their interests, and misleading claims about the nutritional value of food products and production processes are common. On the global stage, food misinformation distorts public understanding of nutrition, food safety, and environmental impacts, with significant consequences for public health, consumer trust, and the economy.
Rise of Nutritional Misinformation and Consumer Distrust
Health and nutrition-related misinformation is one of the most prevalent types in the food sector. Businesses frequently advertise their products as "natural" or "healthy" without providing sufficient data to back up these claims, tricking customers into buying goods that might be heavy in fat, sugar, or salt. Words like "superfood" are frequently used without supporting evidence from science, giving the impression that they are healthier.
Misinformation also impacts the sustainability and ethics of food production. Claims of "sustainable" or "ethical" sourcing are frequently exaggerated or fabricated, leaving consumers unaware of the true environmental and social costs associated with certain products.
This lack of clarity is not only observed in general food trends but also within organisations meant to provide trustworthy information. The International Food Information Council (IFIC) has faced significant criticism for allegedly promoting nutrition-based misinformation to safeguard the interests of large food corporations, potentially compromising public health. IFIC's nutritive claims were questioned in a November 2022 study published through the National Institutes of Health, USA, which reported that IFIC promotes food and beverage company interests and undermines the accurate dissemination of scientific evidence on diet and health. The study set out to determine whether the nutritional value of certain foods or diets is framed to favour business goals, leaving consumers misinformed about what constitutes a truly healthy diet.
Another source of misinformation is the growing ‘free-from’ fad. In the US, the “free-from” label marks a category of products that claim to be free from certain ingredients or chemicals, and the category has been growing steadily by 7% annually. These labels often tout products as healthier because of a simpler ingredient list. Although seemingly harmless, the 'free-from' trend often obscures transparency in ingredient disclosure, which can breed consumer distrust in the long run and make shoppers hesitant.
The Harmful Effects of Food Misinformation
Misinformation about nutrition and food safety can directly affect public health.
Consumers may unknowingly accept false claims or avoid certain foods without scientific basis, adopting harmful dietary habits that can lead to malnutrition or other health problems. By the time they realise they have been misled, their trust is eroded, not only in such companies but also in the regulators. This distrust can reduce consumer confidence and disrupt market stability.
Some food-related misinformation downplays the environmental impact that certain food production practices have. An example of such a situation is the promotion of meat alternatives as being entirely eco-friendly without considering all environmental factors. This can mislead consumers and obscure the complex environmental effects of food production systems.
Misinformation can distort consumer purchasing habits, potentially reducing demand for certain products and creating unfair competition. Small-scale producers suffer disproportionately, while large corporations may exploit misinformation to maintain their dominance in the market. Regulatory checks, open communication, and public education campaigns are needed to combat mis/disinformation in the global food sector and enable consumers to make sustainable, healthful, and informed decisions.
CyberPeace Recommendations
- Unfair trade practices like providing misleading information or unchecked claims on food products should be better addressed by the regulators. Companies must provide clear, transparent and accurate information about their products as mandated under the Food Safety and Standards (Advertising and Claims) Regulations, 2018. This information should include the true origins, production methods, and nutritional content on their labels.
- Initiatives and investments by public health organisations and food authorities towards educating consumers and improving food literacy should be encouraged.
- Regulating social media endorsement is also crucial to prevent the spread of misinformation and unchecked claims. Without proper due diligence on product details, influencers may unknowingly mislead their audience, causing potential harm.
- The Social Media Platforms can partner with nutritionists, dietitians, and other health professionals who are content creators, as they can help in understanding and promoting accurate, science-based nutrition information and debunk any misleading claims.
- Campaigns should be encouraged to spread public awareness about the harms of food-related misleading claims or trends. Emphasis should be on evidence-based nutritional guidance. The ongoing research towards food safety, nutrition, and true information should be actively communicated to keep the public informed. Combating food misinformation requires more robust regulations, improved transparency, and heightened consumer awareness and vigilance.
References
- https://timesofindia.indiatimes.com/india/label-claims-on-packaged-food-could-be-misleading-icmr/articleshow/110053363.cms
- https://www.outlookindia.com/hub4business/empowering-change-freedom-food-alliance-takes-on-global-food-industry-misinformation
- https://insightsnow.com/misinformation-hurting-food-business/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9618198/pdf/12992_2022_Article_884.pdf

Introduction
In September 2025, social media feeds were flooded with striking vintage-style saree portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to Google Gemini's "Nano Banana" AI tool, uploading ordinary selfies and watching them transform into Bollywood-style, cinematic, 1990s posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts about privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend among its most popular uses. Uploaded photographs are processed by an AI system that uses machine learning to alter the pictures according to the prompt. Users then share the transformed portraits on Instagram, WhatsApp, and other social media platforms, fuelling the trend's virality.
Law Enforcement Agency Warnings
- Several Indian police agencies have issued strong advisories against participating in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms about how photo applications can appear entertaining yet pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe effects on individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to generate imitated identities, evade security measures, or commit financial fraud. Facial recognition data shared through these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation occurs when fake applications posing as genuine AI tools harvest users' personal data and financial details; such malicious platforms imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may expose users to several layers of privacy threats that go far beyond the instant gratification of pleasing images. Biometric data harvesting is the most critical issue, since facial recognition information posted on these platforms becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and kept for longer periods if used for feedback or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps had non-transparent data policies or obscured users' ability to opt out of data gathering, opening opportunities for personal photographs to be shared with unknown entities for commercial use. Training-data exploitation involves using uploaded personal photos to improve AI models without notifying or compensating users; although Google lets users turn off data sharing in its privacy settings, most users are unaware of these options. Cross-platform data integration heightens privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Inadequate informed consent remains a major problem, with users joining trends unaware of the full extent of the information they are sharing. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:-
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, use stock images or non-identifiable pictures where they satisfy the program's creative features without compromising biometric security. Strong privacy settings should also be configured on every social media platform and AI app, limiting who can access your data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly regarding the upload of personal data. Companies should carry out due diligence before adopting any commercially available AI product, to ensure its privacy and security standards meet the company's requirements. Training should also familiarise employees with deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used; tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI offer real-time detection with reported accuracy above 95%. Blockchain-based content verification can also create tamper-proof records of original digital assets, making it far harder to pass off deepfake content as authentic.
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity verification systems, authentication should include voice and face liveness checks to confirm that the person is real and not presenting fake or manipulated media. Digital literacy programs should be implemented to equip users with knowledge of AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting suspected AI crimes and assisting in the fight against malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory frameworks should require transparent data policies in AI applications and give users rights and choices over their biometric and other data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of today's generative artificial intelligence. While such tools offer new creative opportunities, they also pose grave privacy and security concerns that users, organisations, and policymakers must take seriously. By establishing comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/