#FactCheck - Viral Videos of Mutated Animals Debunked as AI-Generated
Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. On thorough investigation, these claims were found to be false. No credible source for such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and the merged shoes of spectators. AI-content detectors confirmed the artificial nature of these videos. Further, digital creators were found posting similar fabricated videos. Thus, these viral videos are conclusively identified as AI-generated and not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.



Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos thoroughly and found certain anomalies that are commonly seen in AI-manipulated footage.



Taking a cue from this, we checked all the videos with an AI video detection tool named TrueMedia. The tool found the audio of the video to be AI-generated. We then divided the video into keyframes, and the tool found the depicted imagery to be AI-generated as well.
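TrueMedia's internals are proprietary, but the keyframe-splitting step described above can be illustrated with a simple shot-boundary heuristic. The sketch below is a hypothetical, minimal example, not TrueMedia's method: it assumes grayscale frames are available as nested lists of 0-255 pixel values, and it selects a keyframe whenever the difference from the previous frame exceeds a threshold.

```python
def frame_difference(a, b):
    """Mean absolute per-pixel difference between two same-sized
    grayscale frames (nested lists of 0-255 values), scaled to 0-1."""
    total = sum(abs(pa - pb)
                for row_a, row_b in zip(a, b)
                for pa, pb in zip(row_a, row_b))
    n_pixels = len(a) * len(a[0])
    return total / (255 * n_pixels)

def select_keyframes(diff_scores, threshold=0.3):
    """Pick frame indices whose difference score from the previous
    frame exceeds `threshold` -- a simple shot-boundary heuristic.
    Frame 0 is always treated as a keyframe."""
    keyframes = [0]
    for i, score in enumerate(diff_scores[1:], start=1):
        if score > threshold:
            keyframes.append(i)
    return keyframes
```

In practice the per-frame difference scores would be computed over frames decoded from the video file (for example with a library such as OpenCV), and only the selected keyframes would be submitted to the detection tool as still images.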


In the same way, we investigated the second video: we divided it into keyframes and analyzed them with the same AI-detection tool, TrueMedia.

The video was flagged as suspicious, so we analyzed its individual frames.

The detection tool found them to be AI-generated, confirming that the video is AI-manipulated. We then analyzed the third and final video, which the detection tool also flagged as suspicious.


The detection tool found the frames of this video to be AI-manipulated as well, establishing that it, too, is AI-generated. Hence, the claim made in all three videos is misleading and fake.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated and not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI-content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading

Introduction
The role of ‘Small and Medium Enterprises’ (SMEs) in the economic and social development of the country is well established. The SME sector is often driven by individual creativity and innovation. Contributing 8% of the country’s GDP, 45% of its manufactured output and 40% of its exports, SMEs provide employment to about 60 million persons through over 26 million enterprises producing over six thousand products.
It would be an understatement to say that the SMEs sector in India is highly heterogeneous in terms of the size of the enterprises, variety of products and services produced and the levels of technology employed. With the SME sector booming across the country, these enterprises are contributing significantly to local, state, regional and national growth and feeding into India’s objectives of inclusive, sustainable development.
As the digital economy expands, SMEs cannot be left behind and must integrate online to be able to grow and prosper. This development is not without its risks and cybersecurity concerns and digital threats like misinformation are fast becoming a pressing pain point for the SME sector. The unique challenge posed to SMEs by cyber threats is that while the negative consequences of digital risks are just as damaging for the SMEs as they are for larger industries, the former’s ability to counter these threats is not at par with the latter, owing to the limited nature of resources at their disposal. The rapid development of emerging technologies like artificial intelligence makes it easier for malicious actors to develop bots, deepfakes, or other forms of manipulated content that can steer customers away from small businesses and the consequences can be devastating.
Misinformation is the sharing of inaccurate and misleading information, and the act can be both deliberate and unintentional. Malicious actors can use fake reviews, rumours, or false images to promote negative content or create backlash against a business’ brand and reputation. For a fledgling or growing enterprise, its credibility is a critical asset and any threat to the same is as much a cause for concern as any other operational hindrance.
Relationship Building to Counter Misinformation
We live in a world dominated by brands. A brand should ideally inspire trust: it is the single most powerful and unifying characteristic embodying an organisation's culture and values, and once well established, it can create incremental value. Businesses have reported industry rumours in which misinformation resulted in the devaluation of a product, sowed mistrust among customers, and negatively impacted revenue. Mitigating strategies to counter these digital downsides include greater due diligence and basic cyber hygiene practices, like two-factor or multi-factor authentication, as well as open communication of one's experiences within larger professional and business networks.
The loss of customer trust can be fatal for a business, and for an SME, access to the scale of digital and other resources required to restore its reputation may simply not be feasible. Creating your brand story is not just about the selling pitch you give to customers and investors; it is also about larger qualitative factors such as your own motivation for starting the enterprise or the emotional connection your audience enjoys with your organisation. The brand story is a mosaic of tangible and intangible elements that together determine how the brand is perceived by its various stakeholders. Building a compelling, fortified brand story that resonates deeply with people is an important step in developing a robust reputation. It can help inoculate against several degrees of misinformation and malicious attempts and ensure that customers continue to place their faith in the brand despite attempts to hurt this dynamic.
Engaging with the target audience, i.e., the customer base, is part of an effective marketing and misinformation-inoculation strategy. SMEs should also continuously assess their strategies, adapt to market changes, and remain agile in their approach to stay competitive and relevant in today's dynamic business environment. These strategies lead to greater customer engagement through feedback, reviews and surveys, which help build trust and loyalty. Innovative and dynamic customer service further engages the target audience and keeps the business relevant.
Crisis Management and Response
Having a crisis management strategy is an important practice for all SMEs and supports better policy implementation. Businesses need greater due diligence and basic cyber hygiene practices, like two-factor authentication, essential compliances, strong password protocols, and transparent disclosure.
The following steps should form part of a crisis management and response strategy:
- Assessing the damage by identifying the misinformation spread and its impact is the first step.
- Issuing a response in the form of a public statement by engaging the media should precede legal action.
- Two levels of communication need to take place in response to a misinformation attack. The first tier is internal: employees should be briefed on the implications of the incident and the organisation's response plan. The second is aimed at customers, via direct outreach, to clarify the situation and provide accurate information on the matter. If required, employees can be trained to handle customer enquiries regarding the misinformation.
- The digital engagement of the enterprise should be promptly updated and social media platforms and online communications must address the issue and provide clarity and factual information.
- Immediate action must include a plan to rebuild reputations and trust by ensuring customers of the high quality of products and services. The management should seek customer feedback and show commitment to improving processes and transparency. Sharing positive testimonials and stories of satisfied customers can also help at this stage.
- Engaging with the community and collaborating with organisations is also an important part of crisis management.
While these steps are for rebuilding and crisis management, further steps also need to be taken:
- Monitoring customer sentiment and gauging the effectiveness of the efforts taken is also necessary; if required, strategic adjustments can be made in response to evolving circumstances.
- Depending on the severity of the impact, management may choose to engage the professional help of PR consultants and crisis management experts to develop comprehensive recovery plans and help navigate the situation.
- A long-term strategy which focuses on building resilience against future attacks is important. Along with this, engaging in transparency and proactive communication with stakeholders is a must.
Legal and Ethical Considerations
SME administrators must prioritise ethical market practices and appreciate that SMEs are subject to laws dealing with defamation, intellectual property rights (trademark and copyright infringement in particular), data protection and privacy, and consumer protection. Knowledge of these laws, and ensuring that there is no infringement of the rights of other enterprises or their consumers, is integral to continuing to do business legally.
Ethical and transparent business conduct includes clear and honest communication and proactive public redressal mechanisms in the event of misinformation or mistakes. These efforts go a long way towards building trust and accountability.
Proactive public engagement is an important step in building relationships. SMEs can engage with the community where they conduct their business through outreach programs and social media engagement. Efforts to counter misinformation through public education campaigns that alert customers and other stakeholders about misinformation serve the dual purpose of countering misinformation and creating deep community ties. SME administrators should monitor content and developments in their markets and sectors to ensure that their marketing practices are ethical and not creating or spreading misinformation, be it in the form of active sensationalising of existing content or passive dissemination of misinformation created by others. Fact-checking tools and expert consultations can help address and prevent a myriad of problems and should be incorporated into everyday operations.
Conclusion
Developing strong cybersecurity protocols, practising basic digital hygiene and ensuring regulatory compliances are crucial to ensure that a business not only survives but also thrives. Therefore, a crisis management plan and trust-building along with ethical business and legal practices go a long way in ensuring the future of SMEs. In today's digital landscape, misinformation is pervasive, and trust has become a cornerstone of successful business operations. It is the bedrock of a resilient and successful SME. By implementing and continuously improving trust-building efforts, businesses can not only navigate the challenges of misinformation but also create lasting value for their customers and stakeholders. Prioritising trust ensures long-term growth and sustainability in an ever-evolving digital landscape.
References
- https://SME.gov.in/sites/default/files/SME-Strategic-Action-Plan.pdf
- https://carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guide?lang=en
- https://dcSME.gov.in/Report%20of%20Expert%20Committee%20on%20SMEs%20-%20The%20U%20K%20Sinha%20Committee%20constitutes%20by%20RBI.pdf

Over The Top (OTT)
OTT messaging platforms have taken the world by storm; people across the globe rely on them daily, and they have changed the dynamics of accessibility and information speed forever. WhatsApp, acquired by the tech giant Meta (then Facebook) in 2014, is one of the leading OTT messaging platforms. All tasks, whether personal or professional, can be performed over WhatsApp, and as of today, WhatsApp has 2.44 billion users worldwide, with 487.5 million users in India alone[1]. With such a vast user base, it is pertinent to have proper safety and security measures and mechanisms on these platforms, along with active reporting options for users. The growth of OTT platforms has been exponential in the previous decade. As internet penetration increased during the Covid-19 pandemic, the following factors contributed towards the growth of OTT platforms –
- Urbanisation and Westernisation
- Access to Digital Services
- Media Democratization
- Convenience
- Increased Internet Penetration
These factors have been influential in providing exceptional content and services to consumers, and extensive internet connectivity has allowed people from the remotest parts of the country to use OTT messaging platforms. However, it is equally pertinent that platforms maintain user safety and security and abide by policies and regulations to ensure accountability and transparency.
New Safety Features
Keeping in mind the safety requirements and threats that accompany emerging technologies, WhatsApp has been proactive in rolling out new technology- and policy-based security measures. A number of new security features have been added to WhatsApp to make it more difficult for attackers to take control of other people’s accounts. The app’s privacy- and security-focused features go beyond its assertion that online chats and discussions should be as private and secure as in-person interactions. Numerous technological advancements toward that goal have focused on message security, such as adding end-to-end encryption to conversations. The new features reportedly increase user security on the app.
WhatsApp announced that three new security features are now available to all users on Android and iOS devices: Account Protect, Device Verification, and Automatic Security Codes.
- For instance, a new feature named “Account Protect” is triggered when users migrate an account from an old device to a new one. Users may see an alert on their previous handset asking them to confirm that they are truly transitioning away from it; an unexpected alert may be a sign that someone is trying to access their account without their knowledge.
- Another feature, called “Device Verification”, operates in the background to ensure that attackers cannot use malware to access other people’s messages. It authenticates devices without the user’s knowledge or any action on their part. In particular, WhatsApp says it is concerned about unlicensed WhatsApp applications that contain spyware made explicitly for this purpose; the company’s new background checks help authenticate user accounts to prevent it.
- The final feature, dubbed “Automatic Security Codes”, builds on an existing service that lets users verify that they are speaking with the person they believe they are. This verification can still be done manually, but by default an automated check will now determine whether the connection is secure.
While users can already view the security code by visiting a contact’s profile, the platform will develop a concept called “Key Transparency” to make it easier for users to verify the code’s validity. If you use WhatsApp on Android, update to the most recent build, as these features have already been released there. On iOS, the security features have not yet been released, although an update is anticipated soon.
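The security-code idea above can be made concrete with a toy sketch. WhatsApp's actual scheme builds on the Signal protocol's safety numbers and is not reproduced here; the hypothetical function below only illustrates the core property, namely that both parties derive the same short, human-comparable code from the pair of public keys, so they can compare it out of band.

```python
import hashlib

def security_code(pub_key_a, pub_key_b, groups=6):
    """Derive a short, human-comparable code from two public keys.
    Sorting the keys first makes the code identical regardless of
    which side computes it."""
    material = b"".join(sorted([pub_key_a, pub_key_b]))
    digest = hashlib.sha256(material).digest()
    # Render successive 2-byte chunks as zero-padded 5-digit groups.
    chunks = [
        str(int.from_bytes(digest[2 * i:2 * i + 2], "big")).zfill(5)
        for i in range(groups)
    ]
    return " ".join(chunks)
```

If both users read out the same groups of digits, they can be confident they hold each other's genuine keys; a mismatch suggests a man-in-the-middle. The automated version described above performs an equivalent comparison without user involvement.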
Conclusion
Digital safety is a crucial matter for netizens across the world. Platforms like WhatsApp, which enjoy a massive user base, should lead the way in OTT cyber security by adopting emerging technologies, user reporting mechanisms, and transparency in their principles, and by encouraging other platforms to replicate these security mechanisms to keep bad actors at bay. Account Protect, Device Verification, and Automatic Security Codes will go a long way in protecting users’ interests while maintaining convenience, showing that the future with such platforms is bright and secure.
[1] https://verloop.io/blog/whatsapp-statistics-2023/#:~:text=1.,over%202.44%20billion%20users%20worldwide.

In an era defined by perpetual technological advancement, the hitherto uncharted territories of the human experience are progressively being illuminated by the luminous glow of innovation. The construct of privacy, once a straightforward concept involving personal secrets and solitude, has evolved into a complex web of data protection, consent, and digital rights. This notion of privacy, which often feels as though it elusively ebbs and flows like the ghost of a bygone epoch, is now confronted with a novel intruder – neurotechnology – which promises to redefine the very essence of individual sanctity.
Why Neuro Rights
At the forefront of this existential conversation lie ventures like Elon Musk's Neuralink. This company, which finds itself at the confluence of fantastical dreams and tangible reality, teases a future where the contents of our thoughts could be rendered as accessible as the words we speak. An existence where machines not only decipher our mental whispers but hold the potential to echo back, reshaping our cognitive landscapes. This startling innovation sets the stage for the emergence of 'neurorights' – a paradigm aimed at erecting a metaphorical firewall around the synapses and neurons that compose our innermost selves.
At institutions such as the University of California, Berkeley, researchers, under the aegis of cognitive scientists like Jack Gallant, are already drawing the map of once-inaccessible territories within the mind. Gallant's landmark study, which involved decoding the brain activity of volunteers as they absorbed visual stimuli, opened Pandora's box regarding the implications of mind-reading. The paper, published a decade ago, was an inchoate step toward understanding the narrative woven within the cerebral cortex. Although his work yielded only a rough sketch of the observed video content, it heralded an era where thought could be translated into observable media.
The Growth
This rapid acceleration of neuro-technological prowess has not gone unnoticed on the sociopolitical stage. In a pioneering spirit reminiscent of the robust legislative eagerness of early democracies, Chile boldly stepped into the global spotlight in 2021 by legislating neurorights. The Chilean senate's decision to constitutionalize these rights sent ripples the world over, signalling an acknowledgement that the evolution of brain-computer interfaces was advancing at a daunting pace. The initiative was spearheaded by visionaries like Guido Girardi, a former senator whose legislative foresight drew clear parallels between the disruptive advent of social media and the potential upheaval posed by emergent neurotechnology.
Pursuit of Regulation
Yet the pursuit of regulation in such an embryonic field is riddled with intellectual quandaries and ethical mazes. Advocates like Allan McCay articulate the delicate tightrope that policy-makers must traverse. The perils of premature regulation are as formidable as the risks of a delayed response – the former potentially stifling innovation, the latter risking a landscape where technological advances could outpace societal control, engendering a future fraught with unforeseen backlashes.
Such is the dichotomy embodied in the story of Ian Burkhart, whose life was irrevocably altered by the intervention of neurotechnology. Burkhart's experience, transitioning from quadriplegia to digital dexterity through sheer force of thought, epitomizes the utopic potential of neuronal interfaces. Yet, McCay issues a solemn reminder that with great power comes great potential for misuse, highlighting contentious ethical issues such as the potential for the criminal justice system to overextend its reach into the neural recesses of the human psyche.
Firmly ensconced within this brave new world, the quest for prudence is of paramount importance. McCay advocates for a dyadic approach, where privacy is vehemently protected and the workings of technology proffered with crystal-clear transparency. The clandestine machinations of AI and the danger of algorithmic bias necessitate a vigorous, ethical architecture to govern this new frontier.
As legal frameworks around the globe wrestle with the implications of neurotechnology, countries like India, with their burgeoning jurisprudence regarding privacy, offer a vantage point into the potential shape of forthcoming legislation. Jurists and technology lawyers, including Jaideep Reddy, acknowledge ongoing protections yet underscore the imperativeness of continued discourse to gauge the adequacy of current laws in this nascent arena.
Conclusion
The dialogue surrounding neurorights emerges, not merely as another thread in our social fabric, but as a tapestry unto itself – intricately woven with the threads of autonomy, liberty, and privacy. As we hover at the edge of tomorrow, these conversations crystallize into an imperative collective undertaking, promising to define the sanctity of cognitive liberty. The issue at hand is nothing less than a societal reckoning with the final frontier – the safeguarding of the privacy of our thoughts.