#FactCheck - "AI-Generated Image of UK Police Officers Bowing to Muslims Goes Viral"
Executive Summary:
A viral image circulating on social media, showing UK police officers bowing to a group of Muslims, has sparked debate and discussion. An investigation by the CyberPeace Research Team found that the image is AI-generated. The viral claim is false and misleading.

Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.


Fact Check:
A reverse image search of the viral picture did not lead to any credible news source or original post confirming its authenticity. Image analysis revealed a number of anomalies typical of AI-generated images, such as in the officers' uniforms and facial expressions. The shadows and reflections on the officers' uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.
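Alongside visual inspection, a quick programmatic check is whether an image carries camera EXIF metadata. This is only an illustrative sketch (not the research team's actual workflow, and not proof on its own): genuine photographs usually record camera make, model, and capture settings, while AI-generated images typically ship with no EXIF at all.

```python
# Illustrative provenance check using Pillow. Absence of EXIF is merely
# one weak signal of AI generation, since metadata can also be stripped
# by social media platforms or editing tools.
from PIL import Image

def exif_signals(path):
    """Return simple provenance signals read from an image file's EXIF."""
    img = Image.open(path)
    exif = img.getexif()
    return {
        "has_exif": len(exif) > 0,
        "camera_make": exif.get(271),   # EXIF tag 271 = Make
        "camera_model": exif.get(272),  # EXIF tag 272 = Model
    }
```

A freshly generated PNG with no metadata would return `has_exif: False` and empty camera fields, which would prompt further checks rather than a verdict.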

We then analysed the image using an AI detection tool named True Media. The tool indicated that the image was highly likely to have been generated by AI.



We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. CyberPeace Research Team confirms that the picture was artificially created, and the viral claim is misleading and false.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading
Related Blogs

Over The Top (OTT)
OTT messaging platforms have taken the world by storm; people across the globe rely on them, and they have changed the dynamics of accessibility and information speed forever. WhatsApp, one of the leading OTT messaging platforms, has been part of the tech giant Meta (then Facebook) since 2014. All tasks, whether personal or professional, can be performed over WhatsApp, and as of today it has 2.44 billion users worldwide, with 487.5 million users in India alone[1]. With such a vast user base, it is pertinent that these platforms have proper safety and security mechanisms and active reporting options for users. The growth of OTT platforms has been exponential in the previous decade. As internet penetration increased during the Covid-19 pandemic, the following factors contributed to the growth of OTT platforms –
- Urbanisation and Westernisation
- Access to Digital Services
- Media Democratization
- Convenience
- Increased Internet Penetration
These factors have been influential in providing exceptional content and services to consumers, and extensive internet connectivity has allowed people from the remotest parts of the country to use OTT messaging platforms. But it is equally important that platforms maintain user safety and security, and abide by policies and regulations, to ensure accountability and transparency.
New Safety Features
Keeping in mind the safety requirements and threats that come with emerging technologies, WhatsApp has been proactive in rolling out new technology- and policy-based security measures. A number of new security features have been added to WhatsApp to make it more difficult to take control of other people's accounts. The app's privacy- and security-focused features go beyond its assertion that online chats and discussions should be as private and secure as in-person interactions. Numerous technological advancements towards that goal have focused on message security, such as adding end-to-end encryption to conversations. The new features are intended to further increase user security on the app.
WhatsApp announced that three new security features are now available to all users on Android and iOS devices. The new security features are called Account Protect, Device Verification, and Automatic Security Codes.
- For instance, a new feature named "Account Protect" will run when users migrate an account from an old device to a new one. Users may see an alert on their previous handset asking them to confirm that they are truly transitioning away from it; an unexpected alert of this kind may be a sign that someone is trying to access their account without their knowledge.
- To make sure attackers cannot install malware to access other people's messages, another function called "Device Verification" operates in the background, authenticating devices without requiring any action from the user. WhatsApp says it is particularly concerned about unlicensed WhatsApp applications that contain spyware made explicitly for this purpose; the company's new checks help authenticate user accounts to prevent it.
- The final feature, dubbed "Automatic Security Codes", builds on an existing service that lets users verify that they are speaking with the person they believe they are. Verification is still possible manually, but by default an automated version will now be carried out, with a tool that determines whether the connection is secure.
While users can now view the code by visiting a user’s profile, the social media platform will start to develop a concept called “Key Transparency” to make it easier for its users to verify the validity of the code. Update to the most recent build if you use WhatsApp on Android because these features have already been released. If you use iOS, the security features have not yet been released, although an update is anticipated soon.
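To make the idea behind security codes concrete, here is a hedged sketch, not WhatsApp's actual algorithm: such codes are generally derived from both parties' public identity keys, so that if both phones display the same code, no third party has substituted its own key into the conversation.

```python
# Simplified illustration of security-code verification. The key values
# and code format here are invented for demonstration; real messengers
# derive codes from their own key material and encoding rules.
import hashlib

def security_code(key_a: bytes, key_b: bytes) -> str:
    """Derive a short numeric code from two public keys, order-independent."""
    material = b"".join(sorted([key_a, key_b]))      # same input for both users
    digest = hashlib.sha256(material).hexdigest()
    digits = str(int(digest, 16))[:30]               # take 30 decimal digits
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

# Both parties compute the code from the same pair of keys, in any order,
# so a matching code on both screens indicates no key substitution.
alice_view = security_code(b"alice-pubkey", b"bob-pubkey")
bob_view = security_code(b"bob-pubkey", b"alice-pubkey")
assert alice_view == bob_view
```

The design point is that the code is a fingerprint of the key pair: any man-in-the-middle who swaps in a different key changes the fingerprint, which is what the automated check detects.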
Conclusion
Digital safety is a crucial matter for netizens across the world. Platforms like WhatsApp, which enjoy a massive user base, should lead the way in OTT cybersecurity by embracing emerging technologies, user reporting, and transparency in their principles, and should encourage other platforms to replicate their security mechanisms to keep bad actors at bay. Account Protect, Device Verification, and Automatic Security Codes will go a long way in protecting users' interests while maintaining convenience, showing that the future with such platforms is bright and secure.
[1] https://verloop.io/blog/whatsapp-statistics-2023/#:~:text=1.,over%202.44%20billion%20users%20worldwide.

The Ghibli trend has been in the news for the past couple of weeks for multiple reasons, good and bad. The nostalgia everyone has for the art form has made people turn a blind eye to what the trend means for the artists who painstakingly create the art. AI platforms may be trained on artistic material without the artists' explicit permission, effectively downgrading the rights of those artists. The artistic community has reached a point where artists are questioning the worth of their craft, which this software can recreate in a couple of seconds without any thought for what it is doing. OpenAI's update to ChatGPT makes it simple for users to create illustrations in the style popularised by Hayao Miyazaki, turning anything from personal pictures to movie scenes into Ghibli-style art. Advances in AI-generated art, including the Ghibli style, raise critical questions about artistic integrity, intellectual property, and data privacy risks.
AI and the Democratization of Creativity
AI-powered tools have lowered barriers, enabling more people to engage in artistic expression. AI allows people to create appealing art regardless of their artistic capabilities. ChatGPT's update has, in effect, democratised art creation: the abilities of the user no longer matter. It makes art accessible, efficient, and open to creative experimentation for many.
Unfortunately, these developments also pose challenges to original artistry and the labour of human creators. The concern is not just that AI may replace artists, but also the potential misuse it can enable, including unauthorized replication of distinct styles and deepfake applications. Used ethically, AI can enhance artistic processes: it can assist with repetitive tasks, improve efficiency, and enable creative experimentation.
However, its ability to mimic existing styles raises concerns. AI-generated content could devalue human artists' work, create copyright issues, and even pose data privacy risks. AI models trained on art without authorization can be exploited for misinformation and deepfakes, making human oversight essential. Some artists believe that AI artworks are disrupting the accepted norms of the art world. Additionally, AI can misinterpret prompts, producing distorted or unethical imagery that contradicts artistic intent and cultural values, further underscoring the need for oversight.
The Ethical and Legal Dilemmas
The main dilemma surrounding trends such as the Ghibli trend is whether they compromise human effort by blurring the line between inspiration and infringement of artistic rights. Further, an issue most users do not consider is whether the personal content (personal pictures, in this case) they upload to AI models poses a risk to their privacy. A related issue is that AI-generated content can be misused to spread misinformation through misleading or inappropriate visuals.
These negative effects can only be balanced by a policy framework that ensures the fair use of AI in art. Such a framework should also ensure that AI models are trained in a manner that is fair to the artists who originally created a style. Human oversight is needed to moderate AI-generated content, and it can be institutionalised through ethical AI usage guidelines for platforms that host AI-generated art.
Conclusion: What Can Potentially Be Done?
AI is not a replacement for human effort; it is meant to ease it. We need to promote a balanced AI approach that protects the integrity of artists while continuing to foster innovation, and to strengthen copyright laws to address AI-generated content. Labelling AI content, and ensuring it is disclosed as AI-generated, is the first step. Furthermore, human artists on whose work an AI model is trained should be fairly compensated. There is an increasing need for global AI ethics guidelines to ensure transparency, ethical use, and human oversight in AI-driven art. The need of the hour is for industries to work collaboratively with regulators to ensure the responsible use of AI.
References
- https://medium.com/@haileyq/my-experience-with-studio-ghibli-style-ai-art-ethical-debates-in-the-gpt-4o-era-b84e5a24cb60
- https://www.bbc.com/future/article/20241018-ai-art-the-end-of-creativity-or-a-new-movement

Introduction
Governments worldwide are enacting cybersecurity laws to enhance resilience and secure cyberspace against growing threats such as data breaches, cyber espionage, and state-sponsored attacks. In response, the EU Council has been adopting new laws and regulations under its EU Cybersecurity Package, a framework to enhance cybersecurity capacities across the EU and protect critical infrastructure, businesses, and citizens. Recently, the Council adopted the Cyber Solidarity Act, which aims to improve coordination among EU member states for increased cyber resilience. Since EU regulations play a significant role in shaping the global regulatory environment, such developments are important to watch.
Overview of the Cyber Solidarity Act
The Act sets up a European Cyber Security Alert System consisting of Cross-Border Cyber Hubs across Europe. These hubs collect intelligence and act on cyber threats, leveraging emerging technologies such as Artificial Intelligence (AI) and advanced data analytics to share warnings with other cyber data centres across EU national borders. This is expected to help authorities respond to cyber threats and incidents more quickly and effectively.
Further, it provides for the creation of a new Cybersecurity Emergency Mechanism to enhance incident response in the EU. This will include testing for vulnerabilities in critical sectors such as transport, energy, healthcare, and finance, and creating a reserve of private parties to provide mutual technical assistance in response to incident requests from EU member states, or from associated third countries of the Digital Europe Programme, in the event of a large-scale incident.
Finally, it also provides for the establishment of a European Cybersecurity Incident Review Mechanism to monitor the impact of the measures under this law.
Key Themes
- Greater Integration: The success of this Act depends on the quality of cooperation and interoperability between various governmental stakeholders across defence, diplomacy, etc. with regard to data formats, taxonomy, data handling and data analytics tools. For example, Cross-Border Cyber Hubs are mandated to take the interoperability guidelines set by the European Union Agency for Cybersecurity (ENISA) as a starting point for information-sharing principles with each other.
- Public-Private Collaboration: The Act provides a framework to govern relationships between stakeholders such as the public sector, the private sector, academia, civil society and the media, recognising that public-private collaboration is crucial for strengthening the EU's cyber resilience. In this regard, National Cyber Hubs are tasked with strengthening information sharing between public and private entities.
- Centralized Regulation: The Act aims to strengthen cyber solidarity across the EU by outlining dedicated infrastructure for improved coordination and intelligence-sharing on cyber events among member states. Each selected member state and the European Cybersecurity Competence Centre, a body tasked with funding cybersecurity projects in the EU, are to make equal matching contributions for procuring the tools, infrastructure and services.
- Setting a Global Standard: The underlying rationale behind strengthening cybersecurity in the EU is not just to protect EU citizens' fundamental rights from cyber threats, but also to drive world-class cybersecurity standards for essential and critical services, standards that several countries look to as a model.
Conclusion
In the current digital landscape, governments, businesses, critical sectors and people are increasingly interconnected through information and network systems and are adopting emerging technologies like AI, exposing them to multidimensional vulnerabilities in cyberspace. The EU continues to be a leader in setting standards for the safety of participants in the digital arena through cybersecurity regulation. The Cyber Solidarity Act's design, with its cross-border cooperation, public-private collaboration, and proactive incident monitoring and response, sets a precedent for a unified approach to cybersecurity. As the EU's Cybersecurity Package continues to evolve, it will play a crucial role in ensuring a secure and resilient digital future for all.
Sources
- https://www.consilium.europa.eu/en/press/press-releases/2024/12/02/cybersecurity-package-council-adopts-new-laws-to-strengthen-cybersecurity-capacities-in-the-eu/
- https://data.consilium.europa.eu/doc/document/PE-94-2024-INIT/en/pdf
- https://digital-strategy.ec.europa.eu/en/policies/cybersecurity-strategy
- https://www.weforum.org/stories/2024/10/cybersecurity-regulation-changes-nis2-eu-2024/