#FactCheck - "AI-Generated Image of UK Police Officers Bowing to Muslims Goes Viral"
Executive Summary:
A viral picture on social media showing UK police officers bowing to a group of Muslims has sparked debate and discussion. An investigation by the CyberPeace Research Team found that the image is AI-generated. The viral claim is false and misleading.
Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.
Fact Check:
A reverse image search on the viral picture did not lead to any credible news source or original post confirming its authenticity. Image analysis revealed a number of anomalies commonly found in AI-generated images, particularly in the officers' uniforms and facial expressions. The shadows and reflections on the uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.
We then analysed the image using an AI detection tool, True Media, which indicated that the image was highly likely to have been generated by AI.
We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
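For readers who want to reproduce a basic version of this workflow, the sketch below illustrates two lightweight heuristics that can supplement dedicated detectors such as True Media: checking whether a file carries camera EXIF metadata (AI-generated images typically carry none) and a rough error-level analysis, since regions generated or edited separately often compress differently. This is a generic Python illustration under those assumptions, not the research team's actual tooling, and the filename viral_image.jpg is a placeholder.

```python
# Illustrative only: two simple heuristics sometimes used alongside dedicated
# AI-detection tools. Missing camera metadata or an unusual error-level score
# is a weak signal, not proof, that an image is synthetic.
import io

from PIL import Image, ImageChops
from PIL.ExifTags import TAGS


def read_exif(path: str) -> dict:
    """Return whatever EXIF metadata the file carries (AI images often have none)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def mean_error_level(path: str, quality: int = 90) -> float:
    """Re-save the image as JPEG and measure the mean pixel difference.

    A proper error-level analysis is inspected visually; this single number is
    only a coarse summary for illustration.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (len(pixels) * 3)


if __name__ == "__main__":
    # "viral_image.jpg" is a placeholder path, not the actual viral file.
    print("EXIF tags:", read_exif("viral_image.jpg"))
    print("Mean error level:", mean_error_level("viral_image.jpg"))
```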
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. CyberPeace Research Team confirms that the picture was artificially created, and the viral claim is misleading and false.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading
Related Blogs
Introduction
In the age of digital advancement, as technology continues to evolve, so do the methods of crime. The rise of cybercrime poses a range of threats to individuals, organizations, businesses, and government agencies. To combat such crimes, law enforcement agencies are looking for innovative solutions. One such initiative comes from the Surat Police in Gujarat, who have embraced the power of Artificial Intelligence (AI) to bolster their efforts to reduce cybercrime.
Key Highlights
Surat Police in Gujarat, India, has launched an AI-based WhatsApp chatbot, the "Surat Police Cyber Mitra Chatbot," to tackle growing cybercrime. The chatbot provides quick assistance to individuals dealing with a range of cyber issues, from reporting cybercrimes to receiving safety tips. The initiative is the first of its kind in the country and showcases Surat Police's dedication to using advanced technology for public safety. The Surat Police Commissioner-in-Charge commended the use of AI in crime control as a positive step forward, while also stressing the need for continuous improvement in areas such as technological capability, cybercrime-related data acquisition, and training for police personnel.
The Surat Cyber Mitra Chatbot, available on WhatsApp number 9328523417, offers round-the-clock assistance to citizens, allowing them to access crucial information on cyber fraud and legal matters.
The Growing Cybercrime Threat
With the advancement of technology, cybercrime has become more complex due to the interconnectivity of digital devices and the internet. Criminals exploit vulnerabilities in software, networks, and human behavior to perpetrate a wide range of malicious activities for illicit gain. Individuals and organizations face cyber risks that can cause significant financial, reputational, and emotional harm.
Surat Police’s Strategic Initiative
The Surat Police Cyber Mitra Chatbot is an AI-powered tool for instant problem resolution. This innovative approach allows citizens to raise any issue or query from their doorstep and receive immediate, accurate responses. The chatbot is accessible 24 hours a day, seven days a week, and serves as a reliable resource for obtaining legal information related to cyber fraud.
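The internal implementation of the Cyber Mitra chatbot has not been published. The sketch below is therefore a hypothetical, minimal keyword-based responder in Python, included only to illustrate how a FAQ-style assistant of this kind can route common cyber-fraud queries to canned guidance. The intents and replies are assumptions; the helpline number 1930 and the portal cybercrime.gov.in are India's general national cybercrime reporting channels, not details of the Surat chatbot itself.

```python
# Hypothetical illustration only: a minimal keyword-based FAQ responder of the
# kind a "cyber mitra" style chatbot might use. This is not the actual Surat
# Police implementation, whose design has not been made public.

RESPONSES = {
    "report": (
        "To report a cybercrime, file a complaint on the national portal "
        "https://cybercrime.gov.in or call the cybercrime helpline 1930."
    ),
    "fraud": (
        "If money was transferred to a fraudster, call 1930 immediately and "
        "inform your bank so the transaction can be flagged."
    ),
    "safety": (
        "Use strong, unique passwords, enable two-factor authentication, and "
        "never share OTPs or banking details over phone or chat."
    ),
}

DEFAULT_REPLY = (
    "Sorry, I did not understand. Ask about 'report', 'fraud', or 'safety', "
    "or contact your nearest cyber police station."
)


def reply(message: str) -> str:
    """Return canned guidance for the first keyword found in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return DEFAULT_REPLY


if __name__ == "__main__":
    print(reply("How do I report an online fraud?"))
```

In a production setting such a responder would sit behind the WhatsApp Business API and be backed by a richer intent model, but the routing idea is the same.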
The use of AI in police initiatives has been a topic of discussion for some time, and the Surat City Police has taken this step to leverage technology for the betterment of society. The chatbot promises to boost public trust in law enforcement and improve the legal system by addressing citizens' issues, ranging from financial disputes to cyber fraud incidents, within seconds.
This accessibility extends to inquiries such as how to report financial crimes or cyber-fraud incidents and how to understand legal procedures. Ready access to accurate information will not only enhance citizens' trust in the police but also contribute to the efficiency of law enforcement operations, leading to more informed interactions between citizens and the police and fostering a stronger sense of community security and collaboration.
The utilisation of this chatbot will facilitate access to information and empower citizens to engage more actively with the legal system. As trust in the police grows and legal processes become more transparent and accessible, the overall integrity and effectiveness of the legal system are expected to improve significantly.
Conclusion
The Surat Police Cyber Mitra Chatbot is an AI-powered tool that provides round-the-clock assistance to citizens, enhancing public trust in law enforcement and streamlining access to legal information. This initiative bridges the gap between law enforcement and the community, fostering a stronger sense of security and collaboration, and driving improvements in the efficiency and integrity of the legal process.
References:
- https://www.ahmedabadmirror.com/surat-first-city-in-india-to-launch-ai-chatbot-to-tackle-cybercrime/81861788.html
- https://government.economictimes.indiatimes.com/news/secure-india/gujarat-surat-police-adopts-ai-to-check-cyber-crimes/107410981
- https://www.timesnownews.com/india/chatbot-and-advanced-analytics-surat-police-utilising-ai-technology-to-reduce-cybercrime-article-107397157
- https://www.grownxtdigital.in/technology/surat-police-ai-cyber-mitra-chatbot-gujarat/
Introduction
AI has revolutionized the way we look at emerging technologies. It is capable of performing complex tasks in a fraction of the time they once took. However, the potential misuse of AI has led to an increase in cybercrime. The rapid expansion of generative AI tools has also fuelled cyber scams such as deepfakes, voice cloning, cyberattacks targeting critical infrastructure and other organisations, and threats to data protection and privacy. AI can now produce highly realistic videos, images, and voices, which cyberattackers misuse to commit cybercrimes.
Technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning are advancing rapidly. They offer convenience in performing several tasks and are capable of assisting individuals and business entities. On the other hand, because these technologies are easily accessible, cybercriminals leverage AI tools and technologies for malicious activities or for committing various cyber frauds. Through such misuse of advanced technologies like AI, deepfakes, and voice clones, new cyber threats have emerged.
What is Deepfake?
Deepfake is an AI-based technology that uses machine-learning algorithms to create images and videos that look real but are entirely synthetic. Because the technology is easily accessible, fraudsters misuse it to commit cybercrimes and to deceive or scam people with fake images or videos that appear authentic. Cybercriminals can manipulate audio and video content so that it looks and sounds genuine despite being fabricated. Voice cloning is part of the same family of techniques: audio can be deepfaked to produce a voice that closely resembles a real person's but is, in actuality, a fake voice created through deepfake technology.
How Deepfake Can Harm Organizations or Enterprises?
- Reputation: Deepfakes put an organisation's reputation at stake. Fake representations of an employee or of interactions with users, for example a video misrepresenting the CEO online, can damage an enterprise's credibility and result in loss of users and other financial harm. Deepfake-created content can also be used to impersonate leaders, financial officers, and other officials of the organisation. To protect against such incidents, organisations should closely monitor online mentions and keep tabs on what is being said or posted about the brand.
- Misinformation: Deepfake technology can be misused to spread misrepresentation or misinformation about the organisation.
- Deepfake fraud calls misrepresenting the organisation: There have been incidents where bad actors pretend to be from legitimate organisations in order to obtain personal information, for example helpline fraudsters, fake representatives of hotel booking departments, and fake loan providers. These actors use voice clones or deepfake video calls to pass themselves off as belonging to legitimate organisations while, in reality, deceiving people.
How can organizations combat AI-driven cybercrimes such as deepfake?
- Cybersecurity strategy: Organisations need to put in place a broad cybersecurity strategy and use advanced tools to combat the evolving disinformation and misrepresentation enabled by deepfake technology, including dedicated tools for detecting deepfakes.
- Social media monitoring: Social media can be monitored to detect unusual activity. Organisations can adopt tools and technologies that detect deepfakes and demonstrate media provenance, along with real-time verification capabilities and procedures. Reverse image searches, such as TinEye, Google Image Search, and Bing Visual Search, can be extremely useful when the media is composed from existing images (a simple perceptual-hashing sketch follows this list).
- Employee Training: Employee education on cybersecurity will also play a significant role in strengthening the overall cybersecurity posture of the organisation.
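As a complement to the reverse-image-search services named above, the sketch below shows one way automated monitoring can flag reused imagery using perceptual hashing. It assumes the Python Pillow and imagehash libraries and a local set of trusted reference images; it is an illustration of the general technique rather than a complete deepfake-detection pipeline, since a low hash distance only suggests that a suspect image was derived from known media.

```python
# Illustrative sketch, not an endorsed detection pipeline: perceptual hashing
# can flag media that reuses or lightly edits known imagery. It is one signal
# among many and cannot by itself prove or disprove a deepfake.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash


def is_likely_reused(suspect_path: str, reference_paths: list[str],
                     max_distance: int = 8) -> bool:
    """Compare a suspect image against trusted reference images.

    A small Hamming distance between perceptual hashes suggests the suspect
    image is identical to, or derived from, a reference image.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    for ref in reference_paths:
        ref_hash = imagehash.phash(Image.open(ref))
        if suspect_hash - ref_hash <= max_distance:
            return True
    return False


if __name__ == "__main__":
    # Paths are placeholders for an organisation's own media library.
    print(is_likely_reused("suspect_post.jpg",
                           ["press_photo_1.jpg", "press_photo_2.jpg"]))
```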
Conclusion
Generative AI has gained significant popularity for its ability to produce synthetic media, and there have been incidents where these AI-driven tools, including synthetic videos, have been misused by cybercriminals and bad actors. A key concern is the use of synthetic media in disinformation operations designed to influence the public and spread false information. The synthetic media threats organisations most often face include undermining of the brand, threats to the security or integrity of the organisation itself, and impersonation of the brand's leaders for financial gain.
Synthetic media is also used to target organisations with the intent of defrauding them for financial gain; examples include fake personal profiles on social networking sites and deceptive deepfake calls. Organisations need a proper cybersecurity strategy to combat such evolving threats. Ongoing monitoring and detection, together with employee training on cybersecurity, will play a crucial role in dealing effectively with the threats posed by the misuse of AI-driven technologies.
References:
- https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- https://www.securitymagazine.com/articles/98419-how-to-mitigate-the-threat-of-deepfakes-to-enterprise-organizations
Introduction
In India, the population of girls and adolescents is 253 million, as per a UNICEF report, and the sex ratio at birth is 929 female children per 1,000 male children as of 2023. Cyberspace has massively influenced our daily lives, and hence its safety can no longer be ignored. Social media platforms play a massive role in information dissemination and sharing, and the data trail created by their use is often exploited by cybercriminals to target innocent girls and children.
On Ground Stats
Of the six million crimes police in India recorded between 1 January and 31 December 2021, 428,278 cases involved crimes against women, a rise of 26.35% from the 338,954 cases recorded in 2016. A majority of the 2021 cases, the report said, were kidnappings and abductions, rapes, domestic violence, dowry deaths, and assaults. In addition, 107 women were attacked with acid, 1,580 women were trafficked, 15 girls were sold, and 2,668 were victims of cybercrimes. With more than 56,000 cases, the northern state of Uttar Pradesh, India's most populous with 240 million people, once again topped the list, followed by Rajasthan with 40,738 cases and Maharashtra with 39,526 cases. These figures show how deep this menace runs in our society. With various campaigns and initiatives by the Government and civil society organisations, awareness is on the rise, but we still need a robust prevention mechanism to address this issue.
Influence of Social Media Platforms
Social media platforms such as Facebook, Instagram, and Twitter were created to bring people closer by eliminating geographical boundaries, an aim strengthened by the massive internet connectivity network across the globe. Through 2022, India had, on average, about 470.1 million active social media users a month, an annual growth rate of 4.2% over 2021-22 and roughly 33.4% of the total population. These users spend, on average, about 2.6 hours on social media and hold accounts on 8.6 platforms each.
Bad actors have also upskilled themselves and now use these social platforms to commit cybercrimes. Such crimes against girls and women include impersonation, identity theft, cyberstalking, cyber-enabled human trafficking, and many more. They have risen post-pandemic, and instances of people using fake IDs to lure young girls into traps are reported daily. In one such instance, Imran Mansoori created an Instagram account in the name of Rahul Gujjar (username: rahul_gujjar_9010). Using social engineering and scoping out vulnerabilities, he trapped a minor girl in a relationship and took her to a hotel in Moradabad. The hotel manager grew suspicious on seeing a mismatched ID and called the police, and Imran was arrested. Many such crimes go unreported, however, and it is essential for all stakeholders to create safeguards for girls' and women's safety.
Legal Remedies at our disposal
The Indian legal system has been evolving over time towards the online safety of girls and women. The National Commission for Protection of Child Rights (NCPCR) and the National Commission for Women (NCW) have worked tirelessly to safeguard girls and women and to create a wholesome, safe, and secure environment. The Information Technology Act governs cyberspace and its associated rights and duties. The following provisions of the IT Act focus on safeguarding these rights:
- Violation of privacy – Section 66E
- Obscene material – Section 67
- Pornography & sexually explicit act – Section 67A
- Child pornography – Section 67B
- Intermediaries due diligence rules – Section 79
Apart from these provisions, protection is also drawn from statutes and instruments such as the POCSO Act, the IPC, and the CrPC, as well as the draft Digital Personal Data Protection Bill, the Intermediary Guidelines on Social Media and Online Gaming, and the draft telecommunications bill.
Conclusion
The likelihood of becoming a victim of cybercrime is always growing due to increased traffic in the virtual world, which is especially true for women who are frequently viewed as easy targets. The types of cyber crimes that target women have grown, and the trend has not stopped in India. Cyber flaming, cyber eve-teasing, cyber flirting, and internet cheating are some new-generation crimes that are worth mentioning here. In India, women tend to be reluctant to speak up about issues out of concern that doing so might damage their reputations permanently. Without being fully aware of the dangers of the internet, women grow more susceptible the more time they spend online. Women should be more alert to protect themselves from targeted online attacks.