#FactCheck - "AI-Generated Image of UK Police Officers Bowing to Muslims Goes Viral"
Executive Summary:
A viral picture on social media showing UK police officers bowing to a group of Muslims has sparked debate and discussion. An investigation by the CyberPeace Research team found that the image is AI-generated. The viral claim is false and misleading.

Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.


Fact Check:
A reverse image search of the viral image did not lead to any credible news source or original post confirming its authenticity. Image analysis revealed a number of anomalies commonly found in AI-generated images, such as inconsistencies in the officers' uniforms and facial expressions. The shadows and reflections on the officers' uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.

We then analysed the image using an AI detection tool named True Media. The tool indicated that the image was highly likely to have been generated by AI.



We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. CyberPeace Research Team confirms that the picture was artificially created, and the viral claim is misleading and false.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading

Introduction
Google India has announced sachet loans on the Google Pay application to help small businesses in the country. Google India noted that merchants in India often need smaller loans, hence the launch of sachet loans on the GPay application. The company will provide loans to small businesses, which can be repaid in easy instalments. To provide the loan services, Google Pay has partnered with DMI Finance. The move was announced at Google for India 2023, the flagship event at which the tech giant unveils its planned interventions for India.
What is a Sachet Loan?
Lending is a primary backbone of the global banking system. With the massive transition towards digital transactions and banking operations, many online platforms have emerged. With the advent of QR codes, the Unified Payments Interface (UPI) has come to be widely used by Indians for making small payments. Against this backdrop, sachet loans have emerged: small-ticket loans ranging from Rs 10,000 to Rs 1 lakh, with repayment tenures between 7 days and 12 months. This nano-credit addresses immediate financial needs and is designed for swift approval and disbursement. Sachet loans are among the most sought-after loan forms in the Western world. Their ease of access and easy repayment options have made them a successful form of money lending, which in turn sparked Google's interest in executing similar operations in India.
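The repayment instalments for such small-ticket loans typically follow the standard EMI (equated monthly instalment) formula for a reducing-balance loan. The sketch below illustrates the arithmetic; the 24% annual rate is a hypothetical figure for illustration only, not a rate quoted by Google Pay or DMI Finance.

```python
def sachet_loan_emi(principal, annual_rate, months):
    """Equated monthly instalment for a reducing-balance loan.

    principal   : loan amount in rupees
    annual_rate : nominal yearly interest rate, e.g. 0.24 for 24%
    months      : repayment tenure in months
    """
    r = annual_rate / 12  # monthly interest rate
    if r == 0:
        return principal / months  # interest-free edge case
    factor = (1 + r) ** months
    return principal * r * factor / (factor - 1)

# A hypothetical Rs 10,000 sachet loan repaid over 12 months at 24% p.a.
emi = sachet_loan_emi(10_000, 0.24, 12)
print(f"Monthly instalment: Rs {emi:.2f}")  # ~Rs 945.60
```

The short tenures quoted above (7 days to 12 months) are what distinguish sachet loans from conventional retail credit; the formula itself is unchanged.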
Google Pay
Given that UPI payments are the most preferred form of online payment, Google launched GPay, which now enjoys a user base of 67 million Indians. Google Pay holds a 36.10% mobile application market share in India, and 26% of UPI payments are made through Google Pay. Google Pay adoption for in-store payments in India was higher in 2023 than in early 2019, signalling growing use among consumers. These figures refer to the share of respondents who indicated they had used Google Pay in the last 12 months, either for POS transactions with a mobile device in stores and restaurants or for online shopping. Eight out of 10 respondents from India indicated they had used Google Pay in a POS setting between April 2022 and March 2023, and seven out of 10 said they had used Google Pay during the same period for online payments.
With respect to the Indian market, the following aspects should be considered:
- PhonePe, Google Pay and Paytm accounted for nearly 96% of all UPI transactions by value in March
- PhonePe remained the top UPI app, processing 407.63 Cr transactions worth INR 7.07 Lakh Cr
- While Google Pay and Paytm retained second and third positions, respectively, Amazon Pay pushed CRED to the fifth spot in terms of the number of transactions
- Walmart-owned PhonePe, along with Google Pay and Paytm, continued their dominance of India’s UPI payments space, together processing 94% of payments in March 2023.
- According to data from the National Payments Corporation of India (NPCI), the top three apps accounted for nearly 96% of all UPI transactions by value. This translates to about 841.91 Cr transactions worth INR 13.44 Lakh Cr between the three apps.
Conclusion
The big tech giant Google has been fundamental in creating and providing best-in-class services that are easily accessible to netizens. Sachet loans are the newest service introduced by the platform, and the widespread reach of GPay will go a long way in bringing financial services within reach of underserved sections of the Indian population. A similar transition can be seen in other payment platforms such as PayPal and Paytm, which clearly shows India's massive potential to lead the world in online banking and UPI transactions. By some estimates, around 40% of the world's real-time digital payment transactions take place in India. These aspects, coupled with the Digital India and Make in India initiatives, show why India is a global destination for investment in the current era.
References
- https://www.livemint.com/companies/news/google-enters-retail-loan-business-in-india-11697697999246.html
- https://www.statista.com/statistics/1389649/google-pay-adoption-in-india/#:~:text=Eight%20out%20of%2010%20respondents,same%20time%20for%20online%20payments
- https://playtoday.co/blog/stats/google-pay-statistics/#:~:text=67%20million%20active%20users%20of%20Google%20Pay%20are%20in%20India.&text=Google%20Pay%20users%20in%20India,in%2Dstore%20and%20online%20purchases.
- https://inc42.com/buzz/phonepe-google-pay-paytm-process-94-of-upi-transactions-march-2023/

Introduction
On 20th March 2024, the Indian government notified the Fact Check Unit (FCU) under the Press Information Bureau (PIB) of the Ministry of Information and Broadcasting as the fact check unit of the Central Government. The PIB FCU is notified under the provisions of Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023).
However, the next day, on 21st March 2024, the Supreme Court stayed the Centre's decision. The IT Amendment Rules of 2023 provide that the Ministry of Electronics and Information Technology (MeitY) can notify a fact-checking body to identify and tag what it considers fake news with respect to any activity of the Centre. The stay will be in effect till the Bombay High Court finally decides the challenges to the IT Rules amendment 2023.
The official notification dated 20th March 2024 read as follows:
“In exercise of the powers conferred by sub-clause (v) of clause (b) of sub-rule (1) of rule 3 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Central Government hereby notifies the Fact Check Unit under the Press Information Bureau of the Ministry of Information and Broadcasting as the fact check unit of the Central Government for the purposes of the said sub-clause, in respect of any business of the Central Government.”
Impact of the notification
Notifying the PIB’s FCU under Rule 3(1)(b)(v) empowers it to issue direct takedown directions to the concerned intermediary. Any information posted on social media in relation to the business of the central government that has been flagged as fake or false by the FCU must be taken down by the concerned intermediary. If the intermediary fails to do so, it loses the 'safe harbour' immunity offered under Section 79 of the IT Act, 2000 against legal proceedings arising out of such information.
Safe harbour provision u/s 79 of IT Act, 2000
Section 79 of the IT Act, 2000 serves as a safe harbour provision for intermediaries. The provision states that "an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him". However, it is notable that this legal immunity cannot be granted if the intermediary "fails to expeditiously" take down a post or remove a particular content after the government or its agencies flag that the information is being used unlawfully. Furthermore, intermediaries are obliged to observe due diligence on their platforms.
Rule 3 (1)(b)(v) Under IT Amendment Rules 2023
Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [updated as on 6.4.2023] provides that all intermediaries [including a social media intermediary, a significant social media intermediary and an online gaming intermediary] are required to make "reasonable efforts” or perform due diligence to ensure that their users do not "host, display, upload, modify, publish, transmit, store, update or share” any information that “deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or, in respect of any business of the Central Government, is identified as fake or false or misleading by such fact check unit of the Central Government as the Ministry may, by notification published in the Official Gazette, specify”.
PIB - FCU
The PIB Fact Check Unit (FCU) was established in November 2019 to prevent the spread of fake news and misinformation about the Indian government. It also provides an accessible platform for people to report suspicious or questionable information related to the Indian government. The FCU is tasked with addressing misinformation about government policies, initiatives, and schemes, either on its own motion (suo moto) or through complaints received. On 20th March 2024, via a gazetted notification, the Centre notified the PIB's FCU as the nodal agency to flag fake news or misinformation related to the central government. However, the Supreme Court stayed the Centre's notification of the FCU under the IT Amendment Rules 2023.
Concerns with IT Amendment Rules 2023
The Ministry of Electronics and Information Technology (MeitY) notified the ‘Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023’ (IT Amendment Rules 2023) on 6 April 2023, amending the IT Rules of 2021. The amended rules introduced new provisions to establish a fact-checking unit with respect to “any business of the central government” and also made other provisions pertaining to online gaming.
The constitutional validity of the IT Amendment Rules 2023 has been challenged through a writ petition before the Bombay High Court. The contention is that the rules raise "serious constitutional questions," and whether Rule 3(1)(b)(v), as amended in 2023, impacts the fundamental right to freedom of speech and expression falls for analysis by the High Court.
Supreme Court Stays Setting up of FCU
A bench comprising Chief Justice DY Chandrachud and Justices JB Pardiwala and Manoj Misra convened to hear Special Leave Petitions filed by Kunal Kamra, the Editors Guild of India and the Association of Indian Magazines challenging the refusal of the Bombay High Court to stay the implementation of the IT Rules 2023. The Supreme Court has stayed the Union's notification of the Fact-Check Unit under the IT Amendment Rules 2023, pending the Bombay High Court's decision on the challenges to the IT Rules Amendment 2023.
Emphasizing Freedom of Speech in the Democratic Environment
The advent of advanced technology has also brought with it a new generation of threats and concerns: the misuse of that technology in the form of deepfakes and misinformation is one of the most pressing issues plaguing society today. This realization has informed the critical need for stringent regulatory measures. The government is rightly prioritizing the need to address digital threats immediately, but there must be a balance between our digital security policies and the need to respect free speech and critical thinking. The culture of open dialogue is the bedrock of democracy, and the ultimate truth is shaped through the free exchange of ideas in a competitive marketplace. The constitutional scheme of democracy places great importance on the fundamental value of liberty of thought and expression, which the Supreme Court has emphasized in its various judgements.
The IT Rules, 2023, provide for creating a "fact check unit" to identify fake, false or misleading information “in relation to any business of the central government”. This move raised concerns within the media fraternity, which argued that the determination of fake news cannot be placed solely in the hands of the government. It is also worth noting that if users post something illegal, they can still be punished under laws that already exist in the country.
We must take into account that freedom of speech under Article 19 of the Constitution is not an absolute right. Article 19(2) imposes restrictions on the Right to Freedom of Speech and expression. Hence, there has to be a balance between regulatory measures and citizens' fundamental rights.
Nowadays, the term ‘fake news’ is used very loosely. Additionally, there is a dearth of clearly established legal parameters defining what amounts to fake or misleading information. Clear definitions of these terms should be established to provide certainty as to what content is ‘fake news’ and what is not. Any such restriction on speech must align with the exceptions outlined in Article 19(2) of the Constitution.
Conclusion
Through a government notification, PIB - FCU was intended to act as a government-run fact-checking body to verify any information about the Central Government. However, the apex court of India stayed the Centre's notification. Now, the matter is sub judice, and we hope for the judicial analysis of the validity of IT Amendment Rules 2023.
Notably, the government is implementing measures to combat misinformation in the digital world, but it is imperative that we strive for a balance between regulatory checks and individual rights. As misinformation spreads across all sectors, a centralised approach is needed in order to tackle it effectively. Regulatory reforms must take into account the crucial role played by social media in today’s business market: a huge amount of trade and commerce takes place online or is informed by digital content, which means that the government must introduce policies and mechanisms that continue to support economic activity. Collaborative efforts between the government and its agencies, technology companies, and advocacy groups are needed to deal with the issue better at a higher level.
References
- https://egazette.gov.in/(S(xzwt4b4haaqja32xqdiksbju))/ViewPDF.aspx
- https://pib.gov.in/PressReleasePage.aspx?PRID=2015792
- https://economictimes.indiatimes.com/tech/technology/govt-notifies-fact-checking-unit-under-pib-to-check-fake-news-misinformation-related-to-centre/articleshow/108653787.cms?from=mdr
- https://www.epw.in/journal/2023/43/commentary/it-amendment-rules-2023.html#:~:text=The%20Information%20Technology%20Amendment%20Rules,to%20be%20false%20or%20misleading
- https://www.livelaw.in/amp/top-stories/supreme-court-kunal-kamra-editors-guild-notifying-fact-check-unit-it-rules-2023-252998
- https://www.aljazeera.com/news/2024/3/21/india-top-court-stays-government-move-to-form-fact-check-unit-under-it-laws
- https://www.meity.gov.in/writereaddata/files/Information%20Technology%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- 2024 SCC On Line Bom 360

Introduction
AI has revolutionized the way we look at emerging technologies. It is capable of performing complex tasks in a fraction of the usual time. However, AI's potential for misuse has led to an increase in cybercrime. The rapid expansion of generative AI tools has fuelled cyber scams such as deepfakes, voice cloning, cyberattacks targeting critical infrastructure and other organizations, and threats to data protection and privacy. AI can produce highly realistic videos, images, and voices, which cyber attackers misuse to commit cybercrimes.
The rapid advancement of technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning is notable. These technologies offer convenience in performing several tasks and are capable of assisting individuals and business entities alike. On the other hand, because they are easily accessible, cybercriminals leverage AI tools and technologies for malicious activities or for committing various cyber frauds. Through the misuse of advanced technologies such as AI, deepfakes, and voice clones, new cyber threats have emerged.
What is Deepfake?
Deepfake is an AI-based technology capable of creating realistic images or videos that are in fact generated by machine-learning algorithms. Because the technology is easily accessible, fraudsters misuse it to commit various cybercrimes or to deceive and scam people through fake images or videos that look realistic. Using deepfake technology, cybercriminals manipulate audio and video content that looks very real but is, in actuality, fake. Voice cloning is a related technique: audio can be deepfaked too, producing a voice that closely resembles a real one but is artificially generated.
How Deepfake Can Harm Organizations or Enterprises?
- Reputation: Deepfakes can seriously damage an organization's reputation. Fake depictions of interactions between an employee and a user, for example a video misrepresenting the CEO online, could damage an enterprise’s credibility, resulting in loss of users and other financial harm. To protect against such incidents, organisations must closely monitor online mentions and keep tabs on what is being said or posted about the brand. Deepfake content can also be misused to impersonate leaders, financial officers and other officials of the organisation.
- Misinformation: Deepfake technology can be misused to spread misinformation or misrepresentations about the organisation.
- Deepfake fraud calls misrepresenting the organisation: There have been incidents where bad actors pretend to be from legitimate organisations and seek personal information, such as helpline fraudsters, fake representatives of hotel booking departments, or fake loan providers. Bad actors use voice clones or deepfake video calls to pass themselves off as belonging to legitimate organisations while, in actuality, deceiving people.
How can organizations combat AI-driven cybercrimes such as deepfake?
- Cybersecurity strategy: Organisations need to keep in place a wide range of cybersecurity strategies and advanced tools to combat the evolving disinformation and misrepresentation enabled by deepfake technology. Cybersecurity tools can be utilised to detect deepfakes.
- Social media monitoring: Social media monitoring can help detect unusual activity. Organisations can select and implement tools to detect deepfakes and demonstrate media provenance, along with real-time verification capabilities and procedures. Reverse image search tools such as TinEye, Google Image Search, and Bing Visual Search can be extremely useful when the media is a composition of existing images.
- Employee Training: Employee education on cybersecurity will also play a significant role in strengthening the overall cybersecurity posture of the organisation.
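Reverse image search and media-provenance tools of the kind mentioned above typically rest on perceptual hashing: two images that look alike produce nearby hashes even if their bytes differ. Below is a minimal sketch of the difference-hash (dHash) idea, using a toy 8x9 grayscale grid in place of a real decoded image; production tools decode and downscale actual images and use larger, tuned thresholds.

```python
def dhash(pixels):
    """Difference hash of an image downscaled to 9 columns x 8 rows of
    grayscale values: each bit records whether a pixel is brighter than
    its right-hand neighbour, giving a 64-bit fingerprint."""
    return [1 if row[c] > row[c + 1] else 0
            for row in pixels
            for c in range(len(row) - 1)]

def hamming(h1, h2):
    """Number of differing bits; a small distance means similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 8x9 grayscale grid standing in for a downscaled image.
image = [[(r * 9 + c) * 3 for c in range(9)] for r in range(8)]
tweaked = [row[:] for row in image]
tweaked[0][0] = 255  # a tiny edit, e.g. recompression noise

print(hamming(dhash(image), dhash(image)))    # 0  (identical)
print(hamming(dhash(image), dhash(tweaked)))  # 1  (near-duplicate)
```

A reposted or lightly edited copy of an image lands within a few bits of the original's hash, which is how monitoring pipelines can flag reuse of brand imagery at scale; detecting wholly synthetic deepfakes, by contrast, requires dedicated classifiers rather than hashing.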
Conclusion
There have been incidents where AI-driven tools and technologies have been misused by cybercriminals and bad actors, including synthetic videos developed with AI. Generative AI has gained significant popularity for its ability to produce synthetic media. There are concerns about synthetic media, such as its misuse in disinformation operations designed to influence the public and spread false information. In particular, the synthetic media threats that organisations most often face include undermining of the brand, financial fraud, threats to the security or integrity of the organisation itself, and impersonation of the brand’s leaders for financial gain.
Synthetic media attacks target organisations with the intention of defrauding them for financial gain. Examples include fake personal profiles on social networking sites and deceptive deepfake calls. Organisations need a proper cybersecurity strategy in place to combat such evolving threats. Monitoring and detection should be performed by organisations, and employee training on cybersecurity will also play a crucial role in dealing effectively with the evolving threats posed by the misuse of AI-driven technologies.
References:
- https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- https://www.securitymagazine.com/articles/98419-how-to-mitigate-the-threat-of-deepfakes-to-enterprise-organizations