#FactCheck - Viral Post of Gautam Adani’s Public Arrest Found to Be AI-Generated
Executive Summary:
A viral post on X (formerly Twitter) was shared with misleading captions claiming that Gautam Adani had been arrested in public for fraud, bribery and corruption. The underlying charges accuse him, his nephew Sagar Adani and six others from his group of defrauding American investors and orchestrating a bribery scheme to secure a multi-billion-dollar solar energy project awarded by the Indian government. Always verify claims before sharing posts or photos; this image turned out to be AI-generated.

Claim:
An image circulating online claims to show the public arrest of Gautam Adani after a US court accused him and other executives of bribery.
Fact Check:
There are multiple anomalies visible in the picture attached below. The police officer grabbing Adani’s arm (highlighted in the red circle) has six fingers, while Adani’s other hand is completely absent. The left eye of one officer (marked in blue) is inconsistent with his right. The faces of the officers marked with the yellow and green circles appear distorted, and another officer (shown in the pink circle) appears to have a fully covered face. Taken together, these distortions make it impossible for the image to have been captured by a camera.


A thorough examination utilizing AI detection software concluded that the image was synthetically produced.
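Alongside dedicated detection software, a quick first-pass heuristic anyone can apply is checking whether an image file carries the EXIF metadata a real camera normally embeds. The sketch below (standard-library Python, with synthetic demonstration bytes rather than real photos) shows the idea; note that missing EXIF is only a weak signal, since many platforms strip metadata from genuine photos too.

```python
def looks_camera_original(jpeg_bytes: bytes) -> bool:
    """Heuristic: a camera-original JPEG starts with the SOI marker
    (0xFFD8) and usually contains an APP1 EXIF segment, which begins
    with the bytes b'Exif\\x00\\x00'. AI image generators and
    screenshots typically emit no EXIF segment at all."""
    has_soi = jpeg_bytes.startswith(b"\xff\xd8")
    has_exif = b"Exif\x00\x00" in jpeg_bytes
    return has_soi and has_exif

# Synthetic demonstration bytes (not real image files):
camera_like = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 16
generated_like = b"\xff\xd8\xff\xdb" + b"\x00" * 16

print(looks_camera_original(camera_like))     # True
print(looks_camera_original(generated_like))  # False
```

A passing check proves nothing on its own, but a viral "news photo" with no camera metadata at all is one more reason to dig further before sharing.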
Conclusion:
A viral image claims to show the public arrest of Gautam Adani after a US court accused him of bribery. Our analysis proves the image to be AI-generated, and no authentic news article reports any such arrest. Such misinformation spreads fast and can confuse and harm public perception. Always verify an image by checking for visual inconsistencies and using trusted sources to confirm authenticity.
- Claim: Gautam Adani arrested in public by law enforcement agencies
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Executive Summary:
On 20th May 2024, Iranian President Ebrahim Raisi and several others died in a helicopter crash in northwestern Iran. The images circulated on social media claiming to show the crash site are false. CyberPeace Research Team’s investigation revealed that these images show the wreckage of a training plane crash in Iran's Mazandaran province in 2019 or 2020. Reverse image searches and confirmations from Tehran-based Rokna Press and Ten News verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.



Fact Check:
After receiving the posts, we reverse-searched each of the images. All of them, except the blue plane seen in the viral image, linked back to the 2020 air crash incident. We found a website that had uploaded the viral plane crash images on April 22, 2020.

According to the website, a police training plane crashed in the forests of Mazandaran, near Swan Motel. We also found the images on another Iranian news media outlet, ‘Ten News’.

The photos uploaded to this website were posted in May 2019. The news reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we concluded that the recent viral photos are not of Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
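The reverse-search step described above is something any reader can reproduce: major search engines accept an image URL as a query parameter. A minimal sketch in Python, assuming Google's `searchbyimage` endpoint (endpoints change over time; TinEye and Yandex offer similar URL-based lookups):

```python
from urllib.parse import urlencode

def reverse_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search URL for a suspect image.
    Opening the returned link in a browser shows where else the image
    appears online, which often reveals its true origin and date."""
    return "https://www.google.com/searchbyimage?" + urlencode(
        {"image_url": image_url}
    )

# Hypothetical example URL, for illustration only:
url = reverse_search_url("https://example.com/viral-crash-photo.jpg")
print(url)
```

If the earliest matches for a "breaking news" photo predate the event it supposedly depicts, the claim is almost certainly recycled or false.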
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash that occurred in Mazandaran province in 2019 or 2020 (the exact date is uncertain). This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Ebrahim Raisi's helicopter crash is false and misleading.
- Claim: Viral images of Iranian President Raisi's fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading

Introduction
In this advanced age of digitalization, Android phones have a vast user base. Our phones have become an integral part of daily life: we use them for online payments, booking cabs, playing online games, booking movie and show tickets, conducting business, social networking, emailing and communication. The Internet is easily accessible to everyone and offers users a range of convenient services, and people download all manner of apps onto their Android devices. But alongside this convenience, threats and vulnerabilities have emerged in the growing digital landscape, and fraudsters find those vulnerabilities and target users. Recently, Android users have faced various unsettling online scams: AI-based scams, deepfake scams, malware, spyware, malicious links leading to financial fraud, viruses, privacy breaches, data leakage and more. Android devices are more prone to such vulnerabilities than iOS, which offers more built-in security features, though both platforms aim to provide a safer digital space for mobile users. Either way, we have to play our part and be careful. There are certain safety measures users can take to stay safe in the growing digital age.
User Responsibility:
Law enforcement agencies have reported a growing number of complaints involving malware used to compromise Android mobile devices. Both Android and iOS have certain security mechanisms in place. However, cybersecurity experts emphasize that users must actively safeguard their mobile devices from evolving online threats. In this era of evolving cyber threats, caution, vigilance and personal responsibility for digital security are paramount.
Being aware of evolving scams
- Deepfake scams: Deepfake is an AI-based technology capable of creating realistic images or videos that are in fact generated by machine algorithms. Because the technology is easily accessible, fraudsters misuse it to commit various cybercrimes, deceiving and scamming people with manipulated audio and video content that looks very realistic but is, in actuality, fake.
- Voice cloning: Audio can be deepfaked too, producing a voice clone that closely resembles a real person's voice but is, in actuality, synthetic. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague; the scammer, using AI deepfake technology, impersonated the face of that colleague and asked for financial help of ₹40,000.
- Stalkerware or spyware: Stalkerware, also referred to as spyware, is malicious software secretly installed on your device without your consent or knowledge. Its purpose is to track you, monitor your activities, and record sensitive information such as passwords, text messages, GPS location and call history, as well as to access your photos and videos. Cybercriminals and stalkers use this malicious software to gain unauthorised access to someone's phone. It is one of the most serious threats to individual digital safety and personal information.
Best practices or Cyber security tips:
- Keep your software up to date: Turn on automatic software updates for your device and make sure your mobile apps are up to date.
- Using strong passwords: Use strong passwords on your lock/unlock and on important apps on your mobile device.
- Use 2FA or multi-factor authentication: Two-factor or multi-factor authentication provides extra layers of security.
- Be cautious before clicking on any link or downloading any app or file: Users are often led to click on malicious online links. Scammers may present such links through false advertisements on social media platforms, payment processes for online purchases, or phone text messages. Through these links, victims are led either to phishing sites that harvest personal data or to download harmful Android Package Kit (APK) files used to distribute and install apps on Android phones.
- Secure Payments: Do not open any malicious links. Always make payments from secure and trusted payment apps. Use strong passwords for your payment apps as well. And secure your banking credentials.
- Safe browsing: Pay due care and attention while clicking on any link and downloading content. Ignore the links or attachments of suspicious emails which are from an unknown sender.
- Do not download third-party apps: Using an APK file to download a third-party app to an Android device is commonly known as sideloading. Be cautious and avoid downloading apps from third-party or dubious sites. Doing so may lead to the installation of malware in the device, which in turn may result in confidential and sensitive data such as banking credentials being stolen. Always download apps only from the official app store.
- App permissions: Review app permissions and grant only those that are necessary for the app to function.
- Do not bypass security measures: Android offers more flexibility in the mobile operating system and in mobile settings. For example, sideloading of apps is disabled by default, and alerts are also in place to warn users. However, an unwitting user who may not truly understand the warnings may simply grant permission to an app to bypass the default setting.
- Monitoring: Regularly monitor your devices and system logs for security check-ups and for detecting any suspicious activity.
- Reporting online scams: A powerful resource available to victims of cybercrime is the National Cyber Crime Reporting Portal, equipped with a 24x7 helpline number, 1930. This portal serves as a centralized platform for reporting cybercrimes, including financial fraud.
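The two-factor authentication recommended above most commonly uses time-based one-time passwords (TOTP, standardised in RFC 6238). A minimal standard-library Python sketch shows why it adds a real layer of security: each code is derived from a shared secret plus the current 30-second time window, so an intercepted code expires almost immediately. This is illustrative only; use a proper authenticator app in practice.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))           # 8-byte big-endian counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference secret "12345678901234567890", base32-encoded:
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # "287082" (RFC 6238 test vector)
```

Because the code depends on the secret stored on your device, a scammer who phishes your password alone still cannot log in, which is exactly why enabling 2FA everywhere is worth the small inconvenience.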
Conclusion:
The era of digitalisation has transformed our lives, with Android phones becoming an integral part of our daily routines. While these devices offer convenience, they also expose us to online threats such as deepfake-based scams, voice clones, spyware, malware and malicious links that can lead to significant financial and privacy harm, and Android devices can be more susceptible to such scams. Our mobile devices are valuable assets, but they are also attractive targets for cybercriminals. By staying aware of emerging scams and taking personal responsibility for our digital security through the best practices above, we can navigate the digital landscape with confidence, keeping our Android phones powerful tools for convenience and connection while protecting our data and privacy.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for sextortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. And the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
- Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
- Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
- Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
- Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
- International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
- Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
- Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
- Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
- Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, increasing the humiliation, harassment and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to victims’ reputation and well-being.
- Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
- Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
- Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.