#FactCheck - Digitally Altered Image Falsely Shows World Bank President Ajay Banga Holding Khalistani Flag
Executive Summary
A digitally manipulated image of World Bank President Ajay Banga has been circulating on social media, falsely portraying him as holding a Khalistani flag. The image was shared by a Pakistan-based X (formerly Twitter) user, who also incorrectly identified Banga as the President of the International Monetary Fund (IMF), thereby fuelling misleading speculation that he supports the Khalistani movement against India.
The Claim
On February 5, an X user with the handle @syedAnas0101010 posted an image allegedly showing Ajay Banga holding a Khalistani flag. The user misidentified him as the IMF President and captioned the post, “IMF president sending signals to INDIA.” The post quickly gained traction, amplifying false narratives and political speculation. Here is the link and archive link to the post, along with a screenshot:
Fact Check:
To verify the authenticity of the image, the CyberPeace Fact Check Desk conducted detailed research. The image was first subjected to a reverse image search using Google Lens, which led to a Reuters news report published on June 13, 2023. The original photograph, captured by Reuters photojournalist Jonathan Ernst, shows Ajay Banga arriving at the World Bank headquarters in Washington, D.C., on June 2, 2023, his first day in office. In the authentic image, Banga is holding a coffee cup, not a flag.
Further analysis confirmed that the viral image had been digitally altered to replace the coffee cup with a Khalistani flag, thereby misrepresenting the context and intent of the original photograph. Here is the link to the report, along with a screenshot.

To strengthen the findings, the altered image was also analysed using the Hive Moderation AI detection tool. The tool’s assessment indicated a high likelihood that the image contained AI-generated or manipulated elements, reinforcing the conclusion that the image was not genuine. Below is a screenshot of the result.
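Reverse image search engines of the kind used above work, at a high level, by computing compact perceptual fingerprints of images and matching them against an index. The following is a toy sketch of one such fingerprint, a difference hash (dHash), over a tiny synthetic pixel grid; it only illustrates why a locally edited image (coffee cup swapped for a flag) still matches its original closely. Real services such as Google Lens or TinEye use far more sophisticated feature matching, and the pixel values below are made up for illustration.

```python
# Toy illustration of perceptual hashing (dHash), one family of techniques
# behind reverse image search. The "images" here are small made-up grids,
# not real photo data.

def dhash(pixels):
    """Compute a difference hash over a grayscale pixel grid.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so the hash captures structure rather than exact values.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x5 "image" standing in for the original photo.
original = [
    [10, 40, 30, 90, 20],
    [15, 35, 25, 85, 22],
    [12, 42, 28, 88, 18],
    [11, 38, 31, 92, 21],
]

# The same image with one small region edited.
altered = [row[:] for row in original]
altered[1][3] = 5  # local edit
altered[2][3] = 5

d = hamming(dhash(original), dhash(altered))
total = len(dhash(original))
print(f"{d} of {total} bits differ")  # small distance -> likely the same image
```

Because only a few bits change when a small region is edited, an index lookup on the hash still surfaces the original photo, which is how the unaltered Reuters image can be recovered from the manipulated copy.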

Conclusion
The viral image claiming to show World Bank President Ajay Banga holding a Khalistani flag is fake. The photograph was digitally manipulated to spread misinformation and provoke political speculation. In reality, the original Reuters image from June 2023 shows Banga holding a coffee cup during his arrival at the World Bank headquarters. The claim that he supports the Khalistani movement is false and misleading.
Related Blogs
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical scenario in South Asia. Our research reveals that the video is misrepresented and actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
After researching the viral video, it became clear that the claim was misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. The search led us to an earlier social media post which clearly shows the event taking place in Surat, Gujarat, not Balochistan.

In the original clip, a music band performs in the middle of a crowd while people hold Indian flags and enjoy the event. The environment, the language on signboards, and the festive atmosphere all confirm that this is an Indian Independence Day celebration. Another photo we found, taken from a different angle, further corroborates this.

However, some individuals intent on spreading false information shared this video out of context, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan. A local celebration was thus recast with a fake narrative as a political stunt, a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from the Tiranga Yatra in Surat, Gujarat, India. Social media users are urged to verify the authenticity and source of content before sharing, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
AI has revolutionized the way we look at emerging technologies: it can perform complex tasks in a fraction of the usual time. However, its potential for misuse has also driven a rise in cybercrime. The rapid expansion of generative AI tools has fuelled scams such as deepfakes, voice cloning, cyberattacks targeting critical infrastructure and other organisations, and threats to data protection and privacy. Because AI can produce highly realistic videos, images, and voices, attackers exploit these outputs to commit cybercrime.
Technologies such as generative AI (Artificial Intelligence), deepfakes, and machine learning offer real convenience and can assist both individuals and businesses. On the other hand, because these technologies are easily accessible, cybercriminals leverage them for malicious activities and various cyber frauds. From this misuse of AI, deepfakes, and voice clones, a new class of cyber threats has emerged.
What is Deepfake?
Deepfake is an AI-based technology capable of producing images and videos that look real but are in fact generated by machine-learning algorithms. Because the technology is easily accessible, fraudsters misuse it to deceive and scam people: audio and video content is manipulated so convincingly that it appears genuine. Voice cloning is part of the same family; a person's voice can be deepfaked to closely resemble the real one while being entirely synthetic.
How Deepfake Can Harm Organizations or Enterprises?
- Reputation: Deepfakes put an organisation's reputation at stake. A fabricated representation or interaction, for example a video misrepresenting the CEO online, can damage an enterprise's credibility and lead to the loss of users and revenue. To protect against such incidents, organisations should monitor online mentions and keep tabs on what is being said or posted about the brand. Deepfake content can also be misused to impersonate leaders, financial officers, and other officials of the organisation.
- Misinformation: Deepfake technology can be used to spread misinformation or misrepresentation about the organisation.
- Deepfake fraud calls misrepresenting the organisation: There have been incidents where bad actors pretend to be from legitimate organisations to extract personal information, such as helpline fraudsters, fake representatives of hotel booking departments, or fake loan providers. Using voice clones or deepfake video calls, they pose as legitimate organisations while in reality deceiving people.
How can organizations combat AI-driven cybercrimes such as deepfake?
- Cybersecurity strategy: Organisations need a broad cybersecurity strategy, backed by advanced tools, to combat the evolving disinformation and misrepresentation enabled by deepfake technology, including dedicated tools for detecting deepfakes.
- Social media monitoring: Monitor social media for unusual activity, and select tools and technologies that detect deepfakes and demonstrate media provenance. Real-time verification capabilities and procedures can also be used. Reverse image search services such as TinEye, Google Image Search, and Bing Visual Search can be extremely useful when the media is a composite of images.
- Employee Training: Employee education on cybersecurity will also play a significant role in strengthening the overall cybersecurity posture of the organisation.
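The social media monitoring step above can be sketched very simply: scan incoming posts for mentions of the brand, and flag any that pair the brand with high-risk terms (suspected deepfakes, voice clones, leaks) for human review. The brand name, the sample posts, and the term list below are hypothetical illustrations, not the configuration of any real monitoring tool.

```python
# Illustrative sketch of brand-mention monitoring: flag posts that pair an
# organisation's name with high-risk terms so a human analyst can review them.
# "ExampleCorp", the posts, and RISK_TERMS are made-up examples.

RISK_TERMS = {"deepfake", "leaked", "scandal", "fake video", "voice clone"}

def flag_posts(posts, brand):
    """Return posts that mention the brand alongside a risk term."""
    flagged = []
    for post in posts:
        text = post.lower()
        if brand.lower() in text and any(term in text for term in RISK_TERMS):
            flagged.append(post)
    return flagged

posts = [
    "Loving the new ExampleCorp product line!",
    "Is this ExampleCorp CEO deepfake real? Looks off.",
    "Unrelated chatter about the weather.",
    "ExampleCorp voice clone scam doing the rounds, beware.",
]

for p in flag_posts(posts, "ExampleCorp"):
    print("REVIEW:", p)
```

In practice such keyword filters only triage; flagged items would then go to provenance checks, reverse image search, and detection tools of the kind listed above.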
Conclusion
There have been many incidents in which AI-driven tools and technologies were misused by cybercriminals, including synthetic videos generated by AI. Generative AI has gained significant popularity for its ability to produce synthetic media, and with it have come concerns about misuse, such as disinformation operations designed to influence the public and spread false information. The synthetic media threats organisations most often face include undermining of the brand, financial fraud, threats to the security or integrity of the organisation itself, and impersonation of the brand's leaders for financial gain.
Synthetic media attacks aim to defraud organisations for financial gain; examples include fake personal profiles on social networking sites and deceptive deepfake calls. Organisations need a proper cybersecurity strategy in place to combat such evolving threats. Monitoring and detection should be standard practice, and employee training on cybersecurity will play a crucial role in dealing effectively with the threats posed by misuse of AI-driven technologies.
References:
- https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- https://www.securitymagazine.com/articles/98419-how-to-mitigate-the-threat-of-deepfakes-to-enterprise-organizations

Introduction
The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person’s facial features, voice, or actions with a level of fidelity at which the average individual may be unable to distinguish real from fake. The technology’s capacity for creative solutions is genuinely beneficial, but its misuse has escalated rapidly in recent years. This threatens privacy and dignity, facilitates the creation of dis/misinformation, and carries real-world consequences: the manipulation of elections, national security threats, and the erosion of trust in society.
Why India Needs Deepfake Regulation
Deepfake regulation is urgently needed in India, as evidenced by the Rashmika Mandanna incident, in which a deepfake video of the actress caused a scandal throughout the country. In one of the first such cases to go viral in India, her face was superimposed on the body of another woman in a video that fooled many viewers and created outrage among those deceived by it. The incident even led law enforcement agencies to issue public warnings about the dangers of manipulated media.
This was not an isolated incident; many influencers, actors, leaders, and ordinary people have fallen victim to deepfake pornography, deepfake speech scams, financial fraud, and other malicious uses of the technology. The rapid proliferation of deepfake technology is outpacing lawmakers’ efforts to regulate it. In this context, a Private Member’s Bill was introduced in the Lok Sabha during its Winter Session. Although such Bills have historically had a low rate of success in becoming law, they give the government an opportunity to take notice of and respond to emerging issues; Private Member’s Bills have been the catalyst for government action on many important matters and have provided an avenue for parliamentary discussion and future policy creation. The introduction of this Bill demonstrates that Parliament acknowledges digital impersonation and deepfakes as a significant public concern in need of a legislative framework.
Key Features Proposed by the New Deepfake Regulation Bill
The proposed legislation aims to create a strong legal structure around the creation, distribution, and use of deepfake content in India. Its core proposals are:
1. Prior Consent Requirement: Individuals must give written approval before anyone produces or distributes deepfake media containing digital representations of them, including their faces, images, likenesses, and voices. This aims to protect women, celebrities, minors, and everyday citizens against the use of their identities to harm their reputations or harass them through deepfakes.
2. Penalties for Malicious Deepfakes: The Bill prescribes serious criminal consequences for creating or sharing deepfake media, particularly when it is intended to cause harm (to defame, harass, impersonate, deceive, or manipulate another person). It also addresses financially fraudulent use of deepfakes, political misinformation, election interference, and explicit AI-generated media.
3. Establishment of a Deepfake Task Force: The task force would examine the potential impact of deepfakes on national security, elections, public order, public safety, and privacy. It would work with academic institutions, AI research labs, and technology companies to create advanced deepfake-detection tools and establish best practices for the safe and responsible use of generative AI.
4. Creation of a Deepfake Detection and Awareness Fund: The fund would support the development of deepfake-detection tools, build the capacity of law enforcement agencies to investigate cybercrime, promote public awareness of deepfakes through national campaigns, and finance research on AI safety and misinformation.
How Other Countries Are Handling Deepfakes
1. United States
Many States in the United States, including California and Texas, have enacted laws to prohibit the use of politically deceptive deepfakes during elections. Additionally, the Federal Government is currently developing regulations requiring that AI-generated content be clearly labelled. Social Media Platforms are also being encouraged to implement a requirement for users to disclose deepfakes.
2. United Kingdom
In the United Kingdom, it is illegal to create or distribute intimate deepfake images without consent; violators face jail time. The Online Safety Act emphasises the accountability of digital media providers by requiring them to identify, eliminate, and avert harmful synthetic content, which makes their role in curating safe environments all the more important.
3. European Union:
The EU has enacted the EU AI Act, which governs the use of deepfakes by requiring an explicit label to be affixed to any AI-generated content. The absence of a label would subject an offending party to potentially severe regulatory consequences; therefore, any platform wishing to do business in the EU should evaluate the risks associated with deepfakes and adhere strictly to the EU's guidelines for transparency regarding manipulated media.
4. China:
China has among the most rigorous regulations regarding deepfakes anywhere on the planet. All AI-manipulated media will have to be marked with a visible watermark, users will have to authenticate their identities prior to being allowed to use advanced AI tools, and online platforms have a legal requirement to take proactive measures to identify and remove synthetic materials from circulation.
Conclusion
Deepfake technology may prove to be one of AI's greatest, and most dangerous, innovations. Incidents such as the one involving Rashmika Mandanna, and the global proliferation of deepfake abuse, demonstrate how easily truth can be altered in the digital realm. The Private Member's Bill introduced in India seeks to provide a comprehensive framework to address these abuses, built on prior consent, penalties with real teeth, technical preparedness, and public education and awareness. As other nations move towards stronger regulation of AI, proposals such as this chart a course for India to become a leader in responsible digital governance.
References
- https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
- https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
- https://www.bbc.com/news/world-asia-india-67305557
- https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
- https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/
- https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk#:~:text=As%20of%2031%20January%202024,of%20intimate%20deepfakes%20without%20consent.
- https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/
- https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/