#FactCheck - Viral Photo of Modi and Rahul Gandhi in Parliament Found to Be AI-Generated
Executive Summary
An image showing Prime Minister Narendra Modi and Leader of the Opposition in the Lok Sabha, Congress MP Rahul Gandhi, standing face to face inside Parliament is going viral on social media. Several users are sharing the image, claiming that the photograph was taken during the ongoing Budget Session and suggesting a direct face-off between the two leaders inside Parliament. However, research conducted by CyberPeace has found that the viral claim is false. The image in question is not real but has been generated using Artificial Intelligence (AI), and it is now being shared on social media with a misleading claim.
Claim
A Facebook user named Madhu Davi shared the viral image on January 30, 2026, with the caption: “If this photo is from today and the Budget Session, it is commendable. RAGA Zindabad.”
(Archived version of the post available here.)
- https://www.facebook.com/photo/?fbid=759145877237871&set=a.110639115421887
- https://perma.cc/N2XD-TZ32?type=image

Fact Check:
To verify the viral claim, we first conducted a keyword search on Google to check whether any credible media outlet had reported such an incident during the Budget Session. However, no news reports supporting the claim were found. We then performed a reverse image search using Google Lens, but this too did not yield any reliable media reports or evidence confirming the authenticity of the image. This raised suspicion that the image might be AI-generated. To further verify, the image was analysed using the AI detection tool Hive Moderation. The tool indicated a probability of over 99 per cent that the image was generated using Artificial Intelligence.
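Reverse image search engines of the kind used in this verification typically match pictures by comparing compact "perceptual hashes" rather than raw pixels, so near-duplicates of an image can be found even after resizing or recompression. The sketch below is a minimal, self-contained illustration of one such technique (difference hashing) on toy grayscale grids; it is not the actual method used by Google Lens or Hive Moderation, and a real pipeline would first decode and downscale the image file.

```python
# Minimal sketch of perceptual (difference) hashing, the kind of
# fingerprinting that underlies reverse image search. Images are
# modelled here as 2D grayscale grids; a real pipeline would first
# decode and resize the file (commonly to about 9x8 pixels).

def dhash(pixels):
    """Build a bit string: '1' where a pixel is brighter than its right neighbour."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left > right else "0")
    return "".join(bits)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 3x3 "images": the second is a uniformly brightened copy of the
# first, the third is unrelated. Near-duplicates yield nearby hashes.
original = [[10, 50, 30], [80, 20, 60], [40, 90, 70]]
brightened = [[value + 5 for value in row] for row in original]
unrelated = [[90, 10, 80], [5, 95, 15], [70, 30, 85]]

# Brightening preserves the ordering of neighbouring pixels, so the
# fingerprint is unchanged; an unrelated image diverges.
assert hamming(dhash(original), dhash(brightened)) == 0
assert hamming(dhash(original), dhash(unrelated)) > 0
```

In practice a search index stores such hashes for billions of images and returns those within a small Hamming distance of the query, which is why a genuinely novel AI-generated image returns no credible matches.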

Conclusion
CyberPeace research confirms that the image being circulated with the claim that Prime Minister Narendra Modi and Rahul Gandhi came face to face during the Budget Session is fake. The viral image has been created using AI and is being shared with a false and misleading narrative.
Related Blogs

In the long history of humanity, the advent of artificial intelligence (AI) has added a new, delicate dimension. This promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. Even the early, limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. For context, the market size of AI in India alone is projected to touch US$4.11bn in 2023, giving companies ample commercial incentive.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards to manage the possible threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. By being transparent, people are guaranteed to be aware that they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It is crucial to create an ethics code for chatbot interactions that forbids manipulation, the dissemination of false information, and attempts to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
To preserve transparency and accountability, independent audits need to be carried out. Users can feel more confident knowing that chatbot behaviour and data collection practices are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely: chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collection. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
At all costs, one should steer clear of fake identities. Chatbots should not impersonate people or political figures, because doing so can result in manipulation and false information.
It is essential to be impartial. Bots should not advocate for or take part in political activities that give preference to one political party over another. Impartiality and equity are crucial in every interaction.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
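The transparency, consent, and opt-out rules above can be made concrete in code. The following is a hypothetical sketch, not a real chatbot framework: the class name, methods, and disclosure text are all illustrative assumptions, intended only to show how disclosure-before-interaction, explicit opt-in, and consent revocation with data purging might be enforced.

```python
# Hypothetical sketch of the dos and don'ts above: disclose automation,
# collect data only after explicit opt-in, and purge data on opt-out.
# ChatbotSession and its methods are illustrative, not a real framework.

class ChatbotSession:
    DISCLOSURE = (
        "You are chatting with an automated assistant. "
        "No personal data is stored unless you explicitly opt in."
    )

    def __init__(self):
        self.consented = False
        self.collected = []

    def start(self):
        # Transparency: disclose automation before any interaction.
        return self.DISCLOSURE

    def opt_in(self):
        # Informed consent must be an explicit user action.
        self.consented = True

    def opt_out(self):
        # Users can revoke consent at any time; stored data is purged.
        self.consented = False
        self.collected.clear()

    def record(self, message):
        # Data is retained only after explicit consent; returns whether stored.
        if self.consented:
            self.collected.append(message)
        return self.consented

session = ChatbotSession()
print(session.start())                    # disclosure shown up front
assert session.record("hello") is False   # nothing stored without consent
session.opt_in()
assert session.record("my city is Pune") is True
session.opt_out()
assert session.collected == []            # opt-out purges stored data
```

The design choice worth noting is that consent is the gate on storage, not on conversation: the bot can still answer, but it retains nothing it could later use for profiling unless the user has opted in.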
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India does not have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MEITY) is the executive body responsible for AI strategies and is constantly working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Executive Summary:
A social media video claims that India's Udhampur Air Force Station was destroyed by Pakistan's JF-17 fighter jets. According to official sources, the Udhampur base remains fully operational, and our research shows that the video was produced using artificial intelligence. This incident highlights the growing problem of AI-driven disinformation in the digital age.

Claim:
A viral video alleges that Pakistan's JF-17 fighter jets successfully destroyed the Udhampur Air Force Base in India. The footage shows aircraft engulfed in flames, accompanied by narration claiming the base's destruction during recent cross-border hostilities.

Fact Check:
A recent viral video claiming that the Udhampur Air Force Station was destroyed by Pakistani JF-17 fighter jets has been shown to be completely untrue. Based on a thorough analysis using AI detection tools such as Hive Moderation, the audio and visuals in the video have been conclusively identified as AI-generated. Hive Moderation found that the footage contains synthetic elements, confirming that the images were altered to deceive viewers. The Press Information Bureau (PIB) of India has further undermined the video's claims, clearly stating that the Udhampur Airbase remains fully operational and has not been the scene of any such attack.

The deliberate misattribution of this video to a military attack reflects a pattern our analysis of recent disinformation campaigns has identified: AI-generated content is increasingly being weaponized to spread misinformation and incite panic.
Conclusion:
It is untrue that the Udhampur Air Force Station was destroyed by Pakistan's JF-17 fighter jets. The claim rests on an AI-generated video that misrepresents fabricated footage as real. The Udhampur base remains intact and fully functional, according to official sources. This incident emphasizes how crucial it is to verify information against reliable sources, particularly during periods of elevated geopolitical tension.
- Claim: Recent video footage shows destruction caused by Pakistani jets at the Udhampur Airbase.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The world has been surfing the wave of technological advancements and innovations for the past decade, and much of it pins down to one device: our mobile phone. For mobile users, the primary choices of operating system are Android and iOS. Android is an OS created by Google in 2008; it is supported by most brands, such as OnePlus, Mi, OPPO, VIVO, and Motorola, and is one of the most widely used operating systems. iOS is an OS developed by Apple and introduced with its first phone, the iPhone, in 2007. Both operating systems came into existence when mobile phone penetration was still low globally, so the scope for expansion and advancement has always been in their favour.
The Evolution
iOS
Ever since the advent of the iPhone in 2007, iOS has seen many changes; the current version is iOS 16. In the course of creating new versions of iOS and updating old ones, Apple has introduced various advancements, such as the App Store, Touch ID and Face ID, Apple Music, Podcasts, augmented reality, and exposure notifications, many of which have later become features of Android phones as well. Apple is one of the oldest tech and gadget developers in the world; most of the devices it manufactures have received global recognition, and hence Apple serves a huge global user base.
Android
Android has been famous for naming its software versions after food items: Pie, Oreo, Nougat, KitKat, Eclair, and so on. From Android 10 onwards, new versions have been denoted by number. The most recent Android OS is Android 13, known for its practicality and flexibility. In 2012 Android became the most popular operating system for mobile devices, surpassing Apple's iOS, and as of 2020 about 75 per cent of mobile devices run Android.
Android vs. iOS
1. USER INTERFACE
One of the most noticeable differences between Android and iPhone is the user interface. Android devices have a more customizable interface, with options to change the home screen, app icons, and overall theme. The iPhone, on the other hand, has a more uniform interface with less room for customization. Android allows users to customize their home screen by adding widgets and changing the layout of their app icons, which is useful for people who want quick access to certain functions or information. iOS offers less of this flexibility, although home-screen widgets were added in iOS 14, and it does allow users to organize their app icons into folders for easier navigation.
2. APP SELECTION
Another factor to consider when choosing between Android and iOS is app selection. Both platforms have a wide range of apps available, but there are some differences to consider. Android has a larger selection of apps overall, including a larger selection of free apps. However, some popular apps, such as certain music streaming apps and games, may be released first or exclusively on iPhone. iOS also has a more curated App Store: every app must go through a review process before being accepted for download. This can result in higher-quality apps overall, but it can also mean that new apps take longer to become available on the platform.
3. PERFORMANCE
When it comes to performance, both Android and iPhone have their own strengths and weaknesses. Android devices tend to have more processing power and RAM, which can make them faster and more capable of handling multiple tasks simultaneously, but it can also give them shorter battery life. iPhones tend to have less processing power and RAM but are generally more efficient in their use of resources; this can result in longer battery life, though it may also mean they are slower at handling multiple tasks or running resource-intensive apps.
4. SECURITY
Security is an important consideration for any smartphone user, and Android and iPhone have their own measures to protect user data. Android devices are generally seen as less secure than iPhones due to their open nature: Android allows users to install apps from sources other than the Google Play Store, which can increase the risk of downloading malicious apps. However, Android has made improvements in recent years to address this issue, including the introduction of Google Play Protect, which scans apps for malware before they are downloaded. On the other hand, iPhone devices have a more closed ecosystem, with all apps required to go through Apple's review process before becoming available for download. This helps reduce the risk of downloading malicious apps, but it can also limit the platform's flexibility.
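One building block of the malware scanning described above is signature matching: comparing a file's cryptographic digest against a list of digests known to belong to malicious apps. The sketch below illustrates only that idea under stated assumptions; the blocklist digest is hypothetical, and real scanners such as Google Play Protect combine many more signals (behavioural analysis, machine learning, reputation data) than a simple hash lookup.

```python
# Simplified sketch of signature-based malware screening: flag a file
# whose SHA-256 digest appears on a blocklist of known-bad digests.
# The "known bad" entry is hypothetical; real scanners like Google
# Play Protect use far richer signals than hash matching alone.
import hashlib

KNOWN_BAD_DIGESTS = {
    # Hypothetical digest of a known malicious APK payload.
    hashlib.sha256(b"malicious-payload").hexdigest(),
}

def is_flagged(apk_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest is on the blocklist."""
    return hashlib.sha256(apk_bytes).hexdigest() in KNOWN_BAD_DIGESTS

assert is_flagged(b"malicious-payload") is True
assert is_flagged(b"benign app code") is False
```

The limitation is also visible here: changing a single byte of the payload changes the digest entirely, which is why signature matching must be paired with heuristic and behavioural detection.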
Conclusion
The debate over the better OS has been going on for some time now, and it looks set to grow more comprehensive in the times to come. As netizens go deeper into cyberspace, they will become more aware and critical of their uses and demands, which will allow them to opt for the OS that best suits their convenience. Although Android, due to its open integration, stands more vulnerable to security threats than iOS, no software is fully secure in today's times; what matters is secure use and application. Hence both netizens and platforms need to increase their awareness and knowledge to safeguard themselves and keep cyberspace wholesome.