#FactCheck - Viral Images of Indian Army Soldiers Eating Near the Border Area Revealed as AI-Generated Fabrication
Executive Summary:
Viral social media posts circulating several photos of Indian Army soldiers eating lunch in extremely hot weather near the border in Barmer/Jaisalmer, Rajasthan, have been found to be AI-generated and therefore false. The images contain various faults, such as missing shadows, distorted hand positioning, a misrendered Indian flag, and anatomically inaccurate body features. Several AI-detection tools were also used to validate this finding. Before sharing any pictures on social media, it is necessary to verify their authenticity to avoid spreading misinformation.

Claims:
Photographs of Indian Army soldiers having their lunch in extremely high temperatures at the border area near the districts of Barmer and Jaisalmer, Rajasthan, have been circulated on social media.

Fact Check:
On studying the images, it can be observed that they share many anomalies typically found in AI-generated images. The abnormalities include inaccurate body features of the soldiers, a national flag with the wrong combination of colours, an unusually sized spoon, and the absence of the soldiers' shadows.

Additionally, the flag on the Indian soldiers' shoulders appears incorrect and does not follow the traditional tricolour pattern. Another anomaly, a soldier with three arms, strengthens the conclusion that the images are AI-generated.
Furthermore, we ran the images through the Hive AI image-detection tool, which found that each photo was generated using an Artificial Intelligence algorithm.


We also checked with another AI image-detection tool, Isitai, which likewise identified the images as AI-generated.


After thorough analysis, it was found that the claim made in each of the viral posts is misleading and fake: the recent viral images of Indian Army soldiers eating food at the border on an extremely hot afternoon in Barmer were generated using an AI image-creation tool.
Conclusion:
In conclusion, the analysis of the viral photographs claiming to show Indian Army soldiers having their lunch in scorching heat in Barmer, Rajasthan reveals many anomalies consistent with AI-generated images. The absence of shadows, distorted hand placement, the irregular rendering of the Indian flag, and the presence of an extra arm on a soldier all point to the images being artificially created. Therefore, the claim that these images capture real-life events is debunked, underscoring the importance of analysing and fact-checking content before sharing it in an era of widespread digital misinformation.
- Claim: The photo shows Indian army soldiers having their lunch in extreme heat near the border area in Barmer/Jaisalmer, Rajasthan.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook
- Fact Check: Fake & Misleading
Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This action places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help fight misinformation and strengthen digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking of all deep-synthesis content. Service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital content. These can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that survive standard image manipulation such as resizing or compression. Likewise, C2PA metadata standards enable users to track the source and provenance of an item of content.
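To make the idea of an imperceptible, pixel-level identifier concrete, here is a minimal toy sketch of least-significant-bit (LSB) watermarking. This is an illustrative assumption-laden example, not SynthID or any production scheme: real systems use far more robust encodings designed to survive compression and editing, whereas LSB marks are fragile.

```python
# Toy LSB watermark: hide a short tag in the lowest bit of each pixel value.
# Illustrative only -- NOT a production or SynthID-style algorithm.

def embed_watermark(pixels: list[int], tag: str) -> list[int]:
    """Hide `tag` in the least significant bits of 8-bit pixel values."""
    bits = []
    for byte in tag.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: list[int], tag_len: int) -> str:
    """Read back `tag_len` bytes from the least significant bits."""
    data = bytearray()
    for b in range(tag_len):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return data.decode("utf-8")

# Usage: the tag is recoverable, yet no pixel changes by more than 1,
# which is why such marks are imperceptible to viewers.
image = list(range(200))  # stand-in for grayscale pixel values
tagged = embed_watermark(image, "AI-GEN")
assert extract_watermark(tagged, 6) == "AI-GEN"
assert all(abs(a - b) <= 1 for a, b in zip(image, tagged))
```

The fragility of this scheme also illustrates the tampering problem discussed below: re-encoding or resizing the image destroys the low-order bits, which is precisely why production watermarks need more sophisticated, manipulation-resistant encodings.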
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: adversaries with expertise can use cropping, editing, or AI software to delete visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is also an issue: embedding and authenticating watermarks for billions of items of online content requires huge computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which maintain authenticity without sacrificing privacy. These new techniques hold promise for overcoming technical deficiencies and making watermarking more secure.
Challenges in Enforcement
Though increasing agreement exists for watermarking, implementation of such policies is still a major issue. Jurisdictional constraints prevent enforceability globally. A watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an exigency for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not see or comprehend its meaning. This aligns with issues of dealing with misinformation, wherein it's not sufficient just to mark off fake content, users need to be taught how to think critically about the information they're using. Public education campaigns, digital media literacy and embedding watermarking labels within user-friendly UI elements are necessary to ensure this technology is actually effective.
Balancing Privacy and Transparency
While watermarking serves to achieve digital transparency, it also presents privacy issues. In certain instances, watermarking might require embedding metadata that discloses the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists using AI tools for creative or informative purposes. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to strike a balance by employing privacy-preserving watermarking strategies that verify the origin of content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may assist in creating watermarking systems that guarantee authentication without undermining user anonymity.
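A simpler building block than full zero-knowledge proofs can illustrate the privacy goal described above: a hash commitment. The watermark carries only a commitment to the creator's identity, so the mark itself reveals nothing, and the creator can later choose to prove origin by opening the commitment. The scheme, names, and identifiers below are illustrative assumptions, and a commitment is not a true zero-knowledge proof, only a privacy-preserving step in that direction.

```python
# Hedged sketch: privacy-preserving provenance via a hash commitment.
# The watermark stores commit = SHA-256(creator_id || salt); identity stays
# hidden unless the creator voluntarily reveals (creator_id, salt).
import hashlib
import secrets

def make_commitment(creator_id: str) -> tuple[str, str]:
    """Return (commitment, salt); only the commitment goes into the watermark."""
    salt = secrets.token_hex(16)  # random salt prevents guessing identities
    digest = hashlib.sha256(f"{creator_id}:{salt}".encode()).hexdigest()
    return digest, salt

def verify_commitment(commitment: str, creator_id: str, salt: str) -> bool:
    """Anyone can check an opened commitment against the embedded value."""
    expected = hashlib.sha256(f"{creator_id}:{salt}".encode()).hexdigest()
    return secrets.compare_digest(commitment, expected)

# Usage: the content carries only `commitment`; the creator keeps `salt`
# private until they decide to prove authorship.
commitment, salt = make_commitment("newsroom-42")   # hypothetical identifier
assert verify_commitment(commitment, "newsroom-42", salt)   # genuine claim
assert not verify_commitment(commitment, "impostor", salt)  # false claim fails
```

The design choice here is that verification is opt-in: without the salt, the commitment is computationally useless to a surveillant, which addresses the whistleblower and activist concerns raised above.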
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, misinformation spread by AI on vaccines, treatments and public health interventions caused widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish between authentic sources and manipulated media and protected public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The C2PA framework, a collaboration between Adobe, Microsoft and others, attaches tamper-evident metadata to images and videos, enabling traceability of content origin. SynthID from Google, already deployed on its Imagen text-to-image model, invisibly watermarks AI-generated images in a way designed to resist tampering. The Partnership on AI (PAI) is also taking a leadership role by building ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks can guide governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to adopt watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media content are an essential step toward creating a safer and more credible digital world. As artificial media becomes increasingly indistinguishable from authentic content, the demand for transparency, origin, and responsibility increases. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, compliant and privacy-friendly. India is especially at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. Empowering themselves with global lessons, applying best-in-class watermarking platforms and promoting public awareness can enable the nation to acquire a level of resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency
Introduction
Artificial Intelligence (AI) driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern technology-driven warfare and has simultaneously impacted many spheres in a technology-driven world. Nations often prioritise defence for significant investments, supporting its growth and modernisation. AI has become a prime area of investment and development for technological superiority in defence forces. India’s focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is "autonomy": the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Ethical concerns have been raised as the most prominent issue by many states, international organisations, civil society groups and even many distinguished figures.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes at simulating human emotions such as compassion, empathy or altruism; the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system, which would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a 'human-in-the-loop' system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue due to the ethical, legal, and security concerns it raises. Many ongoing efforts at the international level seek to regulate such weapons. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international laws, such as the Geneva Conventions, offer legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as different nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical use of AI, such as indiscriminate attacks or violations of international law. Setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
● https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/

Introduction
On 20th March 2024, the Indian government notified the Fact Check Unit (FCU) under the Press Information Bureau (PIB) of the Ministry of Information and Broadcasting as the fact check unit of the Central Government. The FCU is notified under the provisions of Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023).
However, the next day, on 21st March 2024, the Supreme Court stayed the Centre's decision. The IT Amendment Rules of 2023 provide that the Ministry of Electronics and Information Technology (MeitY) can notify a fact-checking body to identify and tag what it considers fake news with respect to any activity of the Centre. The stay will be in effect till the Bombay High Court finally decides the challenges to the IT Rules amendment 2023.
The official notification dated 20th March 2024 read as follows:
“In exercise of the powers conferred by sub-clause (v) of clause (b) of sub-rule (1) of rule 3 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Central Government hereby notifies the Fact Check Unit under the Press Information Bureau of the Ministry of Information and Broadcasting as the fact check unit of the Central Government for the purposes of the said sub-clause, in respect of any business of the Central Government.”
Impact of the notification
Notifying PIB's FCU under Rule 3(1)(b)(v) empowers it to issue takedown directions directly to the concerned intermediary. Any information posted on social media in relation to the business of the Central Government that the FCU flags as fake or false must be taken down by the concerned intermediary. If it fails to do so, it loses the 'safe harbour' immunity offered under Section 79 of the IT Act, 2000 against legal proceedings arising out of such information.
Safe harbour provision u/s 79 of IT Act, 2000
Section 79 of the IT Act, 2000 serves as a safe harbour provision for intermediaries. The provision states that "an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him". However, it is notable that this legal immunity cannot be granted if the intermediary "fails to expeditiously" take down a post or remove a particular content after the government or its agencies flag that the information is being used unlawfully. Furthermore, intermediaries are obliged to observe due diligence on their platforms.
Rule 3 (1)(b)(v) Under IT Amendment Rules 2023
Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [updated as on 6.4.2023] provides that all intermediaries [including a social media intermediary, a significant social media intermediary and an online gaming intermediary] are required to make "reasonable efforts" or perform due diligence to ensure that their users do not "host, display, upload, modify, publish, transmit, store, update or share" any information that "deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or, in respect of any business of the Central Government, is identified as fake or false or misleading by such fact check unit of the Central Government as the Ministry may, by notification published in the Official Gazette, specify".
PIB - FCU
The PIB Fact Check Unit (FCU) was established in November 2019 to prevent the spread of fake news and misinformation about the Indian government. It provides an accessible platform for people to report suspicious or questionable information related to the Indian government, and is tasked with addressing misinformation about government policies, initiatives and schemes, either on its own initiative (suo motu) or through complaints received. On 20th March 2024, via a gazetted notification, the Centre notified the FCU as the nodal agency to flag fake news or misinformation related to the Central Government. However, the Supreme Court stayed the Centre's notification of the FCU under the IT Amendment Rules 2023.
Concerns with IT Amendment Rules 2023
The Ministry of Electronics and Information Technology (MeitY) amended the IT Rules of 2021. The 'Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023' (IT Amendment Rules 2023) were notified by MeitY on 6 April 2023. The rules introduced new provisions to establish a fact-checking unit with respect to "any business of the Central Government" and also made provisions pertaining to online gaming.
The constitutional validity of the IT Amendment Rules 2023 has been challenged through a writ petition before the Bombay High Court. The contention is that the rules raise "serious constitutional questions" and that Rule 3(1)(b)(v), as amended in 2023, impinges on the fundamental right to freedom of speech and expression, a question that falls for analysis by the High Court.
Supreme Court Stays Setting up of FCU
A bench comprising Chief Justice DY Chandrachud and Justices JB Pardiwala and Manoj Misra convened to hear Special Leave Petitions filed by Kunal Kamra, the Editors Guild of India and the Association of Indian Magazines, challenging the Bombay High Court's refusal to stay the implementation of the IT Amendment Rules 2023. The Supreme Court stayed the Union's notification of the Fact-Check Unit under the IT Amendment Rules 2023, pending the Bombay High Court's decision on the challenges to the Rules.
Emphasizing Freedom of Speech in the Democratic Environment
The advent of advanced technology has also brought with it a new generation of threats and concerns: the misuse of said technology in the form of deepfakes and misinformation is one of the most pressing concerns plaguing society today. This realization has informed the critical need for stringent regulatory measures. The government is rightly prioritizing the need to immediately address digital threats, but there must be a balance between our digital security policies and the need to respect free speech and critical thinking. The culture of open dialogue is the bedrock of democracy. The ultimate truth is shaped through free trade in ideas within a competitive marketplace of ideas. The constitutional scheme of democracy places great importance on the fundamental value of liberty of thought and expression, which has also been emphasized by the Supreme Court in its various judgements.
The IT Rules, 2023 provide for creating a "fact check unit" to identify fake, false or misleading information "in relation to any business of the Central Government". This move raised concerns within the media fraternity, who argued that the determination of fake news cannot be placed solely in the hands of the government. It is also worth noting that if users post something illegal, they can still be punished under the country's existing laws.
We must take into account that freedom of speech under Article 19 of the Constitution is not an absolute right. Article 19(2) imposes restrictions on the Right to Freedom of Speech and expression. Hence, there has to be a balance between regulatory measures and citizens' fundamental rights.
Nowadays, the term 'fake news' is used very loosely. Additionally, there is a dearth of clearly established legal parameters defining what amounts to fake or misleading information. Clear definitions should be established to provide certainty as to what content is 'fake news' and what is not. Any such restriction on speech must align with the exceptions outlined in Article 19(2) of the Constitution.
Conclusion
Through a government notification, PIB - FCU was intended to act as a government-run fact-checking body to verify any information about the Central Government. However, the apex court of India stayed the Centre's notification. Now, the matter is sub judice, and we hope for the judicial analysis of the validity of IT Amendment Rules 2023.
Notably, the government is implementing measures to combat misinformation in the digital world, but it is imperative that we strive for a balance between regulatory checks and individual rights. As misinformation spreads across all sectors, a centralised approach is needed to tackle it effectively. Regulatory reforms must take into account the crucial role played by social media in today's business market: a huge amount of trade and commerce takes place online or is informed by digital content, which means that the government must introduce policies and mechanisms that continue to support economic activity. Collaborative efforts between the government and its agencies, technology companies, and advocacy groups are needed to address the issue more effectively at a higher level.
References
- https://egazette.gov.in/(S(xzwt4b4haaqja32xqdiksbju))/ViewPDF.aspx
- https://pib.gov.in/PressReleasePage.aspx?PRID=2015792
- https://economictimes.indiatimes.com/tech/technology/govt-notifies-fact-checking-unit-under-pib-to-check-fake-news-misinformation-related-to-centre/articleshow/108653787.cms?from=mdr
- https://www.epw.in/journal/2023/43/commentary/it-amendment-rules-2023.html#:~:text=The%20Information%20Technology%20Amendment%20Rules,to%20be%20false%20or%20misleading
- https://www.livelaw.in/amp/top-stories/supreme-court-kunal-kamra-editors-guild-notifying-fact-check-unit-it-rules-2023-252998
- https://www.aljazeera.com/news/2024/3/21/india-top-court-stays-government-move-to-form-fact-check-unit-under-it-laws
- https://www.meity.gov.in/writereaddata/files/Information%20Technology 28Intermediary%20Guidelines%20and%20Digital% 20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- 2024 SCC OnLine Bom 360