#FactCheck - AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image is being shared with the false claim that it shows President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has determined that the photo was produced by generative AI and is not real; multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image claiming to show US President Joe Biden wearing a military outfit during a meeting with military officials was created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening a meeting to authorize the use of the US military in the Middle East.

Fact Check:
The CyberPeace Research Team discovered that the photo of US President Joe Biden in a military uniform at a meeting with military officials was produced using generative AI and is not authentic. Several obvious visual discrepancies plainly mark it as an AI-generated image.

Firstly, President Biden's eyes are rendered fully black; secondly, the face of one of the military officials is blended and distorted; thirdly, a phone on the table stands upright without any support.
We then ran the image through an AI image detection tool.

The tool rated the image 4% human and 96% AI, indicating that it is deepfake content.
We then checked the image with another tool, Hive Detector.

Hive Detector returned a verdict of 100% AI-generated, confirming that the image is likely deepfake content.
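For readers who want to reproduce this kind of check programmatically, below is a minimal sketch of querying an AI-image-detection service over HTTP. The endpoint URL, request fields, and response schema are hypothetical placeholders, not the actual API of the tools named above; consult the specific detector's documentation for real values.

```python
# Minimal sketch: querying an AI-image-detection service over HTTP.
# NOTE: the endpoint URL, request fields, and response schema below are
# hypothetical placeholders, not the real API of the tools named above.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential

def detect_ai_image(image_path: str) -> None:
    """Upload an image and print the detector's human-vs-AI scores."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()  # assumed shape: {"human": 0.04, "ai": 0.96}
    print(f"Human: {result['human']:.0%} | AI-generated: {result['ai']:.0%}")

if __name__ == "__main__":
    detect_ai_image("viral_image.webp")  # hypothetical local file
```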
Conclusion:
Thus, the growth of AI-produced content makes it harder to separate fact from fiction, particularly on social media. The fake photo supposedly showing President Joe Biden underscores the need for critical thinking and verification of information online. With technology constantly evolving, it is of great importance that people stay watchful and rely on verified sources to fight the spread of disinformation. Furthermore, initiatives to make people aware of the existence and impact of AI-produced content should be undertaken in order to promote a more aware and digitally literate society.
- Claim: A circulating picture shows United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake
Related Blogs

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force just 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI that had been in development for two years, entered into force across all 27 EU Member States on 1 August 2024; enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI tools, including those that threaten citizens' rights, such as biometric categorization and untargeted scraping of faces; systems that try to read emotions are banned in the workplace and schools, as are social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, meaning different legal provisions will start to apply at various deadlines between now and then.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies in the EU providing, distributing, importing, or using AI systems and GPAI models are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
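To make the "whichever is higher" mechanics concrete, here is a minimal sketch computing the fine ceiling from a company's turnover; the function name and the example turnover figure are illustrative only.

```python
# Minimal sketch of the Act's "whichever is higher" fine ceiling:
# EUR 35 million or 7% of total worldwide annual turnover.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine under the higher-of-two rule."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example (illustrative figure): for EUR 1 billion turnover, 7% is
# EUR 70 million, which exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```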
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk (illustrated in the sketch after this list). It prohibits unacceptable-risk uses, such as social scoring systems and manipulative AI, and mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI, such as chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, like AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country, as well as third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with copyright and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges in addressing risks that may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
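As referenced above, the following sketch summarizes the four risk tiers and pairs them with example use cases drawn from the text. The enum and lookup table are an illustrative simplification, not a legal classification tool.

```python
# Illustrative summary of the Act's four risk tiers, with example use cases
# drawn from the text above. A simplification for demonstration only,
# not a legal classification tool.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to obligations (data quality, anti-bias, documentation)"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "free use"

# Hypothetical lookup table pairing example use cases with their tier:
EXAMPLE_USE_CASES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI used in law enforcement": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```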
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems subject to transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline (computed in the sketch after this list) is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
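As referenced above, the sketch below derives the staggered application dates from the 1 August 2024 entry-into-force date using the month offsets stated in the text; the helper function and milestone labels are illustrative.

```python
# Minimal sketch deriving the staggered application dates from the
# 1 August 2024 entry-into-force date, using the month offsets stated above.
# The helper assumes the target day exists in the target month (true for day 1).
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(start: date, months: int) -> date:
    """Shift a date forward by whole months, keeping the day of month."""
    month_index = start.month - 1 + months
    return date(start.year + month_index // 12, month_index % 12 + 1, start.day)

MILESTONES = {  # months after entry into force
    "Prohibitions on unacceptable-risk AI": 6,
    "Codes of Practice": 9,
    "GPAI transparency rules": 12,
    "Bulk of the Act applies": 24,
    "Article 6(1) high-risk obligations": 36,
}

for label, offset in MILESTONES.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, offset)}")
```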
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to AI governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with various regulations and stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide
Introduction: The Internet’s Foundational Ideal of Openness
The Internet was built as a decentralised network to foster open communication and global collaboration. Unlike traditional media or state infrastructure, no single government, company, or institution controls the Internet. Instead, it has historically been governed by a consensus of multiple communities, like universities, independent researchers, and engineers, who were involved in building it. This bottom-up, cooperative approach was the foundation of Internet governance and ensured that the Internet remained open, interoperable, and accessible to all. As the Internet began to influence every aspect of life, including commerce, culture, education, and politics, it required a more organised governance model. This prompted the rise of the multi-stakeholder internet governance model in the early 2000s.
The Rise of Multistakeholder Internet Governance
Representatives from governments, civil society, technical experts, and the private sector congregated at the United Nations World Summit on Information Society (WSIS), and adopted the Tunis Agenda for the Information Society. Per this Agenda, internet governance was defined as “… the development and application by governments, the private sector, and civil society in their respective roles of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.” Internet issues are cross-cutting across technical, political, economic, and social domains, and no one actor can manage them alone. Thus, stakeholders with varying interests are meant to come together to give direction to issues in the digital environment, like data privacy, child safety, cybersecurity, freedom of expression, and more, while upholding human rights.
Internet Governance in Practice: A History of Power Shifts
While the idea of democratizing Internet governance is a noble one, the Tunis Agenda has been criticised for reflecting geopolitical asymmetries and relegating the roles of technical communities and civil society to the sidelines. Throughout the history of the internet, certain players have wielded more power in shaping how it is managed. Accordingly, internet governance can be said to have undergone three broad phases.
In the first phase, the Internet was managed primarily by technical experts in universities and private companies, which contributed to building and scaling it up. The standards and protocols set during this phase are in use today and make the Internet function the way it does. This was the time when the Internet was a transformative invention and optimistically hailed as the harbinger of a utopian society, especially in the USA, where it was invented.
In the second phase, the ideal of multistakeholderism was promoted, in which all those who benefit from the Internet work together to create processes that will govern it democratically. This model also aims to reduce the Internet’s vulnerability to unilateral decision-making, an ideal that has been under threat because this phase has seen the growth of Big Tech. What started as platforms enabling access to information, free speech, and creativity has turned into a breeding ground for misinformation, hate speech, cybercrime, Child Sexual Abuse Material (CSAM), and privacy concerns. The rise of generative AI only compounds these challenges. Tech giants like Google, Meta, X (formerly Twitter), OpenAI, Microsoft, Apple, etc. have amassed vast financial capital, technological monopoly, and user datasets. This gives them unprecedented influence not only over communications but also culture, society, and technology governance.
The anxieties surrounding Big Tech have fed into the third phase, with increasing calls for government regulation and digital nationalism. Governments worldwide are scrambling to regulate AI, data privacy, and cybersecurity, often through processes that lack transparency. An example is India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which was passed without parliamentary debate. Governments are also pressuring platforms to take down content through opaque takedown orders. Laws like the UK’s Investigatory Powers Act, 2016, are criticised for giving the government the power to indirectly mandate encryption backdoors, compromising the strength of end-to-end encryption systems. Further, the internet itself is fragmenting into the “splinternet” amid rising geopolitical tensions, in the form of Russia’s “sovereign internet” or through China’s Great Firewall.
Conclusion
While multistakeholderism is an ideal, Internet governance is a playground of contesting power relations in practice. As governments assert digital sovereignty and Big Tech consolidates influence, the space for meaningful participation of other stakeholders has been negligible. Consultation processes have often been symbolic. The principles of openness, inclusivity, and networked decision-making are once again at risk of being sidelined in favour of nationalism or profit. The promise of a decentralised, rights-respecting, and interoperable internet will only be fulfilled if we recommit to the spirit of Multi-Stakeholder Internet Governance, not just its structure. Efficient internet governance requires that the multiple stakeholders be empowered to carry out their roles, not just talk about them.
References
- https://www.newyorker.com/magazine/2024/02/05/can-the-internet-be-governed
- https://www.internetsociety.org/wp-content/uploads/2017/09/ISOC-PolicyBrief-InternetGovernance-20151030-nb.pdf
- https://itp.cdn.icann.org/en/files/government-engagement-ge/multistakeholder-model-internet-governance-fact-sheet-05-09-2024-en.pdf
- https://nrs.help/post/internet-governance-and-its-importance/
- https://daidac.thecjid.org/how-data-power-is-skewing-internet-governance-to-big-tech-companies-and-ai-tech-guys/

Introduction
| THE DIGITAL PERSONAL DATA PROTECTION BILL, 2022 (Released for public consultation on November 18, 2022) | THE DIGITAL PERSONAL DATA PROTECTION BILL, 2023 (Tabled in the Lok Sabha on August 3, 2023) |
| --- | --- |
| Personal data may be processed only for a lawful purpose for which an individual has given consent. Consent may be deemed in certain cases. | The 2023 bill imposes reasonable obligations on data fiduciaries and data processors to safeguard digital personal data. |
| There is a Data Protection Board under the 2022 bill to deal with non-compliance with the Act. | The 2023 bill establishes a new Data Protection Board to ensure compliance, remedies, and penalties. The Board is entrusted with the powers of a civil court, such as the power to take cognisance of personal data breaches, investigate complaints, and impose penalties; it can also issue directions to ensure compliance with the Act. |
| The 2022 bill grants certain rights to individuals, such as the right to obtain information, seek correction and erasure, and grievance redressal. | The 2023 bill grants more rights to individuals and establishes a balance between user protection and growing innovation, creating a transparent and accountable data governance framework. It incorporates business-friendly provisions by removing criminal penalties for non-compliance and facilitating international data transfers, balancing fundamental privacy rights against reasonable limitations on those rights. |
| Under the 2022 bill, personal data can be processed for a lawful purpose for which an individual has given consent, and there was a concept of deemed consent. | Under the 2023 bill, deemed consent appears in a new form as 'legitimate uses'. The new Data Protection Board will examine instances of non-compliance and impose penalties on non-compliers. However, the bill does not expressly clarify the compensation to be granted to the Data Principal in case of a data breach. |
| The 2022 bill allowed the transfer of personal data to locations notified by the government. | The 2023 bill introduces a negative list, which restricts cross-border data transfer. |