#FactCheck - "Deep fake video falsely circulated as of a Syrian prisoner who saw sunlight for the first time in 13 years”
Executive Summary:
A viral online video claims to show a Syrian prisoner experiencing sunlight for the first time in 13 years. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate the prisoner's facial expressions and surroundings. The original footage is unrelated to the claim that the prisoner had been held in solitary confinement for 13 years. The assertion that this video depicts a Syrian prisoner seeing sunlight for the first time is false and misleading.

Claim: A viral video claims that a Syrian prisoner is seeing sunlight for the first time in 13 years.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search led us to various legitimate sources featuring real reports about Syrian prisoners, but none of them included any mention of such an incident. The viral video exhibited several signs of digital manipulation, prompting further investigation.
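The keyframe step described above can be sketched in code. The snippet below is a minimal illustration (not the team's actual tooling) of how frame indices might be chosen when sampling a clip so that individual frames can be fed into a reverse image search such as Google Lens; the fps and frame-count values are hypothetical, and actually grabbing the frames would use a library like OpenCV (`cv2.VideoCapture`), which is assumed rather than shown.

```python
# Illustrative sketch: choosing which frames of a video to extract as
# keyframes for a reverse image search. The frame-index arithmetic is
# pure Python; a real pipeline would read the frames with OpenCV.

def keyframe_indices(fps: float, total_frames: int, every_n_seconds: float = 2.0):
    """Return the frame indices to sample: one frame every N seconds."""
    step = max(1, round(fps * every_n_seconds))
    return list(range(0, total_frames, step))

# Example: a hypothetical 10-second clip at 30 fps, sampled every 2 seconds
print(keyframe_indices(30.0, 300, 2.0))
```

Sampling at a fixed interval keeps the number of searches small while still covering the whole clip; scene-change detection could refine this, but is not needed for a short viral video.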

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 97.0% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the prisoner’s facial movements and the lighting conditions, both of which appeared artificially generated.
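Detection tools of this kind typically report a single video-level confidence score (here, 97.0%). As a purely hypothetical sketch of how such a figure can be aggregated from per-frame manipulation scores, consider the following; the scores and the aggregation rule are illustrative assumptions, not TrueMedia's actual method.

```python
# Hypothetical sketch: aggregating per-frame manipulation scores into a
# single video-level deepfake confidence. The scores and the aggregation
# rule are illustrative assumptions, not any real tool's method.
from statistics import mean

def video_confidence(frame_scores, top_fraction=0.25):
    """Average the most suspicious fraction of frames, so that strong but
    brief manipulation artifacts are not diluted by clean-looking frames."""
    k = max(1, int(len(frame_scores) * top_fraction))
    worst = sorted(frame_scores, reverse=True)[:k]
    return mean(worst)

# Made-up per-frame scores (0 = looks authentic, 1 = looks manipulated)
scores = [0.12, 0.95, 0.98, 0.20, 0.97, 0.99, 0.15, 0.96]
print(video_confidence(scores))
```

Averaging only the most suspicious frames reflects the intuition that a deepfake can betray itself in a handful of frames even if most of the clip looks clean.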


Additionally, a thorough review of news sources and official reports related to Syrian prisoners revealed no evidence of a prisoner being released from solitary confinement after 13 years, or experiencing sunlight for the first time in such a manner. No credible reports supported the viral video’s claim, further confirming its inauthenticity.
Conclusion:
The viral video claiming that a Syrian prisoner is seeing sunlight for the first time in 13 years is a deepfake. Investigations using AI detection tools confirm that the video was digitally manipulated using AI technology. Furthermore, there is no supporting information in any reliable sources. The CyberPeace Research Team confirms that the video was fabricated, and the claim is false and misleading.
- Claim: Syrian prisoner sees sunlight for the first time in 13 years, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. This promising technological advancement has the potential either to enrich the nest of our society or to destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the public indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The stakes are considerable: the market size of AI in India alone is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. By being transparent, people are guaranteed to be aware that they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn't have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is constantly working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Introduction
Rajeev Chandrasekhar, the Union Minister of State for Information Technology (IT), announced on 13 December 2023 that the Global Partnership on Artificial Intelligence (GPAI) Summit, which brings together 29 member governments including the European Union, had adopted the New Delhi Declaration. The declaration committed members to jointly developing AI applications for medical treatment and agribusiness, and to taking the needs of the Global South into account when developing AI.
In addition, signatory countries committed to leveraging the GPAI infrastructure to establish a worldwide structure for AI safety and trust, and to make AI's advantages and approaches accessible to all. India also submitted a proposal to host the GPAI Global Governance Summit in order to finalise the recommended structure within six months.
“The New Delhi Declaration, which aims to place GPAI at the forefront of defining the future of AI in terms of both development and building cooperative AI across the partner states, has been unanimously endorsed by 29 GPAI member countries. Nations have come to an agreement to develop AI applications in healthcare, agriculture, and numerous other fields that affect all of our nations and citizens,” Chandrasekhar stated.
The statement highlights GPAI's critical role in tackling modern AI difficulties, such as generative AI, through submitted AI projects meant to maximize benefits and minimize related risks while solving community problems and worldwide difficulties.
GPAI
Global Partnership on Artificial Intelligence (GPAI) is an organisation of 29 countries from the Americas (North and South), Europe and Asia. It has important players such as the US, France, Japan and India, but it excludes China. The previous meeting took place in Japan. In 2024, India will preside over GPAI.
In order to promote and steer the responsible implementation of artificial intelligence based on human rights, multiculturalism, gender equality, innovation, economic growth, the surroundings, and social impact, this forum was established in 2020. Its goal is to bring together elected officials and experts in order to make tangible contributions to the 2030 Agenda and the UN Sustainable Development Goals (SDGs).
Given the quick and significant advancements in artificial intelligence over the previous year, the meeting in New Delhi attracted particular attention. These advancements have sparked worries about AI's misuse as well as enthusiasm about its possible advantages.
The Summit
The G20 summit, which India hosted in September 2023, provided the backdrop for the discussions at the GPAI summit. There, participants of this esteemed worldwide economic conference came to an agreement on how to safely use AI for "Good and for All."
In order to safeguard people's freedoms and security, member governments pledged to address AI-related issues "in a responsible, inclusive, and human-centric manner."
The key tactic devised is to distribute AI's advantages fairly while reducing its hazards. Promoting international collaboration and discourse on global management for AI is the first step toward accomplishing this goal.
A major milestone in that approach was the GPAI summit.
The conversation on AI was started by India's Prime Minister Narendra Modi, who is undoubtedly one of the most tech-aware and tech-conscious international authorities.
He noted that every system needs to be revolutionary, honest, and trustworthy in order to be sustained.
"There is no doubt that AI is transformative, but it is up to us to make it more and more transparent." He continued by saying that when associated social, ethical, and financial concerns are appropriately addressed, trust will increase.
After extensive discussions, the summit attendees decided on a strategy to establish global collaboration on a number of AI-related issues. The proclamation pledged to place GPAI at the leading edge of defining AI in terms of creativity and cooperation while expanding possibilities for AI in healthcare, agriculture, and other areas of interest, according to Union Minister Rajeev Chandrasekhar.
There was an open discussion of a number of issues, including disinformation, joblessness and bias, protection of sensitive information, and violations of human rights. The participants reaffirmed their dedication to fostering dependable, safe, and secure AI within their respective domains.
Concerns raised by AI
- The issue of legislation comes first. There are now three methods in use. In order to best promote inventiveness, the UK government takes a "less is more" approach to regulation. Conversely, the European Union (EU) is taking a strong stance, planning to propose a new Artificial Intelligence Act that might categorize AI 'in accordance with use-case situations based essentially on the degree of interference and vulnerability'.
- Second, analysts say that India has the potential to lead the world in discussions about AI. For example, India has an advantage when it comes to AI discussions because of its personnel, educational system, technological stack, and populace, according to Markham Erickson of Google's Centers for Excellence. However, he voiced the hope that Indian regulations will be “interoperable” with those of other countries in order to maximize the benefits for small and medium-sized enterprises in the nation.
- Third, there is a general fear about how AI will affect jobs, just as there was in the early years of the Internet's development. Most people appear to agree that while many jobs won't be impacted, certain jobs might be lost as artificial intelligence develops and gets smarter. According to Erickson, the solution to the new circumstances is to create "a more AI-skilled workforce."
- Finally, a major concern relates to deepfakes defined as 'digital media, video, audio and images, edited and manipulated, using Artificial Intelligence (AI).'
Need for AI Strategy in Commercial Businesses
Firstly, astute corporate executives such as Shailendra Singh, managing director of Peak XV Partners, feel that all organisations must now have 'an AI strategy'.
Second, it is now impossible to isolate the influence of digital technology and artificial intelligence from the study of international relations (IR), foreign policy, and diplomacy. Academics have been contemplating and penning works of "the geopolitics of AI."
Combat Strategies
"We will talk about how to combine OECD capabilities to maximize our capacity to develop the finest approaches to the application and management of AI for the benefit of our people. The French Minister of Digital Transition and Telecommunications", Jean-Noël Barrot, informed reporters.
Vice-Minister of International Affairs for Japan's Ministry of Internal Affairs and Communications Hiroshi Yoshida stated, "We particularly think GPAI should be more inclusive so that we encourage more developing countries to join." Mr Chandrasekhar stated, "Inclusion of lower and middle-income countries is absolutely core to the GPAI mission," and added that Senegal has become a member of the steering group.
India's role in integrating agribusiness into the AI agenda was covered in a dedicated paragraph of the declaration. The proclamation states, "We embrace the use of AI innovation in supporting sustainable agriculture as a new thematic priority for GPAI."
Conclusion
The New Delhi Declaration, adopted at the GPAI Summit, highlights the cooperative determination of 29 member nations to use AI for the benefit of all people. GPAI, which India will lead in 2024, intends to shape AI research with an emphasis on healthcare, agriculture, and the resolution of ethical issues. Prime Minister Narendra Modi stressed the need to use AI responsibly and to build transparency and trust. Legislative concerns, India's potential for leadership, effects on employment, and the challenge of deepfakes were all noted. The summit also emphasized the importance of enterprises having an AI strategy and discussed combat strategies, with a focus on GPAI's objective of including developing nations. Taken as a whole, the summit presents GPAI as an essential forum for navigating the rapidly changing AI field.
References
- https://www.thehindu.com/news/national/ai-summit-adopts-new-delhi-declaration-on-inclusiveness-collaboration/article67635398.ece
- https://www.livemint.com/news/india/gpai-meet-adopts-new-delhi-ai-declaration-11702487342900.html
- https://startup.outlookindia.com/sector/policy/global-partnership-on-ai-member-nations-unanimously-adopt-new-delhi-declaration-news-10065
- https://gpai.ai/

Introduction
Misinformation in India has emerged as a significant societal challenge, wielding a potent influence on public perception, political discourse, and social dynamics. A significant number of first-time voters across India have identified fake news as a real problem in the nation. With the widespread adoption of digital platforms, false narratives, manipulated content, and fake news have found fertile ground to spread unchecked.
Against the backdrop of India being the largest market of WhatsApp users, who forward more content on chats than anywhere else, the practice of fact-checking forwarded information remains low. The heavy reliance on print media, television, unreliable news channels and, primarily, social media platforms acts as a catalyst, since studies reveal that most Indians trust any content forwarded by family and friends. Notably, out of all risks, misinformation and disinformation ranked the highest in India, coming before infectious diseases, illicit economic activity, inequality and labour shortages. World Economic Forum analysts, in their 2024 Global Risks Report, note that "misinformation and disinformation in electoral processes could seriously destabilise the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism and long-term erosion of democratic processes."
The Supreme Court of India on Misinformation
The Supreme Court of India, through various judgements, has noted the impact of misinformation on democratic processes within the country, especially during elections and voting. In 1995, while adjudicating a matter pertaining to keeping the broadcasting media under the control of the public, it noted that democracy becomes a farce when the medium of information is monopolized either by partisan central authority or by private individuals or oligarchic organizations.
In 2003, the Court stated that “Right to participate by casting a vote at the time of election would be meaningless unless the voters are well informed about all sides of the issue in respect of which they are called upon to express their views by casting their votes. Disinformation, misinformation, non-information all equally create an uninformed citizenry which would finally make democracy a mobocracy and a farce.” It noted that elections would be a useless procedure if voters remained unaware of the antecedents of the candidates contesting elections. Thus, a necessary aspect of a voter’s duty to cast intelligent and rational votes is being well-informed. Such information forms one facet of the fundamental right under Article 19 (1)(a) pertaining to freedom of speech and expression. Quoting James Madison, it stated that a citizen’s right to know the true facts about their country’s administration is one of the pillars of a democratic State.
On a similar note, the Supreme Court, while discussing the disclosure of information by an election candidate, gave weightage to the High Court of Bombay‘s opinion on the matter, which opined that non-disclosure of information resulted in misinformation and disinformation, thereby influencing voters to take uninformed decisions. It stated that a voter had the elementary right to know the full particulars of a candidate who is to represent him in Parliament/Assemblies.
While misinformation was discussed primarily in relation to elections, the effects of misinformation in other sectors have also been discussed from time to time. In particular, The court highlighted the World Health Organisation’s observation in 2021 while discussing the spread of COVID-19, noting that the pandemic was not only an epidemic but also an “infodemic” due to the overabundance of information on the internet, which was riddled with misinformation and disinformation. While condemning governments’ direct or indirect threats of prosecution to citizens, it noted that various citizens who relied on the internet to provide help in securing medical facilities and oxygen tanks were being targeted by alleging that the information posted by them was false and was posted to create panic, defame the administration or damage national image. It instructed authorities to cease such threats and prevent clampdown on information sharing.
More recently, in Facebook v. Delhi Legislative Assembly [(2022) 3 SCC 529], the apex court, while upholding the summons issued to Facebook by the Delhi Legislative Assembly in the aftermath of the 2020 Delhi Riots, noted that while social media enables equal and open dialogue between citizens and policymakers, it is also a tool by which extremist views are peddled into mainstream media, thereby spreading misinformation. It noted Facebook's role in Myanmar, where misinformation and posts that Facebook employees missed fuelled offline violence. Since Facebook is one of the most popular social media applications, the platform itself acts as a power centre by hosting various opinions and voices on its forum. This directly impacts the governance of States, and some form of liability must be attached to the platform. The Supreme Court objected to Facebook taking contrary stands in various jurisdictions: in the US, it projected itself as a publisher, which enabled it to maintain control over the material disseminated from its platform, while in India, "it has chosen to identify itself purely as a social media platform, despite its similar functions and services in the two countries."
Conclusion
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The alarming statistics of fake news recognition among first-time voters, coupled with a lack of awareness regarding fact-checking organizations, underscore the urgency of addressing this issue. The Supreme Court of India has consistently recognized the detrimental impact of misinformation, particularly in elections. The judiciary has stressed the pivotal role of an informed citizenry in upholding the essence of democracy. It has emphasized the right to access accurate information as a fundamental aspect of freedom of speech and expression. As India grapples with the challenges of misinformation, the intersection of technology, media literacy and legal frameworks will be crucial in mitigating the adverse effects and fostering a more resilient and informed society.
References
- https://thewire.in/media/survey-finds-false-information-risk-highest-in-india
- https://www.statista.com/topics/5846/fake-news-in-india/#topicOverview
- https://www.weforum.org/publications/global-risks-report-2024/digest/
- https://main.sci.gov.in/supremecourt/2020/20428/20428_2020_37_1501_28386_Judgement_08-Jul-2021.pdf
- Secretary, Ministry of Information & Broadcasting, Govt, of India and Others v. Cricket Association of Bengal and Another [(1995) 2 SCC 161]
- People’s Union for Civil Liberties (PUCL) v. Union of India [(2003) 4 SCC 399]
- Kisan Shankar Kathore v. Arun Dattatray Sawant and Others [(2014) 14 SCC 162]
- Distribution of Essential Supplies & Services During Pandemic, In re [(2021) 18 SCC 201]
- Facebook v. Delhi Legislative Assembly [(2022) 3 SCC 529]