#FactCheck - AI Generated Photo Circulating Online Misleads About BARC Building Redesign
Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake. There is no official notice or confirmation from BARC on its website or social media handles, and an AI content detection tool indicates that the image was generated by AI. In short, the viral picture does not show an authentic architectural plan for the BARC building.

Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.


Fact Check:
To begin our investigation, we visited BARC's official website and checked its tender and NIT notifications for any mention of new construction or renovation.
We found no information corresponding to the claim.

Next, we checked BARC's official social media pages on Facebook, Instagram and X for any recent updates about a new building. Again, there was no information about the supposed design. To test whether the viral image was AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool classified the image as AI-generated with 100% confidence.

To double-check, we ran the image through another AI-image detection tool, "isitai?", which rated it 98.74% likely to be AI-generated.
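The workflow above combines verdicts from two independent detectors before calling the image fake. The snippet below is not how the researchers combined their results; it is a minimal sketch of one conservative aggregation rule, where the threshold value is an illustrative assumption.

```python
def aggregate_verdict(scores, threshold=0.9):
    """Combine confidence scores (0-1) from several AI-image detectors.

    Flags the image as AI-generated only when every detector
    independently exceeds the threshold, which keeps false positives
    low at the cost of missing some fakes.
    """
    if not scores:
        raise ValueError("need at least one detector score")
    return all(s >= threshold for s in scores)

# Scores reported in this fact check: Hive 100%, isitai 98.74%
verdict = aggregate_verdict([1.0, 0.9874])
```

A stricter alternative would require agreement from a majority of detectors rather than all of them; the trade-off is between missed fakes and false alarms.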

Conclusion:
In conclusion, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, checking BARC's official channels and using AI detection tools, showed that the picture is most likely AI-generated rather than an original architectural design. BARC has not announced any such plan, which makes the claim untrustworthy, as there is no credible source to support it.
Claim: Social media posts claim to show the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading
Related Blogs

In the intricate mazes of the digital world, where the line between reality and illusion blurs, the quest for truth becomes a Sisyphean task. The recent firestorm of rumours surrounding global pop icon Dua Lipa's visit to Rajasthan, India, is a poignant example of this modern dilemma. A single image, plucked from the continuum of time and stripped of context, became the fulcrum upon which a narrative of sexual harassment was precariously balanced. This incident, a mere droplet in the ocean of digital discourse, encapsulates the broader phenomenon of misinformation—a spectre that haunts the virtual halls of our interconnected existence.
Misinformation Incident
Amidst the ceaseless hum of social media, a claim surfaced with the tenacity of a weed in fertile soil: Dua Lipa, the three-time Grammy Award winner, had allegedly been subjected to sexual harassment during her sojourn in the historic city of Jodhpur. The evidence? A viral picture, its origins murky, accompanied by a caption that seemed to confirm the worst fears of her ardent followers. The digital populace quickly reacted, with many sharing the image, asserting the claim's veracity without pause for verification.
Unraveling the Fabric of Fake News: Fact-Checking Dua Lipa's India Experience
The narrative gained momentum through platforms of dubious credibility, including a Twitter handle which, upon closer scrutiny by the Digital Forensics Research and Analytics Center, was revealed to be a purveyor of fake news. The very fabric of the claim began to unravel as the original photo was traced back to the official Facebook page of RVCJ Media, untainted by the allegations that had been so hastily ascribed to it. Moreover, the silence of Dua Lipa on the matter, rather than serving as a testament to the truth, inadvertently fueled the fires of speculation—a stark reminder of the paradox where the absence of denial is often misconstrued as an affirmation.
The pop star's words, shared on her Instagram account, painted a starkly different picture of her experience in India. She spoke not of fear and harassment, but of gratitude and joy, describing her trip as 'deeply meaningful' and expressing her luck to be 'within the magic' with her family. The juxtaposition of her heartfelt account with the sinister narrative constructed around her serves as a cautionary tale of the power of misinformation to distort and defile.
A Political Microcosm: Bye-Elections in Telangana
A second incident involves electoral misinformation. The political landscape of Telangana, India, bristled with anticipation as the Election Commission announced bye-elections for two Member of Legislative Council (MLC) seats. Here, too, the machinery of misinformation whirred into action, with political narratives being shaped and reshaped through the lens of partisan prisms. The electoral process, transparent in its intent, became susceptible to selective amplification, with certain facets magnified or distorted to fit entrenched political narratives. The bye-elections thus became a battleground not just for political supremacy but also for the integrity of information.
The Far-Reaching Claws of Misinformation: Fact Check
The misinformation surrounding Dua Lipa's visit to India and the political microcosm of misinformation in Telangana are manifestations of a global challenge. Misinformation adapts to the contours of its environment, whether the gritty arena of politics or the glitzy realm of stardom. Its tentacles reach far and wide, with geopolitical implications that can destabilise regions, sow discord, and undermine the very pillars of democracy. The erosion of trust that misinformation engenders is perhaps its most insidious effect, as it chips away at the bedrock of societal cohesion and collective well-being.
Paradox of Technology
The same technological developments that have allowed the spread of misinformation also hold the keys to its containment. Artificial intelligence-powered fact-checking tools, blockchain-enabled transparency counter-measures, and comprehensive digital literacy campaigns stand as bulwarks against falsehoods. These tools, however, are not panaceas; they require the active engagement and critical thinking skills of each digital citizen to be truly effective.
Conclusion
As we stand at the cusp of the digital age, the way forward demands vigilance, collaboration, and innovation. Cultivating a digitally literate citizenry, capable of discerning the nuances of digital content, is paramount. Governments, the tech industry, media companies, and civil society must join forces in a common front, leveraging their collective expertise in the battle against misinformation. Promoting algorithmic accountability and fostering diverse information ecosystems will also be crucial in mitigating the inadvertent amplification of falsehoods.
In the end, discerning truth in the digital age is a delicate process. It requires us to be attuned to the rhythm of reality, and wary of the seductive allure of unverified claims. As we navigate this digital realm, remember that the truth is not just a destination but a journey that demands our unwavering commitment to the pursuit of what is real and what is right.
References
- https://telanganatoday.com/eci-releases-schedule-for-bye-elections-to-two-mlc-seats-in-telangana
- https://www.oneindia.com/fact-check/was-pop-singer-dua-lipa-sexually-harassed-in-rajasthan-during-her-india-trip-heres-the-truth-3718833.html?story=3
- https://www.thequint.com/news/webqoof/edited-graphic-of-dua-lipa-being-sexually-harassed-in-jodhpur-falsely-shared-fact-check
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone, using AI tools that can clone a person's voice and generate realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). This technology works on an advanced algorithm that creates hyper-realistic videos by using a person’s face, voice or likeness utilising techniques such as machine learning. The utilisation and progression of deepfake technology holds vast potential, both benign and malicious.
For example, in 2019 the NGO Malaria No More used deepfake technology to sync David Beckham's lip movements with different voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but the existing legal provisions offer some safeguards against mischief caused.
- Deepfakes created with the intent of spreading misinformation or damaging someone's reputation can be prosecuted under Section 356 of the Bharatiya Nyaya Sanhita, 2023, which governs defamation.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed legislation, the 'Digital India Act', aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising the creation and distribution of harmful deepfakes will act as a deterrent.
- Mandatory disclosure labels for synthetic media would inform viewers that the content was created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- The public's understanding of deepfakes should be promoted through awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfakes: A Global Overview
Momentum to regulate deepfakes has been building globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the harmful distribution of deepfakes in electoral contexts, and Virginia has a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation by requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures highlight a global recognition of the risks that deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising offenders, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts have highlighted the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation will be essential to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation
Executive Summary:
A viral online video claims to show a Syrian prisoner experiencing sunlight for the first time in 13 years. However, the CyberPeace Research Team has confirmed that the video is a deep fake, created using AI technology to manipulate the prisoner’s facial expressions and surroundings. The original footage is unrelated to the claim that the prisoner has been held in solitary confinement for 13 years. The assertion that this video depicts a Syrian prisoner seeing sunlight for the first time is false and misleading.

Claim: A viral video claims that a Syrian prisoner is seeing sunlight for the first time in 13 years.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search led us to various legitimate sources featuring real reports about Syrian prisoners, but none of them included any mention of such an incident. The viral video exhibited several signs of digital manipulation, prompting further investigation.
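Reverse-image search on keyframes is often paired with perceptual hashing, which lets a researcher check whether a viral frame matches a known original even after re-encoding or resizing. This was not part of the tooling named in this fact check; it is a minimal pure-Python sketch of a difference hash (dHash), assuming each frame has already been downscaled to a small grayscale grid.

```python
def dhash(pixels, size=8):
    """Difference hash: for each of `size` rows of `size + 1`
    grayscale values, emit a 1 bit when a pixel is darker than its
    right-hand neighbour. Produces a compact integer fingerprint
    that survives resizing and recompression reasonably well."""
    bits = 0
    for row in pixels:
        for i in range(size):
            bits = (bits << 1) | (1 if row[i] < row[i + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; small distances
    suggest near-duplicate images."""
    return bin(a ^ b).count("1")
```

Two keyframes of the same scene should land within a few bits of each other; a threshold of around 10 differing bits (for a 64-bit hash) is a common, if hypothetical, starting point for flagging near-duplicates.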

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 97.0% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the prisoner’s facial movements and the lighting conditions, both of which appeared artificially generated.


Additionally, a thorough review of news sources and official reports related to Syrian prisoners revealed no evidence of a prisoner being released from solitary confinement after 13 years, or experiencing sunlight for the first time in such a manner. No credible reports supported the viral video’s claim, further confirming its inauthenticity.
Conclusion:
The viral video claiming that a Syrian prisoner is seeing sunlight for the first time in 13 years is a deepfake. Investigations using AI detection tools such as TrueMedia confirm that the video was digitally manipulated using AI technology. Furthermore, no reliable source supports the claim. The CyberPeace Research Team confirms that the video was fabricated, and the claim is false and misleading.
- Claim: Syrian prisoner sees sunlight for the first time in 13 years, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading