#FactCheck - AI-Generated Photo Circulating Online Misleads About BARC Building Redesign
Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake. There is no official notice or confirmation from BARC on its website or social media handles, and AI content detection tools indicate that the image was generated by AI. In short, the viral picture is not an authentic architectural plan for the BARC building.

Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.


Fact Check:
To begin our investigation, we reviewed BARC's official website, checking its tender and NIT (Notice Inviting Tender) notifications for any announcement of new construction or renovation.
We found no information corresponding to the claim.

Next, we checked BARC's official social media pages on Facebook, Instagram and X for any recent updates about a new building, and again found no information about the supposed design. To test whether the viral image could be AI-generated, we ran it through Hive's AI content detection tool, ‘AI Classifier’. The tool's analysis concluded, with 100% confidence, that the image was AI-generated.

For further confirmation, we ran the image through another AI-image detection tool, “Is It AI?”, which rated it as 98.74% likely to be AI-generated.
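For readers who want to automate this kind of check, the sketch below shows how an image could be submitted to an AI-content detection service over HTTP. The endpoint URL, auth header, and response field are hypothetical placeholders, not the real API of Hive or Is It AI; consult the provider's documentation for the actual interface.

```python
# Hedged sketch: submitting an image to an AI-content detection service.
# The endpoint URL, auth header, and response field below are hypothetical
# placeholders, not the real API of Hive or "Is It AI?".
import requests

API_URL = "https://api.example-detector.com/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def ai_generated_probability(image_path: str) -> float:
    """Upload an image and return the detector's AI-generated probability."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_generated": 0.9874}
    return resp.json()["ai_generated"]

if __name__ == "__main__":
    score = ai_generated_probability("viral_image.jpg")
    print(f"AI-generated probability: {score:.2%}")
```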

Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, covering BARC's official channels and AI detection tools, showed that the picture is far more likely an AI-generated image than an original architectural design. BARC has neither published any information nor announced any such plan, and with no credible source to support it, the claim is untrustworthy.
Claim: An image circulating on social media shows the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading

Introduction
Charity and donation scams persist and are amplified in the digital era, where messages spread rapidly through WhatsApp, emails, and social media. These fraudulent schemes involve threat actors impersonating legitimate charities, government appeals, or social causes to solicit funds. Apart from targeting the general public, they also affect entities such as reputable tech firms and national institutions. Victims are tricked into transferring money or sharing personal information, often under the guise of urgent humanitarian efforts or causes.
A recent incident involves a fake WhatsApp message claiming to be from the Indian Ministry of Defence. The message urged users to donate to a fund for “modernising the Indian Army.” The government later confirmed this message was entirely fabricated and part of a larger scam. It emphasised that no such appeal had been issued by the Ministry, and urged citizens to verify such claims through official government portals before responding.
Tech Industry and Donation-Related Scams
Large corporations are also falling prey. According to media reports, an American IT company recently terminated around 700 Indian employees after uncovering a donation-related fraud. At least 200 of them were reportedly involved in a scheme linked to Telugu organisations in the US. The scam echoed a similar situation that had previously affected Apple, where Indian employees were fired after being implicated in donation fraud tied to the Telugu Association of North America (TANA). Investigations revealed that employees had made questionable donations to these groups in exchange for benefits such as visa support or employment favours.
Common People Targeted
While organisational scandals grab headlines, the common man remains equally, or even more, vulnerable. In a recent incident, a man lost over ₹1 lakh after clicking on a WhatsApp link asking for donations to a charity. Once he engaged with the link, the fraudsters manipulated him into making repeated transfers under various pretexts, ranging from processing fees to refund-related transactions, a classic social engineering pattern. Scammers often employ a similar set of tactics, using urgency, emotional appeal, and impersonation of credible platforms to deceive people.
Cautionary Steps
CyberPeace recommends adopting a cautious and informed approach when making charitable donations, especially online. Here are some key safety measures to follow:
- Verify Before You Donate: Always double-check the legitimacy of donation appeals. Use official government portals or the charities' official websites, and be wary of unfamiliar phone numbers, email addresses, or WhatsApp forwards asking for money (a minimal link-checking sketch follows this list).
- Avoid Clicking on Suspicious Links: Never click on links or download attachments from unknown or unverified sources. These could be phishing links or malware designed to steal your data or access your bank accounts.
- Be Sceptical of Urgency: Scammers create a false sense of urgency to pressure their victims into donating quickly. Take the time to evaluate an appeal before responding.
- Use Secure Payment Channels: Make donations only through platforms that are secure, trusted, and verified, such as official UPI handles and government-backed portals (like PM CARES or Bharat Kosh).
- Report Suspected Fraud: If you receive suspicious messages or fall victim to a scam, report it to the cybercrime authorities via cybercrime.gov.in or the 1930 helpline, or to the local police; prompt reporting can prevent further fraud.
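As a concrete illustration of the first tip, the sketch below checks whether a donation link actually points at a known official domain before you open it. The allow-list contains only the two portals named above and is an illustrative assumption; a real list should be maintained from official sources.

```python
# Sketch of the "verify before you donate" step: check that a donation link
# points at a known official domain before opening it. The allow-list below
# is an illustrative assumption; maintain a real one from official sources.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {
    "pmcares.gov.in",     # PM CARES portal
    "bharatkosh.gov.in",  # Bharat Kosh portal
}

def looks_official(url: str) -> bool:
    """Return True only if the URL's host is an allow-listed domain
    or a subdomain of one; lookalike domains fail the check."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://pmcares.gov.in/donate"))       # True
print(looks_official("https://pmcares-gov-in.example.com"))  # False
```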
Conclusion
Charity should never come at the cost of trust and safety. While donating to a good cause is noble, doing it mindfully is essential in today’s scam-prone environment. Always remember: a little caution today can save a lot tomorrow.
References
- https://economictimes.indiatimes.com/news/defence/misleading-message-circulating-on-whatsapp-related-to-donation-for-armys-modernisation-govt/articleshow/120672806.cms?from=mdr
- https://timesofindia.indiatimes.com/technology/tech-news/american-company-sacks-700-of-these-200-in-donation-scam-related-to-telugu-organisations-similar-to-firing-at-apple/articleshow/120075189.cms
- https://timesofindia.indiatimes.com/city/hyderabad/apple-fires-some-indians-over-donation-fraud-tana-under-scrutiny/articleshow/117034457.cms
- https://www.indiatoday.in/technology/news/story/man-gets-link-for-donation-and-charity-on-whatsapp-loses-over-rs-1-lakh-after-clicking-on-it-2688616-2025-03-04

Introduction
The sexual harassment of minors in cyberspace has become a matter of grave concern that needs to be addressed. Sextortion is the practice of coercing individuals into sharing explicit sexual content, or extorting money from them, under the threat of exposing such material. This grim activity has become a pervasive issue on several social media platforms, particularly Instagram. To combat it, Meta has deployed a comprehensive ‘nudity protection’ feature that leverages AI (Artificial Intelligence) algorithms to detect and limit the rapid distribution of unsolicited explicit content.
The initiative presents a multifaceted approach to improving user safety, especially for young people online, who are more vulnerable to predatory behaviour.
The Salient Feature
Instagram’s use of advanced AI algorithms to automatically identify and blur out explicit images shared within direct messages is the driving force behind this initiative. This new safety measure serves two essential purposes.
- Preventing dissemination of sensitive content: when enabled, the feature obstructs the visibility of sensitive personal pictures and limits their further dissemination.
- Empowering users, including minors, to exercise more control over their social media: the feature can be disabled at will, allowing users to regulate their exposure to age-inappropriate and harmful material online.

The nudity protection feature is enabled by default for all users under 18 on Instagram globally, guaranteeing a baseline standard of security for the most vulnerable demographic. Adults have greater autonomy over the feature and receive periodic prompts encouraging its voluntary activation. When the feature detects an explicit image, it automatically blurs it with a cautionary overlay, enabling recipients to make an informed decision about whether they wish to view the flagged content. Introducing the feature in this way is a sensitive approach to balancing individual agency with institutionalised online protection.
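To make the mechanism concrete, here is a minimal sketch of the blur-on-detection pattern described above. The classifier is a caller-supplied placeholder and the threshold is an assumed value; Meta's actual on-device model and parameters are proprietary and not public.

```python
# Minimal sketch of the blur-on-detection pattern behind nudity protection.
# The classifier is a caller-supplied placeholder and the threshold is an
# assumed value; Meta's on-device model and parameters are not public.
from typing import Callable
from PIL import Image, ImageFilter

def protect_incoming_image(
    path: str,
    classifier: Callable[[Image.Image], float],
    threshold: float = 0.8,  # assumed cutoff, not Meta's real value
) -> Image.Image:
    """Return a heavily blurred copy of a received image when the classifier
    flags it as explicit, standing in for Instagram's cautionary overlay.
    The original file is untouched, so the recipient can still reveal it."""
    img = Image.open(path)
    if classifier(img) >= threshold:
        return img.filter(ImageFilter.GaussianBlur(radius=40))
    return img

# Usage with a dummy classifier that flags every image:
# safe_view = protect_incoming_image("dm_photo.jpg", classifier=lambda im: 1.0)
```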
Comprehensive Safety Measures Beyond Nudity Detection
The cutting-edge nudity protection feature is a crucial element of Instagram’s new strategy and is supported by a comprehensive set of measures devised to tackle sextortion and ensure a safe cyber environment for its users:
Awareness Drives and Safety Tips - Users sending and receiving sexually explicit content are directed to a screen with curated safety tips to ensure complete user awareness and inspire due diligence. These safety tips are critical in raising awareness about the risks of sharing sensitive content and inculcating responsible online behaviour.
New Technology to Identify Sextortionists - Meta's platforms are constantly evolving, with newer, more sophisticated algorithms introduced to better detect accounts engaged in possible sextortion. These proactive measures screen for predatory behaviour so that threats can be neutralised before they escalate and cause grave harm.
Superior Reporting and Support Mechanisms - Instagram is implementing new technology to bolster its reporting mechanisms so that users reporting concerns pertaining to nudity, sexual exploitation and threats are instantaneously directed to local child safety authorities for necessary support and assistance.
This sophisticated approach highlights Instagram's commitment to forging a safer environment for its users by addressing the issue through a three-pronged strategy of detection, prevention and support.
User Safety and Accountability
The implementation of the nudity protection feature and various associated safety measures is Meta’s way of tackling the growing concern about user safety in a more proactive manner, especially when it concerns minors. Instagram’s experience with this feature will likely be the sandbox in which Meta tests its new user protection strategy and refines it before extending it to other platforms like Facebook and WhatsApp.
Critical Reception and Future Outlook
The nudity protection feature has been met with positive feedback from experts and online safety advocates, commending Instagram for taking a proactive stance against sextortion and exploitation. However, critics also emphasise the need for continued innovation, transparency, and accountability to effectively address evolving threats and ensure comprehensive protection for all users.
Conclusion
As digital spaces continue to evolve, Meta Platforms must demonstrate an ongoing commitment to adapting its safety measures and collaborating with relevant stakeholders to stay ahead of emerging challenges. Ongoing investment in advanced technology, user education, and robust support systems will be crucial in maintaining a secure and responsible online environment. Ultimately, Instagram's nudity protection feature represents a significant step forward in the fight against online sexual exploitation and abuse. By leveraging cutting-edge technology, fostering user awareness, and implementing comprehensive safety protocols, Meta Platforms is setting a positive example for other social media platforms to prioritise user safety and combat predatory behaviour in digital spaces.
References
- https://www.nbcnews.com/tech/tech-news/instagram-testing-blurring-nudity-messages-protect-teens-sextortion-rcna147402
- https://techcrunch.com/2024/04/11/meta-will-auto-blur-nudity-in-instagram-dms-in-latest-teen-safety-step/
- https://hypebeast.com/2024/4/instagram-dm-nudity-blurring-feature-teen-safety-info

Introduction
"In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend."
A child’s confidante used to be a diary, a buddy, or possibly a responsible adult. These days, that confidante is a chatbot: invisible, industrious, and constantly online. ChatGPT and similar tools were developed to answer queries, draft emails, and simplify life. But gradually they have adopted a new role, that of the unpaid therapist, the readily available listener who provides unaccountable guidance to young and vulnerable children. This role is frighteningly evident in the case filed in the Superior Court of the State of California, Matthew Raine & Maria Raine v. OpenAI, Inc. & Ors. The lawsuit, reviewed by the BBC, charges OpenAI with wrongful death and negligence, and requests “injunctive relief to prevent anything like this from happening again” in addition to damages.
This is a heartbreaking tale of a boy, not yet seventeen, who turned to an algorithm rather than to family and friends, and found it affirming his hopelessness rather than directing him towards professional help. OpenAI’s legal liability may well be decided in a San Francisco courtroom, but the ethical questions the case raises already extend beyond any verdict.
When Machines Mistake Empathy for Encouragement
The lawsuit claims that Adam initially used ChatGPT for academic purposes but, over time, also cast it in the role of a friend. Towards the end of 2024, he disclosed his worries about mental illness and suicidal thoughts. In an effort to “empathise”, the chatbot told him that many people find “solace” in imagining an escape hatch, normalising suicidal ideation rather than guiding him towards assistance. Where a human confidante might have rushed to alert parents, teachers, or emergency services, ChatGPT carried on the chat as if this were just another intellectual subject. The lawsuit details conversations in which the teenager uploaded photographs of himself showing signs of self-harm, and notes that the programme “recognised a medical emergency but continued to engage anyway”.
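The failure described here is, at its core, detection without escalation. The sketch below illustrates what a minimal escalation guardrail in a chatbot pipeline could look like; the risk phrases, helpline text, and function names are illustrative assumptions, not OpenAI's implementation, and production systems rely on trained classifiers rather than keyword matching.

```python
# Illustrative sketch of a self-harm escalation guardrail in a chat pipeline.
# The risk phrases, helpline text, and function names are assumptions for
# illustration; production systems use trained classifiers, not keywords.
RISK_PHRASES = ("suicide", "kill myself", "end my life", "self-harm")

HELPLINE_RESPONSE = (
    "I'm really concerned about what you've shared. Please reach out to "
    "someone who can help right now, such as a trusted adult or a suicide "
    "prevention helpline in your country."
)

def is_high_risk(message: str) -> bool:
    """Crude stand-in for a trained self-harm risk classifier."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def guarded_reply(message: str, model_reply: str) -> str:
    """Intercept high-risk messages instead of letting the model keep chatting."""
    if is_high_risk(message):
        # Escalate: stop normal generation and surface crisis resources.
        return HELPLINE_RESPONSE
    return model_reply
```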
This is not an isolated case. A report from March 2023 describes how a Belgian man allegedly died by suicide after speaking with an AI chatbot: the Belgian newspaper La Libre reported that the man, Pierre, spent six weeks discussing climate change with the AI bot Eliza, and took his own life after the discussion became “increasingly confusing and harmful”. As per a guest essay published in The New York Times, a Common Sense Media survey released last month found that 72% of American youth reported using AI chatbots as friends. Almost one in eight had turned to them for “emotional or mental health support”, which translates to 5.2 million teenagers in the US. And nearly 25% of students who used Replika, an AI chatbot created for companionship, said they used it for mental health care, according to a recent study by Stanford researchers.
The Problem of Accountability
Accountability is at the heart of this discussion: when an AI that has been created and promoted as “helpful” causes harm, who is accountable? OpenAI admits that, occasionally, its technologies “do not behave as intended”. In their complaint, the Raine family charges OpenAI with making “deliberate design choices” that encourage psychological dependence. If proven, this will be not only a landmark in AI litigation but a turning point in how society defines negligence in the digital age. Young people remain the most at risk, because they trust the chatbot as a personal confidante and are unaware that it cannot distinguish between seriousness and triviality, or between empathy and enablement.
A Prophecy: The De-Influencing of Young Minds
The prophecy of our time is stark: if children are not taught to view AI as a tool rather than a friend, we risk producing a generation too readily influenced by unaccountable voices. We must now teach young people to resist over-reliance on algorithms for concerns of the heart and mind, just as society once taught them to question advertisements, to spot propaganda, and to resist peer pressure.
Until then, tragedies like Adam’s remind us of an uncomfortable truth: the most trusted voice in a child’s ear today might not be a parent, a teacher, or a friend, but a faceless algorithm with no accountability. And that is a world we must urgently learn to change.
CyberPeace has been at the forefront of advocating the ethical and responsible use of such AI tools. The solution lies in a harmonious balance between regulation, technological development, and user awareness and responsibility.
If you or anyone you know is facing mental health concerns such as anxiety, seek professional help and actively suggest it to others. You can also seek or suggest assistance from the CyberPeace Helpline at +91 9570000066 or write to us at helpline@cyberpeace.net
References
- https://www.bbc.com/news/articles/cgerwp7rdlvo
- https://www.livemint.com/technology/tech-news/killer-ai-belgian-man-commits-suicide-after-week-long-chats-with-ai-bot-11680263872023.html
- https://www.nytimes.com/2025/08/25/opinion/teen-mental-health-chatbots.html