#FactCheck - Uncovered: Viral LA Wildfire Video is a Shocking AI-Generated Fake!
Executive Summary:
A viral post on X (formerly Twitter) has been spreading a misleading caption for a video that falsely claims to show severe wildfires in Los Angeles, echoing the real wildfires the city has faced. Using AI content detection tools, we confirmed that the footage is entirely AI-generated and not authentic. In this report, we break down the claim, fact-check the information, and summarise the misinformation that has spread with this viral clip.

Claim:
A video shared across social media platforms and messaging apps claims to show wildfires ravaging Los Angeles, suggesting an ongoing natural disaster.

Fact Check:
On close examination of the video, we noticed several discrepancies commonly seen in AI-generated footage: the flames look unnatural, the lighting is inconsistent, and there are visual glitches. We then ran the video through Hive Moderation, an online AI content detection tool, which classified it as AI-generated, indicating that the clip was deliberately created to mislead viewers. It is crucial to stay alert to such deceptions, especially on serious topics like wildfires. Being well-informed allows us to navigate a complex information landscape and distinguish real events from falsehoods.

Conclusion:
The video claiming to show wildfires in Los Angeles is AI-generated. The case again underlines the importance of taking a minute to verify information before sharing it, especially on matters of severe importance such as a natural disaster. By being careful and cross-checking sources, we can minimise the spread of misinformation and ensure that accurate information reaches those who need it most.
- Claim: The video shows real footage of the ongoing wildfires in Los Angeles, California
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: Fake Video

Introduction
In an era when everything on the internet is accessible at your fingertips, a disturbing trend is on the rise: over 90% of websites containing child sexual abuse material now include "self-generated" images, obtained from victims as young as three years old. This shocking finding was shared by the Internet Watch Foundation (IWF). The trend highlights the growing exploitation of children under the age of 10, who are coerced, blackmailed, tricked, or groomed into participating in explicit acts online. The IWF's data for 2023 reveals a record-breaking 275,655 websites hosting child sexual abuse material, 92% of which contained such "self-generated" content.
Disturbing Tactics Shift
The numbers highlight a distressing truth. In 2023, 275,655 websites were found to hold child sexual abuse content, a new record and an alarming 8% increase over the previous year. More concerning still, 92% of these websites contained "self-generated" photos or videos, produced by the victims themselves under coercion. 107,615 of these websites held content involving children under the age of ten, with 2,500 explicitly featuring children aged three to six.
Profound worries
There is deep concern about the rising incidence of images extracted through extortion or coercion from children of elementary-school age. This footage is being distributed on highly graphic, specialised websites devoted to child sexual abuse. The process often begins in a child's bedroom with a camera and continues with the exchange, dissemination, and collection of explicit content by determined offenders who engage in sexual exploitation. These criminals are ruthless. The material circulates via email, instant messaging, chat rooms, and social media platforms such as WhatsApp, Telegram, and Skype.
Live streaming of such material involves real-time broadcasts, which is another major concern: because the internet is borderless, access to this material is international, national, and regional, making it harder to identify and convict predators. As the trend grows, it has become ever easier for predators to obtain "self-generated" images and videos.
Financial Exploitation in the Shadows: The Alarming Rise of Sextortion
Global statistics reveal an extremely disturbing pattern known as "sextortion", in which adolescents are targeted and forced to pay money under the threat that explicit images will be exposed to their families, friends, or social media contacts. In its classic form, the offender's goal is sexual gratification.
The financial variant of sextortion takes a darker turn, with criminals luring children into producing sexual content and then extorting them for money. They threaten to reveal the incriminating material unless their demands are met, often in the form of gift cards, mobile payment services, wire transfers, or cryptocurrencies. Here the predators are driven primarily by monetary gain, but the psychological impact on their victims is just as devastating. In one shocking case, an 18-year-old was jailed for blackmailing a young girl, sending indecent images and videos to threaten her via Snapchat. The offender pleaded guilty.
The Question of Security
The introduction of end-to-end encryption in platforms like Facebook Messenger has triggered concerns within law enforcement agencies. While enhancing user privacy, critics argue that it may inadvertently facilitate criminal activities, particularly the exploitation of vulnerable individuals. The alignment with other encrypted services is seen as a potential challenge, making it harder to detect and investigate crimes, thus raising questions about finding a balance between privacy and public safety.
A major point of contention in children's online safety is encryption itself. Platforms defend its implementation by asserting that it enhances the security of individuals, particularly children, safeguarding them from hackers, scammers, and criminals. They underscore their dedication to safety protocols, such as prohibiting adults from messaging teenagers who do not follow them and employing technology to detect and counteract harmful conduct.
These distressing revelations highlight the urgent need for comprehensive action to protect society's most vulnerable members, namely children, youngsters, and adolescents, throughout this era of digital progress. As experts and policymakers grapple with these troubling trends, the need to safeguard kids online becomes increasingly urgent.
Role of Technology in Combating Online Exploitation
While the rise of technology has been accompanied by a rise in online child abuse, technology also serves as a powerful tool to combat it. Advanced algorithms and artificial intelligence tools can be used to detect and remove "self-generated" abuse images before they spread. Tech companies can also collaborate to develop effective solutions that safeguard every child.
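Detection pipelines of this kind often rely on hash matching against databases of previously identified material (industry systems such as PhotoDNA use perceptual hashes that survive re-encoding and resizing). A minimal illustrative sketch of the idea only, using exact cryptographic hashing rather than a real perceptual hash:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the content's SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()

def is_known_content(data: bytes, known_hashes: set) -> bool:
    # Flag content whose exact hash appears on a blocklist of
    # previously identified material.
    return sha256_of(data) in known_hashes

# Hypothetical blocklist seeded with one known item (illustration only)
blocklist = {sha256_of(b"previously-identified-content")}
print(is_known_content(b"previously-identified-content", blocklist))  # True
print(is_known_content(b"harmless-content", blocklist))               # False
```

Note that real deployments use perceptual hashing precisely because exact hashing fails on any byte-level change to a file, while re-encoded copies of abuse material must still match.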
Role of law enforcement agencies
Child abuse knows no borders, and addressing it requires legal intervention at every level. National, regional, and international law enforcement agencies investigate online child sexual exploitation and abuse and cooperate in these cybercrime investigations. Investigating agencies need mutual legal assistance and extradition arrangements, along with bilateral and multilateral conventions, to identify, investigate, and prosecute perpetrators. Cooperation between private and government agencies is equally important: sharing databases of known perpetrators can help agencies catch them.
How do you safeguard your children?
In the present scenario, protecting children online against abuse has become crucial. Here are some practical steps that can help safeguard your loved ones.
- Open communication: Establish open communication with your children, make them feel comfortable, and share your own experiences. Help them understand what safe internet surfing looks like and educate them about the possible risks without generating fear.
- Teach online safety: Educate your children about the importance of privacy and the risks of giving it up. Teach them strong privacy habits, such as never sharing personal information with strangers on social media. Show them how to create unique passwords, and warn them not to click suspicious links or download files from unknown sources.
- Set boundaries: As a parent, set rules and guidelines for internet usage, set time limits, and monitor online activities without infringing on your child's privacy. Keep an eye on their social media accounts and discuss inappropriate behaviour or online harassment. Take an interest in the websites and apps your children use, and teach them online safety measures.
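For the password habit above, here is a small sketch of what a "unique, strong password" can mean in practice: Python's `secrets` module draws from a cryptographically secure source (the 16-character length and the symbol set here are arbitrary choices, not a standard).

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # secrets (not random) is used because it is cryptographically secure
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k7!Rw@pQz2#mX9eB'
```

Generating a fresh password per account, and storing them in a password manager, avoids the reuse that attackers exploit.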
Conclusion
The predominance of "self-generated" photos in online child abuse content necessitates immediate attention and coordinated action from governments, technology corporations, and society as a whole. As we navigate the complicated environment of the digital age, we must remain watchful, adapt our techniques, and collaborate to defend the innocence of the most vulnerable among us. To combat online child exploitation, we must all work together to build a safer, more secure online environment for children around the world.
References
- https://www.the420.in/over-90-of-websites-containing-child-abuse-feature-self-generated-images-warns-iwf/
- https://news.sky.com/story/self-generated-images-found-on-92-of-websites-containing-child-sexual-abuse-with-victims-as-young-as-three-13049628
- https://www.news4hackers.com/iwf-warns-that-more-than-90-of-websites-contain-self-generated-child-abuse-images/

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and everyone from small financial companies to tech giants like Google, VISA, Mastercard, and PayPal is being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of the technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfakes report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out using deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even worse had the perpetrators decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfakes report estimates that at least 98 per cent of all deepfakes are porn and 99 per cent of the victims are women. A Harvard University study refrained from using the term “pornography” for the creation, sharing, or threatened sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn last year, the study said 63 per cent of participants described experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online and offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are broadly two types of deepfakes: one featuring the faces of real people, and another featuring computer-generated, hyper-realistic faces of people who do not exist. The first category is particularly concerning: it is created by superimposing the faces of real people onto existing pornographic images and videos, a task made simple by AI tools.
During the investigation, platforms were found hosting deepfake porn of everyone from stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna to TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality, 15-second fake porn video on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations while hiding behind frivolous disclaimers that the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is lost on no one, especially when the same platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge both for long-duration premium fake content and for creating porn of whoever a user wants, taking requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and the targets are not only celebrities: ordinary people are equally susceptible.
Access to premium fake porn, like any other paid content, requires payment. But how can a payment gateway process transactions for sexual content that lacks consent? Financial institutions and banks do not appear to be paying much attention to this legal question: during the investigation, many such websites were found accepting payments through services like VISA, Mastercard, and Stripe.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others use the personal PayPal accounts of their employees or stakeholders to collect money manually, which potentially violates the platform’s terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To ease payment processing, MakeNude.ai appears to exploit the donation ‘jar’ facility of Monobank, which people often use to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently working towards dedicated legislation to address the issues arising from deepfakes. Existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that often spans several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect users' digital personal data, and the Union Government recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark world of deepfake pornography. It is a dance as disturbing as it is fascinating, one that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It is a dance we must all be aware of, for it affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found, after fact-checking, to be fake. There is no official notice or confirmation from BARC on its website or social media handles. Using an AI content detection tool, we discovered that the image is AI-generated. In short, the viral picture does not show an authentic architectural plan for the BARC building.

Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.


Fact Check:
To begin our investigation, we checked BARC's official website for tender and NIT (Notice Inviting Tender) notifications about any new construction or renovation. However, there was no information corresponding to the claim.

Next, we checked their official social media pages on Facebook, Instagram, and X for any recent updates about a new building, and again found no information about the supposed design. To test whether the viral image could be AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool's analysis concluded that the image is AI-generated, with 100% confidence.

To be sure, we also used another AI-image detection tool called 'isitai?', which rated the image as 98.74% likely to be AI-generated.
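The two detector scores above can be combined into a single verdict. As an illustrative sketch only (the 90% threshold and the simple averaging rule are our own assumptions, not part of either tool):

```python
def aggregate_verdict(scores, threshold=90.0):
    """Average the detectors' percent scores and compare to a threshold."""
    avg = sum(scores) / len(scores)
    label = "Likely AI-generated" if avg >= threshold else "Inconclusive"
    return label, avg

# Scores from Hive's AI Classifier (100%) and isitai (98.74%)
label, avg = aggregate_verdict([100.0, 98.74])
print(label, round(avg, 2))  # Likely AI-generated 99.37
```

Agreement between independent detectors, as here, is stronger evidence than any single tool's score.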

Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, covering BARC's official channels and AI detection tools, showed that the picture is most likely AI-generated rather than an original architectural design. BARC has neither released any information nor announced any such plan, making the claim untrustworthy, as no credible source supports it.
Claim: Social media posts claim to show the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading