#FactCheck - Uncovered: Viral LA Wildfire Video is a Shocking AI-Generated Fake!
Executive Summary:
A viral post on X (formerly Twitter) has been circulating a video with misleading captions, falsely claiming to show severe wildfires in Los Angeles similar to the real wildfires affecting the city. Using AI content detection tools, we confirmed that the footage is entirely AI-generated and not authentic. In this report, we break down the claims, fact-check the information, and provide a clear summary of the misinformation that has emerged around this viral clip.

Claim:
A video shared across social media platforms and messaging apps purports to show wildfires ravaging Los Angeles, suggesting an ongoing natural disaster.

Fact Check:
On close examination of the video, we noticed several discrepancies commonly seen in AI-generated footage: the flames appear unnatural, the lighting is inconsistent, and there are visible glitches. We then checked the video with Hive Moderation, an online AI content detection tool, which indicated that the video is AI-generated, meaning it was deliberately created to mislead viewers. It is crucial to stay alert to such deceptions, especially on serious topics like wildfires. Being well-informed allows us to navigate the complex information landscape and distinguish real events from falsehoods.

Conclusion:
The video claiming to show wildfires in Los Angeles is AI-generated. This case again underlines the importance of taking a minute to verify information before sharing it, especially when the matter is of serious importance, such as a natural disaster. By being careful and cross-checking sources, we can minimise the spread of misinformation and ensure that accurate information reaches those who need it most.
- Claim: The video shows real footage of the ongoing wildfires in Los Angeles, California
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: Fake Video
Related Blogs
Introduction
In an era where organisations are increasingly interdependent through global supply chains, outsourcing and digital ecosystems, third-party risk has become one of the most vital aspects of enterprise risk management. The SolarWinds hack, the MOVEit vulnerabilities and recent software vendor attacks all serve as a reminder of the necessity to enhance Third-Party Risk Management (TPRM). As cyber risks evolve and become more sophisticated and as regulatory oversight sharpens globally, 2025 is a transformative year for the development of TPRM practices. This blog explores the top trends redefining TPRM in 2025, encompassing real-time risk scoring, AI-driven due diligence, harmonisation of regulations, integration of ESG, and a shift towards continuous monitoring. All of these trends signal a larger movement towards resilience, openness and anticipatory defence in an increasingly interconnected world.
Real-Time and Continuous Monitoring becomes the Norm
Traditional TPRM methods relied on point-in-time assessments, typically performed annually or at onboarding. By 2025, organisations are shifting towards continuous, real-time monitoring of their third-party ecosystems. Advanced tools now make it possible for companies to take a real-time pulse of their vendors' security by monitoring threat indicators, patching practices and changes in their digital footprint. This change has been further spurred by the growth of cyber supply chain attacks, in which attackers target vendors to gain access to larger organisations. Real-time monitoring tools enable timely detection of malicious activity, equipping organisations with a faster defence response. They also support dynamic risk rating instead of relying on outdated questionnaire-based scoring.
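To make the idea of a dynamic risk rating concrete, here is a minimal sketch of how continuously monitored signals could be combined into a score that is recomputed whenever new telemetry arrives. The signal names and weights are purely illustrative assumptions, not any specific product's scoring model.

```python
from dataclasses import dataclass

@dataclass
class VendorSignals:
    """Hypothetical telemetry collected continuously for one vendor."""
    exposed_services: int        # internet-facing services with known issues
    days_since_last_patch: int   # patch-cadence indicator
    leaked_credentials: int      # vendor credentials seen in breach dumps
    footprint_changes: int       # unexpected changes in the vendor's digital footprint

def dynamic_risk_score(s: VendorSignals) -> float:
    """Combine the signals into a 0-100 score; weights are illustrative."""
    raw = (
        8.0 * s.exposed_services
        + 0.5 * s.days_since_last_patch
        + 10.0 * s.leaked_credentials
        + 4.0 * s.footprint_changes
    )
    return min(100.0, raw)

# Recomputing the score as fresh telemetry arrives replaces a static,
# questionnaire-based rating with a continuously updated one.
today = VendorSignals(exposed_services=2, days_since_last_patch=40,
                      leaked_credentials=1, footprint_changes=3)
print(f"Current vendor risk score: {dynamic_risk_score(today):.1f}/100")
```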
AI and Automation in Risk Assessment and Due Diligence
Manual TPRM processes aren't sustainable anymore. In 2025, AI and machine learning are reshaping the TPRM lifecycle, from onboarding and risk classification to contract review and incident handling. AI can now analyse massive amounts of vendor documentation and automatically raise red flags on potential issues. Natural language processing (NLP) is becoming more common for automated contract intelligence, which assists in detecting risky clauses, liability gaps and data protection obligations. In addition, automation is increasing scalability for large organisations that manage hundreds or thousands of third-party relationships, reducing human error and compliance fatigue. However, all of this must be implemented with a strong focus on security, transparency, and ethical AI use to ensure that sensitive vendor and organisational data remains protected throughout the process.
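As a much-simplified illustration of automated contract review (production contract-intelligence systems rely on trained NLP models and legal review rather than keyword rules), the sketch below flags potentially risky clauses in contract text. The clause patterns and sample text are assumptions chosen purely for demonstration.

```python
import re

# Hypothetical clause patterns a reviewer might flag; real tools use trained
# NLP models rather than simple keyword rules.
RISK_PATTERNS = {
    "liability_cap": r"liability\s+(?:is\s+)?limited\s+to",
    "data_sharing": r"may\s+share\s+(?:personal\s+)?data\s+with\s+third\s+part(?:y|ies)",
    "unilateral_change": r"may\s+(?:amend|modify)\s+(?:these\s+)?terms\s+at\s+any\s+time",
}

def flag_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Return (risk_label, matched_text) pairs found in the contract text."""
    findings = []
    for label, pattern in RISK_PATTERNS.items():
        for match in re.finditer(pattern, contract_text, flags=re.IGNORECASE):
            findings.append((label, match.group(0)))
    return findings

sample = (
    "The Vendor may share personal data with third parties. "
    "Vendor liability is limited to fees paid in the preceding 12 months."
)
for label, text in flag_clauses(sample):
    print(f"[{label}] {text}")
```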
Risk Quantification and Business Impact Mapping
Risk scoring in isolation is no longer adequate. One of the major trends for 2025 is the merging of third-party risk with business impact analysis (BIA). Organisations are using tools that map vendors to particular business processes and assets, allowing a better understanding of how the compromise of a vendor would affect operations, customer information or financial position. This movement has resulted in increased use of risk quantification models, such as FAIR (Factor Analysis of Information Risk), which put dollar values on vendor-related risks. By using the language of business value, CISOs and risk officers are more effective at prioritising risks and allocating resources.
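As a simplified, FAIR-inspired illustration (the actual FAIR model decomposes frequency and magnitude into further sub-factors), the sketch below estimates an annualised loss expectancy for a single vendor from an assumed loss event frequency and loss magnitude range. All figures are hypothetical.

```python
import random

def annualised_loss_expectancy(
    loss_event_frequency: float,  # expected vendor-related loss events per year
    magnitude_low: float,         # plausible lower bound of loss per event (USD)
    magnitude_high: float,        # plausible upper bound of loss per event (USD)
    simulations: int = 10_000,
) -> float:
    """Monte Carlo estimate of the expected annual loss for one vendor."""
    total = 0.0
    for _ in range(simulations):
        # Draw a loss magnitude for a single event from an assumed uniform range.
        magnitude = random.uniform(magnitude_low, magnitude_high)
        total += loss_event_frequency * magnitude
    return total / simulations

# Hypothetical figures for a payroll-processing vendor.
ale = annualised_loss_expectancy(
    loss_event_frequency=0.2,  # roughly one incident every five years
    magnitude_low=50_000,
    magnitude_high=400_000,
)
print(f"Estimated annualised loss expectancy: ${ale:,.0f}")
```

Expressed in dollar terms this way, a vendor's risk can be weighed directly against the cost of additional controls or contractual safeguards.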
Environmental, Social, and Governance (ESG) enters into TPRM
As ESG rises on the corporate agenda, organisations are extending TPRM beyond cybersecurity and legal risks to incorporate ESG-related factors. In 2025, organisations evaluate whether their suppliers have ethical labour practices, sustainable supply chains, DEI (Diversity, Equity, Inclusion) metrics and climate impact disclosures. This expansion is not only a reputational concern; third-party non-compliance with ESG requirements can now trigger regulatory or shareholder action. ESG risk scoring tools and vendor ESG audits are becoming components of onboarding and performance evaluations.
Shared Assessments and Third-Party Exchanges
To reduce the duplication of effort created when vendors answer the same security questionnaires for multiple clients, the trend is moving toward shared assessments. Systems such as the SIG Questionnaire (Standardised Information Gathering) and the Global Vendor Exchange allow vendors to complete an assessment once and share it with many clients. This change not only simplifies the due diligence process but also enhances data accuracy, standardisation and vendor experience. In 2025, organisations are relying more and more on industry-wide vendor assurance platforms to minimise duplication, decrease costs and maximise trust.
Incident Response and Resilience Partnerships
Another rising trend is bringing vendors into incident response planning. In 2025, proactive organisations treat major vendors not merely as suppliers but as resilience partners. This encompasses shared tabletop exercises, communication procedures and breach notification SLAs. With ransomware attacks on the rise and growing cloud reliance, organisations now require vendor-side recovery plans and RTO and RPO (recovery time and recovery point objective) metrics. TPRM is transforming into a comprehensive resilience management function in which readiness, not mere compliance, takes centre stage.
Conclusion
Third-Party Risk Management in 2025 is no longer about checklists and compliance audits; it's a dynamic, intelligence-driven and continuous process. With regulatory alignment, AI automation, real-time monitoring, ESG integration and resilience partnerships leading the way, organisations are transforming their TPRM programs to address contemporary threat landscapes. As digital ecosystems grow increasingly complex and interdependent, managing third-party risk is now essential. Early adopters who invest in tools, talent and governance will be more likely to create secure and resilient businesses for the AI era.
References
- https://finance.ec.europa.eu/publications/digital-operational-resilience-act-dora_en
- https://digital-strategy.ec.europa.eu/en/policies/nis2-directive
- https://www.meity.gov.in/data-protection-framework
- https://securityscorecard.com
- https://sharedassessments.org/sig/
- https://www.fairinstitute.org/fair-model

Introduction
Misinformation spreads faster than a pimple before your best friend's wedding, and viral skincare hacks on social media can do more harm than good if smeared on without a second thought. Unverified skincare tips, exaggerated results, and product endorsements lacking proper dermatological backing can often lead to breakouts and serious skin damage.
The Allure and Risks of Online Skincare Trends
In the age of social media, beauty advice is easily accessible, but not all trending skincare hacks are beneficial. Influencers lacking professional dermatological knowledge often endorse "medical grade" skincare products, which may not be suitable for all skin types. Viral DIY skincare hacks, such as natural remedies like multani mitti (Fuller's earth), have found a new audience online. However, if such tips are followed without due care and caution regarding their suitability for different skin types, or without properly formulated ingredients, they can result in skin problems. It is crucial to approach online skincare advice with a critical eye, as not all trends are backed by scientific research.
CyberPeace Recommendations
- Influencer Responsibility and Ethical Endorsements in Skincare
Influencers play a crucial role in shaping public perception in the skincare and lifestyle industries. However, they must exercise due diligence before endorsing skincare products or practices, as misinformation can lead to financial loss and health consequences. Influencers should only promote products they have personally tested or that have been vetted by dermatologists or skincare professionals. They should also research the brand's credibility, check ingredients for safety, and understand the product's target audience.
- Strengthening Digital Literacy in Skincare Spaces
CyberPeace highlights that improving digital literacy is one of the best strategies to stop the spread of false information about skincare. Users today, particularly young people, are continuously exposed to a deluge of wellness and beauty content. Without adequate digital literacy, many people are duped by overstated claims, pseudoscientific cures, and influencer-driven marketing masquerading as sound advice. We recommend supporting digital literacy initiatives that teach users how to evaluate sources, think critically, and understand how algorithms promote content. Influencer partnerships, gamified learning modules, and community workshops that promote media literacy can help achieve long-term impact.
- Recommendation for Users to Prioritise Research and Critical Thinking
Users should prioritise research and critical thinking when engaging with skincare content online. It's crucial to distinguish between valid advice and misinformation. Thorough research, including expert reviews, ingredient checks, and scientific sources, is essential. Questioning endorsements and relying on trusted platforms and dermatologists can help ensure a skincare routine based on sound practices.
- Mandating Transparency from Influencers and Brands
Enforcing stronger transparency laws for influencers and skincare companies is a key suggestion. Social media influencers frequently neglect to disclose sponsored collaborations or paid advertisements, giving followers the impression that the skincare advice is based on the creators' own experience and objective judgment. This dishonest practice frequently promotes products with little to no scientific support and feeds false information. Social media companies need to be proactive in identifying and removing content that violates disclosure and advertising guidelines.
- Creating a Verified Registry for Skincare Professionals
Amplifying the voices of genuine experts is one of the most important strategies to build credibility and trust online. Cybersecurity experts and medical professionals suggest the establishment of a publicly available, validated registry of certified dermatologists, cosmetologists, and skincare scientists. These experts could then receive a "verified expert" badge from social media companies, making it easier for users to distinguish content created by unqualified people from genuine, evidence-based advice. Algorithms that promote such verified content would help limit the dissemination of false information.
- Enforcing Platform Accountability and Reporting System
There needs to be platform-level accountability and safeguard mechanisms in case of any false information about skincare. Platforms should monitor repeat offenders and implement a tiered penalty system that includes content removal and temporary or permanent bans on such malicious user profiles.

Introduction
On March 12, 2024, the Ministry of Corporate Affairs (MCA) proposed the draft Digital Competition Bill to curb anti-competitive practices of tech giants through ex-ante regulation. The Draft Digital Competition Bill is to apply to ‘Core Digital Services,’ with the Central Government having the authority to update the list periodically. The proposed list in the Bill encompasses online search engines, online social networking services, video-sharing platforms, interpersonal communications services, operating systems, web browsers, cloud services, advertising services, and online intermediation services.
The primary highlight of the Digital Competition Law Report, prepared by the Committee on Digital Competition Law and presented to Parliament in the second week of March 2024, is a recommendation to introduce new legislation called the ‘Digital Competition Act,’ intended to strike a balance between certainty and flexibility. The report identified ten anti-competitive practices relevant to digital enterprises in India, including anti-steering, platform neutrality/self-preferencing, bundling and tying, data usage (use of non-public data), pricing/deep discounting, exclusive tie-ups, search and ranking preferencing, restricting third-party applications, and advertising policies.
Key Take-Aways: Digital Competition Bill, 2024
- The Bill lays down qualitative and quantitative criteria for identifying Systemically Significant Digital Enterprises (SSDEs); an enterprise is designated if it meets any of the specified thresholds.
- Financial thresholds in each of the immediately preceding three financial years, such as turnover in India, global turnover, gross merchandise value in India, or global market capitalisation.
- User thresholds in India in each of the immediately preceding three financial years: the Core Digital Service provided by the enterprise has at least 1 crore end users, or at least 10,000 business users (an illustrative threshold check follows this list).
- The Commission may make the designation based on other factors such as the size and resources of an enterprise, number of business or end users, market structure and size, scale and scope of activities of an enterprise and any other relevant factor.
- A period of 90 days is provided to notify the CCI of qualification as an SSDE. Additionally, the enterprise must also notify the Commission of other enterprises within the group that are directly or indirectly involved in the provision of Core Digital Services, as Associate Digital Enterprises (ADEs); the designation is valid for three years.
- It prescribes obligations for SSDEs and their ADEs upon designation. The enterprise must comply with certain obligations regarding Core Digital Services, and non-compliance with the same shall result in penalties. Enterprises must not directly or indirectly prevent or restrict business users or end users from raising any issue of non-compliance with the enterprise’s obligations under the Act.
- SSDEs must avoid favouring their own offerings, or those of related parties or third parties with whom they have arrangements for the manufacture and sale of products or provision of services, over those offered by third-party business users on the Core Digital Service in any manner.
- The Commission will have the same powers as those vested in a civil court under the Code of Civil Procedure, 1908 when trying a suit.
- Non-compliance without reasonable cause may attract a penalty of up to Rs 1 lakh for each day of continued non-compliance (subject to a maximum of Rs 10 crore). Continued failure may be punishable with imprisonment for a term that may extend to three years, or with a fine that may extend to Rs 25 crore, or with both. The Commission may also impose a penalty on an enterprise (not exceeding 1% of its global turnover) if it provides incorrect, incomplete or misleading information, or fails to provide information.
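As a purely illustrative sketch of how the user-threshold criteria above could be checked (the financial thresholds would be tested the same way, but their specific values are not reproduced here, and actual SSDE designation also weighs the qualitative factors the Commission may consider), consider the following:

```python
CRORE = 10_000_000  # 1 crore = 10 million

def meets_user_thresholds(end_users_india: int, business_users_india: int) -> bool:
    """Illustrative check of the user thresholds quoted above, assuming the
    figures already represent each of the three preceding financial years."""
    return end_users_india >= 1 * CRORE or business_users_india >= 10_000

# Hypothetical enterprises, for demonstration only.
print(meets_user_thresholds(end_users_india=12_000_000, business_users_india=4_000))  # True
print(meets_user_thresholds(end_users_india=6_000_000, business_users_india=9_500))   # False
```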
Suggestions and Recommendations
- The ex-ante model of regulation needs to be examined for the Indian scenario, and studies need to be conducted on how it has worked previously in other jurisdictions such as the EU.
- The Bill should focus exclusively on fostering fair competition by preventing monopolistic practices in digital markets. A clear functional distinction from the existing Competition Act, 2002 needs to be drawn so that the regulations do not overlap and enterprises are not exposed to double jeopardy.
- Restrictions on tying and bundling and data usage have been shown to negatively impact MSMEs that rely significantly on big tech to reduce operational costs and enhance customer outreach.
- Clear definitions of "dominant position" and "anti-competitive behaviour" in the digital competition context are essential for effective enforcement.
- Encouraging innovation while safeguarding consumer data privacy in consonance with the DPDP Act should be the aim. Promoting interoperability and transparency in algorithms can prevent discriminatory practices.
- Regular reviews and stakeholder consultations will ensure the law adapts to rapidly evolving technologies.
- Collaboration with global antitrust bodies should be pursued to enhance cross-border regulatory coherence and effectiveness.
Conclusion
A competition law focused exclusively on digital enterprises is the need of the hour, and the Committee has accordingly recommended enacting the Digital Competition Act to enable the CCI to selectively regulate large digital enterprises. The proposed legislation should be restricted to enterprises that have a significant presence in, and the ability to influence, the Indian digital market. The law's impact should remain confined to digital enterprises and should not encroach upon matters outside the digital arena. India's proposed Digital Competition Bill aims to promote competition and fairness in the digital market by addressing the anti-competitive practices and abuses of dominant position prevalent in the digital business space. The Ministry of Corporate Affairs has received 41 pages of public feedback on the draft, which is expected to be tabled in Parliament next year.
References
- https://www.medianama.com/wp-content/uploads/2024/03/DRAFT-DIGITAL-COMPETITION-BILL-2024.pdf
- https://prsindia.org/files/policy/policy_committee_reports/Report_Summary-Digital_Competition_Law.pdf
- https://economictimes.indiatimes.com/tech/startups/meity-meets-india-inc-to-hear-out-digital-competition-law-concerns/articleshow/111091837.cms?from=mdr
- https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open
- https://www.barandbench.com/law-firms/view-point/digital-competition-laws-beginning-of-a-new-era
- https://www.linkedin.com/pulse/policy-explainer-digital-competition-bill-nimisha-srivastava-lhltc/
- https://www.lexology.com/library/detail.aspx?g=5722a078-1839-4ece-aec9-49336ff53b6c