# Factcheck: False Claims of Houthi Attack on Israel’s Ashkelon Power Plant
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring an image inaccurately asserting that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from a prior incident in Saudi Arabia. This situation underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search revealed that the circulating video does not show an attack on the Ashkelon power plant in Israel. Instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.
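The keyframe step above can be sketched with a simple frame-difference heuristic. This is a minimal illustration only: the frame format, threshold, and selection logic are assumptions, and real workflows would typically decode the video with a library such as OpenCV before running a reverse image search on the selected frames.

```python
def select_keyframes(frames, threshold=30.0):
    """Pick frames that differ substantially from the last selected one.

    frames: list of equal-length lists of grayscale pixel values (0-255).
    Returns indices of selected keyframes; frame 0 is always included.
    """
    if not frames:
        return []
    keyframes = [0]
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        # Mean absolute pixel difference from the last selected keyframe.
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keyframes.append(i)
            last = frame
    return keyframes

# Three toy "frames": the second is near-identical to the first,
# the third is a sharply different scene.
frames = [[10] * 4, [12] * 4, [200] * 4]
print(select_keyframes(frames))  # → [0, 2]
```

Each selected keyframe would then be uploaded to a reverse image search tool such as Google Lens to trace the footage back to its original context.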

This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of exercising caution and relying on trusted fact-checking sources before sharing unverified media.
- Claim: The video shows massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (formerly Twitter)
- Fact Check: False and Misleading

Introduction
In September 2024, the Australian government introduced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (CLA Bill 2024 hereon) to provide new powers to the Australian Communications and Media Authority (ACMA), the statutory regulatory body for Australia's communications and media infrastructure, to combat online misinformation and disinformation. It proposed allowing the ACMA to hold digital platforms accountable for the “seriously harmful mis- and disinformation” being spread on their platforms and their response to it, while also balancing freedom of expression. However, the Bill was subsequently withdrawn, primarily over concerns regarding the possibility of censorship by the government. This development is reflective of the global contention on the balance between misinformation regulation and freedom of speech.
Background and Key Features of the Bill
According to the BBC’s Global Minds Survey of 2023, nearly 73% of Australians struggled to identify fake news and AI-generated misinformation. Misinformation on platforms like Facebook, Twitter, and TikTok has risen substantially since the COVID-19 pandemic, especially during major events like the bushfires of 2020 and the 2022 federal elections. Against this background, the government launched its campaign against misinformation with The Australian Code of Practice on Disinformation and Misinformation in 2021. The main provisions of the CLA Bill 2024 were:
- Core Transparency Obligations of Digital Media Platforms: Publishing current media literacy plans, risk assessment reports, and policies or information on their approach to addressing mis- and disinformation. The ACMA would also be allowed to make additional rules regarding complaints and dispute-handling processes.
- Information Gathering and Record-Keeping Powers: The ACMA would form rules allowing it to gather consistent information across platforms and publish it. However, it would not have been empowered to gather and publish user information except in limited circumstances.
- Approving Codes and Making Standards: The ACMA would have powers to approve codes developed by the industry and make standards regarding reporting tools, links to authoritative information, support for fact-checking, and demonetisation of disinformation. This would make compliance mandatory for relevant sections of the industry.
- Parliamentary Oversight: The transparency obligations, codes approved and standards set by ACMA under the Bill would be subject to parliamentary scrutiny and disallowance. ACMA would be required to report to the Parliament annually.
- Freedom of Speech Protections: End-users would not be required to produce information for the ACMA unless they were persons providing services to the platform, such as its employees or fact-checkers. Further, the ACMA would not be empowered to order the removal of content from platforms unless it involved inauthentic behavior such as bot activity.
- Penalties for Non-Compliance: ACMA would be required to employ a “graduated, proportionate and risk-based approach” to non-compliance and enforcement in the form of formal warnings, remedial directions, injunctions, or significant civil penalties as decided by the courts, subject to review by the Administrative Review Tribunal (ART). No criminal penalties would be imposed.
Key Concerns
- Inadequacy of Freedom of Speech Protections: The biggest contention on this Bill has been regarding the issue of possible censorship, particularly of alternative opinions that are crucial to the health of a democratic system. To protect the freedom of speech, the Bill defined mis- and disinformation, what constitutes “serious harm” (election interference, harming public health, etc.), and what would be excluded from its scope. However, reservations among the Opposition persisted due to the lack of a clear mechanism to protect divergent opinions from the purview of this Bill.
- Efficacy of Regulatory Measures: Many argue that by allowing the digital platform industry to draft its own codes, the law would effectively let it self-police. Big Tech companies have little incentive to curb misinformation, since their business models profit from its rampant spread; without financial disincentives, they are unlikely to address the problem on a war footing, and the law would risk being toothless. Secondly, the Bill did not require platforms to report on the “prevalence” of false content, a metric that, along with others, is crucial for researchers and legislators tracking the efficacy of the misinformation-curbing practices platforms currently employ.
- Threat of Government Overreach: The Bill sought to expand the ACMA’s compliance and enforcement powers concerning misinformation and disinformation on online communication platforms by giving it powers to form rules on information gathering, code registration, standard-making powers, and core transparency obligations. However, even though the ACMA as a regulatory authority is answerable to the Parliament, the Bill was unclear in defining limits to these powers. This raised concerns from civil society about potential government overreach in a domain filled with contextual ambiguities regarding information.
Conclusion
While the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill sought to equip the ACMA with tools to hold digital platforms accountable and mitigate the harm caused by false information, the critique it drew highlights the complexity of regulating such content without infringing on freedom of speech. Legislation and proposals worldwide face the same challenge, underscoring the need for continuous discourse at the intersection of platform accountability, regulatory restraint, and the protection of diverse viewpoints.
To regulate Big Tech effectively, governments can benefit from adopting a consultative, incremental, and cooperative approach, as exemplified by the European Union’s Digital Services Act 2023. Such a framework provides for a balanced response, fostering accountability while safeguarding democratic freedoms.
Resources
- https://www.infrastructure.gov.au/sites/default/files/documents/factsheet-misinformation-disinformation-bill.pdf
- https://www.infrastructure.gov.au/have-your-say/new-acma-powers-combat-misinformation-and-disinformation
- https://www.mi-3.com.au/07-02-2024/over-80-australians-feel-they-may-have-fallen-fake-news-says-bbc
- https://www.hrlc.org.au/news/misinformation-inquiry
- https://humanrights.gov.au/our-work/legal/submission/combatting-misinformation-and-disinformation-bill-2024
- https://www.sbs.com.au/news/article/what-is-the-misinformation-bill-and-why-has-it-triggered-worries-about-freedom-of-speech/4n3ijebde
- https://www.hrw.org/report/2023/06/14/no-internet-means-no-work-no-pay-no-food/internet-shutdowns-deny-access-basic
- https://www.hrlc.org.au/submissions/2024/11/8/submission-combatting-misinformation

Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day has gone viral with claims that snoring can be heard in the background. The CyberPeace Research Team found that the viral video was digitally edited to add the snoring sound. The original video, published on Arshad's Instagram account, has no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing Independence Day to the people of Pakistan, with snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly examined the video and then analyzed it in TrueMedia, an AI video-detection tool, which found little evidence of manipulation in the voice or the face. This indicates the footage itself is authentic and that the snoring sound was added separately to the audio track.


We then checked Arshad Nadeem’s social media accounts and found the video uploaded to his Instagram account on 14th August 2024. In that video, no snoring sound can be heard.

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is false. CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading

Overview:
In today’s digital landscape, safeguarding personal data and communications is more crucial than ever. WhatsApp, as one of the world’s leading messaging platforms, consistently enhances its security features to protect user interactions, offering a seamless and private messaging experience.
App Lock: Secure Access with Biometric Authentication
To fortify security at the device level, WhatsApp offers an app lock feature, enabling users to protect their app with biometric authentication such as fingerprint or Face ID. This feature ensures that only authorized users can access the app, adding an additional layer of protection to private conversations.
How to Enable App Lock:
- Open WhatsApp and navigate to Settings.
- Select Privacy.
- Scroll down and tap App Lock.
- Activate Fingerprint Lock or Face ID and follow the on-screen instructions.

Chat Lock: Restrict Access to Private Conversations
WhatsApp allows users to lock specific chats, moving them to a secured folder that requires biometric authentication or a passcode for access. This feature is ideal for safeguarding sensitive conversations from unauthorized viewing.
How to Lock a Chat:
- Open WhatsApp and select the chat to be locked.
- Tap on the three dots (Android) or More Options (iPhone).
- Select Lock Chat.
- Enable the lock using Fingerprint or Face ID.

Privacy Checkup: Strengthening Security Preferences
The privacy checkup tool assists users in reviewing and customizing essential security settings. It provides guidance on adjusting visibility preferences, call security, and blocked contacts, ensuring a personalized and secure communication experience.
How to Run Privacy Checkup:
- Open WhatsApp and navigate to Settings.
- Tap Privacy.
- Select Privacy Checkup and follow the prompts to adjust settings.

Automatic Blocking of Unknown Accounts and Messages
To combat spam and potential security threats, WhatsApp automatically restricts unknown accounts that send excessive messages. Users can also manually block or report suspicious contacts to further enhance security.
How to Manage Blocking of Unknown Accounts:
- Open WhatsApp and go to Settings.
- Select Privacy.
- Tap Advanced.
- Enable Block unknown account messages.

IP Address Protection in Calls
To prevent tracking and enhance privacy, WhatsApp provides an option to hide IP addresses during calls. When enabled, calls are routed through WhatsApp’s servers, preventing location exposure via direct connections.
How to Enable IP Address Protection in Calls:
- Open WhatsApp and go to Settings.
- Select Privacy, then tap Advanced.
- Enable Protect IP Address in Calls.

Disappearing Messages: Auto-Deleting Conversations
Disappearing messages help maintain confidentiality by automatically deleting sent messages after a predefined period—24 hours, 7 days, or 90 days. This feature is particularly beneficial for reducing digital footprints.
How to Enable Disappearing Messages:
- Open the chat and tap the Chat Name.
- Select Disappearing Messages.
- Choose the preferred duration before messages disappear.
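The auto-deletion idea behind disappearing messages can be sketched as a simple time-to-live (TTL) store. This is purely illustrative of the concept, not WhatsApp's actual implementation (which enforces deletion on-device per chat setting); the class and method names are invented for the example.

```python
import time

class DisappearingChat:
    """Toy message store that drops messages older than a TTL."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._messages = []  # list of (timestamp, text) pairs

    def send(self, text, now=None):
        self._messages.append((now if now is not None else time.time(), text))

    def read(self, now=None):
        now = now if now is not None else time.time()
        # Keep only messages still within the TTL window.
        self._messages = [(t, m) for t, m in self._messages
                          if now - t < self.ttl]
        return [m for _, m in self._messages]

# A 24-hour TTL, with explicit timestamps for reproducibility.
chat = DisappearingChat(ttl_seconds=24 * 3600)
chat.send("hello", now=0)
chat.send("still here", now=20 * 3600)
print(chat.read(now=25 * 3600))  # → ['still here']
```

The first message is past the 24-hour window at read time and is silently dropped, mirroring how messages vanish after the chosen duration (24 hours, 7 days, or 90 days).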

View Once: One-Time Access to Media Files
The ‘View Once’ feature ensures that shared photos and videos can only be viewed a single time before being automatically deleted, reducing the risk of unauthorized storage or redistribution.
How to Send View Once Media:
- Open a chat and tap the attachment icon.
- Choose Camera or Gallery to select media.
- Tap the ‘1’ icon before sending the media file.

Group Privacy Controls: Manage Who Can Add You
WhatsApp provides users with the ability to control group invitations, preventing unwanted additions by unknown individuals. Users can restrict group invitations to ‘Everyone,’ ‘My Contacts,’ or ‘My Contacts Except…’ for enhanced privacy.
How to Adjust Group Privacy Settings:
- Open WhatsApp and go to Settings.
- Select Privacy and tap Groups.
- Choose from the available options: Everyone, My Contacts, or My Contacts Except.

Conclusion
WhatsApp continuously enhances its security features to protect user privacy and ensure safe communication. With tools like App Lock, Chat Lock, Privacy Checkup, IP Address Protection, and Disappearing Messages, users can safeguard their data and interactions. Features like View Once and Group Privacy Controls further enhance confidentiality. By enabling these settings, users can maintain a secure and private messaging experience, effectively reducing risks associated with unauthorized access, tracking, and digital footprints. Stay updated and leverage these features for enhanced security.