# Factcheck: False Claims of Houthi Attack on Israel’s Ashkelon Power Plant
Research Wing
Innovation and Research
PUBLISHED ON
Jan 4, 2025
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring an image inaccurately asserting that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from a prior incident in Saudi Arabia. This situation underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.
Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the circulating video does not show an attack on the Ashkelon power plant in Israel; instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.
This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented; it actually shows a 2022 incident in Saudi Arabia. This underscores the importance of exercising caution before sharing unverified media.
Claim: The video shows massive fire at Israel's Ashkelon power plant
Claimed On: Instagram and X (Formerly Known As Twitter)
YouTube is one of the best-known platforms for video producers, and it offers a real chance of generating substantial revenue. Content producers need views, likes, comments, and subscribers to grow their videos and channels. As a result, some people use YouTube bots to artificially inflate their rankings on the platform, hoping to gain more organic views and reach a larger audience. However, this strategy is typically seen as unfair and can violate YouTube’s terms of service.
As YouTube grows in popularity, so does the use of YouTube bots. These bots are software programs that can automate operations on the platform, such as watching, liking, or disliking videos, subscribing to or unsubscribing from channels, posting comments, and adding videos to playlists. YouTube bots have been around for a while: many YouTubers use them to inflate the view counts on their videos and accounts, which helps their content rank higher in YouTube’s algorithm. Researchers have now discovered a new bot that steals private information from YouTube users’ accounts.
CRIL (Cyble Research and Intelligence Labs) has been monitoring new and active malware families. CRIL has discovered a new YouTube bot malware capable of viewing, liking, and commenting on YouTube videos. Furthermore, it can steal sensitive information from browsers and act as a bot that accepts orders from a Command and Control (C&C) server to carry out other harmful operations.
The Bot Insight
This YouTube bot has the same capabilities as other YouTube bots, including the ability to view, like, and comment on videos. Additionally, it can steal private data from browsers and act as a bot that takes commands from a Command and Control (C&C) server for various malicious purposes. Cyble researchers examined the bot’s inner workings using a sample with the hash (SHA256) e9dac8b677a670e70919730ee65ab66cc27730378b9233d944ad7879c530d312. They found that it is a 32-bit executable built with the .NET compiler.
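A first step in this kind of analysis is fingerprinting the sample by its SHA256 hash so it can be matched against threat-intelligence databases. A minimal Python sketch of how such a digest is computed (the file path would be the analyst's local copy of the sample):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def sha256_of_bytes(data: bytes) -> str:
    """Same digest for in-memory data, useful for quick checks."""
    return hashlib.sha256(data).hexdigest()
```

The resulting 64-character hex digest uniquely identifies the sample, which is why reports like this one quote it verbatim.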
The virus runs an AntiVM check as soon as it is executed to thwart researchers’ attempts to find and analyze malware in a virtual environment.
It stops execution if it finds that it is running in a controlled (virtual) environment. Otherwise, it carries out the tasks specified in its argument strings.
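Anti-VM checks of this kind typically rely on simple heuristics, such as looking for MAC-address prefixes (OUIs) registered to virtualization vendors. The sketch below is an illustrative example of that general technique, not the malware's actual logic; the prefix list is a small, well-known subset:

```python
# OUI prefixes registered to common virtualization vendors (illustrative subset).
VM_MAC_PREFIXES = {
    "00:05:69",  # VMware
    "00:0C:29",  # VMware
    "00:50:56",  # VMware
    "08:00:27",  # VirtualBox
}

def looks_like_vm(mac_address: str) -> bool:
    """Return True if the MAC address starts with a known VM vendor prefix."""
    return mac_address.upper()[:8] in VM_MAC_PREFIXES
```

Real malware usually combines several such checks (registry keys, running processes, hardware identifiers) before deciding to abort.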
Additionally, the malware creates a mutex, copies itself to the %appdata% folder as AvastSecurity.exe, and then runs the copy using cmd.exe. It also makes a Task Scheduler entry, which helps ensure the malware persists on the system, while the mutex helps ensure that only one instance runs at a time.
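A named mutex is the standard single-instance guard on Windows. A portable Python analogue of the same idea uses an exclusive lock file; this is an illustration of the concept, not the malware's code, and the lock file name is hypothetical:

```python
import os
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "example_app.lock")  # hypothetical name

def acquire_single_instance_lock(path: str = LOCK_PATH) -> bool:
    """Try to create the lock file exclusively; False means another instance holds it."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_single_instance_lock(path: str = LOCK_PATH) -> None:
    """Remove the lock file so the next launch can acquire it."""
    try:
        os.remove(path)
    except FileNotFoundError:
        pass
```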
The AvastSecurity.exe program then harvests cookies, autofill entries, and saved login credentials from the Chromium-based browsers installed on the victim’s system.
To view the chosen video, the malware runs its YouTube Playwright function, passing the previously mentioned arguments along with the browser’s path and the stolen cookie data.
The YouTube bot uses this Playwright function to launch the browser environment with the specified parameters and automate actions such as watching, liking, and commenting on YouTube videos. The feature depends on the Microsoft.Playwright package.
The malware establishes a connection to a C2 server and receives instructions to erase the scheduled-task entry and terminate its own process, upload log files to the C2 server, download and run other files, and start or stop watching a YouTube video.
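Bots of this kind commonly implement command handling as a dispatch table mapping command names received from the C2 server to handler functions. The harmless sketch below illustrates that pattern; the command names loosely follow the behaviors described above, and the handlers are inert stubs:

```python
# Inert stub handlers standing in for the behaviors a bot might implement.
def handle_uninstall() -> str:
    return "removed scheduled task and exited"

def handle_upload_logs() -> str:
    return "log files sent to server"

def handle_download_and_run() -> str:
    return "payload fetched and executed"

def handle_view() -> str:
    return "started watching video"

# Dispatch table: command name -> handler.
COMMANDS = {
    "uninstall": handle_uninstall,
    "upload_logs": handle_upload_logs,
    "download": handle_download_and_run,
    "view": handle_view,
}

def dispatch(command: str) -> str:
    """Look up the handler for a C2 command and run it."""
    handler = COMMANDS.get(command)
    if handler is None:
        return f"unknown command: {command}"
    return handler()
```

This structure is also why such bots are easy to extend: adding a new capability is just another entry in the table.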
Additionally, it verifies that the victim’s PC has the required dependencies, including the Playwright package and the Chrome browser, installed. When it gets the command “view,” it will download and install these dependencies if they are missing.
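The dependency check described above boils down to two lookups: whether a package is importable and whether a browser binary is on the PATH. A Python sketch of that pattern (the package and binary names passed in are illustrative, not the malware's):

```python
import importlib.util
import shutil

def has_package(name: str) -> bool:
    """True if the Python package/module can be imported (is installed)."""
    return importlib.util.find_spec(name) is not None

def has_binary(name: str) -> bool:
    """True if the executable is found on the system PATH."""
    return shutil.which(name) is not None

def missing_dependencies(packages: list[str], binaries: list[str]) -> list[str]:
    """Return the names of any packages or binaries that are not present."""
    missing = [p for p in packages if not has_package(p)]
    missing += [b for b in binaries if not has_binary(b)]
    return missing
```

A bot would then install anything reported missing before proceeding, which matches the "view" behavior described above.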
Recommendations
The following is a list of some of the most critical cybersecurity best practices that serve as the first line of defense against intruders. We recommend that our readers follow the advice below:
Avoid downloading pirated software from warez/torrent websites. Such malware is commonly bundled with “hack tools” promoted on YouTube, pirate sites, etc.
When feasible, use strong passwords and impose multi-factor authentication.
Enable automatic software updates on your laptop, smartphone, and other linked devices.
Use a reputable antivirus and internet security software package on your linked devices, such as your computer, laptop, and smartphone.
Avoid clicking on suspicious links and opening email attachments without verifying that they are legitimate. Educate staff members on how to guard against threats like phishing and unsafe URLs.
Block URLs, such as torrent/warez sites, that might be used to propagate malware. To prevent malware or threat actors (TAs) from exfiltrating data, monitor beaconing activity at the network level.
Conclusion
Using YouTube bots may be a tempting strategy for content producers looking to boost their rankings and expand their viewership on the platform. However, the use of bots is generally regarded as unfair and may violate YouTube’s terms of service. It also carries additional risk: bots can be detected, which could lead to suspension or termination of the user’s account. This pressing issue is best mitigated through awareness drives and surveys that identify the root causes. Nonprofits and civil society organizations can bridge the gap between the tech giant and end users to build better awareness of these little-known bots.
A video of Prime Minister Narendra Modi is going viral across multiple social media platforms. In the clip, PM Modi is purportedly heard praising Christianity and stating that only Jesus Christ can lead people to heaven. Several users are sharing and commenting on the video, believing it to be genuine. The CyberPeace Research Team investigated the viral claim and found it to be false: the circulating video was created using artificial intelligence (AI).
Claim:
On January 29, 2026, a Facebook user named ‘Khaju Damor’ posted the viral video of PM Modi. The post gained traction, with many users sharing and commenting on it as if it were authentic. (Links and archived versions provided)
As part of our research, we first closely examined the viral video. On careful observation, several inconsistencies were noticed: the Prime Minister’s facial expressions and hand movements appeared unnatural, and the lip-sync and overall visual presentation also suggested the clip had been digitally manipulated. To verify this further, we analyzed the video using the AI detection tool Hive Moderation. The tool’s analysis indicated a 99% probability that the video was AI-generated.
To independently confirm the findings, we also ran the clip through another detection platform, Undetectable.ai. Its analysis likewise indicated a very high likelihood that the video was created using artificial intelligence.
Conclusion:
Our research confirms that the viral video of Prime Minister Narendra Modi praising Christianity and making the alleged statement about heaven is fake. The clip has been generated using AI tools and does not depict a real statement made by the Prime Minister.
A video clip circulating on social media allegedly shows the Hon’ble President of India, Smt. Droupadi Murmu, TV anchor Anjana Om Kashyap, and the Hon’ble Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, promoting a medicine for diabetes. The CyberPeace Research Team conducted a thorough investigation and found the claim to be untrue. The video was digitally edited, with original footage of these prominent figures altered to falsely suggest their endorsement of the medication. Specific discrepancies in the lip movements and the context of the clips indicated AI manipulation, and the people featured were actually discussing unrelated topics in the original footage. The claim that the video shows them endorsing a diabetes drug is therefore debunked: the video is an AI creation and does not reflect any genuine promotion. The fabrication was also flagged by AI voice detection tools.
Claims:
A video making the rounds on social media purports to show the Hon'ble President of India, Smt. Droupadi Murmu, TV anchor Anjana Om Kashyap, and the Hon'ble Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, endorsing a diabetes medicine.
Upon receiving the post, we carefully watched the video and found discrepancies between the lip movements and the words we hear. The voice of the Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, also sounds suspicious, which clearly indicates fabrication. In the video, the Hon'ble President of India, Smt. Droupadi Murmu, can be heard endorsing a medicine that supposedly cured her diabetes. We then divided the video into keyframes and reverse-searched one of them, landing on a video uploaded by Aaj Tak on its official YouTube channel.
In a segment resembling the viral video, the courtesy line reads “Sansad TV”. Taking a cue from this, we ran some keyword searches and found another video uploaded by the Sansad TV YouTube channel. That video contains no mention of any diabetes medicine; it is actually the swearing-in ceremony of the Hon’ble President of India, Smt. Droupadi Murmu.
In the second part of the clip, a man is introduced as Dr. Abhinash Mishra, who allegedly invented the medicine that cures diabetes. We reverse-searched his image and landed on a CNBC news website, where the same face is identified as Dr. Atul Gawande, a professor at the Harvard School of Public Health. We watched that video and found no sign of him endorsing or discussing any diabetes medicine he invented.
We also extracted the audio from the viral video and analyzed it using the AI audio detection tool Eleven Labs, which found the audio very likely to have been created with an AI voice generation tool, with a probability of 98%.
Hence, the claim made in the viral video is false and misleading. The video has been digitally edited from different clips, and the audio was generated with an AI voice creation tool to mislead netizens. It is worth noting that we have previously debunked similar voice-altered news carrying bogus claims.
Conclusion:
In conclusion, the viral video claiming that the Hon'ble President of India, Smt. Droupadi Murmu, and the Chief Minister of Uttar Pradesh, Shri Yogi Adityanath, promoted a diabetes medicine that cured their diabetes is false. A thorough investigation found that the video was digitally edited from different clips: the footage of the Hon'ble President was taken from the oath-taking ceremony of the 15th President of India, and the claimed doctor “Abhinash Mishra” appears in a CNBC News video, where he is identified as Dr. Atul Gawande, a professor at the Harvard School of Public Health. Online users must be careful with such posts and should verify them before sharing.
Claim: A video is being circulated on social media claiming to show distinguished individuals promoting a particular medicine for diabetes treatment.
Claimed on: Facebook
Fact Check: Fake & Misleading