#FactCheck - Edited Video Falsely Claims an Attack on PM Netanyahu in the "Israeli Senate"
Executive Summary:
A viral online video claims to show an attack on Prime Minister Benjamin Netanyahu in the "Israeli Senate". The CyberPeace Research Team has confirmed that the video is fake: it was created with video editing tools that splice two unrelated clips together, distorting the meaning of the original footage and attaching a false claim to it. The original footage has no connection to any attack on Mr. Netanyahu. The claim is therefore false and misleading.

Claims:
A viral video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate.


Fact Check:
Upon receiving the viral posts, we conducted a Reverse Image Search on keyframes extracted from the video. The search led us to several legitimate news sources covering an attack on an ethnic Turkish party leader in Bulgaria; none of them reported any attack on Prime Minister Benjamin Netanyahu.

We also analysed the video with AI detection tools such as TrueMedia.org. The analysis flagged the clip as edited with 68.0% confidence. The tools identified "substantial evidence of manipulation", pointing in particular to an abrupt change in the graphics quality of the footage and a break in its flow where the overall background environment changes.



Additionally, an extensive review of official statements from the Knesset revealed no mention of any such incident. No credible reports link the Israeli PM to any such attack, further confirming the video's inauthenticity.
Conclusion:
The viral video claiming to show an attack on Prime Minister Netanyahu is old footage that has been edited. Analysis with AI detection tools confirms that the video was manipulated by splicing together unrelated clips, and no official source reports any such incident. The CyberPeace Research Team therefore confirms that the video was fabricated with video editing tools, making the claim false and misleading.
- Claim: Attack on Prime Minister Netanyahu in the Israeli Senate
- Claimed on: Facebook, Instagram and X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
Apple launched passkeys with iOS 16 as an authentication mechanism that is both more secure and more efficient than passwords. Apple users on iOS 16 who want to use passkeys should enable two-factor authentication. Passkeys are also less demanding than passwords: the user simply opens the app or website, and the device's biometric sensor recognises their face or fingerprint; a PIN or pattern can be used to log in instead. Passkeys add an extra layer of protection against cyber threats such as phishing attacks that target SMS messages and one-time passwords. A 9to5Mac report states that 95% of users are using passkeys. Passkeys also improve the user experience while being more resistant to compromise. Passwords, by contrast, are often weak, reused, or leaked, and the risk of phishing is real.
What are passkeys?
A passkey is a digital key linked to a user's account on a website or application. Passkeys let the user log into an application or website without entering a password, username, or other details. The aim of this new feature is to replace the long-standing ritual of typing passwords into every website and application.
Passkeys were developed jointly by Microsoft, Apple, and Google, and follow the FIDO (Fast Identity Online) authentication standard. They eliminate the need to remember or type passwords. A passkey replaces the password with a unique digital key tied to the account; the private key is stored on the device itself and is end-to-end encrypted. A passkey works only on the site for which it was created. Passkeys rely on public-key cryptography for security, which is what makes them resistant to phishing.
Since passkeys follow FIDO standards, they can also be used with third-party, non-Apple devices: the third-party device displays a QR code, which the iOS user scans to log in. The iPhone then verifies the user's face for authentication and asks for permission on the device to allow or deny the sign-in.
How are passkeys more secure than passwords?
Passkeys rely on the same public-key cryptographic protocols that underpin hardware security keys, which protects them against phishing and other cyber threats. They are more secure than one-time passwords delivered by SMS or by apps, and than other forms of multi-factor authentication.
Why are passwords insecure?
Users tend to create passwords that are easy to remember, which usually means easy to crack: short, based on personal information, or drawn from popular words. Worse, people reuse one password across different accounts, so compromising a single account hands an attacker access to all of them. The underlying problem is that passwords have inherent flaws: they are shared secrets that can be stolen, guessed, or leaked.
Are passkeys about to become obligatory?
Many websites already restrict the kinds of passwords they accept, requiring mixtures of numbers and symbols, and many mandate two-factor authentication. There is no certainty that passkeys will become obligatory across the web: the concept is still new and adoption will take time, so passkeys are likely to remain optional for a while.
- The Heartland Payment Systems data breach is a cautionary example. At the time of the incident, Heartland was handling over 100 million credit card transactions per month for 175,000 retailers. Visa and MasterCard detected the hack in January 2009 when they notified Heartland of suspicious transactions, and the breach was attributed to compromised passwords. The corporation paid an estimated $145 million in settlements for fraudulent payments. Today, data breaches of this kind affect millions of people's personal information.
- GoDaddy reported a security breach in November 2021 that affected the accounts of over a million of its WordPress customers. The attacker acquired unauthorised access to GoDaddy's Managed WordPress hosting environment by breaking into the provisioning system in the company's legacy Managed WordPress code.
Conclusion
Strong, unique passwords remain an essential safeguard for information and data, but passwords have inherent disadvantages. Replacing them with passkeys, digital keys secured by public-key cryptography, offers real protection against cyberattacks and cybercrime. The cases mentioned above happened because of passwords' weaker security. In a technology-driven world that needs better protection and prevention against cybercrime, it is time to move past passwords and adopt passkeys.
References
- https://www.cnet.com/tech/mobile/switch-to-passkeys-more-secure-than-passwords-on-ios-16-iphone-14/
- https://economictimes.indiatimes.com/magazines/panache/google-is-ending-passwords-rolls-out-passkeys-for-easy-log-in-how-to-set-it/articleshow/99988444.cms?from=mdr
- https://security.googleblog.com/2023/05/making-authentication-faster-than-ever.html

Introduction
As we delve deeper into the intricate, almost esoteric digital landscape of the 21st century, we are confronted by a new and troubling phenomenon that threatens the very bastions of our personal security. This is not a mere subplot in some dystopian novel but a harsh and palpable reality firmly rooted in today's technologically driven society. We must grapple with the consequences of the alarming evolution of cyber threats, particularly the sophisticated use of artificial intelligence in creating face swaps, a technique now cleverly harnessed by nefarious actors to undermine the bedrock of biometric security systems.
What is GoldPickaxe?
It was amidst the hum of countless servers and data centers that the term 'GoldPickaxe' began to echo, sending shivers down the spines of cybersecurity experts. Originating from the intricate web spun by a group of Chinese hackers, as reported in Dark Reading, GoldPickaxe represents the latest in a long lineage of digital predators. It is an astute embodiment of disguise, blending into the digital environment as a seemingly harmless government service app. But behind its innocuous facade, it bears the intent to ensnare and deceive, with the elderly demographic being especially susceptible to its trap.
Victims, unassuming and trustful, are cajoled into revealing their most sensitive information: phone numbers, private details, and, most alarmingly, their facial data. These virtual reflections, intended to be the safeguard of one's digital persona, are snatched away and misused in a perilous transformation. The attackers harness such biometric data, feeding it into the arcane furnaces of deepfake technology, wherein AI face-swapping crafts eerily accurate and deceptive facsimiles. These digital doppelgängers become the master keys, effortlessly bypassing the sentinel eyes of facial recognition systems that lock the vaults of Southeast Asia's financial institutions.
Through the diligent and unyielding work of the research team at Group-IB, the trajectory of one victim's harrowing ordeal, a Vietnamese individual pilfered of a life-altering $40,000, sheds light on the severity of this technological betrayal. The advancements in deepfake face-swap technology, once seen as a marvel of AI, now present a clear and present danger, outpacing the mechanisms meant to deter unauthorized access, and leaving the unenlightened multitude unaware and exposed.
Adding weight to the discussion, an expert in biometric technology commented with a somber tone: 'This is why we see face swaps as a tool of choice for hackers. It gives the threat actor this incredible level of power and control.' This chilling testament to the potency of digital fraudulence further emphasizes that even seemingly impregnable ecosystems, such as Apple's, are not beyond the reach of these relentless invaders.
New Threat
Emerging from this landscape is a doppelgänger tailored specifically for iOS: GoldDigger's mutation into GoldPickaxe for Apple's hallowed platform is nothing short of a wake-up call. It engenders not just a single threat but an evolving suite of menaces, including its uncanny offspring, 'GoldDiggerPlus', which wields the terrifying power to piggyback on real-time communications of affected devices. Continuously refined and updated, these threats become chimeras, each iteration more elusive and more formidable than its predecessor.
One ingenious and insidious tactic exploited by these cyber adversaries is the diversionary use of Apple's TestFlight, a trusted beta testing platform, as a trojan horse for their malware. Upon clampdown by Apple, the hackers, exhibiting an unsettling level of adaptability, inveigle users into approving MDM (Mobile Device Management) profiles, hitherto reserved for corporate device management, thereby chaining these unknowing participants to their will.
How To Protect
Against this stark backdrop, the question of how one might armor oneself against such predation looms large. It is a question with no simple answer, demanding vigilance and proactive measures.
General Vigilance: Aware of the Trojan's advance, Apple is striving to devise countermeasures, yet individuals can take concrete steps to safeguard their digital lives.
Consider Lockdown Mode: It is imperative to exhibit discernment with TestFlight installations, to warily examine MDM profiles, and to seriously consider the protective embrace of Lockdown Mode. Activating Lockdown Mode on an iPhone is akin to drawing the portcullis and manning the battlements of one's digital stronghold. The process is straightforward: open Settings, go to Privacy & Security, enable Lockdown Mode, and restart the device. It is a curtailment of convenience, yes, but a potent defense against the malevolence lurking in the unseen digital thicket.
As 'GoldPickaxe' insidiously carves its path into the iOS realm—a rare and unsettling occurrence—it flags the possible twilight of the iPhone's vaunted reputation for tight security. Should these shadow operators set their sights beyond Southeast Asia, angling their digital scalpels towards the U.S., Canada, and other English-speaking enclaves, the consequences could be dire.
Conclusion
Thus, it is imperative that as digital citizens, we fortify ourselves with best practices in cybersecurity. Our journey through cyberspace must be cautious, our digital trails deliberate and sparse. Let the specter of iPhone malware serve as a compelling reason to arm ourselves with knowledge and prudence, the twin guardians that will let us navigate the murky waters of the internet with assurance, outwitting those who weave webs of deceit. In heeding these words, we preserve not only our financial assets but the sanctity of our digital identities against the underhanded schemes of those who would see them usurped.
References
- https://www.timesnownews.com/technology-science/new-ios-malware-stealing-face-id-data-bank-infos-on-iphones-how-to-protect-yourself-article-107761568
- https://www.darkreading.com/application-security/ios-malware-steals-faces-defeat-biometrics-ai-swaps
- https://www.tomsguide.com/computing/malware-adware/first-ever-ios-trojan-discovered-and-its-stealing-face-id-data-to-break-into-bank-accounts
Introduction
Artificial Intelligence (AI) driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern warfare and has simultaneously impacted many other spheres of a technology-driven world. Nations often prioritise defence for significant investment, supporting its growth and modernisation, and AI has become a prime area of investment and development in the pursuit of technological superiority for defence forces. India's focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is "autonomy": the ability to perform their functions in the absence of direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, AI can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Many states, international organisations, civil society groups, and distinguished figures have raised ethical concerns as the most prominent issue surrounding AWS.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes in simulating human emotions like compassion, empathy, altruism, or other emotions as the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is using a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system. This would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a ‘human-in-the-loop” system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples would include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue due to the ethical, legal, and security concerns it raises. Many ongoing efforts at the international level aim to regulate such weapons. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international laws, such as the Geneva Conventions, offer legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law. Setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/