Netflix is no stranger to its subscribers being targeted by SMS and email-led phishing campaigns. But the most recent campaign has been deployed at a global scale, affecting paid users in as many as 23 countries according to cybersecurity firm Bitdefender. In this particular campaign, attackers are using the carrot-and-stick tactic of either creating a false sense of urgency or promising rewards to steal financial information and Netflix credentials. For example, users may be contacted via SMS and told that their account is being suspended due to payment failures. A fake website may be shared through a link, encouraging the individual to share sensitive information to restore their account. Once this information has been input, it is now accessible to the attackers. This can create significant stress and even financial loss for its users. Thus, they are encouraged to develop the necessary skills to recognize and respond to these threats effectively.
How The Netflix Scam Works
Users are typically contacted through SMS. Bitdefender reports that these messages may look something like this:
"NETFLIX: There was an issue processing your payment. To keep your services active, please sign in and confirm your details at: https://account-details[.]com"
On clicking the link, the victim is directed to a website designed to mimic an authentic user experience interface, containing Netflix’s logo, color scheme, and grammatically-correct text. The website uses this interface to encourage the victim to divulge sensitive personal information, such as account credentials and payment details. Since this is a phishing website, the user’s personal information becomes accessible to the attacker as soon as it is entered. This information is then sold individually or in bundles on the dark web.
Practical Steps to Stay Safe
Know Netflix’s Customer Interface: According to Netflix, it will never ask users to share personal information including credit or debit card numbers, bank account details, and Netflix passwords. It will also never ask for payment through a third-party vendor or website.
Verify Authenticity: Do not open links from unknown sources sent by email or sms. If unsure, access Netflix directly by typing the URL into the browser instead of clicking on links in emails or texts. If the link has been opened, do not enter any information.
Use Netflix’s Official Support Channels: Confirm any suspicious communication through Netflix’s verified help page or app. Write to phishing@netflix.com with any complaints about such an issue.
Contact Your Financial Institution: If you have entered your personal information into a phishing website, you should immediately reach out to your bank to block your card and change your Netflix password. Contact the authorities via www.cybercrime.gov.in or by calling the helpline at 1930 in case of loss of funds.
Use Strong Passwords and Enable MFA/2FA: Users are advised to use a unique, strong password with multiple characters. Enable Multi-Factor Authentication or Two Factor Authentication to your accounts, if available, to add an extra level of security.
Conclusion
Phishing campaigns which are designed to gather customer data through fraudulent means often involve sending links to as many users as possible, with the aim of monetizing stolen information. Attackers exploit user trust in online platforms to steal sensitive personal information, making such campaigns more sophisticated as highlighted above. This underscores the need for users of online platforms to practice good cyber hygiene by verifying information, learning to detect suspicious information and ignoring it, and staying aware of the types of online fraud they may be exposed to.
A photo of Bollywood actress Kareena Kapoor Khan is being widely shared on social media with the claim that she is pregnant again. In the viral image, Kareena appears with a visible baby bump, leading users to speculate about another pregnancy. However, research by the CyberPeace Research Wing found the claim to be misleading. The research revealed that the image is not recent and is actually from 2020, now being reshared with a false narrative.
Claim:
An Instagram user shared the viral image on April 18, 2026, and posted a caption jokingly suggesting that after Taimur and Jehangir, Kareena was expecting more children.
To verify the claim, relevant keyword searches were conducted online, but no credible media report was found supporting the claim that Kareena Kapoor Khan is currently pregnant. A reverse search of the viral visual led to an older video uploaded on the YouTube channel Bol Bollywood on December 7, 2020, where the same image was used.
Further research also found a similar video report on the YouTube channel Bollywood Bluff, published on December 8, 2020, featuring the same visual and similar claims. These findings confirmed that the viral image is several years old and unrelated to any recent development.
The claim that Kareena Kapoor Khan is pregnant again is misleading. The viral photo is not recent but an old image from 2020 that is being circulated with a false claim.
In the tapestry of our modern digital ecosystem, a silent, pervasive conflict simmers beneath the surface, where the quest for cyber resilience seems Sisyphean at times. It is in this interconnected cyber dance that the obscure orchestrator, StripedFly, emerges as the maestro of stealth and disruption, spinning a complex, mostly unseen web of digital discord. StripedFly is not some abstract concept; it represents a continual battle against the invisible forces that threaten the sanctity of our digital domain.
This saga of StripedFly is not a tale of mere coincidence or fleeting concern. It is emblematic of a fundamental struggle that defines the era of interconnected technology—a struggle that is both unyielding and unforgiving in its scope. Over the past half-decade, StripedFly has slithered its way into over a million devices, creating a clandestine symphony of cybersecurity breaches, data theft, and unintentional complicity in its agenda. Let's delve deep into this grand odyssey to unravel the odious intricacies of StripedFly and assess the reverberations felt across our collective pursuit of cyber harmony.
The StripedFly malware represents the epitome of a digital chameleon, a master of cyber camouflage, masquerading as a mundane cryptocurrency miner while quietly plotting the grand symphony of digital bedlam. Its deceptive sophistication has effortlessly skirted around the conventional tripwires laid by our cybersecurity guardians for years. The Russian cybersecurity giant Kaspersky's encounter with StripedFly in 2017 brought this ghostly figure into the spotlight—hitherto, a phantom whistling past the digital graveyard of past threats.
How Does it work
Distinctive in its composition, StripedFly conceals within its modular framework the potential for vast infiltration—an exploitation toolkit designed to puncture the fortifications of both Linux and Windows systems. In an emboldened maneuver, it utilizes a customized version of the EternalBlue SMBv1 exploit—a technique notoriously linked to the enigmatic Equation Group. Through such nefarious channels, StripedFly not only deploys its malicious code but also tenaciously downloads binary files and executes PowerShell scripts with a sinister adeptness unbeknownst to its victims.
Despite its insidious nature, perhaps its most diabolical trait lies in its array of plugin-like functions. It's capable of exfiltrating sensitive information, erasing its tracks, and uninstalling itself with almost supernatural alacrity, leaving behind a vacuous space where once tangible evidence of its existence resided.
In the intricate chess game of cyber threats, StripedFly plays the long game, prioritizing persistence over temporary havoc. Its tactics are calculated—the meticulous disabling of SMBv1 on compromised hosts, the insidious utilization of pilfered keys to propagate itself across networks via SMB and SSH protocols, and the creation of task scheduler entries on Windows systems or employing various methods to assert its nefarious influence within Linux environments.
The Enigma around the Malware
This dualistic entity couples its espionage with monetary gain, downloading a Monero cryptocurrency miner and utilizing the shadowy veils of DNS over HTTPS (DoH) to camouflage its command and control pool servers. This intricate masquerade serves as a cunning, albeit elaborate, smokescreen, lulling security mechanisms into complacency and blind spots.
StripedFly goes above and beyond in its quest to minimize its digital footprint. Not only does it store its components as encrypted data on code repository platforms, deftly dispersed among the likes of Bitbucket, GitHub, and GitLab, but it also harbors a bespoke, efficient TOR client to communicate with its cloistered C2 server out of sight and reach in the labyrinthine depths of the TOR network.
One might speculate on the genesis of this advanced persistent threat—its nuanced approach to invasion, its parallels to EternalBlue, and the artistic flare that permeates its coding style suggest a sophisticated architect. Indeed, the suggestion of an APT actor at the helm of StripedFly invites a cascade of questions concerning the ultimate objectives of such a refined, enduring campaign.
How to deal with it
To those who stand guard in our ever-shifting cyber landscape, the narrative of StripedFly is a clarion call. StObjective reminders of the trench warfare we engage in to preserve the oasis of digital peace within a desert of relentless threats. The StripedFly chronicle stands as a persistent, looming testament to the necessity for heeding the sirens of vigilance and precaution in cyber practice.
Reaffirmation is essential in our quest to demystify the shadows cast by StripedFly, as it punctuates the critical mission to nurture a more impregnable digital habitat. Awareness and dedication propel us forward—the acquisition of knowledge regarding emerging threats, the diligent updating and patching of our systems, and the fortification of robust, multilayered defenses are keystones in our architecture of cyber defense. Together, in concert and collaboration, we stand a better chance of shielding our digital frontier from the dim recesses where threats like StripedFly lurk, patiently awaiting their moment to strike.
Agentic AI systems are autonomous systems that can plan, make decisions, and take actions by interacting with external tools and environments. But they shift the nature of risk by blurring the lines among input, decision, and execution. A conventional model generates an output and stops. An agent takes input, makes plans, invokes tools, updates its state and repeats the cycle. This creates a system where decisions are continuously revised through interaction with external tools and environments, rather than being fixed at the point of input.
This means the attack surface expands in size and becomes more dynamic. Instead of remaining confined to components as in traditional computational systems, they spread in layers and can continue to grow through time. To understand this shift, the system can be analysed through functional layers such as inputs, memory, reasoning, and execution, while recognising that risk does not remain isolated within these layers but emerges through their interaction.
Agentic AI Attack Surface
A layered view of how risks emerge across input, memory, reasoning, execution, and system integration, including feedback loops and cross-system dependencies that amplify vulnerabilities.
Input Layer: Where Untrusted Data Becomes Control
The entry point of an agent is no longer one prompt. The documents, APIs, files, system logs and the outputs of other agents can now be considered input. This diversity is significant due to the fact that every source of input carries its own trust assumptions, and in the majority of cases, they are weak.
The most obvious threat is prompt injection, where inputs are treated as instructions rather than data. Since inputs are treated as instructions, a virus, a malicious webpage, or a document can contain instructions that override system goals without necessarily being detected as something harmful.
Indirect prompt injection extends this risk beyond direct user interaction. Instead of targeting the interface, attackers compromise the retrieval process by embedding malicious instructions within external data sources. When the agent retrieves and processes the data, it treats the embedded content as legitimate input. As a result, the attack is executed through normal reasoning processes, allowing the system to act on untrusted data without recognising the manipulation.
Data poisoning also occurs at runtime. In contrast to classical poisoning (where training data is manipulated), runtime poisoning distorts the agent’s perception of its environment as it runs. This can change decisions without causing apparent failures.
Obfuscation introduces another indirect attacker vector. Encoded instructions or complicated forms may bypass human review but remain readable to the model. This creates asymmetry whereby the system knows more about the attack than those operating it. Once compromised at this layer, the agent implements compromised instructions which affect downstream operations.
Context and Memory: Persistence of Influence
Agentic systems depend on memory to operate efficiently. They often retain context across sessions and frequently store information between sessions.
This introduces a different type of risk: persistence. Through memory poisoning, attackers can insert false or adversarial information into sorted context, which then influences future decisions. Unlike prompt injection, which is often limited to a single interaction, this effect carries forward. Over time, the agent begins to operate on a distorted internal state, shaping decisions in ways that may not be immediately visible.
Another issue is cross-session leakage. Information in a particular context may be replayed in a different context when memory is being shared or there is insufficient memory separation. This is specifically dangerous in those systems that combine retrieval and long-term storage. The context management in itself becomes a weakness. Agents are required to make decisions on what to retain and what to discard. This is susceptible to attackers who can flood the context or manipulate what is still visible and indirectly affect reasoning.
The underlying problem is structural. Memory turns data into a state. Once state is corrupted, the system cannot easily distinguish valid knowledge from adversarial influence.
The issue is structural. Memory converts temporary data into a persistent state. Once this state is weakened, the system cannot reliably separate valid information from adversarial influence, making recovery significantly more difficult.
Reasoning and Planning: Manipulating Intent Without Breaking Logic
The reasoning layer is where agentic AI stands apart from traditional systems. The model no longer reacts to inputs alone. It actively breaks down objectives, analyses alternatives, and ranks actions.
At the reasoning stage, the nature of risk shifts. The concern is no longer limited to injecting instructions, but to influencing how decisions are made. One example is goal manipulation, where the agent subtly reinterprets its objective and produces outcomes that are technically correct but strategically harmful. Reasoning hijacking operates within intermediate steps, altering how constraints are evaluated or how trade-offs are prioritised. The system may remain internally consistent, which makes such deviations difficult to detect.
Tool selection becomes a critical control point. Agents decide which tools to use and when, so influencing these choices can redirect execution without directly accessing the tools themselves. Hallucinations also take on a different role here. In static systems, they remain errors. In agentic systems, they can trigger actions. A perceived need or incorrect judgement can translate into real-world consequences.
This layer introduces probabilistic failure. The system is not fully weakened, but it is nudged towards decisions that appear reasonable yet are incorrect. The risk lies in how those decisions are justified.
Tool and Execution: When Decisions Gain Reach
Once an agent begins interacting with tools, its behaviour extends beyond the model into external systems. APIs, databases, and services become part of the execution path.
One key risk is the use of unauthorised tools. When agents operate with broad permissions, any manipulation of the upstream can be converted into real-world actions. This makes access control a central security concern. Command injection also takes a different form here. The agent generates commands based on its reasoning, so if that reasoning is compromised, the resulting actions may still appear valid despite being harmful.
External tool outputs introduce another risk. If these systems return corrupted or misleading data, the agent may accept it without verification and incorporate it into its decisions. It is also becoming increasingly reliant on third-part tools and plugins adds to this exposure. If these components are compromised, they can affect behaviour without directly attacking the core system, creating a supply-side risk.
At this stage, the agent effectively operates as an insider. It holds legitimate credentials and interacts with systems in expected ways, making misuse harder to identify.
Application and Integration: System-Level Exposure
Agentic systems rarely operate in isolation. They are embedded in larger environments, interacting with identity systems, business logic, and operational workflows.
Access control becomes a major vulnerability. Agents tend to operate across multiple systems with various permission models, creating irregularities that can be exploited. Risks also arise from identity and delegation. In case an agent is operating on behalf of a user, then any vulnerabilities in authentication or session management can allow attackers to assume that authority.
Workflow execution amplifies these risks. Agents can initiate multi-step processes such as transactions, updates, or approvals. Manipulating a single step can change the result of the entire workflow. As integrations increase, so do the number of interaction points, making cumulative risk harder to track.
At this layer, failures are not isolated. They propagate into business operations, making consequences harder to contain.
Output and Action: Where Failures Become Visible
The output layer is where failures become visible, though they rarely originate there.
Data leakage has been a key concern. Agents may disclose information they are allowed to access, especially when tasks boundaries are not clearly defined. Misinformation and unsafe outputs are also important, particularly when outputs directly influence actions or decisions.
Generated code and commands introduce execution risk. If outputs are used without validation, errors or manipulations can have system-level effects. The shift towards autonomous action increases this risk, as small upstream deviations can lead to significant consequences without human intervention. This layer reflects symptoms rather than root causes. Addressing it alone does not reduce the underlying risk.
Beyond Layers: The Missing Dimension
A layered view helps, but it does not capture the full picture. Agentic systems are defined by continuous interaction across layers.
The key missing dimension is the runtime loop. Inputs shape reasoning, reasoning drives action, and actions feed back into both reasoning and memory. These cycles create feedback loops, where small manipulations may escalate over time. This also reduces observability. With multiple interacting components, it becomes difficult to trace cause and effect or identify where failures originate.
Supply chain dependencies add another layer of risk. Models, datasets, APIs, and plugins each introduce their own points of failure. A compromise at any of these points can propagate across the system. The attack surface also includes governance. Weak supervision, unclear responsibility, or excessive autonomy increase overall risk. Human control is not external to the system; it is part of its security.
Conclusion: Structuring the Attack Surface
Agentic AI expands the attack surface beyond traditional systems. It is both recursive and stateful. Risk does not just accumulate across layers; it moves and changes as the system operates.
Any useful representation must go beyond a linear stack. It should capture feedback loops, persistent state, and cross-layer dependencies that characterise the way these systems actually behave. The system is not a pipeline but a cycle. That is where both its capability and its risk emerge.
Become a part of our vision to make the digital world safe for all!
Numerous avenues exist for individuals to unite with us and our collaborators in fostering global cyber security
Awareness
Stay Informed: Elevate Your Awareness with Our Latest Events and News Articles Promoting Cyber Peace and Security.
Your institution or organization can partner with us in any one of our initiatives or policy research activities and complement the region-specific resources and talent we need.