Securing Digital Banking: RBI Mandates Migration to [.]bank[.]in Domains
Introduction
The Reserve Bank of India (RBI) has mandated that banks migrate their digital banking domains to the '.bank.in' domain by October 31, 2025, as part of a strategy to modernise the sector and maintain consumer confidence. The move is expected to provide a consistent and secure interface for online banking in response to the increasing threats posed by cybercriminals who exploit vulnerabilities in online platforms. The directive is seen as a proactive measure to address growing concerns over cybersecurity in the banking sector.
RBI Circular - Migration to '.bank.in' domain
The official circular released by the RBI dated April 22, 2025, read as follows:
“It has now been decided to operationalise the ‘.bank.in’ domain for banks through the Institute for Development and Research in Banking Technology (IDRBT), which has been authorised by National Internet Exchange of India (NIXI), under the aegis of the Ministry of Electronics and Information Technology (MeitY), to serve as the exclusive registrar for this domain. Banks may contact IDRBT at sahyog@idrbt.ac.in to initiate the registration process. IDRBT shall guide the banks on various aspects related to application process and migration to new domain.”
“All banks are advised to commence the migration of their existing domains to the ‘.bank.in’ domain and complete the process at the earliest and in any case, not later than October 31, 2025.”
CyberPeace Outlook
The Reserve Bank of India's directive mandating banks to shift to the '.bank.in' domain by October 31, 2025, represents a strategic and forward-looking measure to modernise the nation’s digital banking infrastructure. With this initiative, the RBI is setting a new benchmark in cybersecurity by creating a trusted, exclusive domain that banks must adopt. The move is expected to sharply reduce phishing attacks and fake banking websites, which have been major sources of financial fraud. A fixed domain also simplifies verification, allowing consumers and technology platforms to identify legitimate banking websites and apps more easily. Since phishing and domain spoofing are two of the most prevalent forms of cybercrime, the shift to a strictly regulated domain name system removes the potential for lookalike URLs and fraudulent websites that mimic banks, and should therefore produce a lasting reduction in online financial fraud. As India’s digital economy grows, the RBI’s move is timely, essential, and future-ready.
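The verification benefit described above can be illustrated with a short sketch. This is a hypothetical check a browser extension or platform might apply, assuming the post-migration rule that legitimate banking sites sit under the '.bank.in' namespace; the bank name used is invented for illustration.

```python
from urllib.parse import urlparse

def is_official_bank_domain(url: str) -> bool:
    """Return True if the URL's hostname sits under the '.bank.in' namespace."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    # A post-migration banking domain must be 'bank.in' itself or a
    # subdomain of it (e.g. 'examplebank.bank.in' -- a hypothetical name).
    return host == "bank.in" or host.endswith(".bank.in")

print(is_official_bank_domain("https://examplebank.bank.in/login"))  # True
# A lookalike that merely contains 'bank.in' in its name is rejected:
print(is_official_bank_domain("https://examplebank-bank.in/login"))  # False
```

Note that the suffix check is anchored on the dot separator, so lookalike registrations such as `examplebank-bank.in` do not pass.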

Introduction
In the sprawling online world, cybercriminals frequently exploit trusted relationships to penetrate guarded systems. The Watering Hole Attack is one advanced method, which targets a user’s ecosystem by compromising the genuine sites they often use. Unlike phishing or direct attacks, it quietly exploits the target’s everyday browsing to serve malicious content. The quiet, precise nature of watering hole attacks makes them popular amongst Advanced Persistent Threat (APT) groups, especially in state-sponsored cyber-espionage operations.
What Qualifies as a Watering Hole Attack?
A Watering Hole Attack compromises a trusted website used by a particular organization or community, such as a specific industry sector. The name is an analogy to predators in the wild, who wait by the water’s edge for prey to come and drink. Attackers inject malicious code, such as an exploit kit or malware loader, into websites that are popular with their victims, who are then infected when they unknowingly visit those sites. This serves as a gateway for attackers to infiltrate corporate systems, harvest credentials, and pivot across internal networks.
How Watering Hole Attacks Unfold
The attack lifecycle usually progresses as follows:
- Reconnaissance - Attackers gather intelligence on the websites frequented by the target audience, including specialized communities, partner websites, or local news sites.
- Website Exploitation - Through the use of outdated CMS software and insecure plugins, attackers gain access to the target website and insert malicious code such as JS or iframe redirections.
- Delivery and Exploitation - The visitor’s browser executes the malicious code injected into the page. The code might include a redirection payload which sends the user to an exploit kit that checks the user’s browser, plugins, operating system, and other components for vulnerabilities.
- Infection and Persistence - The infected system is implanted with malware such as RATs, keyloggers, or backdoors, which enable lateral movement and long-term persistence within the organisation for espionage.
- Command and Control (C2) - For further instructions, additional payload delivery, and stolen data retrieval, infected devices connect to servers managed by the attackers.
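The "Website Exploitation" step above typically manifests as an injected script or iframe pointing at an attacker-controlled host. A minimal sketch of the kind of integrity check a site owner could run against their own pages is shown below; the allowlisted CDN and attacker domains are hypothetical placeholders.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class InjectedResourceScanner(HTMLParser):
    """Flag <script>/<iframe> sources loaded from domains outside an allowlist.

    Watering hole compromises usually inject external JS or iframe
    redirections, so unexpected third-party sources are a red flag.
    """
    def __init__(self, allowed_domains):
        super().__init__()
        self.allowed = set(allowed_domains)
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).hostname
        # Inline scripts (no src) are skipped; external ones are vetted.
        if host and host not in self.allowed:
            self.suspicious.append(src)

# 'cdn.example.com' stands in for the site's own approved hosts.
scanner = InjectedResourceScanner({"cdn.example.com"})
scanner.feed('<script src="https://evil.example.net/ek.js"></script>')
print(scanner.suspicious)  # ['https://evil.example.net/ek.js']
```

A real deployment would compare crawled pages against a known-good baseline rather than a static allowlist, but the principle, vetting every externally loaded resource, is the same.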
Key Features of Watering Hole Attacks
- Indirect Approach: Instead of going after the main target, attackers focus on sites that the main target trusts.
- Supply-Chain-Like Impact: An infected industry portal can affect many companies at the same time.
- Low Profile: It is difficult to identify since the traffic comes from real websites.
- Advanced Customization: Exploit kits are known to specialize in making custom payloads for specific browsers or OS versions to increase the chance of success.
Why Are These Attacks Dangerous?
Watering hole attacks shift the battlefield to new grounds in cyber warfare on the web. They slip past firewalls, email shields, and other perimeter defences because the malicious traffic flows to and from real, trusted websites. When the attacks work as intended, the following consequences can be expected:
- Stealing Credentials: Including privileged accounts and VPN credentials.
- Espionage: Theft of intellectual property, defense blueprints, or government confidential information.
- Supply Chain Attacks: Resulting in a series of infections among related companies.
- Zero-Day Exploits: Including automated attacks using zero-day exploits for full damage.
Incidents of Primary Concern
The implications of watering hole attacks have been felt in the real world for quite some time. An example from 2019 reveals this, where a known VoIP firm’s site was compromised and used to spread data-stealing malware to its users. Likewise, in 2014, the Operation Snowman campaign, which appears to have a state-backed origin, attempted to infect users of a U.S. veterans’ portal in order to gain access to visitors from government, defense, and related fields. Rounding out the list, in 2021, cybercriminals compromised regional publications focused on energy, using them to spread malware to company officials and engineers working on critical infrastructure and to steal data from their systems. These incidents show the widespread and dangerous impact of watering hole attacks in the world of cybersecurity.
Detection Issues
Due to the following reasons, traditional approaches to security fail to detect watering hole attacks:
- Use of Authentic Websites: Attacks involving trusted and popular domains evade detection via blacklisting.
- Encrypted Traffic: Delivering payloads over HTTPS conceals malicious scripts from being inspected at the network level.
- Fileless Methods: Modern campaigns favour in-memory execution, leaving no files on disk and rendering signature-based detection largely ineffective.
Mitigation Strategies
To effectively neutralize the threat of watering hole attacks, an organization should implement a defense-in-depth strategy that incorporates the following elements:
- Patch Management and Hardening -
- Conduct routine updates on operating systems, web browsers, and extensions to eliminate exploit opportunities.
- Either remove or reduce the use of high-risk elements such as Flash and Java, if feasible.
- Network Segmentation - Minimize lateral movement by isolating critical systems from the general user network.
- Behavioral Analytics - Implement Endpoint Detection and Response (EDR) tools to monitor unusual process behaviour, such as anomalous script execution or suspicious outbound connections.
- DNS Filtering and Web Isolation - Implement DNS-layer security to deny access to known malicious domains and use browser isolation for dangerous sites.
- Threat Intelligence Integration - Track watering hole threats and campaigns for indicators of compromise (IoCs) on advisories and threat feeds.
- Multi-Layer Email and Web Security - Use web gateways integrated with dynamic content scanning, heuristic analysis, and sandboxing.
- Zero Trust Architecture - Apply least-privilege access, and require device attestation and continuous authentication for access to sensitive resources.
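The DNS filtering control above can be sketched as a resolver-side policy check: before answering a query, the resolver tests whether the requested name, or any parent domain, appears on a threat-feed blocklist. This is a simplified illustration; the domains used are hypothetical.

```python
def dns_filter(query_domain: str, blocklist: set) -> str:
    """Return 'BLOCK' if the queried name or any parent domain is blocklisted."""
    labels = query_domain.lower().rstrip(".").split(".")
    # Walk up the hierarchy: 'a.b.evil.example' -> 'b.evil.example' -> ...
    # so that subdomains of a blocklisted zone are also denied.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in blocklist:
            return "BLOCK"
    return "ALLOW"

iocs = {"evil.example.net"}  # hypothetical entries from a threat feed
print(dns_filter("cdn.evil.example.net", iocs))  # BLOCK
print(dns_filter("news.example.org", iocs))      # ALLOW
```

Checking parent domains matters because exploit-kit operators routinely rotate subdomains under a single malicious zone.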
Incident Response Best Practices
- Forensic Analysis: Check affected endpoints for any mechanisms set up for persistence and communication with C2 servers.
- Log Review: Look through proxy, DNS, and firewall logs to detect suspicious traffic.
- Threat Hunting: Search your environment for known Indicators of Compromise (IoCs) related to recent watering hole attacks.
- User Awareness Training: Help employees understand the dangers related to visiting external industry websites and promote safe browsing practices.
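The log review and threat hunting steps above often reduce to matching known IoC domains against proxy or DNS logs. A minimal sketch, assuming a simple line-oriented log format and hypothetical domains:

```python
import re

def hunt_iocs(log_lines, ioc_domains):
    """Return log lines whose destination matches a known IoC domain."""
    # Escape each domain so dots match literally, then combine into one regex.
    pattern = re.compile("|".join(re.escape(d) for d in ioc_domains))
    return [line for line in log_lines if pattern.search(line)]

proxy_log = [
    "2025-06-01 10:02:11 GET https://news.example.org/ 200",
    "2025-06-01 10:02:13 GET https://evil.example.net/payload.bin 200",
]
hits = hunt_iocs(proxy_log, {"evil.example.net"})
for line in hits:
    print(line)  # flags only the request to the IoC domain
```

In practice this would run against SIEM-ingested logs with IoCs pulled automatically from threat feeds, but the core matching logic is the same.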
The Immediate Need for Action
The adoption of cloud computing and remote working models has significantly increased the attack surface for watering hole attacks. High-trust sectors such as healthcare are increasingly targeted by nation-state groups and cybercrime gangs using this technique. Not taking action may lead to data leaks, legal fines, and supply-chain intrusions that damage the trustworthiness and operational capacity of the enterprise.
Conclusion
Watering hole attacks demonstrate how phishing has evolved from broad, indiscriminate lures into highly targeted, trust-based attacks. Protecting against them requires a zero-trust mindset, adaptive defences, and continuous monitoring across multiple layers of security. By integrating advanced response measures, proactive threat intelligence, and detection technologies, organizations can turn this silent threat from a lurking predator into a manageable risk.
References
- https://www.fortinet.com/resources/cyberglossary/watering-hole-attack
- https://en.wikipedia.org/wiki/Watering_hole_attack
- https://www.proofpoint.com/us/threat-reference/watering-hole
- https://www.techtarget.com/searchsecurity/definition/watering-hole-attack

Executive Summary
A video circulating on social media shows a woman using abusive language in front of a camera. Users sharing the clip claim that the woman is a professor at Galgotias University and that the video exposes her alleged reality. However, an investigation by CyberPeace found the claim to be misleading. The probe revealed that the woman seen in the viral video has no connection with Galgotias University and is not a professor there. Fact-checking further showed that the video is not recent but around seven years old. The woman featured in the clip was identified as Shubhrastha, a political strategist by profession.
Claim:
A user on X (formerly Twitter) shared the viral video on February 18, 2026, claiming: “A ‘class in abuse studies’ at Galgotias University? An obscene video of a professor teaching ethics has gone viral. Another shameful chapter has been added to the list of controversies surrounding Galgotias University.” The post further alleged that after falsely claiming a Chinese robot as its own, the university’s “Culture and Ethics” faculty member was seen publicly using abusive language in the viral clip. The post link and its archived version are provided below:

Fact Check:
To verify the authenticity of the viral claim, we extracted key frames from the video and conducted a reverse image search using Google Lens. During the research, we found the same video uploaded on the Indian Spectator’s YouTube channel on June 9, 2018.

The video was also found on another YouTube channel, where it had been uploaded on June 12, 2018.

Conclusion
The research clearly establishes that the woman seen in the viral video has no association with Galgotias University and is not a professor there. The clip is also not recent but approximately seven years old. The woman in the video was identified as Shubhrastha, a political strategist.

Introduction
Artificial intelligence is often hailed as a democratiser of knowledge, opportunity, and skill. It promises to improve diagnostics, personalise learning, and boost productivity across the economy, potentially helping millions of people escape poverty. However, this may be an incomplete picture. A 2025 report by the United Nations Development Programme, The Next Great Divergence: Why AI May Widen Inequality Between Countries, tells a more complex tale: it cautions that, unless deliberate interventions are made, AI will not alleviate inequality between countries but will instead concentrate benefits in already advantaged economies and increase risks in more vulnerable ones.
Two Gaps, One Crisis
AI is not going to create a level playing field: it has been injected into a world where there is unprecedented inequality. The report outlines two structural asymmetries that will influence the ways in which its effects manifest: a capability gap and a vulnerability gap.
Those countries that have high connectivity, skills, compute and regulation will be in a position to reap a greater portion of the AI dividend. Others will be exposed to greater risks of job losses, information exclusion, misinformation, and the indirect consequences of increased energy and water demands.
At the centre of this transition is the Asia-Pacific region, home to more than 55 per cent of the world’s population. More than half of the world’s AI users are now located in the region, but starting positions differ widely. Nations such as Singapore and South Korea are already investing heavily in AI infrastructure, while others are still striving to offer basic broadband services. In certain high-income economies, two out of three individuals already use AI tools; in most low-income countries, usage is far lower. Such figures matter because they depict not only a technology gap but also a structural difference in who controls AI and who is controlled by it.
When Inequality Becomes a Trust Problem
Any trusted technological system is based on three tenets: transparency, fairness and accountability. AI inequality negatively impacts all three.
Governments with limited technical capability often deploy imported AI systems with little transparency about how they operate, how they were built, or what biases they carry. Citizens cannot genuinely trust decision-making systems that are black boxes, especially when domestic institutions lack the know-how to question them.
Data exclusion also undermines fairness. AI systems trained on datasets that under-represent rural populations, linguistic minorities, and women will systematically generate poorer results for those groups. Since South Asian women are much less likely to own a smartphone, they are under-represented in digital data, and consequently in any AI system trained on such data.
Safety Risks Are Not Evenly Distributed
The lack of trust has a direct safety aspect. For example, those countries that have less robust information ecosystems have a greater exposure to AI-generated misinformation that can bias the discourse of the populace, alter elections, and cause violence. They also have the weakest capability of screening, tagging, or combating such content.
The same can be said of labour markets. The very technologies that can accelerate marginalisation and destabilise governance also increase human insecurity, especially among workers in the informal economy with weak social protection. The UNDP report points out that female employment is disproportionately exposed to disruption by AI compared to male employment, adding a gendered dimension to an already unequal situation.
Infrastructure risks are skewed as well. Large AI systems may place disproportionately high energy and water demands on countries that host the data infrastructure without an equivalent economic payback. The environmental cost is local while the profits flow elsewhere: the dangers of AI spread downwards while the advantages go upwards.
The Governance Gap and Regulatory Arbitrage
Governance is perhaps the most important aspect. There are only a few states that presently have extensive AI regulation systems. This gives rise to a patchy landscape, in which safety standards differ dramatically and where companies have an incentive to install systems in jurisdictions that have weaker regulation.
The main reason is the lack of capability, as expressed by Philip Schellekens, chief economist of the UNDP in Asia and the Pacific, who says that those countries that invest in skills, computing power and well-run governance structures will gain. The rest will be left far behind.
This divergence has ramifications beyond national borders. When the same international platforms subject users in different regions to widely different standards of safety and equity, the concept of uniform digital norms is no longer sustainable. Confidence in AI systems erodes not only locally but globally.
Way Forward
The UNDP report makes it clear that divergence is not inevitable. Averting it, however, requires treating AI governance as a development problem rather than a technology problem.
The capacity to govern should be constructed and not presumed. This implies assisting countries in establishing regulatory systems, institutional capacity, and facilitating cross-border collaboration on standards. It can also imply considering some AI features as a public good, with common models and open standards that do not allow a few firms or states to become too powerful.
The UNDP articulates the problem in a simple manner: in the end, the world's people and not machines must decide on what technologies should be given priority and how to utilise them optimally.
Conclusion
AI inequality is often framed as an economic divergence story. But its implications run deeper. It reshapes who is protected, who is visible in data, and who has the power to challenge harmful outcomes. The risk is not just that some countries fall behind economically. It is that the global digital ecosystem fragments into zones of high trust and low trust, high protection and low protection. The choices made now will determine which path prevails. AI can reinforce existing divides or help bridge them.
But that outcome will not be decided by the technology itself. It will be decided by how societies choose to distribute access, power, and responsibility in the systems they build.
References
- https://www.undp.org/sites/g/files/zskgke326/files/2025-12/undp-rbap-the-next-great-divergence_1.pdf
- https://www.undp.org/asia-pacific/press-releases/ai-risks-sparking-new-era-divergence-development-gaps-between-countries-widen-undp-report-finds
- https://www.undp.org/asia-pacific/blog/next-great-divergence-how-ai-could-split-world-again-if-we-dont-intervene
- https://www.aljazeera.com/news/2025/12/2/ai-threatens-to-widen-inequality-among-states-un
- https://www.undp.org/asia-pacific/next-great-divergence
- https://www.eco-business.com/press-releases/ai-risks-spark-new-era-of-divergence-as-development-gaps-widen-undp-report/