BGMI Relaunch

Introduction
As esports flourish in India, mobile gaming platforms and apps have contributed massively to the boom. The wave of online mobile gaming has brought esports new recognition, and with the Sports Ministry being proactive about esports and e-athletes, it is pertinent to ensure that we do not compromise our cyber security for the sake of these games. When we talk about online mobile gaming, the most common names that come to mind are PUBG and BGMI. In welcome news for Indian gamers, BGMI is set to be relaunched in India after approval from the Ministry of Electronics and Information Technology.
Why was BGMI banned?
The Government banned Battlegrounds Mobile India (BGMI) on the grounds that it was a Chinese application and that all of its data was hosted in China, which created a cascade of compliance and user-safety issues since the data was stored outside India. Since 2020, the Indian Government has been proactive in banning Chinese applications that might have an adverse effect on national security and Indian citizens. More than 200 applications have been banned, most of them because their data hubs were located in China. Cross-border data flow has been a key issue in geopolitics: whoever hosts the data virtually owns it as well, and under this potential threat, apps hosting their data in China were banned.
Why is BGMI coming back?
BGMI was banned for not hosting its data in India, and since the ban, the Krafton Inc.-owned game has been engaging with stakeholders in India to set up data centres and a separate gaming server for Indian players. These moves should lead to a safer gaming ecosystem and better adherence to the laws and policies of the land. The developers have not declared a relaunch date yet, but the game is expected to be available for download for iOS and Android users in the coming days. The game will return to app stores now that the Ministry of Electronics and Information Technology has issued a letter stating that the game may be made available for download on the respective app stores.
Grounds for BGMI
BGMI has to ensure that it complies with all laws, policies, and guidelines in India and must demonstrate this to the Ministry to obtain an extension of the approval. The game has been permitted for only 90 days (3 months). Hon’ble MoS MeitY Rajeev Chandrasekhar stated in a tweet, “This is a 3-month trial approval of #BGMI after it has complied with issues of server locations and data security etc. We will keep a close watch on other issues of User harm, Addiction etc., in the next 3 months before a final decision is taken”. This clearly shows how seriously the Government treats the bans on Chinese apps. The Ministry and the Government are no longer playing soft; it is all about compliance and safeguarding users’ data.
Way Forward
This move will play a significant role in the future, not only for gaming companies but also for other online industries, in ensuring compliance. It will act as a precedent on the issue of cross-border data flows and the advantages of data localisation, and it will go a long way in advocating for the betterment of the Indian cyber ecosystem. MeitY alone cannot safeguard this space completely; it is a shared responsibility of the Government, industry, and netizens.
Conclusion
The advent of online mobile gaming has taken the nation by storm, and being safe and secure in this ecosystem is therefore paramount. The provisional permission for BGMI shows the Government's stance and its zero-tolerance policy for non-compliance with the law. The latest policies and bills, like the Digital India Act and the Digital Personal Data Protection Act, will go a long way in securing the interests and rights of Indian netizens and will create a blanket of safety against future issues and threats.
Related Blogs

The race for global leadership in AI is in full force. As China and the US emerge as the world's ‘AI superpowers’, the world grapples with questions around AI governance, ethics, regulation, and safety. Some are calling this an ‘AI Arms Race.’ Most applications of these AI systems are in large language models for commercial use or in military systems. Countries like Germany, Japan, France, Singapore, and India are now participating in this race and are no longer mere spectators.
The Government of India’s Ministry of Electronics and Information Technology (MeitY) has launched the IndiaAI Mission, an umbrella program for the use and development of AI technology. This MeitY initiative lays the groundwork for supporting an array of AI goals for the country. The government has allocated INR 10,300 crore for this endeavour. This mission includes pivotal initiatives like the IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI FutureSkills, IndiaAI Startup Financing, and Safe & Trusted AI.
There are several challenges and opportunities that India will have to navigate and capitalize on to become a significant player in the global AI race. The various components of India’s ‘AI Stack’ will have to work well in tandem to create a robust ecosystem that yields globally competitive results. The IndiaAI mission focuses on building large language models in vernacular languages and developing compute infrastructure. There must be more focus on developing good datasets and research as well.
Resource Allocation and Infrastructure Development
The government is focusing on building the elementary foundation for AI competitiveness. This includes the procurement of AI chips and compute capacity, about 10,000 graphics processing units (GPUs), to support India’s start-ups, researchers, and academics. These GPUs have been strategically distributed, with 70% being high-end newer models and the remaining 30% comprising lower-end older-generation models. This approach ensures that a robust ecosystem is built, which includes everything from cutting-edge research to more routine applications. A major player in this initiative is Yotta Data Services, which holds the largest share of 9,216 GPUs, including 8,192 Nvidia H100s. Other significant contributors include Amazon AWS's managed service providers, Jio Platforms, and CtrlS Datacenters.
Policy Implications: Charting a Course for Tech Sovereignty and Self-reliance
With this government initiative, there is a concerted effort to develop indigenous AI models and reduce tech dependence on foreign players. There is a push to develop local Large Language Models and domain-specific foundational models, creating AI solutions that are truly Indian in nature and application. Much of the world's advanced chip manufacturing takes place in Taiwan, which faces a looming threat from China. India’s focus on chip procurement and GPUs speaks to a larger agenda of self-reliance and sovereignty, keeping in mind the geopolitical calculus. This is an important focus; however, it must not come at the cost of developing domestic technological know-how and research.
Developing AI capabilities at home also has national security implications. When it comes to defence systems, control over AI infrastructure and data becomes extremely important. The IndiaAI Mission will focus on safe and trusted AI, including developing frameworks that fit the Indian context. It has to be ensured that AI applications align with India's security interests and can be confidently deployed in sensitive defence applications.
The big problem to solve here is the ‘data problem.’ There must be a focus on developing strategies to mitigate the data problems that disadvantage the Indian AI ecosystem. Some of these problems are unique to India, such as generating data in local languages, while others appear in every AI ecosystem's development lifecycle, namely curating publicly available data and licensed data. India must strengthen its ‘Digital Public Infrastructure’ and data commons across sectors and domains.
India has proposed setting up the India Data Management Office to serve as India’s data regulator as part of its draft National Data Governance Framework Policy. The MeitY IndiaAI expert working group report also talked about operationalizing the India Datasets Platform and suggested the establishment of data management units within each ministry.
Economic Impact: Growth and Innovation
The government’s focus on technology and industry has far-reaching economic implications. There is a push to develop the AI startup ecosystem in the country. The IndiaAI mission heavily focuses on inviting ideas and projects under its ambit. The investments will strengthen the IndiaAI startup financing system, making it easier for nascent AI businesses to obtain capital and accelerate their development from product to market. Funding provisions for industry-led AI initiatives that promote social impact and stimulate innovation and entrepreneurship are also included in the plan. The government press release states, "The overarching aim of this financial outlay is to ensure a structured implementation of the IndiaAI Mission through a public-private partnership model aimed at nurturing India’s AI innovation ecosystem.”
The government also wants to establish India as a hub for sustainable AI innovation and attract top AI talent from across the globe. One crucial aspect that needs work here is fostering talent and skill development. India has a unique advantage: top-tier talent in STEM fields. Yet we suffer from a severe talent gap that needs to be addressed on a priority basis. Even though India is making strides in nurturing AI talent, out-migration of tech talent is still a reality. As the global AI economy shifts from the hardware-manufacturing “goods side” to service delivery, India will need to be ready to deploy its talent. Several structural and policy interfaces, like the New Education Policy and industry-academia partnership frameworks, allow India to capitalize on this opportunity.
India’s talent strategy must be robust and long-term, focusing heavily on multi-stakeholder engagement. The government has a pivotal role here by creating industry-academia interfaces and enabling tech hubs and innovation parks.
India's Position in the Global AI Race
India’s foreign policy and geopolitical stance have long favoured global cooperation, and this must not change when it comes to AI. Even though this has been dubbed an “AI Arms Race,” India should encourage worldwide collaboration on AI R&D with other countries in order to strengthen its own capabilities. India must prioritise greater open-source AI development, work with the US, Europe, Australia, Japan, and other friendly countries to prevent the unethical use of AI, and contribute to the formation of a global consensus on the boundaries for AI development.
The IndiaAI Mission will have far-reaching implications for India’s diplomatic and economic relations. The unique proposition that India comes with is its ethos of inclusivity, ethics, regulation, and safety from the get-go. We should keep up the efforts to create a powerful voice for the Global South in AI. The IndiaAI Mission marks a pivotal moment in India's technological journey. Its success could not only elevate India's status as a tech leader but also serve as a model for other nations looking to harness the power of AI for national development and global competitiveness. In conclusion, the IndiaAI Mission seeks to strengthen India's position as a global leader in AI, promote technological independence, guarantee the ethical and responsible application of AI, and democratise the advantages of AI at all societal levels.
References
- Ashwini Vaishnaw to launch IndiaAI portal, 10 firms to provide 14,000 GPUs. (2025, February 17). Business Standard. Retrieved February 25, 2025, from https://www.business-standard.com/industry/news/indiaai-compute-portal-ashwini-vaishnaw-gpu-artificial-intelligence-jio-125021700245_1.html
- Global IndiaAI Summit 2024 being organized with a commitment to advance responsible development, deployment and adoption of AI in the country. (n.d.). https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2029841
- India to Launch AI Compute Portal, 10 Firms to Supply 14,000 GPUs. (2025, February 17). APAC News Network. https://apacnewsnetwork.com/2025/02/india-to-launch-ai-compute-portal-10-firms-to-supply-14000-gpus/
- INDIAai | Pillars. (n.d.). IndiaAI. https://indiaai.gov.in/
- IndiaAI Innovation Challenge 2024 | Software Technology Park of India | Ministry of Electronics & Information Technology Government of India. (n.d.). http://stpi.in/en/events/indiaai-innovation-challenge-2024
- IndiaAI Mission To Deploy 14,000 GPUs For Compute Capacity, Starts Subsidy Plan. (2025, February 17). BW Businessworld. Retrieved February 25, 2025, from https://www.businessworld.in/article/indiaai-mission-to-deploy-14000-gpus-for-compute-capacity-starts-subsidy-plan-548253
- India’s interesting AI initiatives in 2024: AI landscape in India. (n.d.). IndiaAI. https://indiaai.gov.in/article/india-s-interesting-ai-initiatives-in-2024-ai-landscape-in-india
- Mehra, P. (2025, February 17). Yotta joins India AI Mission to provide advanced GPU, AI cloud services. Techcircle. https://www.techcircle.in/2025/02/17/yotta-joins-india-ai-mission-to-provide-advanced-gpu-ai-cloud-services/
- IndiaAI 2023: Expert Group Report – First Edition. (n.d.). IndiaAI. https://indiaai.gov.in/news/indiaai-2023-expert-group-report-first-edition
- Satish, R., Mahindru, T., World Economic Forum, Microsoft, Butterfield, K. F., Sarkar, A., Roy, A., Kumar, R., Sethi, A., Ravindran, B., Marchant, G., Google, Havens, J., Srichandra (IEEE), Vatsa, M., Goenka, S., Anandan, P., Panicker, R., Srivatsa, R., . . . Kumar, R. (2021). Approach Document for India. In World Economic Forum Centre for the Fourth Industrial Revolution, Approach Document for India [Report]. https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
- Stratton, J. (2023, August 10). Those who solve the data dilemma will win the A.I. revolution. Fortune. https://fortune.com/2023/08/10/workday-data-ai-revolution/
- Suri, A. (n.d.). The missing pieces in India’s AI puzzle: talent, data, and R&D. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2025/02/the-missing-pieces-in-indias-ai-puzzle-talent-data-and-randd?lang=en
- The AI arms race. (2024, February 13). Financial Times. https://www.ft.com/content/21eb5996-89a3-11e8-bf9e-8771d5404543

Introduction
This op-ed sheds light on the perspectives of the US and China on cyber espionage and analyzes China's response to the US accusations.
What is Cyber espionage?
Cyber espionage or cyber spying is the act of obtaining personal, sensitive, or proprietary information from individuals without their knowledge or consent. In an increasingly transparent and technological society, the ability to control the private information an individual reveals on the Internet and the ability of others to access that information are a growing concern. This includes storage and retrieval of e-mail by third parties, social media, search engines, data mining, GPS tracking, the explosion of smartphone usage, and many other technology considerations. In the age of big data, there is a growing concern for privacy issues surrounding the storage and misuse of personal data and non-consensual mining of private information by companies, criminals, and governments.
Cyber espionage aims for economic, political, and technological gain. An example is the Stuxnet (2010) cyber-attack by the US and its ally Israel against Iran’s nuclear facilities. Three espionage tools connected to Stuxnet were later discovered, namely Gauss, Flame, and Duqu, which were used to steal data such as passwords, screenshots, Bluetooth records, and Skype communications.
Cyber espionage is one of the most significant and intriguing international challenges. Many nations, including the US and China, have created their own definitions of it and have long contested cyber espionage norms.
The US Perspective
In 2009, US officials (along with those of other allied countries) stated that cyber espionage was acceptable if it safeguarded national security, although they condemned economically motivated cyber espionage. The Director of National Intelligence likewise stated in 2013 that US foreign intelligence capabilities are not used to steal foreign companies' trade secrets to benefit US firms. This stance is consistent with the Economic Espionage Act (EEA) of 1996, particularly Section 1831, which prohibits economic espionage, including the theft of a trade secret that "will benefit any foreign government, foreign agent or foreign instrumentality."
Second, the US advocates for cybersecurity market standards and strongly opposes transferring personal data extracted from the US Office of Personnel Management (OPM) to cybercrime markets. Furthermore, China has been reported to sell OPM data on illicit markets. It became a grave concern for the US government when the Chinese government managed to acquire sensitive details of 22.1 million US government workers through cyber intrusions in 2014.
Third, cyber espionage is acceptable unless it is used for doxing, which involves disclosing personal information about someone online without their consent and using it as a tool for political influence operations. Western academics and scholars have, however, endeavoured to distinguish between doxing and whistleblowing. They argue that whistleblowing, exemplified by events like the Snowden leaks and the Vault 7 disclosures, serves the interests of US citizens. In the US, which regards itself as an open society, certain disclosures are not merely encouraged but required by mandate.
Fourth, the US argues that there should be no cyber espionage against critical infrastructure during peacetime. According to the US, there are 16 critical infrastructure sectors, including chemical, nuclear, energy, defence, food, and water. These sectors are considered essential, and any disruption or harm to them would impact national security, public health and safety, and economic security.
The US concern regarding China’s cyber espionage
According to James Lewis, a senior vice president at the Center for Strategic and International Studies (CSIS), the US faces losses of between $20 billion and $30 billion annually due to China’s cyber espionage. The 2018 U.S. Trade Representative (USTR) Section 301 report highlighted instances where the Chinese government and executives from Chinese companies engaged in clandestine cyber intrusions to obtain commercially valuable information from U.S. businesses; for example, in 2018, officials from China’s Ministry of State Security stole trade secrets from General Electric Aviation and other aerospace companies.
China's response to the US accusations of cyber espionage
China's perspective on cyber espionage is outlined in its 2014 anti-espionage law, which was revised in 2023. Article 1 of this legislation is formulated to prevent, halt, and punish espionage in order to maintain national security. Article 4 addresses acts of espionage and does not differentiate between state-sponsored cyber espionage for economic purposes and state-sponsored cyber espionage for national security purposes. Unlike the US, China does not draw a clear distinction between government-to-government hacking (spying) and government-to-corporate-sector hacking; this distinction is less apparent in China because of its strong state-owned enterprise (SOE) sector. In the US, by contrast, military spying is considered part of the national interest, while corporate spying is considered a crime.
China asserts that the US has established cyber norms concerning cyber espionage in order to normalize public attribution as acceptable conduct. This is achieved by targeting China for cyber operations, imposing sanctions on accused Chinese individuals, and making political accusations, such as blaming China and Russia for meddling in US elections. Despite all this, Washington has never taken responsibility for the infamous Flame and Stuxnet cyber operations, which are widely recognized as part of a broader collaborative initiative between the US and Israel known as Operation Olympic Games. Additionally, the US has taken the lead in surveillance activities against China, Russia, German Chancellor Angela Merkel, the United Nations (UN) Secretary-General, and several French presidents. Surveillance programs such as Irritant Horn, Stellar Wind, Bvp47, the Hive, and PRISM are recognized as tools used by the US to monitor both allies and adversaries in order to maintain global hegemony.
China urges the US to cease what it calls a smear campaign associating it with the Volt Typhoon cyber espionage operations, citing a report titled “Volt Typhoon: A Conspiratorial Swindling Campaign Targets with U.S. Congress and Taxpayers Conducted by U.S. Intelligence Community”, published on 15 April by China's National Computer Virus Emergency Response Centre and the 360 Digital Security Group. According to the report, 'Volt Typhoon' is a ransomware cybercriminal group that self-identifies as 'Dark Power' and is not affiliated with any state or region. The report claims that multiple US cybersecurity authorities collaborated to fabricate this story simply to obtain bigger budgets from Congress, while Microsoft and other US cybersecurity firms sought larger contracts from those authorities. In the report's telling, the reality behind 'Volt Typhoon' is a conspiratorial swindling campaign designed to achieve two objectives: amplifying the "China threat theory" and cheating money out of the U.S. Congress and taxpayers.
Beijing has condemned the US claims of cyber espionage as lacking solid evidence. China also accuses the US of economic espionage, citing a European Parliament report that the National Security Agency (NSA) assisted Boeing in beating Airbus for a multi-billion-dollar contract. Furthermore, Brazilian President Dilma Rousseff accused US authorities of spying on the state-owned oil company Petrobras for economic reasons.
Conclusion
In 2015, the US and China marked a milestone when Presidents Xi Jinping and Barack Obama signed an agreement committing that neither country's government would conduct or knowingly support cyber-enabled theft of trade secrets, intellectual property, or other confidential business information to grant competitive advantages to firms or commercial sectors. However, the China Cybersecurity Industry Alliance (CCIA) published a report titled 'US Threats and Sabotage to the Security and Development of Global Cyberspace' in 2024, highlighting the US's escalating cyber-attack and espionage activities against China and other nations. There has also been a considerable increase in the volume and sophistication of Chinese hacking since 2016; according to a survey by the Center for Strategic and International Studies, of 224 cyber espionage incidents reported since 2000, 69% occurred after Xi assumed office. China and the US must therefore address cybersecurity issues through dialogue and cooperation, utilizing bilateral and multilateral agreements.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for this kind of extortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have become more sophisticated and allow for more seamless and realistic manipulations. At the same time, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
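As a purely illustrative aid, the sketch below shows what a minimal AI-based detection step might look like; it is not drawn from this article. It assumes PyTorch and torchvision are available, uses a ResNet-18 backbone with a single-logit head, and references a hypothetical fine-tuned checkpoint (deepfake_detector.pt); the function name flag_if_suspicious is likewise an assumption.

```python
# Minimal sketch of a deepfake-image detection step (illustrative only).
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-18 backbone with a single-logit head:
# sigmoid(logit) is read as the probability that the image is AI-generated.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
# model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical fine-tuned weights
model.eval()

# Standard ImageNet preprocessing, matching the backbone's training statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def flag_if_suspicious(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the image should be flagged for human review."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        prob_fake = torch.sigmoid(model(batch)).item()
    return prob_fake >= threshold

if __name__ == "__main__":
    print(flag_if_suspicious("uploaded_image.jpg"))  # example upload path
```

In practice, a classifier like this would be only one signal among many; flagged content would be routed to human moderators and reporting mechanisms rather than removed automatically.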
Collaboration with social media platforms is also needed. Platforms and technology companies can frame and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness-raising about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.