TRAI’s Consultation Paper on OTT Platforms
Introduction
The Telecom Regulatory Authority of India (TRAI) recently published a Consultation Paper on Regulatory Mechanisms for Over-The-Top (OTT) Communication Services. The paper explores several challenges related to OTT regulation and solicits input from stakeholders on a suggested regulatory framework. This blog summarises the paper’s main points.
Structure of the Paper
The Telecom Regulatory Authority of India’s Consultation Paper on Regulatory Mechanism for Over-The-Top (OTT) Communication Services and Selective Banning of OTT Services intends to solicit comments and recommendations from stakeholders about the regulation of OTT services in India. The paper is divided into five chapters covering the introduction and background, issues with regulatory mechanisms for OTT communication services, issues with the selective banning of OTT services, an overview of international practices on the topic, and a summary of the issues for consultation. Written comments from interested parties are requested and may be sent electronically to the Advisor (Networks, Spectrum and Licensing) at TRAI. These comments will also be posted on the TRAI website.
Overview of the Paper
- Chapter 1: Introduction and Background
- The first chapter of the paper introduces the subject of OTT communication services and argues why a regulatory framework is necessary. It also outlines the topics to be covered in the following chapters and the paper’s organisation.
- Chapter 2: Examination of the Issues Related to Regulatory Mechanism for Over-The-Top Communication Services
- The second chapter of the paper examines the problems with regulating OTT communication services. It discusses the different kinds of OTT services and how they affect the conventional telecom sector. The chapter also looks at the regulatory issues raised by OTT services and the strategies different nations have used to address them.
- Chapter 3: Examination of the Issues Related to Selective Banning of OTT Services
- The third chapter of the paper examines the issues around selectively banning OTT services. It analyses the justifications for government restrictions on OTT services as well as the possible effects of such restrictions on consumers and the telecom sector. The chapter also looks at the legal and regulatory structures that govern the banning of OTT services in various nations.
- Chapter 4: International Practices
- The fourth chapter of the paper gives an overview of international practices on OTT communication services. It discusses the various regulatory strategies adopted by nations around the world and their effects on consumers and the telecom sector. The chapter also examines the difficulties regulators encounter when trying to create effective regulatory frameworks for OTT services.
- Chapter 5: Issues for Consultation
- This chapter is the heart of the consultation paper, as it covers the points and questions for consultation. It is divided into two sub-sections: Issues Related to Regulatory Mechanisms for OTT Communication Services and Issues Related to the Selective Banning of OTT Services. Stakeholder inputs are expected to focus on these sub-sections, and the scope, extent, and ambit of the consultation rest on these questions and the inputs they attract.
Conclusion
The Consultation Paper on Regulatory Mechanisms for Over-The-Top Communication Services is an important publication that seeks to address the regulatory issues raised by OTT services. The paper offers a thorough analysis of the problems with regulating OTT services and requests input from stakeholders on the suggested regulatory framework. To ensure that the regulatory framework is effective and beneficial for everyone, it is crucial that all stakeholders offer their opinion on the document.

Introduction
The growing online interaction and popularity of social media platforms have made them a breeding ground for the generation and spread of misinformation. Misinformation propagates more easily and faster on online social media platforms than through traditional news media sources like newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse, and indefinitely store massive volumes of data. Constant surveillance of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a great deal of misinformation spread on large platforms like X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top spots, with Instagram third and TikTok and X fourth and fifth. Social media platforms provide users with instant connectivity, allowing them to share information quickly with other users without requiring the permission of a gatekeeper such as an editor, as in the case of traditional media channels.
Between the elections held in 2024 in more than 100 countries, the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip, the volume of information generated, both true and false, has been immense. Identifying accurate information amid real-time misinformation is challenging, and traditional content moderation techniques may not be sufficient to curb it. A dedicated, real-time misinformation surveillance system, backed by AI and a degree of human oversight while balancing the privacy of users’ data, could prove an effective mechanism to counter misinformation on larger platforms. Concerns regarding data privacy need to be prioritised before deploying such technologies on platforms with larger user bases.
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternate perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only necessary data and adopting a consent-based approach protects user privacy and enhances transparency and trust. It also protects users from the stifling of dissent and from profiling for targeted ads.
- An independent oversight body can be created to monitor surveillance activities while ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
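The principles above can be illustrated with a minimal sketch. All names here are hypothetical, and the keyword matching stands in for whatever detection model a platform actually uses; the point is the shape of a pipeline that records an auditable reason for every decision, inspects only the content itself (data minimisation), and exposes an appeal hook for an independent reviewer:

```python
# Illustrative sketch only: a content-flagging pipeline with an audit trail
# and an appeal mechanism. The blocklist match is a placeholder for a real
# detection model; class and field names are hypothetical.
from dataclasses import dataclass


@dataclass
class AuditRecord:
    post_id: str
    flagged: bool
    reason: str          # human-readable explanation of the decision
    overturned: bool = False


class TransparentFlagger:
    def __init__(self, blocklist):
        self.blocklist = {term.lower() for term in blocklist}
        self.audit_log = []  # every decision is recorded, flagged or not

    def review(self, post_id, text):
        # Data minimisation: only the post text is inspected, no user metadata.
        hits = [term for term in self.blocklist if term in text.lower()]
        record = AuditRecord(
            post_id=post_id,
            flagged=bool(hits),
            reason=f"matched terms: {hits}" if hits else "no match",
        )
        self.audit_log.append(record)
        return record.flagged

    def appeal(self, post_id):
        # Oversight hook: a reviewer can overturn a wrongful flag.
        for record in self.audit_log:
            if record.post_id == post_id and record.flagged:
                record.overturned = True
                return True
        return False
```

A real system would replace the blocklist with a classifier, but the audit log and appeal path are exactly where the transparency and oversight requirements listed above would attach.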
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The references from the EU’s Digital Services Act and Singapore’s POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation and upholds the foundational values of an open, responsible, and ethical online ecosystem. Policy-driven AI solutions for real-time misinformation monitoring, balanced against ethics and privacy, are the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL

According to Statista, the number of users in India's digital assets market is expected to reach 107.30 million by 2025 (Impacts of Inflation on Financial Markets, August 2023). India's digital asset market has been experiencing exponential growth, fuelled by the increased adoption of cryptocurrencies and blockchain technology. This strengthens the case for regulation. Digital assets include cryptocurrencies, NFTs, asset-backed tokens, and tokenised real estate.
India has defined digital assets under Section 2(47A) of the Income Tax Act, 1961, with the Finance Act 2022-23 adding the word 'virtual' to make it “Virtual Digital Assets” (VDAs). A “virtual digital asset” is any information, code, number, or token, created through cryptographic means or otherwise, by whatever name called, that provides a digital representation of value exchanged with or without consideration. A VDA should have inherent value and represent a store of value or unit of account, functional in any financial transaction or investment. VDAs can be stored, transferred, or traded in electronic form.
Digital Asset Governance: Update and Future Outlook
Indian regulators have been conservative in their approach to digital assets, with the Reserve Bank of India first issuing directions against cryptocurrency transactions in 2018. The Supreme Court set aside this ban in 2020. The introduction of the Cryptocurrency and Regulation of Official Digital Currency Bill, 2021 was an important milestone in the attempt to lay down a framework for an official digital currency issued by the Reserve Bank of India. While some digital assets, like Central Bank Digital Currencies (CBDCs) and blockchain-based financial applications, seem to have potential, the Bill proposed a blanket prohibition on private cryptocurrencies.
In more recent trends, however, the landscape is changing, as the RBI's CBDC aims to provide a state-backed digital alternative to cash under a more structured regulatory framework. This move seeks to balance state control and innovation with investor safety and compliance, and is expected to reduce risk and enhance security for investors through strict anti-money laundering and know-your-customer requirements. These developments are important for examining how global regulatory trends influence India's digital asset policies.
Impact of Global Development on India’s Approach
Global regulatory developments have an impact on Indian policies on digital assets. The European Union's Markets in Crypto-Assets (MiCA) regulation introduces a comprehensive regulatory framework for cryptocurrencies that could act as an inspiration for India. MiCA covers crypto-assets that are not currently regulated by existing financial services legislation. Its particular focus on consumer protection and market integrity resonates with India's concerns about digital assets, including fraud and price volatility. Additionally, evolving policies in the US, such as regulating crypto exchanges and classifying certain tokens as securities, could also inform India's regulatory posture.
Collaboration at the international level is also a chief contributing factor. India’s regular participation in global forums like the G20 provides an opportunity to align its regulations on digital assets with those of other countries, tending toward a more standardised and predictable framework for cross-border transactions. This can significantly help India, given that the nation has a huge diaspora providing a critical inflow of remittances.
CyberPeace Outlook
Though digital assets offer many opportunities to India, challenges also exist. Cryptocurrency volatility affects investors and raises concerns about fraud and illicit dealings. A balance between the need for innovation and investor protection is paramount so that overly restrictive regulations do not stifle the growth of India's digital asset ecosystem.
Financial inclusion, efficient cross-border payments with low transaction costs, and the opening up of investment opportunities are a few of the opportunities offered by digital assets. For example, the tokenisation of real estate opens real estate investment to smaller investors. To strengthen these opportunities while addressing the challenges, policy reforms and new frameworks might prove beneficial.
CyberPeace Policy Recommendations
- Establish a regulatory sandbox for startups working on blockchain and digital assets. This would allow them to test innovative solutions in a controlled environment with regulatory oversight, minimising risks.
- Clear guidelines for the taxation of digital assets should be provided, as they will ensure transparency, reduce ambiguity for investors, and promote compliance with tax regulations. Specific guidance can be drawn from the EU's MiCA regulation.
- Initiatives aimed at improving consumer awareness about digital assets, their benefits, and their associated risks, such as workshops, online resources, and campaigns, should be implemented. Partnerships with global fintech firms provide a great opportunity to learn best practices.
Conclusion
India is positioned at a critical juncture with respect to the debate on digital assets. The challenge ahead is to balance innovation with effective regulation. The introduction of the Central Bank Digital Currency (CBDC) and the development of new policies signal a willingness on the part of the regulators to embrace the digital future. At the same time, issues like volatility, fraud, and regulatory compliance continue to pose hurdles. By drawing insights from global frameworks and strengthening ties through international forums, India can pave the way for a secure and dynamic digital asset ecosystem. Embracing strategic measures such as regulatory sandboxes and transparent tax guidelines will not only protect investors but also unlock the immense potential of digital assets, propelling India into a new era of financial innovation and inclusivity.
References
- https://www.weforum.org/agenda/2024/10/different-countries-navigating-uncertainty-digital-asset-regulation-election-year/
- https://www.acfcs.org/eu-passes-landmark-crypto-regulation
- https://www.indiabudget.gov.in/budget2022-23/doc/Finance_Bill.pdf
- https://www3.weforum.org/docs/WEF_Digital_Assets_Regulation_2024.pdf

Introduction
In 2022, Oxfam’s India Inequality Report revealed the worsening digital divide, highlighting that only 38% of households in the country are digitally literate. Further, only 31% of the rural population uses the internet, compared to 67% of the urban population. Over time, with increasing global awareness of the importance of digital privacy, the digital divide has translated into a digital privacy divide, whereby different levels of privacy are afforded to different sections of society. This further entrenches social inequalities and impedes access to fundamental rights.
Digital Privacy Divide: A by-product of the digital divide
The digital divide has evolved into a multi-level issue from its earlier interpretations: level I refers to the lack of physical access to technologies, level II to the lack of digital literacy and skills, and, more recently, level III to the impacts of digital access. The Digital Privacy Divide (DPD) refers to the gaps in digital privacy protection afforded to users based on their socio-demographic patterns. It forms a subset of the digital divide, which involves the uneven distribution, access, and usage of information and communication technologies (ICTs). Typically, DPD exists when ICT users receive distinct levels of digital privacy protection. As such, it forms a part of the conversation on digital inequality.
Contrary to popular perception, DPD, which is based on notions of privacy, is not always grounded in ideas of individualism and collectivism and may involve internal and external factors at the national level. A study on the impacts of DPD conducted in the U.S., India, Bangladesh, and Germany highlighted that respondents in Germany and Bangladesh expressed more concern about their privacy than respondents in the U.S. and India. This suggests that despite the U.S. having a strong tradition of individualistic rights, reflected in internal regulatory frameworks such as the Fourth Amendment, the topic of data privacy has not garnered enough interest from the population. Most individuals consider forgoing the right to privacy a necessary evil for accessing many services and schemes and for staying abreast of technological advances. Research shows that 62%-63% of Americans believe that data collection by companies and the government has become an inescapable necessary evil of modern life. Additionally, 81% believe that they have very little control over what data companies collect, and about 81% of Americans believe that the risks of data collection outweigh the benefits. Similarly, in Japan, data privacy is thought to be an adopted concept emerging from international pressure to regulate, rather than an ascribed right: since collectivism and collective decision-making are more valued in Japan, privacy is positioned as a subjective, time-serving idea imported from the West.
Regardless, inequality in privacy preservation often reinforces social inequality. Surveillance practices geared towards specific groups show that marginalised communities are likely to have less data privacy. For example, migrants, labourers, persons with a conviction history, and marginalised racial groups are often subjected to extremely invasive surveillance on suspicion of posing threats, and are thus forced to flee their place of birth or residence. This also highlights that the focus of DPD is not limited to those who lack data privacy but extends to those who have (either by design or by force) excess privacy. While at one extreme, excessive surveillance carried out by both governments and private entities forces immigrants to wait in deportation centres during the pendency of their cases, the other extreme hosts a vast number of undocumented individuals who avoid government contact for fear of deportation, despite experiencing high rates of crime victimisation.
DPD is also noted among groups with differing knowledge and skills in cyber security. For example, in India, data privacy laws mandate that information be provided on the order of a court or an enforcement agency. However, individuals with knowledge of advanced encryption are adopting communication channels with encryption protocols that the provider cannot control (and are thus able to exercise their right to privacy more effectively), in contrast with individuals who have little knowledge of encryption, implying a security as well as an intellectual divide. While several options for secure communication exist, like Pretty Good Privacy (PGP), which enables encrypted emailing, they are complex and not easy to use, and some, like the Tor Browser, carry negative reputations. Cost is also a major factor propelling DPD, since users who cannot afford devices like Apple's, which offer privacy by default, are forced to opt for devices with relatively poor built-in encryption.
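The core idea behind tools like PGP, that a message stays unreadable to the platform carrying it because only the endpoints hold the key, can be shown with a toy example. This is a one-time pad via XOR, deliberately simplistic and emphatically not production cryptography; real tools use vetted public-key schemes, which is part of why they are harder to use:

```python
# Toy illustration only (one-time pad via XOR), NOT production cryptography.
# It demonstrates the end-to-end encryption principle: the ciphertext the
# platform sees is opaque, and only a holder of the key recovers the text.
import secrets


def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))


message = b"meet at noon"
key = secrets.token_bytes(len(message))   # shared only between the endpoints

ciphertext = xor_bytes(message, key)      # this is all the platform sees
recovered = xor_bytes(ciphertext, key)    # the recipient, holding the key
assert recovered == message
```

The usability gap the paragraph describes is visible even here: key generation, key exchange, and key length all fall on the user, which is exactly the burden that well-designed tools try (with mixed success) to hide.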
Children remain the most vulnerable group. During the pandemic, it was noted that only 24% of Indian households had internet facilities to access e-education, and several reported needing to access free internet outside of their homes. These public networks are known for their lack of security and privacy, as traffic can be monitored by the hotspot operator or others on the network if proper encryption measures are not in place. Elsewhere, students without access to devices for remote learning have limited alternatives and are often forced to rely on Chromebooks and associated Google services. In response to this issue, Google provided free Chromebooks and mobile hotspots to students in need during the pandemic, aiming to address the digital divide. However, in 2024, New Mexico was reported to be suing Google for allegedly collecting children’s data through its educational products provided to the state's schools, claiming that it tracks students' activities on their personal devices outside the classroom. The case signifies the difficulty of ensuring the privacy of lower-income students while they access basic education.
Policy Recommendations
Digital literacy is a critical component in bridging the DPD. It enables individuals to gain skills that can effectively address privacy violations. Studies show that low-income users remain less confident in their ability to manage their privacy settings than high-income individuals. Thus, emphasis should be placed not only on educating people about technology usage but also on privacy practices, helping them improve their Internet skills and take informed control of their digital identities.
In the U.S., scholars have noted the role of libraries and librarians in safeguarding intellectual privacy. The Library Freedom Project, for example, has sought to ensure that the skills and knowledge required to secure internet freedoms are available to all. The Project channelled the core values of the library profession: intellectual freedom, literacy, equity of access to recorded knowledge and information, privacy, and democracy. As a result, the Project successfully conducted workshops on internet privacy for the public and openly objected to the Department of Homeland Security’s attempts to shut down the use of encryption technologies in libraries. The International Federation of Library Associations adopted a Statement on Privacy in the Library Environment in 2015, which specified that “when libraries and information services provide access to resources, services or technologies that may compromise users’ privacy, libraries should encourage users to be aware of the implications and provide guidance in data protection and privacy.” This should serve as an indicative case study for setting up similar protocols in inclusive public institutions in India, such as Anganwadis, local libraries, skill development centres, and non-government/non-profit organisations where free education is disseminated. The workshops conducted must inculcate two critical aspects: firstly, enhancing the know-how of using public digital infrastructure and popular technologies (thereby de-alienating technology), and secondly, shifting the viewpoint of privacy to a right an individual holds rather than something they own.
However, digital literacy should not be wholly relied upon, since it shifts the responsibility of privacy protection onto individuals, who may either be unaware of the risks or unable to control them. Digital literacy also does not address the larger issues of data brokers, consumer profiling, surveillance, and so on. Consequently, companies should be obligated to provide simplified privacy summaries, in addition to creating accessible, easy-to-use technical products and privacy tools. Most notable legislations address this problem by mandating notice and consent for collecting users' personal data, despite slow enforcement. The Digital Personal Data Protection Act, 2023 in India aims to address the DPD by not only mandating valid consent but also ensuring that privacy policies remain accessible in local languages, given the diversity of the population.
References
- https://idronline.org/article/inequality/indias-digital-divide-from-bad-to-worse/
- https://arxiv.org/pdf/2110.02669
- https://arxiv.org/pdf/2201.07936#:~:text=The%20DPD%20index%20is%20a,(33%20years%20and%20over).
- https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
- https://eprints.lse.ac.uk/67203/1/Internet%20freedom%20for%20all%20Public%20libraries%20have%20to%20get%20serious%20about%20tackling%20the%20digital%20privacy%20divi.pdf
- https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=6265&context=law_lawreview
- https://bosniaca.nub.ba/index.php/bosniaca/article/view/488/pdf
- https://www.hindustantimes.com/education/just-24-of-indian-households-have-internet-facility-to-access-e-education-unicef/story-a1g7DqjP6lJRSh6D6yLJjL.html
- https://www.forbes.com/councils/forbestechcouncil/2021/05/05/the-pandemic-has-unmasked-the-digital-privacy-divide/
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.isc.meiji.ac.jp/~ethicj/Privacy%20protection%20in%20Japan.pdf
- https://socialchangenyu.com/review/the-surveillance-gap-the-harms-of-extreme-privacy-and-data-marginalization/