WhatsApp is one of the leading OTT messaging platforms and has been owned by the tech giant Meta since 2014. WhatsApp enjoys a user base of nearly 2.24 billion people globally, with almost 487 million users in India. Since its advent, WhatsApp has been the most commonly used messaging app, and it has made an impact to the extent that it is now used for professional as well as personal purposes. The platform is powered by Meta and follows guidelines and policies similar to those of its parent company.
The New Feature
Users of WhatsApp on the web and desktop can now access one account from multiple devices. Thanks to a new update from Meta, one WhatsApp account may now be used on up to four additional devices. Note that this multi-device capability has been planned for some time and is finally being rolled out to users of the stable version of WhatsApp. Each linked device (up to four can be linked) functions independently, and linked devices will continue to receive messages even if the primary device's network connection is lost. Remember that WhatsApp will automatically log out of all companion devices if the primary smartphone remains inactive for an extended period. The four additional devices may be any mix of PCs and smartphones. The feature is now available for download and update on both Android and iOS.
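The linking and auto-logout behaviour described above can be modelled with a minimal sketch. This is purely an illustrative model, not WhatsApp's actual implementation: the 14-day inactivity threshold is an assumption (the announcement only says "an extended period"), and all names here are hypothetical.

```python
from datetime import datetime, timedelta

MAX_LINKED_DEVICES = 4                 # cap stated in the update
INACTIVITY_LIMIT = timedelta(days=14)  # assumed threshold for illustration

class Account:
    """Toy model of one WhatsApp account with companion devices."""

    def __init__(self) -> None:
        self.linked_devices: list[str] = []
        self.primary_last_seen = datetime.now()

    def link_device(self, device_id: str) -> bool:
        """Link a companion device, enforcing the four-device cap."""
        if len(self.linked_devices) >= MAX_LINKED_DEVICES:
            return False
        self.linked_devices.append(device_id)
        return True

    def check_inactivity(self, now: datetime) -> None:
        """Log out every companion if the primary phone has been dormant too long."""
        if now - self.primary_last_seen > INACTIVITY_LIMIT:
            self.linked_devices.clear()

# The fifth link attempt fails; companions drop after prolonged inactivity.
acc = Account()
print([acc.link_device(d) for d in ["laptop", "tablet", "desktop", "phone-2", "phone-3"]])
acc.check_inactivity(datetime.now() + timedelta(days=15))
print(acc.linked_devices)  # []
```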
Potential Issues
As we go deeper into the digital age, it is the responsibility of tech giants to pilot innovation with security by design. New features should therefore be accompanied by coherent safety and security policies or advisories so that users understand their implications. Choosing convenience over conditions is a common tendency in cyberspace; it remains the civic duty of netizens to go through the conditions of any app rather than focus only on the convenience it creates. The following potential issues may arise from the new feature on WhatsApp:
Increased cybercrime: Bad actors no longer need access to multiple SIM cards to commit fraud on the platform, since four devices can now run on a single number; cybercriminal activity on the platform may therefore increase. It is also pertinent for the platform to create SOPs for fake accounts that use multiple devices, as they pose a direct threat to users and their interests.
Difficulty in identifying and tracing: Law enforcement agencies (LEAs) will face a significant challenge in identifying and tracing bad actors, as an individual's involvement through a linked device must be given legal validity and brought within the scope of investigation. This may also cause issues in evidence handling and analysis.
Surge in Misinformation and Disinformation: With access to multiple devices, an individual's screen time is bound to increase. More time spent online means greater exposure to the misinformation and disinformation spread by bad actors, which makes fact-checking of prime importance.
Potential Oversharing of Personal Data: With increased accessibility across devices, it is easy for the app to collect data from every device on which it runs, creating a larger reservoir of personal data for platforms and data fiduciaries.
Higher Risk of Phishing, Ransomware and Malware Attacks: As more devices operate under the same login credentials and mobile number, a message can be viewed on all of them, so embedded ransomware or malware can spread across multiple devices at once, making such attacks an ever-present threat.
One number, more criminals: Earlier, bad actors had to forge Aadhaar cards to obtain new SIMs; this feature will enable them to commit crimes and attacks from a single SIM across four different devices.
Rise in Digital Footprint: As the number of devices increases, users will generate a larger digital footprint. As a tech giant, Meta will have access to a bigger database, which increases the risk of data breaches by third-party actors.
Conclusion
In the fast-paced digital world, it is important to stay updated on new software, technologies and policies for our applications and other forms of tech. This was a long-awaited feature from WhatsApp, and its value lies not only in the technological advancement but also in the formulation of policies that govern this technology in the interest of user trust and safety. Platforms, in synergy with policymakers, need to create a robust framework to accommodate new features and add-ons on apps while staying in compliance with the laws of the land. Awareness of new features and vulnerabilities is a must for all netizens, and spreading the word about safety and security mechanisms is a responsibility shared by all.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract it has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to distinguish between true and false information, acting as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, and fact-checking reports by expert organisations or journalists. An integrated approach involving both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse and avoid manipulative and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the modus operandi is to help the mind develop resistance in the present to influence it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics such as emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase the motivation to be vigilant against misinformation while others increase the ability to exercise that vigilance successfully.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It may be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time: continuous reinforcement and reminders may be required so that individuals retain the skills and information gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions remain flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means that it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive in nature, addressing misinformation after it has already spread extensively. From the perspective of total harm done, this reactionary method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunking may require continuous exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking effort may not be enough to rectify misinformation. Debunking takes time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may allow some misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread quickly and go viral faster than Debunking pieces or articles, creating a situation in which misinformation spreads like a virus while the antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy in which both Prebunking and Debunking initiatives are deployed and supported across platforms, empowering users to recognise manipulative messaging through Prebunking and to verify the accuracy of questionable claims through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can be effective in immunising the receiver against subsequent exposure and can empower people to build the competencies needed to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise incorporate algorithms that prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver timely corrections. Together, such mechanisms can help both Prebunking and Debunking methods reach a larger or better-targeted audience; a minimal sketch of this kind of ranking boost follows this list.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint efforts.
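As referenced above, the algorithmic prioritisation of Prebunking and Debunking content can be illustrated with a minimal re-ranking sketch. This is a hypothetical illustration, not any platform's actual ranking system: the boost factors, the `label` field and the scoring scheme are all assumptions made for demonstration.

```python
from dataclasses import dataclass

# Hypothetical boost factors; a real platform would tune these empirically.
PREBUNK_BOOST = 1.5  # educational/inoculation content
DEBUNK_BOOST = 1.3   # fact-check/correction content

@dataclass
class Post:
    post_id: str
    base_score: float  # assumed engagement-based relevance score
    label: str         # "prebunk", "debunk", or "regular" (assumed labelling)

def ranked_feed(posts: list[Post]) -> list[Post]:
    """Re-rank a feed so Prebunking and Debunking content surfaces earlier."""
    def boosted(post: Post) -> float:
        if post.label == "prebunk":
            return post.base_score * PREBUNK_BOOST
        if post.label == "debunk":
            return post.base_score * DEBUNK_BOOST
        return post.base_score
    return sorted(posts, key=boosted, reverse=True)

# A debunking article outranks a higher-engagement regular post.
feed = ranked_feed([
    Post("a", 1.0, "regular"),
    Post("b", 0.9, "debunk"),
    Post("c", 0.7, "prebunk"),
])
print([p.post_id for p in feed])  # ['b', 'c', 'a']
```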
Conclusion
The threat of online misinformation is growing with every passing day, and so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to build resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can ultimately help fight the rising tide of online misinformation and establish a resilient online information landscape.
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question we arrive at is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption has been happening at a rapid pace and is projected to contribute $15.7 trillion to the global economy by 2030, with the AI market size expected to grow by at least 120% year-over-year. These figures are often cited alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the General Data Protection Regulation (GDPR) has been a success, with its global influence on data privacy laws starting a domino effect for the creation of privacy regulations all over the world. This precedent underscores the EU's proactive, population-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details the AI Act's rules for providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drawing up of the code; the process was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation grounded in product safety and relies on setting harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies, such as the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society and trade unions are translating the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act places obligations on the developers, deployers and users of AI, with mandates for transparency, risk management and compliance mechanisms.
The Code of Practice for General-Purpose AI
The most popular applications of GPAI include ChatGPT and other foundational models such as Microsoft's Copilot, Google's BERT and Meta AI's Llama, all of which are under constant development and upgradation. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to enable greater transparency about what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU's Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burden on MSMEs and startups could hinder innovation, and the voluntary nature of the Code raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU's initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier: as the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU's Draft AI Rules are not just about regulation; they are about leading a global conversation.
In the age of digital technology, the concept of net neutrality has become crucial for preserving the equity and openness of the internet. Net neutrality requires that all internet traffic be treated equally, without discrimination or preferential treatment. This principle lets users freely access and distribute content, which promotes innovation, competition, and the democratisation of knowledge. India has seen controversy over net neutrality, leading to a legal battle to protect an open internet. In this blog post, we'll look at the legal challenges and the efforts made to safeguard net neutrality in India.
Background on Net Neutrality in India
Net neutrality became a hot topic in India after a major telecom service provider proposed charging differential fees for accessing different parts of the internet. Internet users, activists, and organisations in favour of an open internet raised concerns over this. The consultation paper published by the Telecom Regulatory Authority of India (TRAI) in 2015 drew millions of comments, highlighting the significance of net neutrality for the country's internet users.
Legal Battle and Regulatory Interventions
The battle for net neutrality in India gained prominence when TRAI released the Prohibition of Discriminatory Tariffs for Data Services Regulations in 2016. These regulations, often known as the "Free Basics" prohibition, were created to put an end to zero-rating platforms, which exempt specific websites or services from data charges. The regulations ensured that all data on the internet would be treated uniformly, regardless of where it originated.
But the legal conflict did not end there. The telecom industry challenged TRAI's regulations, resulting in a flurry of legal battles in courts around the country. At the heart of the dispute were the Telecom Regulatory Authority of India Act and the provisions within it that govern TRAI's ability to regulate internet services.
The Indian judicial system played a significant role in protecting net neutrality. In 2018, the Telecom Disputes Settlement and Appellate Tribunal (TDSAT) upheld the TRAI regulations and ruled in favour of net neutrality, highlighting the importance of non-discriminatory internet access. The TDSAT ruling created a crucial precedent for net neutrality in India. In 2019, after several rounds of litigation, the Supreme Court of India backed the principles of net neutrality, declaring it a fundamental idea that must be protected. The top court's ruling bolstered the nation's legislative framework for preserving a free and open internet.
Ongoing Challenges and the Way Forward
Even though India has made great strides in upholding net neutrality, challenges persist. Given the rapid advancement of technology and the emergence of new services and platforms, net neutrality must be continuously safeguarded. Practices such as "zero-rating" schemes and service-specific data plans continue to raise questions about potential violations of net neutrality principles. To allay these concerns, regulatory efforts must be proactive and vigilant. The regulator, TRAI, is responsible for monitoring and responding to breaches of net neutrality principles. It is crucial to strike a balance between promoting innovation and competition and maintaining a free and open internet.
Additionally, public awareness and education on the issue are crucial for the continuation of net neutrality. Informing users of their rights and encouraging involvement in the conversation ensures a more inclusive and democratic decision-making process. Civil society organisations and advocacy groups can play a key role in educating the public about net neutrality and gaining their support.
Conclusion
The legal battle for net neutrality in India has been a significant milestone in the campaign to preserve an open and neutral internet. Legislative initiatives and judicial decisions have established a robust framework for net neutrality in the country. However, given ongoing challenges and the dynamic nature of technology, maintaining net neutrality calls for vigilant oversight and strong action. An open and impartial internet is crucial for fostering innovation, advancing free speech, and providing equal access to information. India's efforts to uphold net neutrality should motivate other nations dealing with similar issues. All stakeholders, including policymakers, must work together to protect the principles of net neutrality and ensure that the internet remains accessible to everyone.