Meta slapped by EU fine over data privacy breach
Introduction
The European Union has fined Meta $1.3 billion for infringing EU privacy laws by transferring the personal data of Facebook users to the United States. The fine was imposed on Meta’s business in Ireland. According to the European Union, transferring this personal data to the US breaches the General Data Protection Regulation (GDPR), the EU’s law on data protection and privacy.
GDPR Compliance
The GDPR requires that users’ personal information be gathered legally and under strict conditions, and that those who collect and manage personal data protect it from exploitation. The GDPR restricts an organisation’s capacity to transfer personal data outside the EU where the transfer rests solely on that organisation’s own evaluation of the sufficiency of the protection. Transfers should only be made where European authorities have determined that a third country, a territory within that third country, or an international organisation provides an adequate level of data protection.
Violation by Meta
The penalty, announced by Ireland’s Data Protection Commission, may be one of the most significant in the five years since the European Union passed the landmark General Data Protection Regulation. According to regulators, Facebook failed to comply with a 2020 judgment by the European Union’s top court, which found that Facebook data transferred across the Atlantic was not sufficiently safeguarded from American espionage agencies. However, whether Meta will ever need to encrypt Facebook users’ data in Europe remains to be determined. Meta announced it would appeal the ruling, launching a potentially lengthy legal process.
Simultaneously, European Union and American officials are negotiating a new data-sharing pact that would provide legal protections for Meta and scores of other companies to continue moving information between the US and Europe. Such a pact could undo much of the effect of Monday’s ruling.
According to Ireland’s privacy regulator, Meta has violated Article 46(1) of the GDPR.
What is required by the GDPR before transferring personal information across national boundaries?

Personal data transfers to countries outside the European Economic Area are generally permitted where these nations are regarded as providing a sufficient degree of data protection. Under Article 45 of the GDPR, the European Commission evaluates the level of personal data protection in third countries.
The European Union judgment demonstrates how government rules are upending the borderless way data has traditionally moved. As a result of data-protection requirements, national security laws, and other regulations, companies are increasingly being pressed to store data within the country where it is acquired, rather than letting it flow freely to data centres around the world.
The US internet giant had previously warned that if forced to stop using SCCs (standard contractual clauses) without a proper alternative data transfer agreement in place, it would be compelled to shut down services such as Facebook and Instagram in Europe.
What will happen next for Facebook in Europe?
The ruling includes a six-month transition period before Meta must halt data flows, meaning the service will continue to operate in the meantime. (More specifically, Meta has been given a five-month transition period to suspend any future transfer of personal data to the United States and a six-month deadline to terminate the unlawful processing and/or storage of European user data it has previously transferred without a legitimate legal basis.) Meta has also stated that it will appeal and appears to be seeking a stay of execution while it pursues its legal arguments in court.
Conclusion
The GDPR places restrictions on transferring personal data outside the European Union to third countries or international bodies to ensure that the GDPR’s level of protection for individuals is not jeopardised. Meta, however, violated the European Union’s privacy laws by transferring users’ personal information to the US. Under the GDPR, intentionally transferring users’ personal information without a valid legal basis is an offence, and the personal data of Facebook users was breached by Meta when it shared that information with the US.

Introduction
Union Minister of Information and Broadcasting Ashwini Vaishnaw addressed the Press Council of India on the occasion of National Press Day, speaking on emergent concerns in the digital media and technology landscape. He identified four major challenges facing news media in India: fake news, algorithmic bias, artificial intelligence, and fair compensation. He emphasized the need for greater accountability and fairness from Big Tech, arguing that platforms do not verify information posted online, which leads to the spread of false and misleading information, and he called on online platforms and Big Tech to combat misinformation and protect democracy.
Key Concerns Highlighted by Union Minister Ashwini Vaishnaw
- Misinformation: Due to India's unique sensitivities, digital platforms should adopt country-specific responsibilities and metrics. The Minister also questioned the safe harbour principle, which shields platforms from liability for user-generated content.
- Algorithmic Biases: The prioritisation of viral content, which is often divisive, by social media algorithms can have serious implications on societal peace.
- Impact of AI on Intellectual Property: The training of AI on pre-existing datasets presents the ethical challenge of robbing original creators of their rights to their intellectual property.
- Fair compensation: Traditional news media is increasingly facing financial strain since news consumption is shifting rapidly to social media platforms, creating uneven compensation dynamics.
CyberPeace Insights
- Misinformation: Marked by routine upheavals and moral panics, Indian society is vulnerable to the severe impacts of fake news, including mob violence, political propaganda, health misinformation and more. Inspired by the EU's Digital Services Act, 2022, and other related legislation that addresses hate speech and misinformation, the Indian Minister has called for revisiting the safe harbour protection under Section 79 of the IT Act, 2000. However, any legislation on misinformation must strike a balance between protecting the fundamental rights to freedom of speech and privacy, and safeguarding citizens from its harmful effects.
- Algorithmic Biases: Social media algorithms are designed to boost user engagement, since this increases advertisement revenue. This leads to the creation of filter bubbles (exposure to personalized information online) and echo chambers (interaction with other users whose opinions align with one's own worldview). These phenomena induce radicalization of views, increase intolerance, fuel polarization in public discourse, and trigger the spread of more misinformation. Tackling this requires algorithmic design changes such as disincentivizing sensationalism, content labelling, and funding fact-checking networks to improve transparency.
- Impact of AI on Intellectual Property: AI models are trained on data that may contain copyrighted material. This can lead to a loss of revenue for primary content creators, while tech companies owning AI models may benefit disproportionately by re-rendering their original works. Large-scale uptake of AI models will significantly impact fields such as advertising, journalism, and entertainment by disrupting their markets. Managing this requires a push for ethical AI regulations and the protection of original content creators.
Conclusion: Charting a Balanced Path
The socio-cultural and economic fabric of the Indian subcontinent is not only distinct from the rest of the world but has cross-cutting internal diversities, too. Its digital landscape stands at a crossroads as rapid global technological advancements present increasing opportunities and challenges. In light of growing incidents of misinformation on social media platforms, it is also crucial that regulators consider framing rules that encourage and mandate content verification mechanisms for online platforms, incentivizing them to adopt advanced AI-driven fact-checking tools and other relevant measures. Additionally, establishing public-private partnerships to monitor misinformation trends is crucial to rapidly debunking viral falsehoods. However, ethical concerns and user privacy should be taken into consideration while taking such steps. Addressing misinformation requires a collaborative approach that balances platform accountability, technological innovation, and the protection of democratic values.
Sources
- https://www.indiatoday.in/india/story/news-media-4-challenges-ashwini-vaishnaw-national-press-day-speech-big-tech-fake-news-algorithm-ai-2634737-2024-11-17
- https://ec.europa.eu/commission/presscorner/detail/en/ip_24_881
- https://www.legaldive.com/news/digital-services-act-dsa-eu-misinformation-law-propaganda-compliance-facebook-gdpr/691657/
- https://www.fondationdescartes.org/en/2020/07/filter-bubbles-and-echo-chambers/

Executive Summary
The IT giant Apple has alerted customers to the impending threat of "mercenary spyware" assaults in 92 countries, including India. These highly skilled attacks, which are frequently linked to both private and state actors (such as the NSO Group’s Pegasus spyware), target specific individuals, including politicians, journalists, activists and diplomats. In sharp contrast to consumer-grade malware, these attacks are in a league unto themselves: highly-customized to fit the individual target and involving significant resources to create and use.
As the incidence of such attacks rises, it is important that all individuals, businesses, and officials equip themselves with information about how such mercenary spyware programs work, which methods are most commonly used, how these attacks can be prevented, and what one must do if targeted. Individuals and organizations can begin protecting themselves against these attacks by enabling "Lockdown Mode" to provide an extra layer of security to their devices, by frequently changing passwords, and by not opening suspicious URLs or attachments.
Introduction: Understanding Mercenary Spyware
Mercenary spyware is a special kind of spyware that is developed exclusively for law enforcement and government organizations. This kind of spyware is not available in app stores; it is developed to attack particular individuals and requires a significant investment of resources and advanced technology. Mercenary spyware hackers infiltrate systems through techniques such as phishing (sending malicious links or attachments), pretexting (manipulating individuals into sharing personal information) or baiting (using tempting offers). They often mount Advanced Persistent Threats (APTs), in which the hackers remain undetected for a prolonged period of time, stealing data through continuous, stealthy infiltration of the target’s network. Another method of gaining access is through zero-day vulnerabilities: exploiting flaws existing in software before they have been patched. A well-known example of mercenary spyware is the infamous Pegasus by the NSO Group.
Actions Taken by Apple against Mercenary Spyware
Apple has introduced an advanced, optional protection feature in its newer product versions (including iOS 16, iPadOS 16, and macOS Ventura) to combat mercenary spyware attacks. The feature is aimed at users who are at risk of targeted cyber attacks.
Apple released a statement on the matter, sharing, “mercenary spyware attackers apply exceptional resources to target a very small number of specific individuals and their devices. Mercenary spyware attacks cost millions of dollars and often have a short shelf life, making them much harder to detect and prevent.”
When Apple's internal threat intelligence and investigations detect these highly-targeted attacks, they take immediate action to notify the affected users. The notification process involves:
- Displaying a "Threat Notification" at the top of the user's Apple ID page after they sign in.

- Sending an email and iMessage alert to the addresses and phone numbers associated with the user's Apple ID.
- Providing clear instructions on steps the user should take to protect their devices, including enabling "Lockdown Mode" for the strongest available security.
Apple stresses that these threat notifications are "high-confidence alerts", meaning there is strong evidence that the user has been deliberately targeted by mercenary spyware. As such, these alerts should be taken extremely seriously by recipients.
Modus Operandi of Mercenary Spyware
- Installing advanced surveillance equipment remotely and covertly.
- Using zero-click or one-click attacks to take advantage of device vulnerabilities.
- Gaining access to a variety of data on the device, including location tracking, call logs, text messages, passwords, microphone, camera, and app information.
- Installation by utilizing many system vulnerabilities on devices running particular iOS and Android versions.
- Defending against these attacks requires patching vulnerabilities with security updates (e.g., CVE-2023-41991, CVE-2023-41992, CVE-2023-41993).
- Utilizing defensive DNS services, non-signature-based endpoint technologies, and frequent device reboots as mitigation techniques.
Prevention Measures: Safeguarding Your Devices
- Turn on security measures: Make use of the security features that the device maker has supplied, such as Apple's Lockdown Mode, which is designed to drastically reduce the attack surface that mercenary spyware can exploit on Apple products such as iPhones.
- Frequent software upgrades: Make sure the newest security and software updates are installed on your devices. This aids in patching holes that mercenary malware could exploit.
- Steer clear of suspicious links: Exercise caution when opening attachments or accessing links from unidentified sources. Mercenary spyware can be installed via phishing links or attachments.
- Limit app permissions: Reassess and restrict app permissions to avoid unwanted access to private information.
- Use secure networks: To reduce the chance of data interception, connect to secure Wi-Fi networks and stay away from public or unprotected connections.
- Install security applications: To identify and stop any spyware attacks, think about installing reliable security programs from reliable sources.
- Be alert: If Apple or other device makers send you a threat notice, consider it carefully and take the advised security precautions.
- Two-factor authentication: To provide an extra degree of protection against unwanted access, enable two-factor authentication (2FA) on your Apple ID and other significant accounts.
- Consider additional security measures: For high-risk individuals, consider using additional security measures, such as encrypted communication apps and secure file storage services.
Way Forward: Strengthening Digital Defenses, Strengthening Democracy
People, businesses and administrations must prioritize cyber security measures and keep up with emerging dangers as mercenary spyware attacks continue to develop and spread. To effectively address the growing threat of digital espionage, cooperation between government agencies, cybersecurity specialists, and technology businesses is essential.
In the Indian context, the update carries significant policy implications and must inspire a discussion on legal frameworks for government surveillance practices and cyber security protocols in the nation. As the public becomes more informed about such sophisticated cyber threats, we can expect a greater push for oversight mechanisms and regulatory protocols. The misuse of surveillance technology poses a significant threat to individuals and institutions alike. Policy reforms concerning surveillance tech must be tailored to address the specific concerns of the use of such methods by state actors vs. private players.
There is a pressing need for electoral reforms that help safeguard democratic processes in the current digital age. There has been a paradigm shift in how political activities are conducted: the advent of the digital domain has seen parties and leaders pivot their campaigning efforts to court the online audience as enthusiastically as they campaign offline. Given that this is an election year, quite possibly the most significant one in modern Indian history, digital outreach and online public engagement are expected to be at an all-time high. It is therefore imperative to protect the electoral process against cyber threats, so that public trust in the legitimacy of India’s democratic processes is rewarded and the digital domain remains an asset, and not a threat, to good governance.

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can quickly transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gains. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation on different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation is sent by a small percentage of a given user base. The researchers collected 10 months of data from Twitter (now X), 2,397,388 tweets flagged as containing low-credibility information sent by 448,103 users, along with details on who was sending them. They found that approximately a third of the low-credibility tweets had been posted by just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] The study shows that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an impact the researchers attribute to superspreaders.
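The kind of concentration the study reports can be illustrated with a minimal sketch. The data below is hypothetical toy data, not the study's dataset: given one account ID per flagged tweet, it computes what share of low-credibility tweets comes from the most active accounts.

```python
from collections import Counter

def top_account_share(flagged_accounts, top_n):
    """Fraction of flagged tweets posted by the top_n most active
    accounts. flagged_accounts holds one account ID per flagged tweet."""
    counts = Counter(flagged_accounts)
    total = len(flagged_accounts)
    top_total = sum(n for _, n in counts.most_common(top_n))
    return top_total / total if total else 0.0

# Hypothetical toy data: account "a" posts 6 of 10 flagged tweets.
tweets = ["a"] * 6 + ["b", "c", "d", "e"]
print(top_account_share(tweets, 1))  # 0.6
```

Applied to a real dataset, the same calculation would reproduce figures like "10 accounts posted roughly a third of all flagged tweets".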
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, and this even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deepfakes, and other AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important when any grey areas or gaps in information can be manipulated so quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (Now X). Some prominent accounts or popular pages on platforms like Facebook and Twitter(now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate Inc (US), the "Disinformation Dozen", a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that is not shared by its original source but propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify such messages on their pages, which is where the content takes off. Such users need not be the ones creating or deliberately popularising the misinformation, but their broad reach exposes far more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to lend credibility to such schemes and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use trending topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. Incidents have been reported in which bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combating disinformation and increasing digital literacy among social media users.
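Two behavioural signals described above, abnormally high posting rates and heavy repetition of the same content, are among the simplest bot giveaways. As a purely illustrative sketch (the weights, cap, and function name are assumptions for this example, not any platform's actual detection logic), a naive bot-likeness score might combine them like this:

```python
def bot_likeness(posts_per_day, duplicate_ratio, rate_cap=50.0):
    """Toy score in [0, 1]: a very high posting rate and a high share of
    duplicated content both push the score up. Real bot detection relies
    on far richer signals (network structure, timing patterns, etc.)."""
    rate_score = min(posts_per_day / rate_cap, 1.0)  # saturate at the cap
    return 0.5 * rate_score + 0.5 * duplicate_ratio

# A high-volume, highly repetitive account scores close to 1 (~0.95 here).
print(bot_likeness(posts_per_day=200, duplicate_ratio=0.9))
```

In practice, such a score would only be one feature among many feeding a classifier, not a standalone verdict.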
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting prebunking and debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check claims to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources, and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, detecting malevolent bots that spread misleading information, and implementing prebunking and debunking strategies. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
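The algorithm-improvement recommendation above, recognising repetitive posting of low-credibility material, can be sketched minimally. The domain list, threshold, and helper name here are all illustrative assumptions, not any platform's actual mechanism; a real system would use curated credibility ratings and many more signals.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical domain list: a real platform would use curated
# credibility ratings (e.g. from independent fact-checking bodies).
LOW_CREDIBILITY_DOMAINS = {"fakenews.example", "rumormill.example"}

def flag_repeat_offenders(posts, threshold=3):
    """posts: iterable of (account_id, url) pairs. Returns the set of
    accounts that linked to low-credibility domains at least
    `threshold` times."""
    hits = defaultdict(int)
    for account, url in posts:
        if urlparse(url).netloc.lower() in LOW_CREDIBILITY_DOMAINS:
            hits[account] += 1
    return {acct for acct, n in hits.items() if n >= threshold}

posts = [
    ("u1", "https://fakenews.example/a"),
    ("u1", "https://rumormill.example/b"),
    ("u1", "https://fakenews.example/c"),
    ("u2", "https://news.example/ok"),
]
print(flag_repeat_offenders(posts))  # {'u1'}
```

Flagged accounts would then go to human review rather than automatic suspension, keeping the process accountable.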
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html