DEA to Social Media Companies
Introduction
A bill requiring social media companies, providers of encrypted communications, and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) has advanced to the Senate floor, alarming privacy advocates who claim the legislation transforms businesses into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.
Why is there a requirement for online companies to report drug activity?
The bill was prompted by the death of a Kansas teenager who died after unknowingly taking a fentanyl-laced pill he had purchased on Snapchat. The bill requires social media companies and other web communication providers to provide the DEA with users’ names and other information when the companies have “actual knowledge” that illicit drugs are being distributed on their platforms.
There is an urgent need to look into this matter, as platforms like Snapchat and Instagram are in constant use by netizens. If such apps tolerate the sale of drugs, they risk becoming major drug-selling vehicles.
Threat to End-to-End Encryption
End-to-end encryption has long been criticised by law enforcement for creating a “lawless space” that criminals, terrorists, and other bad actors can exploit for their illicit purposes. While end-to-end encryption is important for privacy, critics argue that it also shields criminals whose activities result in cyber fraud and other cybercrimes.
Cases of drug peddling on social media platforms
Getting drugs on social media has become as easy as calling an Uber. A survey discovered that access to illegal drugs is “staggering” on social media applications, which has contributed to the rising number of fentanyl overdoses, alongside deaths from suicide, gun violence, and accidents.
According to another survey, drug dealers use slang, emoticons, QR codes, and disappearing messages to reach customers while avoiding content monitoring measures on social networking platforms. Drug dealers are frequently active on numerous social media platforms at once, advertising their products on Instagram while providing their WhatsApp or Snapchat names for queries, making it difficult for law enforcement officials to crack down on the transactions.
There is therefore a need for social media platforms to report such drug-selling activity to the Drug Enforcement Administration. The bill requires online companies to report drug cases occurring on their platforms, such as the Snapchat case mentioned above. There are many other cases where drug dealers sell drugs through Instagram, Snapchat, and similar services. Typically, if Instagram blocks one account, the dealer simply creates another; blocking accounts alone does not stop drug trafficking on social media platforms.
Will this put the privacy of users at risk?
It is important to report drug-selling activity on social media platforms as the cybercrime it is. Under the bill, companies would detect and report only drug-related activity, which allows them to identify bad actors and cybercriminals without monitoring all users. Such targeted detection matters because social media platforms currently lack regulations to govern them, and their convenience has made them a major vehicle for drug sales.
Conclusion
Social media companies are required to report these kinds of activities happening on their platforms immediately to the Drug Enforcement Administration, so that the DEA can take the required steps instead of the platform merely blocking the account; blocking alone does not stop these drug markets from operating online. Proper reporting mechanisms are essential, and given how strongly social media platforms influence people, there is a clear need for social media regulation.

Introduction
Cybersecurity threats have been globally prevalent for quite some time now. All nations, organisations and individuals stand at risk from new and emerging cybersecurity threats, putting finances, privacy, data, identities and sometimes human lives at stake. The latest Data Breach Report by IBM revealed that a staggering 83% of organisations experienced more than one data breach during 2022. As per the 2022 Data Breach Investigations Report by Verizon, the total number of global ransomware attacks surged by 13%, a rise equal to the increase of the last five years combined. The statistics clearly show how the future is filled with potential threats as we advance further into the digital age.
Who is Okta?
Okta is a secure identity cloud that links all your apps, logins and devices into a unified digital fabric. Okta has been in existence since 2009, is based out of San Francisco, USA, and has been one of the leading service providers in the States. The company's early success was built on the high-quality services and products it introduced to the market. Although Okta is not as well known as the big techs, it plays a vital role in large organisations' cybersecurity systems. More than 18,000 customers rely on the identity management company's products to give them a single login for the several platforms that a particular business uses. For instance, Zoom leverages Okta to provide "seamless" access to its Google Workspace, ServiceNow, VMware, and Workday systems with only one login, showing how fundamental Okta is in providing services that ease human effort across platforms. In the digital age, such organisations are instrumental in leading the pathway to innovation and entrepreneurship.
The Okta Breach
On Friday, 20 October, Okta reported a hack of its support system, causing significant disruption within the organisation. The result of the hack can be seen in the market in the form of the massive losses incurred by Okta on the stock exchange.
Since the attack, the company's market value has dropped by more than $2 billion. The incident is the most recent in a long line of events connected to Okta or its products, including a wave of casino intrusions earlier this year that caused days-long disruptions to hotel rooms in Las Vegas: casino giants Caesars and MGM were both affected. Both of those attacks targeted MGM’s and Caesars’ Okta installations and used sophisticated social engineering that went through IT help desks.
What can be done to prevent this?
Cybersecurity attacks on organisations have become a very common occurrence ever since the pandemic and are rampant all across the globe. Major big techs have been successful in setting up SOPs, safeguards and precautionary measures to protect their companies and their digital assets and interests. However, medium, micro and small business owners are the most vulnerable to such high-intensity attacks. The governments of various nations have established Computer Emergency Response Teams to monitor and investigate massive-scale cyberattacks on both organisations and individuals. The issue of cybersecurity can be better addressed by inculcating the following aspects into our daily digital routines:
- Team Upskilling: Organisations need to be critical in creating upskilling avenues for employees pertaining to cybersecurity and threats. These campaigns should be run periodically, focusing on both the individual and organisational impact of any threat.
- Reporting Mechanism for Employees and Customers: Business owners and organisations need to deploy robust, sustainable and efficient reporting mechanisms for employees as well as customers. Such a mechanism is fundamental in pinpointing potential grey areas and threats in the cybersecurity setup. A dedicated reporting mechanism is now mandated by many governments around the world, as it demonstrates transparency and supports natural justice in terms of legal remedies.
- Preventive, Precautionary and Recovery Policies: Organisations need to create and deploy respective preventive, precautionary and recovery policies in regard to different forms of cyber attacks and threats. This will be helpful in a better understanding of threats and faster response in cases of emergencies and attacks. These policies should be updated regularly, keeping in mind the emerging technologies. Efficient deployment of the policies can be done by conducting mock drills and threat assessment activities.
- Global Dialogue Forums: It is pertinent for organisations and the industry to create a community of cyber security enthusiasts from different and diverse backgrounds to address the growing issues of cyberspace; this can be done by conducting and creating global dialogue forums, which will act as the beacon of sharing best practices, advisories, threat assessment reports, potential threats and attacks thus establishing better inter-agency and inter-organisation communication and coordination.
- Data Anonymisation and Encryption: Organisations should have data management/processing policies in place for transparency and should always store data in an encrypted and anonymous manner, thus creating a blanket of safety in case of any data breach.
- Critical infrastructure: Industry leaders should push the limits of innovation by setting up state-of-the-art critical cyber infrastructure to create employment and foster a spirit of innovation and entrepreneurship among the youth, thus creating a whole new generation of cyber-ready professionals and dedicated netizens. Critical infrastructures are essential in creating a safe, secure and resilient digital ecosystem.
- Cysec Audits & Sandboxing: All organisations should establish periodic routines of cybersecurity audits, by both internal and external entities, to find any issues or grey areas in their security systems. This will create a more robust and adaptive cybersecurity mechanism for the organisation and its employees. All technology development and testing companies should also conduct proper sandboxing exercises for any new technology or software they create, to identify its shortcomings and flaws.
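The data anonymisation and encryption recommendation above can be sketched in code. The following is a minimal illustration, not a production scheme: it pseudonymises user identifiers with a keyed hash (HMAC-SHA256) before storage, so that a leaked record is not trivially linkable back to a person. The `PEPPER` value and the `pseudonymise` helper are hypothetical names for this sketch; in practice the secret would live in a key-management system, never alongside the data it protects.

```python
import hashlib
import hmac

# Hypothetical secret "pepper"; in a real deployment this comes from a
# key-management system, never stored next to the data it protects.
PEPPER = b"replace-with-secret-from-kms"

def pseudonymise(user_id: str) -> str:
    """Return a stable pseudonym for user_id using a keyed hash (HMAC-SHA256).

    Records stored under the pseudonym stay linkable to each other, but
    without the pepper they cannot be reversed to the original identifier.
    """
    return hmac.new(PEPPER, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user": pseudonymise("alice@example.com"), "event": "login"}
print(record["user"])  # same input always yields the same pseudonym
```

Note that pseudonymisation of this kind reduces, but does not eliminate, re-identification risk; it is one layer alongside the encryption-at-rest the recommendation calls for.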
Conclusion
In view of the rising cybersecurity attacks on organisations, especially small and medium companies, a lot has been done, and a lot more needs to be done to establish safety and security for companies, employees and customers. The impact of the Okta breach clearly shows how cyberattacks can cause massive repercussions for any organisation in the form of monetary loss, loss of business, damage to reputation and more. We should treat such instances as examples and learnings, and prepare our organisations to combat similar threats, ultimately working towards preventing them and eradicating the influence of bad actors from our digital ecosystem altogether.
References:
- https://hbr.org/2023/05/the-devastating-business-impacts-of-a-cyber-breach#:~:text=In%202022%2C%20the%20global%20average,legal%20fees%2C%20and%20audit%20fees.
- https://www.okta.com/intro-to-okta/#:~:text=Okta%20is%20a%20secure%20identity,use%20to%20work%2C%20instantly%20available.
- https://www.cyberpeace.org/resources/blogs/mgm-resorts-shuts-down-it-systems-after-cyberattack

Introduction
The term ‘super spreader’ is used to refer to social media and digital platform accounts that are able to transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts are able to impact and influence so many is attributed to a number of factors, such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Superspreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation that it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation about different topics. A study[1] by a team of social media analysts at Indiana University reviewed 10 months of data comprising 2,397,388 tweets flagged as having low credibility, sent by 448,103 users on Twitter (now X), along with details on who was sending them. The researchers found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for posting approximately 70% of such tweets.[2] The findings indicate that it does not take many influencers to sway the beliefs and opinions of large audiences, an impact the researchers attribute to these superspreaders.
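The concentration the study reports can be illustrated with a short analysis sketch. This is a hypothetical example with made-up data, not the researchers' code or dataset: it counts flagged posts per account, then computes what share of all flagged posts the most prolific accounts produce.

```python
from collections import Counter

def top_account_share(flagged_posts, top_n):
    """Fraction of flagged posts attributable to the top_n most prolific accounts.

    flagged_posts: a list of account IDs, one entry per flagged
    (low-credibility) post.
    """
    counts = Counter(flagged_posts)
    top_total = sum(count for _, count in counts.most_common(top_n))
    return top_total / len(flagged_posts)

# Toy data: one account produces most of the flagged content.
posts = ["a"] * 70 + ["b"] * 10 + ["c"] * 10 + ["d"] * 5 + ["e"] * 5
print(top_account_share(posts, 1))  # prints 0.7
```

Run over a real dataset of flagged posts, the same calculation would reveal the kind of heavy concentration the study describes, where a tiny fraction of accounts accounts for most of the volume.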
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, since any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media. Some prominent accounts or popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate Inc (US), The "disinformation dozen," a group of 12 prominent anti-vaccine accounts[4], were found to be responsible for a large amount of anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly engage in spreading misinformation by forwarding content that is not always shared by the original source but is often propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users are not always the ones creating or deliberately popularising the misinformation, but they are the ones who expose more people to it because of their broad reach. This was observed during the pandemic, when a handful of people were able to create a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investment advice and directing followers to particular channels. This leads investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their credibility, which can genuinely shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use current topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to acquire traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow variety of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There are reported incidents where bots were found to be sharing content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. Bots, by automating the distribution of misleading information, can make it impossible to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
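One crude but commonly cited heuristic for spotting the automation described above is the regularity of posting times: humans post in irregular bursts, while a naive bot often posts on a near-fixed schedule. The sketch below is an illustration only; the function name and threshold are assumptions for this example, and real bot detection combines many signals.

```python
from statistics import mean, pstdev

def looks_automated(post_timestamps, cv_threshold=0.1):
    """Flag an account whose inter-post intervals are suspiciously regular.

    post_timestamps: posting times in seconds, sorted ascending.
    Returns True when the coefficient of variation (stdev / mean) of the
    gaps between posts falls below cv_threshold.
    """
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    if len(gaps) < 2:
        return False  # not enough history to judge
    avg = mean(gaps)
    if avg == 0:
        return True  # many posts at the exact same instant
    return pstdev(gaps) / avg < cv_threshold

bot_like = [0, 60, 120, 180, 240]     # posts exactly every 60 seconds
human_like = [0, 45, 400, 420, 3600]  # irregular bursts
print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

A sophisticated bot can of course randomise its timing, which is why this kind of signal is only one input to the broader detection efforts discussed below.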
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check in order to debunk misinformation. They can rely on reputable fact-checking experts and entities that regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources, and to verify information before sharing it.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, working on detecting malevolent bots that spread misleading information, and implementing prebunking and debunking strategies. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
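The "repetitive posting of misinformation" pattern that the algorithm recommendation asks platforms to recognise can be sketched as a simple sliding-window counter. This is a hypothetical illustration, not any platform's actual mechanism; the class name and thresholds are assumptions chosen for the example.

```python
from collections import defaultdict, deque

class MisinfoRateFlagger:
    """Flag accounts that post flagged content repeatedly within a time window.

    Sketch only: real platforms combine many signals; this models just the
    'repetitive posting' pattern.
    """
    def __init__(self, max_flagged=3, window_seconds=3600):
        self.max_flagged = max_flagged
        self.window = window_seconds
        self.history = defaultdict(deque)  # account -> timestamps of flagged posts

    def record_flagged_post(self, account, timestamp):
        q = self.history[account]
        q.append(timestamp)
        # Drop events that fell out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.max_flagged  # True => escalate for human review

flagger = MisinfoRateFlagger(max_flagged=3, window_seconds=3600)
print(flagger.record_flagged_post("acct1", 0))     # False
print(flagger.record_flagged_post("acct1", 600))   # False
print(flagger.record_flagged_post("acct1", 1200))  # True: third flag within an hour
```

Returning a flag for human review, rather than auto-blocking, fits the document's emphasis on dedicated reporting and review processes.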
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html

Introduction
AI has revolutionised the way we look at emerging technologies. AI is capable of performing complex tasks in very little time. However, AI's potential for misuse has led to a rise in cybercrime. The rapid expansion of generative AI tools has fuelled growing cyber scams such as deepfakes, voice cloning, cyberattacks targeting critical infrastructure and other organisations, and threats to data protection and privacy. AI can produce highly realistic videos, images, and voices, which cyber attackers misuse to commit cybercrimes.
It is notable how rapidly technologies such as generative AI (Artificial Intelligence), deepfakes and machine learning have advanced. These technologies offer convenience in performing several tasks and are capable of assisting individuals and business entities. On the other hand, since they are easily accessible, cybercriminals leverage AI tools and technologies for malicious activities and for committing various cyber frauds. Through such misuse of advanced technologies such as AI, deepfakes and voice clones, new cyber threats have emerged.
What is Deepfake?
Deepfake is an AI-based technology capable of creating realistic images, videos and audio which are, in actuality, generated by machine-learning algorithms. Since the technology is easily accessible, fraudsters misuse it to commit various cybercrimes and to deceive and scam people with fake images or videos that look realistic. Using deepfake technology, cybercriminals manipulate audio and video content so that it appears genuine when it is not. Voice cloning is also a form of deepfake: audio can be faked to closely resemble a real voice while, in actuality, being synthetically generated.
How Deepfake Can Harm Organizations or Enterprises?
- Reputation: Deepfakes can severely damage an organisation's reputation. Fake representations or interactions, for example a deepfake misrepresenting the CEO online, could damage an enterprise's credibility, resulting in the loss of users and financial losses. To protect against such deepfake incidents, organisations must thoroughly monitor online mentions and keep tabs on what is being said or posted about the brand. Deepfake-created content can also be misused to impersonate leaders, financial officers and other officials of the organisation.
- Misinformation: Deepfake technology can be misused to spread misrepresentation or misinformation about the organisation.
- Deepfake fraud calls misrepresenting the organisation: There have been incidents where bad actors pretend to be from legitimate organisations and seek personal information: helpline fraudsters, fake representatives from hotel booking departments, fake loan providers, and so on. These bad actors use voice clones or deepfake-based fake video calls to pass themselves off as belonging to legitimate organisations while, in actuality, deceiving people.
How can organizations combat AI-driven cybercrimes such as deepfake?
- Cybersecurity strategy: Organisations need to keep in place a wide range of cybersecurity strategies and advanced tools to combat the evolving disinformation and misrepresentation caused by deepfake technology. Cybersecurity tools can be utilised to detect deepfakes.
- Social media monitoring: Social media monitoring can help detect unusual activity. Organisations can select relevant tools and implement technologies to detect deepfakes and demonstrate media provenance. Real-time verification capabilities and procedures can also be used. Reverse image searches, such as TinEye, Google Image Search, and Bing Visual Search, can be extremely useful if the media is a composition of images.
- Employee Training: Employee education on cybersecurity will also play a significant role in strengthening the overall cybersecurity posture of the organisation.
Conclusion
There have been incidents where AI-driven tools and technologies have been misused by cybercriminals or bad actors, including synthetic videos developed with AI. Generative AI has gained significant popularity for capabilities that produce synthetic media, and there are concerns about its misuse in disinformation operations designed to influence the public and spread false information. In particular, the synthetic media threats organisations most often face include undermining the brand, financial gain for the attacker, threats to the security or integrity of the organisation itself, and impersonation of the brand's leaders.
Synthetic media is used to target organisations with the intention of defrauding them for financial gain; examples include fake personal profiles on social networking sites and deceptive deepfake calls. Organisations need to have a proper cybersecurity strategy in place to combat such evolving threats. Monitoring and detection should be performed by organisations, and employee training and empowerment on cybersecurity will also play a crucial role in effectively dealing with the evolving threats posed by the misuse of AI-driven technologies.
References:
- https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- https://www.securitymagazine.com/articles/98419-how-to-mitigate-the-threat-of-deepfakes-to-enterprise-organizations