Law in 30 Seconds? The Rise of Influencer Hype and Legal Misinformation
Mr. Neeraj Soni
Sr. Researcher - Policy & Advocacy, CyberPeace
PUBLISHED ON
Mar 21, 2025
Introduction
In today's digital age, consuming content on social media apps has become part of our daily lives. The algorithms of these apps are such that once you like or show interest in a particular category of content, they start showing you much more of the same. With this, the hype around becoming a content creator has also grown: people are making short reel videos and sharing large amounts of information. There are influencers in every field, whether it's lifestyle, fitness, education, entertainment, vlogging, or, now, even legal advice.
Online content, reels, and viral videos by social media influencers giving legal advice can have far-reaching consequences. ‘LAW’ is a vast subject where even a single punctuation mark holds significant meaning. If the law is misinterpreted or only partially explained in social media reels and short videos, it can lead to serious consequences. Laws apply based on the facts and circumstances of each case, and they can differ depending on the nature of the case or offence. This trend of ‘swipe for legal advice’ or ‘law in 30 seconds’, along with the growing number of legal influencers, poses a serious problem in the online information landscape. It raises questions about the credibility and accuracy of such legal advice, as misinformation can mislead the masses, fuel legal confusion, and create risks.
Bar Council of India’s stance against legal misinformation on social media platforms
The Bar Council of India (BCI) on Monday (March 17, 2025) expressed concern over the rise of self-styled legal influencers on social media, stating that many without proper credentials spread misinformation on critical legal issues. Additionally, “Incorrect or misleading interpretations of landmark judgments like the Citizenship Amendment Act (CAA), the Right to Privacy ruling in Justice K.S. Puttaswamy (Retd.) v. Union of India, and GST regulations have resulted in widespread confusion, misguided legal decisions, and undue judicial burden,” the body said. The BCI also ordered the mandatory cessation of misleading and unauthorised legal advice dissemination by non-enrolled individuals and called for the establishment of stringent vetting mechanisms for legal content on digital platforms. The BCI emphasised the need for swift removal of misleading legal information.
Conclusion
Legal misinformation on social media is a growing issue that not only disrupts public perception but also influences real-life decisions. The internet is turning complex legal discourse into a chaotic game of whispers, with influencers sometimes misquoting laws and self-proclaimed "legal experts" offering advice that wouldn't survive in a courtroom. The solution is not censorship, but counterbalance. Verified legal voices need to step up, fact-checking must be relentless, and digital literacy must evolve to keep up with the fast-moving world of misinformation. Otherwise, "legal truth" could be determined by whoever has the best engagement rate, rather than by legislation or precedent.
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact efficiently. The question we arrive at is this: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging-technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption is proceeding at a rapid pace: it is projected to contribute $15.7 trillion to the global economy by 2030, and the AI market is expected to grow by at least 120% year-over-year. These figures are often cited in arguments alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the General Data Protection Regulation (GDPR) has been a success, with global influence on data privacy laws, and has started a domino effect for the creation of privacy regulations all over the world. This precedent underscores the EU's proactive, population-centric approach to regulation.
Overview of the Draft EU AI Rules
This Draft General-Purpose AI Code of Practice details the AI Act's rules for providers of general-purpose AI models, including those posing systemic risks. The European AI Office facilitated the drawing up of the code; the drafting process was chaired by independent experts and involved nearly 1,000 stakeholders, EU member state representatives, and both European and international observers.
The first draft of the EU’s General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation rooted in product safety, and it relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European Standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society, and trade unions translate the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act obligates the developers, deployers, and users of AI with mandates for transparency, risk management, and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular applications of GPAI include ChatGPT and other foundational models, such as Microsoft's Copilot, Google's BERT, and Meta AI's Llama, all of which are under constant development and upgrading. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to enable greater transparency about what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burdens it places on MSMEs and startups could hinder innovation, and its voluntary nature raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation; they are about leading a global conversation.
In today’s time, everything is online and the world is interconnected. Data breaches and cyberattacks have become a reality for organisations across industries. In a recent case, Scandinavian Airlines (SAS) experienced a cyberattack that resulted in the exposure of customer details, highlighting the critical importance of protecting customer privacy. The incident is a wake-up call for airlines and other businesses to evaluate their cybersecurity measures and learn valuable lessons to safeguard customers’ data. In this blog, we will explore the incident and discuss strategies for protecting customers’ privacy in this age of digitalisation.
Analysing the backdrop
The incident has been a shock for the aviation industry: SAS Scandinavian Airlines has been the victim of a cyberattack that compromised consumer data. Let’s understand the motive of the cyber crooks and the techniques they used:
Motive Behind the Attack: Understanding the reasons that may have driven the criminals is critical to comprehending the context of the Scandinavian Airlines cyber assault. Financial gain, geopolitical conflicts, activism, or personal vendettas are common motivators for cybercriminals. Identifying the purpose of the assault can provide insight into the attacker’s aims and the possible impact on both the targeted organisation and its consumers.
Attack Vector and Techniques: Understanding the attack vector and strategies used by cyber attackers reveals the degree of sophistication and the possible weaknesses in an organisation’s cybersecurity defences. The Scandinavian Airlines cyber assault might have involved phishing, spyware, ransomware, or the exploitation of software weaknesses. Analysing these tactics allows organisations to strengthen their security against similar assaults.
Impact on Victims: The victims of the Scandinavian Airlines (SAS) cyber attack, including customers and individuals related to the company, have suffered substantial consequences. Data breaches and cyber-attacks have serious consequences due to the leak of personal information.
1) Financial Losses and Fraudulent Activities: One of the most immediate and upsetting consequences of a cyber assault is the possibility of financial loss. Exposed personal information, such as credit card numbers, can be used by hackers to carry out illegal activities such as unauthorised transactions and identity theft. Victims may experience financial difficulties and the need to spend time and money resolving these concerns.
2) Concerns About Privacy and Personal Security: A breach of personal data can significantly impact the privacy and personal security of victims. The disclosed information, including names, addresses, and contact information, might be exploited for nefarious purposes, such as targeted phishing or physical harassment. Victims may feel increased anxiety about their safety and privacy, which can disrupt their everyday lives and cause mental distress.
3) Reputational Damage and Trust Issues: The cyber attack may cause reputational harm to persons linked with Scandinavian Airlines, such as workers or partners. The breach may diminish consumers’ and stakeholders’ faith in the organisation, leading to a bad view of its capacity to protect personal information. This lack of trust might have long-term consequences for the impacted people’s professional and personal relationships.
4) Emotional Stress and Psychological Impact: The psychological impact of a cyber assault can be severe. Fear, worry, and a sense of violation induced by having personal information exposed can create emotional stress and psychological suffering. Victims may experience emotions of vulnerability, loss of control, and distrust toward digital platforms, potentially harming their overall quality of life.
5) Time and Effort Required for Remediation: Addressing the repercussions of a cyber assault demands significant time and effort from the victims. They may need to call financial institutions, reset passwords, monitor accounts for unusual activity, and use credit monitoring services. Resolving the consequences of a data breach may be a difficult and time-consuming process, adding stress and inconvenience to the victims’ lives.
6) Secondary Impacts: The impacts of an online attack can continue beyond the immediate implications. Future repercussions for victims may include trouble acquiring credit or insurance, difficulty finding future work, and continuous worry about the exploitation of their personal information. These secondary effects can seriously affect victims’ financial and general well-being.
Apart from this, the trust that has been lost will take time to rebuild.
Takeaways from this attack
The cyber-attack on Scandinavian Airlines (SAS) is a sharp reminder of cybercrime’s ever-present and growing menace. The event offers crucial insights that businesses and individuals can use to strengthen their cybersecurity defences. Below, we examine the lessons learned from the Scandinavian Airlines cyber assault and the steps that may be taken to improve cybersecurity and reduce future risks. Some of the key points that can be considered are as follows:
Proactive Risk Assessment and Vulnerability Management: The cyber assault on Scandinavian Airlines emphasises the significance of regular risk assessments and vulnerability management. Organisations must proactively identify and fix possible system and network vulnerabilities. Regular security audits, penetration testing, and vulnerability assessments can help identify flaws before bad actors exploit them.
Strong Security Measures and Best Practices: To guard against cyber attacks, it is necessary to implement effective security measures and follow cybersecurity best practices. Lessons from the Scandinavian Airlines cyber assault emphasise the importance of effective firewalls, up-to-date antivirus software, secure setups, frequent software patching, and strong password rules. Using multi-factor authentication and encryption technologies for sensitive data can also considerably improve security.
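To make the "strong password rules" point concrete, here is a minimal, illustrative sketch of how a service can store salted password hashes instead of raw passwords and verify them in constant time. It uses only the Python standard library; the parameter choices (SHA-256, 600,000 iterations) are our own assumptions following common guidance, not details from any airline system.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple:
    """Return (salt, derived_key) for storage instead of the raw password."""
    salt = salt or os.urandom(16)  # a unique random salt per user
    # PBKDF2 makes brute-forcing leaked hashes expensive; iteration count is an assumption
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key and compare in constant time to avoid timing side channels."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, stored_key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False
```

With this design, a database leak exposes only salts and derived keys, so attackers cannot directly reuse customers' passwords elsewhere.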
Employee Training and Awareness: Human error is frequently a major factor in cyber assaults. Organisations should prioritise employee training and awareness programs to educate employees about phishing schemes, social engineering methods, and safe internet practices. Employees may become the first line of defence against possible attacks by cultivating a culture of cybersecurity awareness.
Data Protection and Privacy Measures: Protecting consumer data should be a key priority for businesses. Lessons from the Scandinavian Airlines cyber assault emphasise the significance of having effective data protection measures, such as encryption and access limits. Adhering to data privacy standards and maintaining safe data storage and transfer can reduce the risks connected with data breaches.
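One way to picture the "access limits" mentioned above is a deny-by-default, role-based access control (RBAC) check, where each role is granted only the permissions it needs. The roles, actions, and permission names below are hypothetical examples for illustration, not SAS's actual access model.

```python
# Hypothetical role-to-permission mapping: least privilege, deny by default.
ROLE_PERMISSIONS = {
    "support_agent": {"read_booking"},
    "data_analyst": {"read_booking", "read_aggregates"},
    "admin": {"read_booking", "read_aggregates", "export_customer_data"},
}

def is_allowed(role: str, action: str) -> bool:
    """Unknown roles or actions get no access (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("support_agent", "read_booking"))          # True
print(is_allowed("support_agent", "export_customer_data"))  # False
print(is_allowed("unknown_role", "read_booking"))           # False
```

Limiting who can export customer data in bulk narrows the blast radius if any single account is compromised.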
Collaboration and Information Sharing: The Scandinavian Airlines cyber assault emphasises the need for collaboration and information sharing among the cybersecurity community. Organisations should actively share threat intelligence, cooperate with industry partners, and stay current on developing cyber threats. Sharing information and experiences can help to build the collective defence against cybercrime.
Conclusion
The Scandinavian Airlines cyber assault is a reminder that cybersecurity must be a key concern for organisations and individuals alike. Organisations can improve their cybersecurity safeguards, proactively discover vulnerabilities, and respond effectively to prospective attacks by learning from this occurrence and adopting the lessons learned. Building a strong cybersecurity culture, regularly upgrading security practices, and encouraging cooperation within the cybersecurity community are all critical steps toward a more robust digital world. By monitoring constantly and acting proactively, we can aim to stay one step ahead of attackers and preserve our important information assets.
The world of Artificial Intelligence is entering a new phase with the rise of Agentic AI, often described as the third wave of AI evolution. Unlike earlier systems that relied on static models (which learn only from the data they are fed) and reactive outputs, Agentic AI introduces intelligent agents that can make decisions, take initiative, and act autonomously in real time. These systems are designed to require minimal human oversight while actively collaborating and learning continuously. Such capabilities indicate an incoming shift, especially in the ways Indian businesses can function: Agentic AI is capable of streamlining operations, personalising services, and driving innovation at scale.
India and Agentic AI
Building as it goes, India is making continuous strides in the AI revolution, deliberating on government frameworks while simultaneously adapting. At Microsoft's Pinnacle 2025 summit in Hyderabad, India's pivotal role in shaping the future of Agentic AI was brought into the spotlight. With over 17 million developers on GitHub and ambitions to become the world's largest developer community by 2028, India's tech talent is gearing up to lead global AI innovation. The showcase of Microsoft's Azure AI Foundry also highlighted the country's growing influence in the AI landscape.
Indian companies are actively integrating Agentic AI into their operations to enhance efficiency and customer experience. Zomato is leveraging AI agents to optimise delivery logistics, ensuring timely and efficient service. Infosys has developed AI-driven copilots to assist developers in code generation, reducing development time, requiring fewer people to work on a particular project, and improving software quality.
As per a report by Deloitte, the Indian AI market is projected to grow to potentially $20 billion by 2028. However, this growth is accompanied by significant challenges: 92% of Indian executives identify security concerns as the primary obstacle to responsible AI usage, and regulatory uncertainties and privacy risks associated with sensitive data were also highlighted.
Challenges in Adoption
Despite the enthusiasm, several barriers hinder the widespread adoption of Agentic AI in India:
Skills Gap: While the AI workforce is expected to grow to 1.25 million by 2027, the current growth rate of 13% is considered insufficient to meet market demand.
Data Infrastructure: Effective AI systems require robust, structured, and accessible datasets. Many organisations lack the necessary data maturity, leading to flawed AI outputs and decision-making failures.
Trust and Governance: Building trust in AI systems is crucial. Concerns over data privacy, ethical usage, and regulatory compliance require robust governance frameworks to ensure the adoption of AI in a responsible manner.
Looming Fear of Job Loss: As AI continues to take on more sophisticated roles, a general hesitancy tied to fears of losing employment/human labour might stand in the way of adopting such measures.
Outsourcing: Currently, most companies prefer outsourcing or buying AI solutions rather than building them in-house, which limits their ability to adapt these solutions to evolving needs.
The Road Ahead
To fully realise the potential of Agentic AI, India must address the following challenges:
Training the Workforce: Initiatives and workshops tailored for employees that provide AI training can prove to be helpful. Some relevant examples are Microsoft’s commitment to provide AI training to 2 million individuals by 2025 and Infosys's in-house AI training programs.
Data Readiness: Investing in modern data infrastructure and promoting data literacy are essential to improve data quality and accessibility.
Establishing Governance Frameworks: Developing clear regulatory guidelines and ethical standards will foster trust and facilitate responsible AI adoption. Efforts like the IndiaAI Mission, which keep pace with evolving technology, are imperative.
Coupled with innovation and a focus on quality that enhances global competitiveness, Agentic AI holds unrealised potential to transform India's business landscape. By proactively addressing the existing challenges, along with investing in in-house development, India can realise this potential, lay the foundation for a new technological revolution, and solidify its position as a global AI leader.