“Try Without Personalisation”: Google’s New Search Feature for Non-Personalised Results
Introduction
Google’s search engine is widely known for tailoring search results to user activity, which makes them more relevant. Recently, Google introduced the ‘Try Without Personalisation’ feature, which allows users to view results independent of their prior activity. This change marks a significant shift in the platform experience, offering users more control over their searches while addressing privacy concerns.
However, even in this non-personalised mode, certain contextual factors, including location, language, and device type, continue to influence results, giving searches a baseline level of relevance. The feature carries significant policy implications, particularly in the areas of privacy, consumer rights, and market competition.
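To picture what this contextual baseline looks like in practice, the short sketch below builds a search URL that carries only language and region signals, using the publicly documented "hl" (interface language) and "gl" (country) URL parameters. It is purely illustrative: it shows contextual signals travelling with a request, not Google's internal ranking logic.

```python
# Illustrative sketch only: contextual signals (language, region) travelling with
# a query. "hl" and "gl" are publicly documented Google Search URL parameters;
# Google's actual non-personalised ranking logic is internal and not shown here.
from urllib.parse import urlencode

def build_search_url(query: str, language: str = "en", country: str = "in") -> str:
    """Build a search URL that carries only contextual, non-personalised signals."""
    params = {"q": query, "hl": language, "gl": country}
    return "https://www.google.com/search?" + urlencode(params)

print(build_search_url("data protection law"))
# -> https://www.google.com/search?q=data+protection+law&hl=en&gl=in
```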
Understanding the Feature
When users engage this non-personalised option, the search engine no longer surfaces results that depend on their individual activity and instead provides neutral results. Essentially, the feature bypasses the user’s data to deliver non-personalised search results.
The feature brings the following changes:
- Disables the user’s ability to find past searches in Autofill/Autocomplete.
- Does not pause or delete stored activity in a user’s Google account; users can still pause or delete stored activity through their data and privacy controls.
- Does not delete or disable app and website preferences, such as language or search settings, which remain unaffected.
- It also does not disable or delete the material that users save.
- Lets signed-in users turn off personalisation by clicking the search option at the end of the results page.

These changes in functionality have significant implications for privacy, competition, and user trust.
Policy Implications: An Analysis
This feature aligns with global privacy frameworks such as the GDPR in the EU and the DPDP Act in India. By adhering to principles like data minimisation and user consent, it offers users control over their data and the choice to enable or disable personalisation, thereby enhancing user autonomy and trust.
However, there is a trade-off between user expectations for relevance and the impartiality of non-personalised results. Additionally, the introduction of such features may align with emerging regulations on data usage, transparency, and consent. Policymakers play a crucial role in encouraging innovations like these while ensuring they safeguard user rights and maintain a competitive market.
Conclusion and Future Outlook
Google's 'Try Without Personalisation' feature represents a pivotal moment for innovation by balancing user privacy with search functionality. By aligning with global privacy frameworks such as the GDPR and the DPDP Act, it empowers users to control their data while navigating the complex interplay between relevance and neutrality. However, its success hinges on overcoming technical hurdles, fostering user understanding, and addressing competitive and regulatory scrutiny. As digital platforms increasingly prioritise transparency, such features could redefine user expectations and regulatory standards in the evolving tech ecosystem.

Introduction
The development of high-speed broadband internet in the 1990s triggered a growth in online gaming, particularly in East Asian countries like South Korea and China. This culminated in the proliferation of competitive video game genres, which had otherwise existed mostly in the form of high-score and face-to-face competitions at arcades. The online competitive gaming market has only grown over the years, with a separate domain for professional competition called esports. Estimated at around US$4.3 billion in 2024, the industry is projected to keep growing, driven by advancements in gaming technology, increased viewership, multi-million-dollar tournaments, professional leagues, sponsorships, and advertising revenues. However, the industry is still in its infancy and struggles with fairness and integrity issues. It can draw regulatory lessons from the traditional sports market to address these challenges and support uniform global growth.
The Growth of Esports
The appeal of online gaming lies in its design innovations, social connectivity, and accessibility. Its rising popularity has turned online gaming competitions into an industry, formally organised into leagues and tournaments with prize pools reaching millions of dollars. Professional teams now have coaches, analysts, and psychologists supporting their players. For scale, the 2024 Esports World Cup (EWC) held in Saudi Arabia had the largest combined prize pool ever, at over US$60 million. Such tournaments can be viewed in arenas and streamed online, and by 2025, around 322.7 million people are forecast to be occasional viewers of esports events.
According to Statista, esports revenue is expected to grow at an annual rate (CAGR 2024-2029) of 6.59%, resulting in a projected market volume of US$5.9 billion by 2029. Esports has even been recognised in traditional sporting events, debuting as a medal sport at the Asian Games 2022. In 2024, the International Olympic Committee (IOC) announced the Olympic Esports Games, with the inaugural event set to take place in 2025 in Saudi Arabia. Hosting esports events such as the EWC is expected to boost tourism and the host country’s local economy.
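As a quick sanity check, the Statista projection above is consistent with a 2024 base of roughly US$4.3 billion: compounding that figure at 6.59% a year for five years gives about US$5.9 billion, as the short calculation below shows (the 2024 base is taken from the figures cited earlier, and the calculation is illustrative only).

```python
# Sanity check of the projected market volume, assuming the ~US$4.3 billion figure
# cited above is the 2024 base and revenue compounds at 6.59% per year to 2029.
base_2024 = 4.3                      # US$ billion, 2024 market size (per the text)
cagr = 0.0659                        # 6.59% annual growth rate, 2024-2029
projected_2029 = base_2024 * (1 + cagr) ** 5
print(f"Projected 2029 market volume: ~US${projected_2029:.1f} billion")  # ~5.9
```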
The Challenges of Esports Regulation
While the esports ecosystem provides numerous opportunities for growth and partnerships, its under-regulation presents challenges. Due to the lack of a single governing body like the IOC for the Olympics or FIFA for football to lay down centralised rules, the industry faces challenges such as:
- Integrity issues: Esports are not immune to cheating attempts. Match-fixing, using advanced software hacks, doping (e.g., Adderall use), and the use of other illegal aids are common. DOTA, Counter-Strike, and Overwatch tournaments are particularly susceptible to cheating scandals.
- Players’ Rights: The teams that contractually own professional players provide remuneration and exercise significant control over athletes, who face issues such as overwork, short careers, stress, instability, and the absence of collective bargaining forums.
- Fragmented National Regulations: While multiple countries have recognised esports as a sport, policies on esports governance and allied regulation vary within and across borders. For example, age restrictions and laws on gambling, taxation, labour, and advertising differ by country. This can create confusion, risks and extra costs, impacting the growth of the ecosystem.
- Cybersecurity Concerns: The esports industry carries substantial prize pools and has growing viewer engagement, which makes it increasingly vulnerable to Distributed Denial of Service (DDoS) attacks, malware, ransomware, data breaches, phishing, and account hijacking. Tournament organisers must prioritise investments in secure network infrastructure, perform regular security audits, encrypt sensitive data, implement network monitoring, utilise API penetration testing tools, deploy intrusion detection systems, and establish comprehensive incident response and mitigation plans.
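To make the monitoring point concrete, below is a minimal, hypothetical sketch of the kind of sliding-window rate check a tournament platform might run to flag traffic bursts that resemble the early stages of a DDoS attack. The threshold, window size, and addresses are arbitrary examples; real deployments rely on dedicated DDoS protection and full intrusion detection systems.

```python
# Minimal illustrative sketch of per-client request-rate monitoring, the kind of
# early-warning check a tournament platform might run alongside dedicated DDoS
# protection. The window size and threshold below are arbitrary example values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10            # length of the sliding window
MAX_REQUESTS_PER_WINDOW = 100  # requests allowed per client within the window

_request_log = defaultdict(deque)  # client IP -> recent request timestamps

def record_request(client_ip, now=None):
    """Record a request and return True if the client exceeds the rate threshold."""
    now = time.time() if now is None else now
    window = _request_log[client_ip]
    window.append(now)
    # Discard timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS_PER_WINDOW

if __name__ == "__main__":
    # Simulate a burst of 150 requests in ~7.5 seconds from one (example) address.
    flagged = any(record_request("203.0.113.7", now=i * 0.05) for i in range(150))
    print("Suspicious burst detected" if flagged else "Traffic looks normal")
```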
Proposals for Esports Regulation: Lessons from Traditional Sports
To address the most urgent challenges to the esports industry as outlined above, the following interventions, drawing on the governance and regulatory frameworks of traditional sports, can be made:
- Need for a Centralised Esports Governing Body: Unlike traditional sports, the esports landscape lacks a Global Sports Organisation (GSO) to oversee its governance. Instead, it is handled de facto by game publishers with industry interests different from those of traditional GSOs. Publishers’ primary source of revenue is not esports, which means they can adopt policies unsuitable for its growth but good for their core business. Appointing a centralised governing body with the power to balance the interests of multiple stakeholders and manage issues like unregulated gambling, athlete health, and integrity challenges is a logical next step for this industry.
- Gambling/Betting Regulations: While national laws on gambling/betting vary, GSOs establish uniform codes of conduct that bind participants contractually, ensuring consistent ethical standards across jurisdictions. In esports, similar rules are managed by individual publishers and tournament organisers, leading to inconsistencies and legal grey areas. The esports ecosystem needs standardised regulation to preserve fair-play codes and competitive integrity.
- Anti-Doping Policies: With rising monetary stakes in esports, Adderall abuse among young players seeking to enhance performance is increasing. The industry must establish a global framework similar to the World Anti-Doping Code, which, in conjunction with eight international standards, harmonises anti-doping policies across all traditional sports and countries in the world. The esports industry should either adopt this code or develop its own policy to curb stimulant abuse.
- Norms for Participant Health: Professional players start around age 16 or 17 and tend to retire around 24. They may be subjected to rigorous practice hours and stringent contracts by the teams that own them. There is a need for international norm-setting by a federation overseeing the protection of underage players. Enforcement of these norms can be one of the responsibilities of a decentralised system comprising country and state-level bodies. This also ensures fair play governance.
- Respect and Diversity: While esports is technologically accessible, it still has room for better representation of diverse gender identities, age groups, abilities, races, ethnicities, religions, and sexual orientations. Embracing greater diversity and inclusivity would benefit the industry's growth and enhance its potential to foster social connectivity through healthy competition.
Conclusion
The development of the world’s first esports island in Abu Dhabi gives impetus to the rapidly growing esports industry with millions of fans across the globe. To sustain this momentum, stakeholders must collaborate to build a strong governance framework that protects players, supports fans, and strengthens the ecosystem. By learning from traditional sports, esports can establish centralised governance, enforce standardised anti-doping measures, safeguard athlete rights, and promote inclusivity, especially for young and diverse communities. Embracing regulation and inclusivity will not only enhance esports' credibility but also position it as a powerful platform for unity, creativity, and social connection in the digital age.
Resources
- https://www.statista.com/outlook/amo/esports/worldwide
- https://www.statista.com/statistics/490480/global-esports-audience-size-viewer-type/
- https://asoworld.com/blog/global-esports-market-report-2024/#:~:text=A%20key%20driver%20of%20this%20growth%20is%20the%20Sponsorship%20%26%20Advertising,US%24288.9%20million%20in%202024.
- https://lawschoolpolicyreview.com/2023/12/28/a-case-for-recognising-professional-esports-players-as-employees-of-their-game-publisher/
- https://levelblue.com/blogs/security-essentials/the-hidden-risks-of-esports-cybersecurity-on-the-virtual-battlefield
- https://medium.com/@heyimJoost/esports-governance-and-its-failures-9ac7b3ec37ea
- https://www.google.com/search?q=adderall+abuse+in+esports
- https://americanaddictioncenters.org/blog/esports-adderall-abuse#:~:text=A%202020%20piece%20by%20the,it%20because%20everyone%20was%20using

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology is a method for manipulating videos, images, and audio using powerful computers and deep learning, and it is employed to generate fake news and commit financial fraud, among other wrongdoings. The technique overlays a digital composite onto an existing video, picture, or audio clip, and cybercriminals rely on Artificial Intelligence technology to do so. The term ‘deepfake’ was first coined in 2017 by an anonymous Reddit user who called himself deepfake.
Deepfakes work on a combination of AI and ML, which makes the technology hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake techniques. In recent times, we have seen a wave of AI-driven tools impact industries and professions across the globe. There is also a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats and issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, wherein a deepfake of Ukraine’s president, Mr Zelensky, surfaced on the internet, causing mass confusion and serving propaganda among Ukrainians.
- Instigation against the Union of India: Deepfakes pose a massive threat to the integrity of the Union of India, as they are one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so does the possibility of such attacks against the nation.
- Cyberbullying/Harassment: Bad actors can use deepfakes to harass and bully people online and to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children are most active.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Bad actors therefore often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanisms: Most nations currently lack a concrete policy addressing deepfakes. It is therefore of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a major concern for Indian netizens, as it keeps them from understanding the technology and results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to Spot Deepfakes?
At first glance, deepfakes can look just like the original video, but as we progress further into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette in order to stay protected and to address this issue before it is too late. The following aspects can be kept in mind while differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake is imposed on an existing video and often shows variations, so check whether the sound is congruent with the actions and gestures in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. Most deepfakes show background irregularities because the scene is typically composed using virtual effects, leaving an element of artificiality.
- Context and Content: Most instances of deepfakes are aimed at creating or spreading misinformation; hence, the context and content of a video are an integral part of differentiating an original video from a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media, and, as a preventive measure, fact-check any information or post before sharing it with people you know.
- AI Tools: When in doubt, check it out: use deepfake detection tools such as Sentinel, Intel’s real-time detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos and combat technology with technology.
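As a purely illustrative complement to the dedicated tools above, the sketch below uses OpenCV's stock face detector to check whether face detection stays stable across sampled frames of a video; manipulated faces sometimes cause detection to flicker. It is a crude heuristic under assumed inputs (a hypothetical file named input.mp4), not a substitute for the detection tools named in the list.

```python
# Crude illustrative heuristic, not a real deepfake detector: check whether face
# detection stays stable across sampled frames. Requires opencv-python; the file
# name "input.mp4" is a hypothetical example.
import cv2

def face_detection_stability(video_path, sample_every=10):
    """Return the fraction of sampled frames in which exactly one face is found."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    sampled = stable = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            sampled += 1
            stable += int(len(faces) == 1)
        frame_idx += 1
    cap.release()
    return stable / sampled if sampled else 0.0

if __name__ == "__main__":
    score = face_detection_stability("input.mp4")  # hypothetical input file
    print(f"Face-detection stability: {score:.2f} (low values merit a closer look)")
```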
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not expressly recognised by law in India, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a ₹2 lakh fine or three years in prison. With the DPDP Act enacted in 2023, the creation of deepfakes directly affects an individual's right to digital privacy, and the Intermediary Guidelines under the IT Rules require platforms to exercise due diligence against the dissemination and publication of misinformation through deepfakes. Beyond this, the only legal remedies available are indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they stay safe in the future. At the same time, developing and developed nations need to create policies and laws to regulate deepfakes efficiently and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

Introduction
In an alarming event, one of India’s premier healthcare institutes, AIIMS Delhi, fell victim to a malicious cyberattack for the second time in a year. The incident serves as a clear-cut reminder of the escalating threat landscape faced by healthcare organisations in the digital age. The attack, which unfolded with grave implications, not only exposed the vulnerabilities present in the healthcare sector but also raised concerns about the security of patient data and the uninterrupted delivery of critical healthcare services. In this blog post, we explore what happened and what safety measures can be taken.
Backdrop
The cyber-security systems deployed at AIIMS, New Delhi, recently detected a malware attack. The attack was sophisticated and targeted in both nature and scope. This second incident acts as a wake-up call for healthcare organisations nationwide. As the healthcare sector increasingly depends on digital technology to improve patient care and operational efficiency, cybersecurity must be prioritised to protect sensitive data. To minimise the danger of cyber-attacks, healthcare organisations must invest in robust defences such as multi-factor authentication, network security, frequent system upgrades, and employee training.
The attempt was successfully prevented, and the deployed cyber-security systems neutralised the threat. The e-Hospital services remain fully secure and are functioning normally.
Impact on AIIMS
Healthcare services worldwide have been on hackers’ radar, and the healthcare sector has been impacted badly. The effects of the attack on AIIMS Delhi have been both immediate and far-reaching. The organisation, which is recognised for delivering excellent healthcare services and performing breakthrough medical research, faced significant interruptions in its everyday operations. Patient care and treatment processes were considerably impeded, resulting in delays, cancellations, and the inability to access essential medical documents. The compromised data raises serious concerns about patient privacy and confidentiality, casting doubt on the institution’s capacity to protect sensitive information. Furthermore, the financial ramifications of the attack, such as the cost of recovery, deploying more robust cybersecurity measures, and potential legal penalties and forensic analyses, add to the scale of the effect. The event has also generated public concern about the institution’s ability to preserve personal information, undermining confidence and damaging AIIMS Delhi’s image.
Impact on Patients: The attack not only impacts the institution but also has serious implications for patients. Here are some key highlights:
Healthcare Service Disruption: The hack has affected the seamless delivery of healthcare services at AIIMS Delhi. Appointments, surgeries, and other medical treatments may be delayed, cancelled, or rescheduled. This disturbance can result in longer wait times, longer treatment periods, and potential problems from delayed or interrupted therapy.

Patient Privacy and Confidentiality: The breach of sensitive patient data jeopardises privacy and confidentiality. Medical records, test findings, and treatment plans may have been compromised. Such a breach may diminish patients’ faith in the institution’s capacity to safeguard their personal information, discouraging them from seeking care or submitting sensitive information in the future.
Mental Distress: As a result of the cyberattack, patients may endure mental anguish and worry. Fear of possible exploitation of personal health information, confusion about the scope of the breach, and concerns about the security of their healthcare data can all have a negative impact on their mental health. This stress might aggravate pre-existing medical issues and impede overall recovery.
Trust at Stake: A data breach may harm patients’ faith and confidence in AIIMS Delhi and the healthcare system. Patients rely on healthcare facilities to keep their information secure and confidential while providing safe, high-quality care. A hack can cast doubt on the institution’s ability to safeguard patient data, affecting patients’ overall faith in the organisation and potentially leading them to seek care elsewhere.
Cybersecurity Measures
To avoid future attacks and protect patient data, AIIMS Delhi must prioritise enhancing its cybersecurity procedures. The institution can strengthen its resistance to evolving threats by establishing strong security practices. The following steps can be considered.
Using Multi-factor Authentication: By forcing users to submit several forms of identity to access systems and data, multi-factor authentication offers an extra layer of protection. AIIMS Delhi may considerably lower the danger of unauthorised access by applying this precaution, even in the case of leaked passwords or credentials. Biometrics and one-time passwords, for example, should be integrated into the institution’s authentication systems.
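As a small illustration of the one-time-password element of MFA, the sketch below uses the open-source pyotp library to generate and verify a time-based one-time password (TOTP), the mechanism behind common authenticator apps. It is a minimal example under assumed tooling, not a description of AIIMS Delhi's actual systems.

```python
# Minimal TOTP illustration using the open-source pyotp library (pip install pyotp).
# This sketches the one-time-password factor of MFA; it does not describe AIIMS's
# actual authentication setup.
import pyotp

# In a real deployment, the secret is generated once per user at enrolment, stored
# server-side, and loaded into the user's authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                                  # what the authenticator app shows
print("Current one-time code:", code)
print("Verification result:", totp.verify(code))   # True within the validity window
```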
Improving Network Security and Firewalls: AIIMS Delhi should improve network security by implementing strong firewalls, intrusion detection and prevention systems, and network segmentation. These techniques serve to construct barriers between internal systems and external threats, reducing attackers’ lateral movement within the network. Regular network traffic monitoring and analysis can assist in recognising and mitigating any security breaches.
Risk Assessment: Regular penetration testing and vulnerability assessments are required to uncover possible flaws and vulnerabilities in AIIMS Delhi’s systems and infrastructure. Security professionals can detect vulnerabilities and offer remedial solutions by carrying out controlled simulated assaults. This proactive strategy assists in identifying and addressing any security flaws before attackers exploit them.
Educating and training Healthcare Professionals: Education and training have a crucial role in enhancing cybersecurity practices in healthcare facilities. Healthcare workers, including physicians, nurses, administrators, and support staff, must be well-informed about the importance of cybersecurity and trained in risk-mitigation best practices. This will empower healthcare professionals to actively contribute to protecting the patient’s data and maintaining the trust and confidence of patients.
Learnings from Incidents
AIIMS Delhi should treat cyber-attacks as learning opportunities to strengthen its security posture. Following each event, a detailed post-incident study should be performed to identify areas for improvement, update security policies and procedures, and improve employee training programmes. This iterative strategy contributes to the institution’s overall resilience and preparation for future cyber-attacks. AIIMS Delhi can effectively respond to cyber incidents, minimise the impact on operations, and protect patient data by establishing an effective incident response and recovery plan, implementing data backup and recovery mechanisms, conducting forensic analysis, and promoting open communication. Proactive measures, constant review, and regular revisions to incident response plans are critical for staying ahead of developing cyber threats and ensuring the institution’s resilience in the face of potential future attacks.

Conclusion
To summarise, developing robust healthcare systems in the digital era is a key challenge that healthcare organisations must prioritise. Healthcare organisations can secure patient data, assure the continuation of key services, and maintain patients’ trust and confidence by adopting comprehensive cybersecurity measures, building incident response plans, training healthcare personnel, and cultivating a security culture. Adopting a proactive and holistic strategy for cybersecurity is critical to developing a healthcare system capable of withstanding and successfully responding to digital-age problems.