Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website, and liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Networks (VPNs), used as protected network connections, do not ensure total privacy, as third parties can still snoop on the websites being visited by users. He also cautioned them about tactics like ‘phishing’, ‘vishing’ and ‘smishing’ being used by cybercriminals to steal passwords and gain access to accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.

A 2024 report by MarketsandMarkets projects that the global AI market will grow from USD 214.6 billion in 2024 to USD 1,339.1 billion by 2030, at a CAGR of 35.7%. AI has become an enabler of productivity and innovation. A 2023 Forbes Advisor survey reported that 56% of businesses use AI to optimise their operations and drive efficiency. Further, 51% use AI for cybersecurity and fraud management, 47% employ AI-powered digital assistants to enhance productivity and 46% use AI to manage customer relationships.
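As a quick arithmetic check on those projections (a back-of-the-envelope sketch; the figures come from the report, the code is purely illustrative):

```python
# Sanity-check the reported figures: USD 214.6B in 2024 growing at a
# 35.7% CAGR over six years should land near the projected USD 1,339.1B.
start, end, years = 214.6, 1339.1, 2030 - 2024  # six compounding periods

projected = start * (1 + 0.357) ** years        # forward projection
cagr = (end / start) ** (1 / years) - 1         # implied annual growth rate

print(round(projected, 1))   # ~1340, close to the reported 1,339.1
print(round(cagr * 100, 1))  # ~35.7 (%)
```

The two figures agree to within rounding, so the report's growth rate and endpoints are internally consistent.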
AI has revolutionised business functions. According to a Forbes survey, 40% of businesses rely on AI for inventory management, 35% harness AI for content production and optimisation and 33% deploy AI-driven product recommendation systems for enhanced customer engagement. This blog addresses the opportunities and challenges posed by integrating AI into operational efficiency.
Artificial Intelligence and its resultant Operational Efficiency
AI offers strong optimisation and efficiency capabilities and is widely used to automate repetitive tasks. These tasks include payroll processing, data entry, inventory management, patient registration, invoicing, claims processing and others. AI has been incorporated into such tasks because, using NLP, machine learning and deep learning, it can uncover complex patterns beyond human capabilities. It has also shown promise in improving business decision-making in time-critical, high-pressure situations.
AI-driven efficiency is visible in industries such as manufacturing for predictive maintenance, healthcare for streamlining diagnostics and logistics for route optimisation. Some of the most common real-world examples of AI increasing operational efficiency are self-driving cars (Tesla), facial recognition (Apple Face ID), language translation (Google Translate) and medical diagnosis (IBM Watson Health).
Harnessing AI has advantages as it helps optimise the supply chain, extend product life cycles, and ultimately conserve resources and cut operational costs.
Policy Implications for AI Deployment
Some of the policy implications for AI deployment are as follows:
- Develop clear and adaptable regulatory frameworks for the ongoing and future developments in AI. The frameworks need to ensure that innovation is not hindered while managing the potential risks.
- AI systems rely on high-quality, accessible and interoperable data to function effectively; without proper data governance, they may produce results that are biased, inaccurate and unreliable. Ensuring data privacy is therefore essential to maintain trust and prevent harm to individuals and organisations.
- Policy developers need to focus on creating policies that upskill the workforce to complement AI development and thereby mitigate job displacement.
- Policymakers need to pursue international cooperation when developing AI policies, so that standards are consistent and applicable across borders.
Addressing Challenges and Risks
Some of the main challenges that emerge with the development of AI are algorithmic bias, cybersecurity threats and dependence on proprietary AI solutions, where the vendor retains exclusive control over the source code. Some policy approaches that can mitigate these challenges are:
- Having a robust accountability mechanism.
- Establishing identity and access management policies that have technical controls like authentication and authorisation mechanisms.
- Ensuring that the training data AI systems use follows ethical considerations such as data privacy, fairness in decision-making, transparency and the interpretability of AI models.
Conclusion
AI provides opportunities to drive operational efficiency in businesses. It can optimise productivity and costs and foster innovation across industries. But this power comes with its own considerations and must therefore be balanced with proactive policies that address emerging challenges such as the need for data governance, algorithmic bias and cybersecurity risks. These challenges can be addressed by establishing adaptable regulatory frameworks, fostering workforce upskilling and promoting international collaboration. As businesses integrate AI into core functions, it becomes necessary to leverage its potential while safeguarding fairness, transparency and trust. AI is not just an efficiency tool; it has become a catalyst for organisations operating in a rapidly evolving digital world.
References
- https://indianexpress.com/article/technology/artificial-intelligence/ai-indian-businesses-long-term-gain-operational-efficiency-9717072/
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.forbes.com/councils/forbestechcouncil/2024/08/06/smart-automation-ais-impact-on-operational-efficiency/
- https://www.processexcellencenetwork.com/ai/articles/ai-operational-excellence
- https://www.leewayhertz.com/ai-for-operational-efficiency/
- https://www.forbes.com/councils/forbestechcouncil/2024/11/04/bringing-ai-to-the-enterprise-challenges-and-considerations/
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. Using the prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With millions of searches conducted every second, and Google handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithm bias, refers to biased results that occur when human biases skew the original training data or the AI algorithm itself, distorting outputs and producing potentially harmful outcomes. This bias takes three broad forms: algorithmic bias, data bias and interpretation bias, which can emerge from user history, geographical data and even broader societal biases embedded in training data.
Common harms include excluding certain groups of people from opportunities. In healthcare, underrepresenting data of women or minority groups can skew predictive AI algorithms. And while AI helps streamline automated resume screening to identify ideal candidates, a biased dataset or other bias in the input data can cause the information requested, and the answers screened out, to produce discriminatory outcomes.
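A toy illustration of how underrepresentation skews a model: the data, groups and labels below are entirely invented for the sketch and stand in for any real screening system. A model that learns per-group base rates from skewed data will confidently repeat that skew:

```python
from collections import Counter

# Toy training data: (group, qualified) pairs. Group "B" is underrepresented,
# and its few examples happen to carry negative labels. All data is invented.
train = [("A", True)] * 40 + [("A", False)] * 10 + [("B", False)] * 3

def majority_label(group):
    """Predict the most common label seen for this group in training."""
    labels = [qualified for g, qualified in train if g == group]
    return Counter(labels).most_common(1)[0][0]

print(majority_label("A"))  # True: 50 examples give a reasonable estimate
print(majority_label("B"))  # False: 3 examples drive every future prediction
```

Nothing in the code is "unfair" on its face; the discrimination comes entirely from the dataset, which is why data governance is as important as the algorithm itself.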
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
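A toy feedback-loop simulation (not Google's actual ranking, which is proprietary; the topics, rates and update rule are invented) shows how engagement-driven reweighting can narrow what a user sees:

```python
import random

random.seed(0)

topics = ["politics", "sports", "tech", "health"]
weights = {t: 1.0 for t in topics}  # every topic starts with equal exposure
PREFERRED = "tech"                  # the simulated user engages only with this

for _ in range(200):
    # 90% of the time show the highest-ranked topic; otherwise explore randomly.
    if random.random() < 0.9:
        shown = max(weights, key=weights.get)
    else:
        shown = random.choice(topics)
    if shown == PREFERRED:
        weights[shown] *= 1.05  # a click feeds engagement back into the ranking
    else:
        weights[shown] *= 0.99  # ignored content is slowly demoted

print(max(weights, key=weights.get))  # the feed converges on "tech"
```

After a few hundred rounds the preferred topic dominates the ranking while everything else decays: a filter bubble emerges purely from the engagement signal, with no one deciding to exclude any topic.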
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU’s Artificial Intelligence Act aims to establish a regulatory framework that categorises AI systems based on risk and enforces strict standards for transparency, accountability and fairness, especially for high-risk AI applications, which may include search engines. In 2023, India proposed the Digital India Act, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias#:~:text=In%20healthcare%2C%20underrepresenting%20data%20of,can%20skew%20predictive%20AI%20algorithms
Introduction
The recent events in Mira Road, a bustling suburb on the outskirts of Mumbai, India, unfold like a modern-day parable, cautioning us against the perils of unverified digital content. The Mira Road incident, a communal clash that erupted into the physical realm, has been mirrored and magnified through the prism of social media. The Maharashtra Police, in a concerted effort to quell the spread of discord, issued stern warnings against the dissemination of rumours and fake messages. These digital phantoms, they stressed, have the potential to ignite law and order conflagrations, threatening the delicate tapestry of peace.
The police's clarion call came in the wake of a video, mischievously edited, that falsely claimed anti-social elements had set the Mira Road railway station ablaze. This digital doppelgänger of reality swiftly went viral, its tendrils reaching into the ubiquitous realm of WhatsApp, ensnaring the unsuspecting in its web of deceit.
In this age of information overload, where the line between fact and fabrication blurs, the police urged citizens to exercise discernment. The note they issued was not merely an advisory but a plea for vigilance, a reminder that the act of sharing unauthenticated messages is not a passive one; it is an act that can disturb the peace and unravel the fabric of society.
The Crackdown
The police's response to this crisis was multifaceted. Administrators and members of social media groups found to be the harbingers of such falsehoods would face legal repercussions. The Thane District, a mosaic of cultural and religious significance, has been marred by a series of violent incidents, casting a shadow over its storied history. The police, in their role as guardians of order, have detained individuals, scoured social media for inauthentic posts, and maintained a vigilant presence in the region.
The Maharashtra cyber cell, a digital sentinel, has unearthed approximately 15 posts laden with videos and messages designed to sow discord among the masses. These findings were shared with the Mira-Bhayandar, Vasai-Virar (MBVV) police, who stand ready to take appropriate action. Inspector General Yashasvi Yadav of the Maharashtra cyber cell issued an appeal to the public, urging them to refrain from circulating such unverified messages, reinforcing the notion that the propagation of inauthentic information is, in itself, a crime.
The MBVV police, in their zero-tolerance stance, have formed a team dedicated to scrutinizing social media posts. The message is clear: fake news will be met with strict action. The right to free speech on social media comes with the responsibility not to share information that could incite mischief. The Indian Penal Code and Information Technology Act serve as the bulwarks against such transgressions.
The Aftermath
In the aftermath of the clashes, the police have worked tirelessly to restore calm. A young man, whose video replete with harsh and obscene language went viral, was apprehended and has since apologised for his actions. The MBVV police have also taken to social media to reassure the public that the situation is under control, urging them to avoid circulating messages that could exacerbate tensions.
The Thane district has witnessed acts of vandalism targeting shops, further escalating tensions. In response, the police have apprehended individuals linked to these acts, hoping that such measures will expedite the return of peace. Advisories have been issued, warning against the dissemination of provocative messages and rumours.
In total, 19 individuals have been taken into custody in relation to numerous incidents of violence. The Mira-Bhayandar and Vasai-Virar police have underscored their commitment to legal action against those who spread rumours through fake messages. The authorities have also highlighted the importance of brotherhood and unity, reminding citizens that above all, they are Indians first.
Conclusion
In a world where old videos, stripped of context, can fuel tensions, the police have issued a note referring to the aforementioned fake video message. They urge citizens to exercise caution, and to neither believe nor circulate such messages. Police authorities have assured that no one involved in the violence will be spared, and peace committees are being convened to restore harmony. The Mira Road incident serves as a reminder of the power of information and the responsibility that comes with it. In the digital age, where the ephemeral and the eternal collide, we must navigate the waters of truth with care. Ultimately, it is not just the image of a locality that is at stake, but the essence of our collective humanity.
References
- https://youtu.be/gK2Ac1qP-nE?feature=shared
- https://www.mid-day.com/mumbai/mumbai-crime-news/article/mira-road-communal-clash-those-spreading-fake-messages-to-face-strict-action-say-mira-bhayandar-vasai-virar-cops-23331572
- https://www.mid-day.com/mumbai/mumbai-news/article/mira-road-communal-clash-cybercops-on-alert-for-fake-clips-23331653
- https://www.theweek.in/wire-updates/national/2024/01/24/bom43-mh-shops-3rdld-vandalism.html