#FactCheck - Delhi Metro Rail Corporation Price Hike
Executive Summary:
Recently, a viral social media post alleged that the Delhi Metro Rail Corporation Ltd. (DMRC) had increased ticket prices following the BJP’s victory in the Delhi Legislative Assembly elections. After thorough research and verification, we have found this claim to be misleading and baseless: the authorities have confirmed that no fare hike has been announced.
Claim:
Viral social media posts have claimed that the Delhi Metro Rail Corporation Ltd. (DMRC) increased metro fares following the BJP's victory in the Delhi Legislative Assembly elections.


Fact Check:
After thorough research, we conclude that the claims regarding a fare hike by the Delhi Metro Rail Corporation Ltd. (DMRC) following the BJP’s victory in the Delhi Legislative Assembly elections are misleading. Our review of DMRC’s official website and social media handles found no mention of any fare increase. Furthermore, the official X (formerly Twitter) handle of DMRC has clarified that no such price hike has been announced. We urge the public to rely on verified sources for accurate information and to refrain from spreading misinformation.

Conclusion:
Upon examining the alleged fare hike, it is evident that the increase pertains to Bengaluru, not Delhi. To verify this, we reviewed the official website of Bangalore Metro Rail Corporation Limited (BMRCL) and cross-checked the information with appropriate evidence, including relevant images. Our findings confirm that no fare hike has been announced by the Delhi Metro Rail Corporation Ltd. (DMRC).

- Claim: Delhi Metro fare hike after the BJP’s victory in the Delhi Assembly elections
- Claimed On: X (formerly Twitter)
- Fact Check: False and Misleading
Related Blogs
Introduction
According to Statista, the global artificial intelligence software market is forecast to reach around 126 billion US dollars by 2025, following a 270% increase in enterprise adoption over the past four years. The top three verticals in the AI market are BFSI (Banking, Financial Services, and Insurance), Healthcare & Life Sciences, and Retail & e-commerce. These sectors benefit from vast data generation and a critical need for advanced analytics. AI is used for fraud detection, customer service, and risk management in BFSI; diagnostics and personalised treatment plans in healthcare; and marketing and inventory management in retail.
The Chairperson of the Competition Commission of India (CCI), Smt. Ravneet Kaur, raised a concern that Artificial Intelligence has the potential to aid cartelisation by automating collusive behaviour through predictive algorithms. She explained that the mere use of algorithms is not in itself anti-competitive, but if algorithms are manipulated, that raises a valid concern about competition in markets.
This blog focuses on how policymakers can balance fostering innovation and ensuring fair competition in an AI-driven economy.
What is the Risk Created by AI-driven Collusion?
AI systems rely on predictive algorithms, which could aid cartelisation by automating collusive behaviour. AI-driven collusion could occur through (a simplified sketch follows the list):
- The use of predictive analytics to coordinate pricing strategies among competitors.
- The lack of human oversight in algorithm-driven decision-making, leading to tacit collusion (where competitors coordinate their actions without explicitly communicating or agreeing to do so).
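To make the mechanism concrete, here is a minimal, hypothetical Python sketch of tacit algorithmic collusion. The response rule and all numbers (cost, price cap, step size) are illustrative assumptions, not a description of any real pricing system: two sellers independently run the same "match any price cut, otherwise probe slightly higher" rule, never communicate, and still converge on a cartel-like price.

```python
# Hypothetical illustration of tacit algorithmic collusion: two sellers
# run the same simple response rule with no communication between them.
# All numbers below are illustrative assumptions.

COST = 10.0            # marginal cost per unit (assumed)
MONOPOLY_PRICE = 30.0  # price a formal cartel would charge (assumed)
STEP = 0.5             # size of each upward price "probe" (assumed)

def respond(rival_price: float, own_price: float) -> float:
    """Match any price cut; otherwise probe slightly above the rival."""
    if rival_price < own_price:
        return max(COST, rival_price)               # punish cuts by matching
    return min(MONOPOLY_PRICE, rival_price + STEP)  # otherwise creep upward

a, b = 20.0, 18.0  # arbitrary starting prices
for _ in range(40):
    a = respond(rival_price=b, own_price=a)  # A reacts to B's posted price
    b = respond(rival_price=a, own_price=b)  # B reacts to A's new price

print(f"Prices after 40 rounds: A={a:.2f}, B={b:.2f}")
# Both sellers end up at the monopoly price without ever "agreeing" to it.
```

Because each seller only ever observes the other's public price, there is no explicit agreement for a regulator to point to; the collusive outcome emerges from the two rules interacting, which is precisely why such conduct is hard to detect and prove.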
AI has been raising antitrust concerns, and the most recent example is the partnership between Microsoft and OpenAI, which has attracted the attention of national competition authorities over potential competition law issues. While the partnership is expected to accelerate innovation, it also raises concerns about potential anticompetitive effects, such as market foreclosure or the creation of barriers to entry for competitors, and it has therefore been under review by the German and UK competition authorities. The core difficulty lies in detecting and proving that collusion is taking place.
The Role of Policy and Regulation
The uncertainties AI introduces regarding its effects on competition create a need for algorithmic transparency and accountability to mitigate the risks of AI-driven collusion. This, in turn, calls for regulatory frameworks that mandate the disclosure of algorithmic methodologies and establish clear guidelines for the development and deployment of AI. These frameworks or guidelines should encourage collaboration between competition watchdogs and AI experts.
Global best practices and emerging trends in AI regulation already emphasise respect for human rights, sustainability, transparency, and strong risk management. The EU AI Act could serve as a model for other jurisdictions, as it outlines measures to ensure accountability and mitigate risks. The key goal is to tailor AI regulations to address perceived risks while incorporating core values such as privacy, non-discrimination, transparency, and security.
Promoting Innovation Without Stifling Competition
Policymakers need to balance regulatory measures with room for innovation so that the two priorities do not hinder each other.
- Create adaptive, forward-looking regulatory approaches that keep pace with technological advancement and allow quick adjustments in response to new AI capabilities and market behaviours.
- Competition watchdogs need to recruit domain experts to assess competition amid rapid changes in the technology landscape, and adopt a multi-stakeholder approach involving regulators, industry leaders, technologists, and academia to shape inclusive and ethical AI policies.
- Businesses can be provided incentives such as recognition through certifications, grants or benefits in acknowledgement of adopting ethical AI practices.
- Launch market studies, such as the CCI’s study of the impact of AI on competition; these can act as a driving force for sustainable growth alongside technological advancement.
Conclusion: AI and the Future of Competition
We must promote a multi-stakeholder approach that enhances regulatory oversight and incentivises ethical AI practices. This is needed to strike the delicate balance that safeguards competition and drives sustainable growth. As AI continues to redefine industries, embracing collaborative, inclusive, and forward-thinking policies will be critical to building an equitable and innovative digital future.
Lawmakers and policymakers engaged in drafting these frameworks need to ensure that they are adaptive to change and foster innovation. Fair competition and innovation are not mutually exclusive goals; they are complementary. Therefore, a regulatory framework that promotes transparency, accountability, and fairness in AI deployment must be established.
References
- https://www.thehindu.com/sci-tech/technology/ai-has-potential-to-aid-cartelisation-fair-competition-integral-for-sustainable-growth-cci-chief/article69041922.ece
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.ey.com/en_in/insights/ai/how-to-navigate-global-trends-in-artificial-intelligence-regulation#:~:text=Six%20regulatory%20trends%20in%20Artificial%20Intelligence&text=These%20include%20respect%20for%20human,based%20approach%20to%20AI%20regulation.
- https://www.business-standard.com/industry/news/ai-has-potential-to-aid-fair-competition-for-sustainable-growth-cci-chief-124122900221_1.html

Introduction
As technology develops, so do scams, and AI voice cloning schemes are one such issue that has recently come to light. Scammers have embraced AI, and their methods and plans for deceiving people have altered accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate them for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is media produced by artificial intelligence (AI): fake or altered audio, video, or film that passes for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something they never did in audio or visual form. A widely circulated example involved deep voice impersonation of the American President. Such technology can be used maliciously, for instance in deep voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Using this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust by conveying a false sense of familiarity and urgency, raising the likelihood that they will fall for the hoax. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or entities and mislead victims into providing private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology is also employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations. This is particularly risky in legal proceedings, where fabricated audio may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike must be aware of this risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. How can one protect oneself from deepfake voice scams? Here are some guidelines to help you spot them and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- Pay special attention to the voice of anyone who phones you claiming to be someone else. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t sound right could be a sign of deepfake voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to confirm they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activities
- Inform the appropriate authorities if you think you have fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim too.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it can also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions as the technology develops and deepfake schemes become harder to detect and prevent. Ongoing research into efficient techniques to identify and control the associated risks is also necessary. AI must be deployed responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.

Introduction
In the labyrinthine corridors of the digital age, where information zips across the globe with the ferocity of a tempest, the truth often finds itself ensnared in a web of deception. It is within this intricate tapestry of reality and falsehood that we find ourselves examining two distinct yet equally compelling cases of misinformation, each a testament to the pervasive challenges that beset our interconnected world.
Case 1: The Deceptive Video: Originating in Malaysia, Misattributed to Indian Railway Development
A misleading video claiming to showcase Indian railway construction has been debunked as footage from Malaysia's East Coast Rail Link (ECRL). Fact-checking efforts by India TV traced the video's origin to Malaysia, revealing deceptive captions in Tamil and Hindi. The video was initially posted on Twitter on January 9, 2024, announcing the commencement of track-laying for Malaysia's East Coast Railway. Further investigation revealed that the ECRL is a joint venture between Malaysia and China involving track-laying along Malaysia's east coast, undercutting the claim of Indian railway development. The ECRL's track-laying initiative, begun in December 2023, is part of China's Belt and Road initiative, covering 665 kilometers across states such as Kelantan, Terengganu, Pahang, and Selangor, with a completion target of 2025.
The video in question, a digital chameleon, had its origins not in the bustling landscapes of India but within the verdant bounds of Malaysia. Specifically, it was a scene captured from the East Coast Rail Link (ECRL) project, a monumental joint venture between Malaysia and China, unfurling across 665 kilometers of Malaysian terrain. This ambitious endeavor, part of the grand Belt and Road initiative, is a testament to the collaborative spirit that defines our era, with tracks stretching from Kelantan to Selangor, and a completion horizon set for the year 2025.
The unveiling of this grand project was graced by none other than Malaysia’s King Sultan Abdullah Sultan Ahmad Shah, in Pahang, underscoring the strategic alliance with China and the infrastructural significance of the ECRL. Yet, despite the clarity of its origins, the video found itself cloaked in a narrative of Indian development, a falsehood that spread like wildfire across the digital savannah.
Through the meticulous application of keyframe analysis and reverse image searches, the truth was laid bare. Reports from reputable sources such as the Associated Press and the Global Times, featuring the very same machinery, corroborated the video's true lineage. This revelation not only highlighted the ECRL's geopolitical import but also served as a clarion call for the critical role of fact-checking in an era where misinformation proliferates with reckless abandon.
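For readers curious about the mechanics, below is a minimal Python sketch of the keyframe-extraction step that typically precedes a reverse image search. It uses the widely available OpenCV library; the file name viral_clip.mp4 and the two-second sampling interval are assumptions for illustration, not details from the India TV investigation.

```python
# Minimal sketch: sample frames from a video so they can be uploaded to a
# reverse image search engine. Requires OpenCV (pip install opencv-python).
# The input file name and sampling interval are illustrative assumptions.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> int:
    """Save one frame every `every_n_seconds` as a JPEG; return the count."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if FPS is unreadable
    step = max(1, int(fps * every_n_seconds))  # frames between samples
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:           # end of video (or unreadable file)
            break
        if index % step == 0:
            cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    count = extract_keyframes("viral_clip.mp4")  # hypothetical file name
    print(f"Saved {count} keyframes for reverse image search")
```

Each saved frame can then be checked against reverse image search services to locate earlier appearances of the footage, which is how the machinery in this video was matched to Associated Press and Global Times reports.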
Case 2: Kerala's Incident: Investigating Fake Narratives
Kerala Chief Minister Pinarayi Vijayan has informed the Assembly that 53 cases have been registered over the spreading of fake narratives on social media to incite communal sentiments following the blasts at a Christian religious gathering in October 2023. Vijayan said cases have been registered against online news portals, editors, and Malayalam television channels. The state police chief has issued directions to monitor social media to stop the spread of fake news and take appropriate action.
In a different corner of the world, the serene backdrop of Kerala was shattered by an event that would ripple through the fabric of its society. The Kalamassery blast, a tragic occurrence at a Christian religious gathering, claimed the lives of eight individuals and left over fifty wounded. In the wake of this calamity, a man named Dominic Martin surrendered, claiming responsibility for the heinous act.
Yet, as the investigation unfolded, a different kind of violence emerged—one that was waged not with explosives but with words. A barrage of fake narratives began to circulate through social media, igniting communal tensions and distorting the narrative of the incident. The Kerala Chief Minister, Pinarayi Vijayan, informed the Assembly that 53 cases had been registered across the state, targeting individuals and entities that had fanned the flames of discord through their digital utterances.
The Kerala police, vigilant guardians of truth, embarked on a digital crusade to quell the spread of these communally instigative messages. With a particular concentration of cases in Malappuram district, the authorities worked tirelessly to dismantle the network of fake profiles that propagated religious hatred. Social media platforms were directed to assist in this endeavor, revealing the IP addresses of the culprits and enabling the cyber cell divisions to take decisive action.
In the aftermath of the blasts, the Chief Minister and the state police chief issued special instructions to monitor social media platforms for content that could spark communal uproar. Cyber patrolling became the order of the day, and a 20-member probe team was constituted to investigate the incident in depth.
Conclusion
These two cases, disparate in their nature and geography, converge on a singular point: the fragility of truth in the digital age. They highlight the imperative for vigilance and the pursuit of accuracy in a world where misinformation can spread like wildfire. As we navigate this intricate cyberscape, we must be mindful of the power of fact-checking and the importance of media literacy, for they are the light that guides us through the fog of falsehoods to the shores of veracity.
These narratives are not merely stories of deception thwarted; they are a call to action, a reminder of our collective responsibility to safeguard the integrity of our shared reality. Let us, therefore, remain steadfast in our quest for the truth, for it is only through such diligence that we can hope to preserve the sanctity of our discourse and the cohesion of our societies.
References:
- https://www.indiatvnews.com/fact-check/fact-check-misleading-video-claims-malaysian-rail-project-indian-truth-ecrl-india-railway-development-pm-modi-2024-01-29-914282
- https://sahilonline.org/kalamasserry-blast-53-cases-registered-across-kerala-for-spreading-fake-news