#FactCheck: Fake Claim on Delhi Authority Culling Dogs After Supreme Court Stray Dog Ban Directive 11 Aug 2025
Executive Summary:
A viral claim alleges that, following the Supreme Court of India’s August 11, 2025 order on relocating stray dogs, authorities in Delhi NCR have begun mass culling. Verification reveals the claim to be false and misleading. A reverse image search of the viral video traced it to older posts from outside India, likely linked to Haiti or Vietnam, as indicated by the use of Haitian Creole and Vietnamese in the respective posts. While the exact location cannot be independently verified, it is confirmed that the video is not from Delhi NCR and has no connection to the Supreme Court’s directive. The claim therefore lacks authenticity and is misleading.
Claim:
Several claims have circulated since the Supreme Court of India, on 11 August 2025, ordered the relocation of stray dogs to shelters. The primary claim suggests that authorities, following the order, have begun the mass killing or culling of stray dogs, particularly in Delhi and the National Capital Region. This narrative intensified after several videos, purporting to show dead or mistreated dogs allegedly linked to the Supreme Court’s directive, began circulating online.

Fact Check:
After conducting a reverse image search using a keyframe from the viral video, we found similar videos circulating on Facebook. Upon analyzing the language used in one of the posts, it appears to be Haitian Creole (Kreyòl Ayisyen), which is primarily spoken in Haiti. Another similar video was also found on Facebook, where the language used is Vietnamese, suggesting that the post associates the incident with Vietnam.
However, it is important to note that while these posts point towards different locations, the exact origin of the video cannot be independently verified. What can be established with certainty is that the video is not from Delhi NCR, India, as is being claimed. Therefore, the viral claim is misleading and lacks authenticity.
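Reverse image searches of the kind used above typically work by reducing each image or video keyframe to a compact "perceptual hash" and comparing hashes. As a rough illustration only (not any search engine's actual pipeline), here is a minimal average-hash sketch in pure Python; the tiny 4x4 "keyframes" are invented for the example.

```python
# Minimal sketch of perceptual (average) hashing, the kind of fingerprinting
# that underlies reverse image search. A keyframe is reduced to a small
# grayscale grid; the hash records which pixels are brighter than the mean.
# All data and names here are illustrative, not a real search engine's API.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical 4x4 "keyframes" (e.g. the same frame re-encoded):
frame_a = [[10, 200, 30, 220], [15, 198, 28, 225],
           [12, 205, 33, 218], [11, 199, 29, 221]]
frame_b = [[12, 202, 31, 219], [14, 197, 30, 224],
           [13, 204, 32, 217], [10, 201, 28, 222]]

ha, hb = average_hash(frame_a), average_hash(frame_b)
print(hamming_distance(ha, hb))  # 0: the frames match
```

Because re-encoded or re-uploaded copies of a video produce nearly identical hashes, a keyframe from a viral clip can be matched against older posts even when the file itself has been recompressed.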


Conclusion:
The viral claim linking the Supreme Court’s August 11, 2025 order on stray dogs to mass culling in Delhi NCR is false and misleading. Reverse image search confirms the video originated outside India, with evidence of Haitian Creole and Vietnamese captions. While the exact source remains unverified, it is clear the video is not from Delhi NCR and has no relation to the Court’s directive. Hence, the claim lacks credibility and authenticity.
Claim: Viral claim that Delhi authorities are culling dogs following the Supreme Court’s 11 August 2025 directive on stray dogs
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs
Introduction
On September 27, 2024, the Indian government took a significant step toward enhancing national security by amending business allocation rules through an extraordinary gazette notification. This amendment, which assigns specific roles to different Union Ministries and Departments regarding telecom network security, cybersecurity, and cybercrime, aims to clarify and streamline efforts in these critical areas. With India's evolving cybersecurity landscape, the need for a structured regulatory framework is pressing, as threats grow in complexity. Recent developments, such as the July 2024 global cyber outage and increasing cyber crimes like SMS scams, highlight the urgency of such reforms. Under Article 77 clause (3), the President amended the Government of India (Allocation of Business) Rules, 1961, to designate clearer responsibilities, reinforcing India's readiness to tackle emerging digital threats.
Key Highlights of the Gazette Notification
- Telecom Networks Security: A new entry "1A. Matters relating to the security of telecom networks" has been added under the Department of Telecommunications, highlighting an increased focus on securing the nation's telecom infrastructure.
- Cyber Security Responsibilities: A new entry "5B" on cyber security has been added under the Ministry of Electronics and Information Technology (MeitY). This assigns MeitY responsibility for cybersecurity issues under the Information Technology Act, 2000, giving the ministry the mandate to support other ministries or departments on cybersecurity matters.
- Oversight for Cyber Crime: Under the Ministry of Home Affairs, Department of Internal Security, a new entry "36A Matters relating to Cyber Crime" is introduced. This emphasises that the MHA will handle cybercrime issues, highlighting the government's attention toward enhancing internal security against cyber threats.
- Cyber Security Strategic Coordination: Any matter related to the "overall coordination and strategic direction for Cyber Security," has been given to the National Security Council Secretariat (NSCS). This consolidates the role of the NSCS in guiding cybersecurity strategies at the national level.
Impact on Policy and Governance
The amendments introduced through the notification are poised to significantly enhance the Indian government's cybersecurity framework by clarifying the roles of various ministries. The clear separation of responsibilities, with telecom network security under the Department of Telecommunications, cybercrime under the Ministry of Home Affairs, and overall cyber strategy under the National Security Council Secretariat, should enable better coordination between ministries. This clarity is expected to reduce bureaucratic delays, allowing quicker responses to cyber threats, cybercrimes, and telecom vulnerabilities. Such efficiency is crucial in the evolving landscape of digital threats. These changes have been largely welcomed, as they promise improved regulatory oversight and faster policy implementation, a step forward in bolstering India’s cyber resilience.
Conclusion
The amendments to the Government of India (Allocation of Business) Rules, 1961 mark a critical step in strengthening India's cybersecurity framework. By setting out specific responsibilities for telecom network security, cybercrime, and overall cybersecurity strategy among key ministries, the government seeks to improve coordination and reduce bureaucratic delays. This policy shift is poised to enhance India’s digital resilience, providing a foundation for rapid responses to emerging cyber threats. However, success hinges on effective implementation, resource allocation, and collaboration across ministries. Addressing concerns such as potential jurisdictional overlap and ensuring the inclusion of bodies like NCIIPC will be pivotal to comprehensive cyber protection. The complexity of cyber crimes and threats evolves every day, and the government's preparedness to handle them with regulatory insight remains a high priority.
References
- https://egazette.gov.in/(S(4r5oclueuwrjypfvr5b4vtzg))/ViewPDF.aspx
- https://www.ptinews.com/story/national/govt-specifies-roles-on-matters-related-to-security-of-telecom-network-cyber-security-and-cyber-crime/1856627
- https://www.thehindubusinessline.com/economy/centre-to-further-streamline-mechanism-to-deal-with-cyber-security-cyber-crime/article68694330.ece
- https://telecom.economictimes.indiatimes.com/news/policy/govt-specifies-roles-on-matters-related-to-security-of-telecom-network-cyber-security-and-cyber-crime/113754501

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work; the probability of hallucination can be reduced, but the phenomenon cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming is that newer, more advanced models are producing more hallucinations, not fewer, which seems counterintuitive. This has been especially prevalent in reasoning-based models, which generate answers step by step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
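The "next-word prediction without understanding" point can be made concrete with a toy bigram model: it counts which word follows which in its training text and always emits the most frequent successor. It has no notion of truth, only of frequency, which is exactly how a fluent but false continuation can emerge. The corpus below is invented for illustration.

```python
# Toy next-word predictor: counts word successors in a tiny corpus and
# always emits the most frequent one. Frequency, not truth, drives output.
from collections import Counter, defaultdict

corpus = ("the court ordered the relocation of dogs . "
          "the court ordered the culling of dogs .").split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

# The model fluently continues a sentence regardless of whether the
# resulting claim is true: both "relocation" and "culling" are equally
# plausible continuations to it, since both appear in its training text.
print(predict("ordered"))  # "the"
print(predict("the"))      # "court"
```

A real LLM is vastly more sophisticated, but the failure mode is analogous: when several continuations are statistically plausible, the model picks one with full confidence, whether or not it is factually correct.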
Reports on TechCrunch mention that when users asked AI models for short answers, hallucinations increased by up to 30%. A study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This is not limited to one Large Language Model; similar models such as DeepSeek show the same behaviour. Even more concerning are hallucinations in multimodal models, such as those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only looks real but can also feed fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other events.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. Developers are aware of this risk, however, and are actively charting ways to reduce the probability of such errors. Some of them are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data.
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and better curated.
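The retrieval step behind RAG can be sketched in a few lines. This is an illustration only: real systems use vector embeddings and a proper index, while plain word overlap stands in here, and the document store is invented.

```python
# Minimal sketch of the retrieval step behind Retrieval-Augmented
# Generation (RAG): before generating, the system pulls the most relevant
# passage from a trusted store and hands it to the model as grounding
# context. Word overlap stands in for real embedding similarity.

documents = [
    "The Supreme Court order of 11 August 2025 directed relocation of "
    "stray dogs to shelters.",
    "Haitian Creole is primarily spoken in Haiti.",
    "RAG anchors model output in retrieved, verifiable text.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "what did the supreme court order about stray dogs"
context = retrieve(query, documents)

# The retrieved passage is prepended to the prompt so the model answers
# from verifiable text instead of its internal (fallible) memory:
prompt = f"Answer using ONLY this context:\n{context}\nQuestion: {query}"
print(context)
```

The key design point is that the generation step is constrained to the retrieved context, so the model's answer can be traced back to a verifiable source rather than to statistical recall.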
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue to occur until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations

Introduction
In the wake of the SpyLoan scandal, more than a dozen malicious loan apps were downloaded onto Android phones from the Google Play Store. The actual number of affected users is significantly higher, because the apps are also available on third-party marketplaces and questionable websites.
Unmasking the Scam
When a user borrows money, these predatory lending applications capture large quantities of information from the smartphone, which is then used to blackmail and coerce the user into repaying the total with hefty interest. While the loan amount is disbursed, these apps request sensitive permissions, gaining access to the camera, contacts, messages, call logs, images, Wi-Fi network details, calendar entries, and other personal information, which is then sent to the loan sharks' servers.
Researchers have disclosed details of the applications used by loan sharks to mislead consumers, as well as the numerous techniques used to circumvent some of the limitations imposed on the Play Store. The malware is often built with appealing user interfaces and promotes simple, rapid access to cash under high-interest payback conditions. The revelation of the SpyLoan scandal has triggered an immediate response from law enforcement agencies worldwide. With an urgent need to protect millions of users from malicious loan apps, it has become extremely important for law enforcement to unmask the culprits and dismantle the cyber-criminal network.
Apps banned: here is the list of the apps removed from the Google Play Store:
- AA Kredit: इंस्टेंट लोन ऐप (com.aa.kredit.android)
- Amor Cash: Préstamos Sin Buró (com.amorcash.credito.prestamo)
- Oro Préstamo – Efectivo rápido (com.app.lo.go)
- Cashwow (com.cashwow.cow.eg)
- CrediBus Préstamos de crédito (com.dinero.profin.prestamo.credito.credit.credibus.loan.efectivo.cash)
- ยืมด้วยความมั่นใจ – ยืมด่วน (com.flashloan.wsft)
- PréstamosCrédito – GuayabaCash (com.guayaba.cash.okredito.mx.tala)
- Préstamos De Crédito-YumiCash (com.loan.cash.credit.tala.prestmo.fast.branch.mextamo)
- Go Crédito – de confianza (com.mlo.xango)
- Instantáneo Préstamo (com.mmp.optima)
- Cartera grande (com.mxolp.postloan)
- Rápido Crédito (com.okey.prestamo)
- Finupp Lending (com.shuiyiwenhua.gl)
- 4S Cash (com.swefjjghs.weejteop)
- TrueNaira – Online Loan (com.truenaira.cashloan.moneycredit)
- EasyCash (king.credit.ng)
- สินเชื่อปลอดภัย – สะดวก (com.sc.safe.credit)
Risks with several dimensions
The SpyLoan applications violate Google's Financial Services policy by unilaterally shortening the repayment period for personal loans to a few days or another arbitrary time frame. The operators also threaten users with public embarrassment and exposure if they do not comply with these unreasonable demands.
Furthermore, the privacy policies presented by SpyLoan are misleading. While ostensibly reasonable justifications are given for obtaining certain permissions, the underlying practices are highly intrusive. For instance, camera permission is ostensibly required to upload picture data for Know Your Customer (KYC) purposes, and access to the user's calendar is ostensibly required to schedule payment dates and reminders. Both of these permissions, however, are dangerous and can infringe on users' privacy.
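One way analysts spot such over-reach is by statically inspecting the permissions an app declares in its AndroidManifest.xml. The sketch below uses a manifest snippet invented for illustration (the permission constants themselves are real Android ones); the risky-permission list is a simplifying assumption, not Google's policy.

```python
# Sketch of flagging intrusive permissions by inspecting an app's
# AndroidManifest.xml. The manifest snippet is invented for illustration;
# the android.permission.* constants are real Android permission names.
import xml.etree.ElementTree as ET

MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.CAMERA"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
  <uses-permission android:name="android.permission.READ_CALENDAR"/>
  <uses-permission android:name="android.permission.INTERNET"/>
</manifest>"""

# Permissions a simple loan app has little legitimate need for
# (an assumed list for this example, not an official policy):
RISKY = {"android.permission.READ_CONTACTS",
         "android.permission.READ_SMS",
         "android.permission.READ_CALENDAR"}

# ElementTree expands the android: prefix to its full namespace URI:
ANDROID_NAME = "{http://schemas.android.com/apk/res/android}name"

def flag_risky(manifest_xml):
    """Return the sorted risky permissions declared in the manifest."""
    root = ET.fromstring(manifest_xml)
    requested = {el.get(ANDROID_NAME) for el in root.iter("uses-permission")}
    return sorted(requested & RISKY)

print(flag_risky(MANIFEST))
```

A user can perform the same check manually before installing: the Play Store listing shows every permission an app will request, and a loan app demanding contacts, SMS, or calendar access is a strong warning sign.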
Prosecution Strategies and Legal Framework
Law enforcement agencies and legal authorities have initiated prosecution strategies against the individuals involved in the SpyLoan scandal. This multifaceted approach involves international agreements and the exploration of innovative legal avenues. Agencies need to collaborate with international counterparts on cybercrime, leveraging legal frameworks against digital fraud. Furthermore, the cross-border nature of the SpyLoan operation requires a strong legal framework for exchanging information, processing extradition requests, and pursuing legal action across multiple jurisdictions.
Legal Protections for Victims: Seeking Compensation and Restitution
As the legal battle unfolds in the aftermath of the SpyLoan scam, the focus shifts towards the victims, who suffered financial loss from the fraudulent apps. Beyond prosecuting culprits, the pursuit of justice should involve legal safeguards for victims. Existing consumer protection laws serve as a crucial shield for SpyLoan victims; these laws are designed to safeguard individuals against unfair practices.
Challenges in legal representation
As the legal pursuit of justice in the SpyLoan scam progresses, it encounters challenges that demand careful navigation and strategic solutions. One of the primary obstacles lies in jurisdictional complexity: across national borders, it is challenging to determine which jurisdiction holds authority and to maintain a unified approach to prosecuting offenders in different regions, even with the efforts of multiple government agencies.
Concealing the digital identities
A major challenge is the anonymity afforded by the digital realm, which makes it difficult to identify and catch the perpetrators of the scam. The scammers conceal their identities, making it hard for law enforcement agencies to attribute actions to specific individuals. This challenge can be overcome through joint efforts by international agencies, advanced digital forensics, and cutting-edge technology to unmask the scammers.
Technological challenges
The nature of cyber threats and crime patterns changes day by day as technology advances, which has become a challenge for legal authorities. Scammers exploit vulnerabilities, making it essential for law enforcement agencies to stay a step ahead, which requires continuous training in cybercrime and cyber security.
Shaping the policies to prevent future fraud
As the scam unfolds, it has become very important to empower users through awareness campaigns. App developers, in turn, need to adopt a transparent approach towards users.
Conclusion
It is important to shape policies that prevent future cyber fraud through a multifaceted approach. Proposals for legislative amendments, international collaboration, accountability measures, technological protections, and public awareness programmes all contribute to the creation of a legal framework that is proactive, flexible, and robust against cybercriminals' shifting techniques. The legal system is at the forefront of this effort, playing a critical role in developing regulations that will protect the digital landscape for years to come.
Safeguarding against spyware threats like SpyLoan requires vigilance and adherence to best practices. Users should exclusively download apps from official sources, meticulously verify the authenticity of offerings, scrutinize reviews, and carefully assess permissions before installation.