# Factcheck: Viral Image of Men Riding an Elephant Next to a Tiger in Bihar is Misleading
Executive Summary:
A post on X (formerly Twitter) features an image that has been widely shared with misleading captions, claiming to show men riding an elephant next to a tiger in Bihar, India. The post has sparked both fascination and skepticism on social media. However, our investigation has revealed that the image is misleading: it is not a recent photograph but a picture of an incident from 2011. Always verify claims before sharing.

Claims:
An image purporting to depict men riding an elephant next to a tiger in Bihar has gone viral, implying that this astonishing event truly took place.

Fact Check:
A reverse image search of the viral picture shows that it comes from an older video. The footage shows a tiger being shot by forest guards after it turned man-eater, having killed six people and caused panic in local villages in the Ramnagar division of Uttarakhand in January 2011.

Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The claim that men rode an elephant alongside a tiger in Bihar is false. The photo presented as recent actually originates from the past and does not depict a current event. Social media users should exercise caution and verify sensational claims before sharing them.
- Claim: The video shows people casually interacting with a tiger in Bihar
- Claimed On: Instagram and X (formerly known as Twitter)
- Fact Check: False and Misleading
Related Blogs

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force just 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI two years in the making, enters into force across all 27 EU Member States on 1 August 2024, with staged future deadlines; enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI tools that threaten citizens' rights, including biometric categorization, untargeted scraping of faces, and social scoring systems; systems that try to read emotions are banned in workplaces and schools. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, so various deadlines apply as different legal provisions start to take effect.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
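Since that ceiling is simply the higher of two quantities, it can be expressed as a one-line maximum. A minimal sketch in Python (the turnover figures are invented purely for illustration):

```python
# Illustrative only: the Act's penalty ceiling is the higher of
# EUR 35 million or 7% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

print(max_fine_eur(400_000_000))    # 35000000.0  -> the flat EUR 35M floor applies
print(max_fine_eur(2_000_000_000))  # 140000000.0 -> 7% of turnover is higher
```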
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits AI posing unacceptable risk, such as social scoring systems and manipulative AI, and mostly addresses high-risk AI systems (a minimal code sketch of the tiers follows this list).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place them on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country; this also covers third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must supply technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with copyright and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
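To make the four-tier taxonomy above concrete, here is a minimal sketch in Python. The example use cases and their mapping are illustrative simplifications drawn from this article, not a legal classification tool:

```python
# Illustrative mapping of example use cases to the Act's four risk tiers.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to data-quality and anti-bias obligations"
    LIMITED = "allowed, subject to lighter transparency obligations"
    MINIMAL = "free use"

# Assumed examples, paraphrased from the article's own illustrations.
EXAMPLE_TIERS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "emotion recognition at work": RiskTier.UNACCEPTABLE,
    "AI screening of job applicants": RiskTier.HIGH,
    "biometric identification": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter or AI-enabled video game": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```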
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1, 2024: The AI Act enters into force.
- February 2025: Chapters I (general provisions) and II (prohibited AI systems) apply; the prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) and its corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) and its corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers, as sketched below.
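A hedged sketch of that applicability logic, reduced to the two EU-nexus conditions described above; the function and parameter names are assumptions made for this illustration, and it is a simplification, not legal advice:

```python
# Illustrative simplification of the Act's territorial scope.
def eu_ai_act_applies(on_eu_market_or_in_use_in_eu: bool,
                      output_used_in_eu: bool) -> bool:
    """The Act applies if either EU nexus described above is present."""
    return on_eu_market_or_in_use_in_eu or output_used_in_eu

# A provider in Switzerland whose system's output is intended for EU use:
print(eu_ai_act_applies(on_eu_market_or_in_use_in_eu=False,
                        output_used_in_eu=True))   # True
# No EU market presence and no EU use of outputs:
print(eu_ai_act_applies(on_eu_market_or_in_use_in_eu=False,
                        output_used_in_eu=False))  # False
```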
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models, adopting a risk-based approach to governance that categorizes potential risks into four tiers: unacceptable, high, limited, and low, with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned poses a conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation. Hence, flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Overview:
The National Payments Corporation of India (NPCI) officially revealed on 31 July 2024 that its client C-Edge Technologies had been subject to a ransomware attack. As a result, C-Edge was isolated from retail payment systems to prevent further threats to the national payment systems. More than 200 cooperative and regional rural banks were affected, leading to disruptions in normal services, including ATM withdrawals and UPI transactions.
About C-Edge Technologies:
C-Edge Technologies was founded in 2010 to meet the specific requirements of the Indian banking sector and allied industries, with a particular focus on cooperative and regional rural banks. The company offers a range of services: core banking solutions, functioning as the centre of a bank where customer records are managed and transactions are accounted for; payment solutions, through payment gateways and mobile banking facilities; cybersecurity, through threat detection and incident response to protect banking organizations; and data analytics and AI, analysing large volumes of banking data to reduce risk and detect fraud.
Details of Ransomware attack:
Reports attribute this ransomware attack to the RansomEXX group, which primarily targeted Brontoo Technology Solutions, a key collaborator with C-Edge, through a misconfigured Jenkins server that allowed unauthorized access to the systems.
The RansomEXX group, also known as Defray777 or Ransom X, used a sophisticated variant known as RansomEXX v2.0 to execute the attack. The group often targets large organizations and demands substantial ransoms, using various malware tools such as IcedID, Vatet Loader, and PyXie RAT. It typically infiltrates systems through phishing emails and by exploiting vulnerabilities in applications and services, including Remote Desktop Protocol (RDP). The ransomware encrypts files using the Advanced Encryption Standard (AES), with the encryption key further secured using RSA encryption; this dual-layer encryption complicates recovery efforts for victims. RansomEXX operates on a ransomware-as-a-service model, allowing affiliates to conduct attacks using its infrastructure. Earlier, in 2021, it attacked StarHub and Gigabyte's servers for ransom.
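The dual-layer scheme described above is the standard hybrid-encryption pattern: each file is encrypted with a fresh symmetric AES key, and that key is then wrapped with the attacker's RSA public key, so nothing left on the victim's machine can recover the files. A minimal sketch of the general technique using Python's `cryptography` library, illustrating the pattern rather than RansomEXX's actual code, and using AES-GCM as a representative AES mode:

```python
# Generic hybrid (AES + RSA) encryption sketch (NOT RansomEXX's actual code).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key pair held by the attacker; the victim's machine only sees the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_hybrid(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    """Return (ciphertext, nonce, RSA-wrapped AES key)."""
    aes_key = AESGCM.generate_key(bit_length=256)  # fresh per-file key
    nonce = os.urandom(12)                         # unique per encryption
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(              # only the private key unwraps
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return ciphertext, nonce, wrapped_key

ct, nonce, wk = encrypt_hybrid(b"example file contents")
```

Because the AES key is kept only in RSA-wrapped form, recovery without the attacker's private key amounts to breaking RSA or AES, which is why victims of such schemes rarely have a cryptographic way out.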
Impact due to the attack:
The immediate consequences of the ransomware attack include:
- Service Disruption: Consumers who rely on the affected banks for day-to-day activities such as withdrawals and online transactions faced outages. Among the complaints are cases where the sender's account was debited without the corresponding credit reaching the receiver's account.
- Isolation Measures: NPCI disconnected C-Edge from its networks to contain the spread of the ransomware. This precautionary step was taken to safeguard the functioning of the larger financial system.
Operations resumed:
The National Payments Corporation of India (NPCI) said it has restored connectivity with C-Edge Technologies Ltd after severing the latter's network connection over security concerns that were evaluated by an external forensic auditing firm. The audit confirmed that all affected systems had been contained, preventing the ransomware from spreading further. The affected systems were confined to C-Edge's data centre, and no impact was observed on the infrastructure of the cooperative banks or regional rural banks involved. Both NPCI and C-Edge Technologies have resumed normal operations so that the banking and financial services offered by these banks remain safe and secure.
Major Implications for Banking Sector:
The attack on C-Edge Technologies raises several critical concerns for the Indian banking sector:
- Cybersecurity Vulnerabilities: The incident exposes weak links in the technology systems that serve smaller banks. Even though C-Edge itself offers cybersecurity solutions, the attack is evidence that security needs to improve across banks of all sizes and across banking applications.
- Financial Inclusion Risks: Cooperative and regional rural banks play an important role in financial inclusion, especially in rural and semi-urban areas. Interruptions to their services risk undoing progress in bringing excluded groups into the formal financial system.
- Regulatory Scrutiny: After this event, agencies such as the Reserve Bank of India (RBI) may intensify examination of the banking sector's cybersecurity mechanisms, and new directives may require institutions to adhere to higher compliance standards for defence against cyber threats.
Way Forward: Mitigation
- Strengthening Cybersecurity: It is important to strengthen cybersecurity to prevent this kind of attack in the future. This may include better threat detection systems, penetration testing to find vulnerabilities, system hardening, and continuous network monitoring (a minimal detection sketch follows this list).
- Transition to Cloud-Based Solutions: Adopting cloud-based solutions can enhance operational efficiency and optimize resource utilization. Cloud security features should be implemented to protect SMEs in the banking sector against cyber threats.
- Leveraging AI and Data Analytics: Developing AI-based solutions for fraud and risk control gives banking organizations the chance to address threats proactively and regain clients' trust.
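One concrete heuristic behind the "better threat detection systems" suggested above: mass encryption leaves files with an almost uniform byte distribution, so a monitor can flag files whose Shannon entropy approaches the 8-bits-per-byte ceiling. A minimal sketch in Python; the threshold and sample size are illustrative assumptions, and real deployments combine this with many other signals:

```python
# Illustrative detection heuristic: flag files whose content looks
# uniformly random (near 8 bits/byte), as encrypted data tends to be.
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: 0.0 for empty input, at most 8.0."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def flag_high_entropy_files(directory: str,
                            threshold: float = 7.5,   # assumed cut-off
                            sample_bytes: int = 65536) -> list[Path]:
    """Scan a directory tree; return files whose leading bytes look random."""
    flagged = []
    for path in Path(directory).rglob("*"):
        if path.is_file():
            with path.open("rb") as f:
                if shannon_entropy(f.read(sample_bytes)) > threshold:
                    flagged.append(path)
    return flagged
```

Note that legitimately compressed media (ZIPs, JPEGs) is also high-entropy, so such a monitor surfaces candidates for review rather than proving an attack.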
Conclusion:
The ransomware attack on C-Edge Technologies is a warning for all banking-sector infrastructure. Prompt containment and quarantining proved effective in this case. Continuous monitoring of the infrastructure's cybersecurity posture, combined with employee awareness, helps prevent such attacks. Building up cybersecurity capability will also safeguard institutions against other cyber risks in the future and fortify confidence in the reliability of the financial system, especially for regional rural banks.
Reference:
- https://www.businesstoday.in/technology/news/story/c-edge-technologies-a-deep-dive-into-the-indian-fintech-powerhouse-hit-by-major-cyberattack-439657-2024-08-01
- https://www.thehindu.com/sci-tech/technology/customers-at-several-small-sized-banks-affected-as-tech-provider-c-edge-suffers-ransomware-attack/article68470198.ece
- https://www.cnbctv18.com/technology/ransomware-attack-disrupts-over-200-co-operative-banks-regional-rural-banks-19452521.htm
- https://timesofindia.indiatimes.com/city/ahmedabad/ransomware-breach-at-c-edge-impacts-transactions-for-cooperative-banks/articleshow/112180914.cms
- https://www.emsisoft.com/en/blog/41027/ransomware-profile-ransomexx/

Introduction
The recent inauguration of the Google Safety Engineering Centre (GSEC) in Hyderabad on 18th June, 2025, marks a pivotal moment not just for India, but for the entire Asia-Pacific region’s digital future. As only the fourth such centre in the world after Munich, Dublin, and Málaga, its presence signals a shift in how AI safety, cybersecurity, and digital trust are being decentralised, leading to a more globalised and inclusive tech ecosystem. India has digitised at a rapid pace over the years, introducing millions of first-time internet users who, depending on their awareness, are susceptible to online scams, phishing, deepfakes, and AI-driven fraud. The establishment of GSEC is not just the launch of a facility but a step towards addressing AI readiness, user protection, and ecosystem resilience.
Building a Safer Digital Future in the Global South
The GSEC is set to operationalise the Google Safety Charter, designed around three core pillars: empowering users by protecting them from online fraud, strengthening cybersecurity for government and enterprise, and advancing responsible AI in platform design and execution. This represents a shift from standard reactive safety responses to proactive, AI-driven risk mitigation. The goal is to make safety tools not only effective but tailored to threats unique to the Global South, from multilingual phishing to financial fraud via unofficial lending apps. The centre is expected to stimulate regional cybersecurity ecosystems by creating jobs, fostering public-private partnerships, and enabling collaboration across academia, law enforcement, civil society, and startups. In doing so, it positions Asia-Pacific not as a consumer of standard Western safety solutions but as an active contributor to the next generation of digital safeguards and customised solutions.
Solutions Google has previously piloted include DigiKavach, a real-time fraud detection framework, along with tools like spam protection in mobile operating systems and app-vetting mechanisms. GSEC can aid in scaling and integrating these efforts into systems-level responses, where threat detection, safety warnings, and reporting mechanisms coordinate seamlessly across platforms. This reimagines safety as a core design principle in India’s digital public infrastructure rather than an attack-driven response.
CyberPeace Insights
The launch aligns with events such as the AI Readiness Methodology Conference recently held in New Delhi, which brought together researchers, policymakers, and industry leaders to discuss ethical, secure, and inclusive AI implementation. As the world grapples with AI technologies ranging from generative content to algorithmic decision-making, centres like GSEC can play a critical role in defining the safeguards and governance structures that support rapid innovation without compromising public trust and safety. The region’s experiences and innovations in AI governance must shape global norms, and tech firms have a significant role in making that happen. Beyond this, efforts to build digital infrastructure alongside safety centres dedicated to protecting it resonate with India’s vision of becoming a global leader in AI.
References
- https://www.thehindu.com/news/cities/Hyderabad/google-safety-engineering-centre-india-inaugurated-in-hyderabad/article69708279.ece
- https://www.businesstoday.in/technology/news/story/google-launches-safety-charter-to-secure-indias-ai-future-flags-online-fraud-and-cyber-threats-480718-2025-06-17?utm_source=recengine&utm_medium=web&referral=yes&utm_content=footerstrip-1&t_source=recengine&t_medium=web&t_content=footerstrip-1&t_psl=False
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://blog.google/intl/en-in/company-news/googles-safety-charter-for-indias-ai-led-transformation/
- https://economictimes.indiatimes.com/magazines/panache/google-rolls-out-hyderabad-hub-for-online-safety-launches-first-indian-google-safety-engineering-centre/articleshow/121928037.cms?from=mdr