# Factcheck: Viral Image of Men Riding an Elephant Next to a Tiger in Bihar is Misleading
Executive Summary:
A post on X (formerly Twitter) features an image that has been widely shared with misleading captions claiming to show men riding an elephant next to a tiger in Bihar, India. The post has sparked both fascination and skepticism on social media. However, our investigation has revealed that the image is misleading: it is not a recent photograph but a photo of an incident from 2011. Always verify claims before sharing.

Claims:
An image purporting to depict men riding an elephant next to a tiger in Bihar has gone viral, implying that this astonishing event truly took place.

Fact Check:
A reverse image search of the viral image shows that it comes from an older video. The footage shows a tiger that was shot by forest guards after it turned man-eater, killing six people and causing panic in local villages in the Ramnagar division of Uttarakhand in January 2011.

Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The claim that men rode an elephant alongside a tiger in Bihar is false. The photo presented as recent actually originates from the past and does not depict a current event. Social media users should exercise caution and verify sensational claims before sharing them.
- Claim: The video shows people casually interacting with a tiger in Bihar
- Claimed On: Instagram and X (formerly known as Twitter)
- Fact Check: False and Misleading
Related Blogs

Overview:
The rapid digitization of educational institutions in India has created both opportunities and challenges. While technology has improved access to education and administrative efficiency, it has also exposed institutions to significant cyber threats. This report, published by CyberPeace, examines the types, causes, impacts, and preventive measures related to cyber risks in Indian educational institutions. It highlights global best practices, national strategies, and actionable recommendations to mitigate these threats.

Significance of the Study:
The pandemic-induced shift to online learning, combined with limited cybersecurity budgets, has made educational institutions prime targets for cyberattacks. These threats compromise sensitive student, faculty, and institutional data, leading to operational disruptions, financial losses, and reputational damage. Globally, educational institutions face similar challenges, emphasizing the need for universal and localized responses.
Threats Faced by Educational Institutions:
Based on insights from CyberPeace's report titled 'Exploring Cyber Threats and Digital Risks in Indian Educational Institutions', this blog provides a concise overview of the cybersecurity threats and risks faced by educational institutions, along with essential details to address these challenges.
🎣 Phishing: Phishing is a social engineering tactic where cyber criminals impersonate trusted sources to steal sensitive information, such as login credentials and financial details. It often involves deceptive emails or messages that lead to counterfeit websites, pressuring victims to provide information quickly. Variants include spear phishing, smishing, and vishing.
💰 Ransomware: Ransomware is malware that locks users out of their systems or data until a ransom is paid. It spreads through phishing emails, malvertising, and exploiting vulnerabilities, causing downtime, data leaks, and theft. Ransom demands can range from hundreds to hundreds of thousands of dollars.
🌐 Distributed Denial of Service (DDoS): DDoS attacks overwhelm servers, denying users access to websites and disrupting daily operations, which can hinder students and teachers from accessing learning resources or submitting assignments. These attacks are relatively easy to execute, especially against poorly protected networks, and can be carried out by amateur cybercriminals, including students or staff, seeking to cause disruption for various reasons.
🕵️ Cyber Espionage: Higher education institutions, particularly research-focused universities, are vulnerable to spyware, insider threats, and cyber espionage. Spyware is unauthorized software that collects sensitive information or damages devices. Insider threats arise from negligent or malicious individuals, such as staff or vendors, who misuse their access to steal intellectual property or cause data leaks.
🔒 Data Theft: Data theft is a major threat to educational institutions, which store valuable personal and research information. Cybercriminals may sell this data or use it for extortion, while stealing university research can provide unfair competitive advantages. These attacks can go undetected for long periods, as seen in the University of California, Berkeley breach, where hackers allegedly stole 160,000 medical records over several months.
🛠️ SQL Injection: SQL injection (SQLI) is an attack that uses malicious code to manipulate backend databases, granting unauthorized access to sensitive information like customer details. Successful SQLI attacks can result in data deletion, unauthorized viewing of user lists, or administrative access to the database.
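To make the mechanism concrete, here is a minimal Python sketch (the table, column names, and payload are illustrative assumptions, not drawn from any specific institution's systems) contrasting a query built by string concatenation, which is vulnerable to SQLI, with a parameterised query that treats user input purely as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO students VALUES (1, 'Asha', 'asha@example.edu')")

user_input = "' OR '1'='1"  # a classic SQLI payload supplied by an attacker

# Vulnerable: user input is concatenated directly into the SQL string,
# so the payload rewrites the WHERE clause and the query returns every row.
vulnerable = f"SELECT * FROM students WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # leaks all stored records

# Safer: a parameterised query treats the input as data, not SQL,
# so the payload matches nothing.
safe = "SELECT * FROM students WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []
```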
🔍Eavesdropping attack: An eavesdropping breach, or sniffing, is a network attack where cybercriminals steal information from unsecured transmissions between devices. These attacks are hard to detect since they don't cause abnormal data activity. Attackers often use network monitors, like sniffers, to intercept data during transmission.
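As a hedged, defensive illustration of why unencrypted transmissions are exposed, the sketch below uses the scapy library (an assumption; it also requires root privileges on the monitoring host) to show that plain-HTTP payloads on a network are readable in clear text, whereas HTTPS traffic would show only ciphertext:

```python
from scapy.all import sniff, Raw, TCP

def show_plaintext(pkt):
    # Print the start of any readable TCP payload that looks like HTTP.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if b"HTTP" in payload:
            print(payload[:120])

# Capture ten packets on port 80 (plain HTTP) and print anything readable.
sniff(filter="tcp port 80", prn=show_plaintext, count=10)
```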
🤖 AI-Powered Attacks: AI enhances cyber attacks like identity theft, password cracking, and denial-of-service attacks, making them more powerful, efficient, and automated. It can be used to inflict harm, steal information, cause emotional distress, disrupt organizations, and even threaten national security by shutting down services or cutting power to entire regions.
Insights from Project e-Kawach
The CyberPeace Research Wing, in collaboration with SAKEC CyberPeace Center of Excellence (CCoE) and Autobot Infosec Private Limited, conducted a study simulating educational institutions' networks to gather intelligence on cyber threats. As part of the e-Kawach project, a nationwide initiative to strengthen cybersecurity, threat intelligence sensors were deployed to monitor internet traffic and analyze real-time cyber attacks from July 2023 to April 2024, revealing critical insights into the evolving cyber threat landscape.
Cyber Attack Trends
Between July 2023 and April 2024, the e-Kawach network recorded 217,886 cyberattacks from IP addresses worldwide, with a significant portion originating from countries including the United States, China, Germany, South Korea, Brazil, Netherlands, Russia, France, Vietnam, India, Singapore, and Hong Kong. However, attributing these attacks to specific nations or actors is complex, as threat actors often use techniques like exploiting resources from other countries, or employing VPNs and proxies to obscure their true locations, making it difficult to pinpoint the real origin of the attacks.
Brute Force Attack:
The analysis uncovered an extensive use of automated tools in brute force attacks, with 8,337 unique usernames and 54,784 unique passwords identified. Among these, the most frequently targeted username was “root,” which accounted for over 200,000 attempts. Other commonly targeted usernames included: "admin", "test", "user", "oracle", "ubuntu", "guest", "ftpuser", "pi", "support"
Similarly, the study identified several weak passwords commonly targeted by attackers. “123456” was attempted over 3,500 times, followed by “password” with over 2,500 attempts. Other frequently targeted passwords included: "1234", "12345", "12345678", "admin", "123", "root", "test", "raspberry", "admin123", "123456789"
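The report does not describe the tooling behind these tallies, but a frequency count like this can be reproduced in a few lines of Python. The sketch below is an illustrative assumption, using a hypothetical log excerpt in the common OpenSSH failed-login format rather than the project's actual capture files:

```python
import re
from collections import Counter

# Hypothetical log lines; a real analysis would read /var/log/auth.log
# or the honeypot sensor's own capture files.
log_lines = [
    "Failed password for invalid user admin from 203.0.113.7 port 4122 ssh2",
    "Failed password for root from 198.51.100.23 port 50314 ssh2",
    "Failed password for invalid user test from 203.0.113.7 port 4123 ssh2",
    "Failed password for root from 192.0.2.45 port 60211 ssh2",
]

pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

usernames = Counter()
for line in log_lines:
    match = pattern.search(line)
    if match:
        usernames[match.group(1)] += 1

# Most frequently attempted usernames, e.g. [('root', 2), ('admin', 1), ('test', 1)]
print(usernames.most_common(10))
```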

Insights from Threat Landscape Analysis
Research done by the USI - CyberPeace Centre of Excellence (CCoE) and Resecurity has uncovered several breached databases belonging to public, private, and government universities in India, highlighting significant cybersecurity threats in the education sector. The research aims to identify and mitigate cybersecurity risks without harming individuals or assigning blame, based on data available at the time, which may evolve with new information. Institutions were assigned risk ratings that descend from A to F, with most falling under a D rating, indicating numerous security vulnerabilities. Institutions rated D or F are 5.4 times more likely to experience data breaches compared to those rated A or B. Immediate action is recommended to address the identified risks.


Risk Findings :
The risk findings for the institutions are summarized through a pie chart, highlighting factors such as data breaches, dark web activity, botnet activity, and phishing/domain squatting. Data breaches and botnet activity are significantly higher compared to dark web leakages and phishing/domain squatting. The findings show 393,518 instances of data breaches, 339,442 instances of botnet activity, 7,926 instances of dark web leakage, and 6,711 instances of phishing & domain squatting.

Key Indicators:
- Multiple instances of data breaches containing credentials (email/passwords) in plain text.
- Botnet activity indicating network hosts compromised by malware.
- Credentials from third-party government and non-governmental websites linked to official institutional emails.
- Details of software applications and drivers installed on compromised hosts.
- Sensitive cookie data exfiltrated from various browsers.
- IP addresses of compromised systems.
- Login credentials for different Android applications.

Below is a sample detail from one of the top educational institutions, providing insight into its high rates of data breaches, botnet activity, dark web activity, and phishing & domain squatting.
Risk Detection:
This indicates the number of data breaches, network hygiene issues, dark web activities, botnet activities, cloud security issues, phishing & domain squatting, media monitoring findings, and miscellaneous risks. In the example below, data breaches and botnet activities account for the highest numbers in the sampled domain.

Risk Changes:

Risk by Categories:

Risk is categorized as high, medium, or low; for this institution, the risk is at a high level for data breaches and botnet activities.

Challenges Faced by Educational Institutions
The challenges that lead to cyberattack incidents in educational institutions are as follows:
🔒 Lack of a Security Framework: A key challenge in cybersecurity for educational institutions is the lack of a dedicated framework for higher education. Existing frameworks like ISO 27001, NIST, COBIT, and ITIL are designed for commercial organizations and are often difficult and costly to implement. Consequently, many educational institutions in India do not have a clearly defined cybersecurity framework.
🔑 Diverse User Accounts: Educational institutions manage numerous accounts for staff, students, alumni, and third-party contractors, with high user turnover. The continuous influx of new users makes maintaining account security a challenge, requiring effective systems and comprehensive security training for all users.
📚 Limited Awareness: Cybersecurity awareness among students, parents, teachers, and staff in educational institutions is limited due to the recent and rapid integration of technology. The surge in tech use, accelerated by the pandemic, has outpaced stakeholders' ability to address cybersecurity issues, leaving them unprepared to manage or train others on these challenges.
📱 Increased Use of Personal/Shared Devices: The growing reliance on unvetted personal and shared devices for academic and administrative activities amplifies security risks.
💬 Lack of Incident Reporting: Educational institutions often neglect reporting cyber incidents, increasing vulnerability to future attacks. It is essential to report all cases, from minor to severe, to strengthen cybersecurity and institutional resilience.
Impact of Cybersecurity Attacks on Educational Institutions
Cybersecurity attacks on educational institutions lead to learning disruptions, financial losses, and data breaches. They also harm the institution's reputation and pose security risks to students. The following are the impacts of cybersecurity attacks on educational institutions:
📚Impact on the Learning Process: A report by the US Government Accountability Office (GAO) found that cyberattacks on school districts resulted in learning losses ranging from three days to three weeks, with recovery times taking between two to nine months.
💸Financial Loss: US schools reported financial losses ranging from $50,000 to $1 million due to expenses like hardware replacement and cybersecurity upgrades, with recovery taking between 2 and 9 months.
🔒Data Security Breaches: Cyberattacks exposed sensitive personal data, including grades, social security numbers, and bullying reports, causing emotional, physical, and financial harm. These breaches can be intentional or accidental: in a US study, staff were responsible for most accidental breaches (21 out of 25 cases), while intentional breaches were primarily carried out by students (27 out of 52 cases), frequently to tamper with grades.
🏫Impact on Institutional Reputation: Cyberattacks damaged the reputation of educational institutions, eroding trust among students, staff, and families. Negative media coverage and scrutiny impacted staff retention, student admissions, and overall credibility.
🛡️ Impact on Student Safety: Cyberattacks compromised student safety and privacy. For example, breaches like live-streaming school CCTV footage caused severe distress, negatively impacting students' sense of security and mental well-being.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
- Develop effective incident response plans: Establish a clear and structured plan to quickly identify, respond to, and recover from cyber threats. Ensure that staff are well-trained and know their roles during an attack to minimize disruption and prevent further damage.
- Implement access controls with role-based permissions: Restrict access to sensitive information based on individual roles within the institution. This ensures that only authorized personnel can access certain data, reducing the risk of unauthorized access or data breaches.
- Regularly update software and conduct cybersecurity training: Keep all software and systems up-to-date with the latest security patches to close vulnerabilities. Provide ongoing cybersecurity awareness training for students and staff to equip them with the knowledge to prevent attacks, such as phishing.
- Ensure regular and secure backups of critical data: Perform regular backups of essential data and store them securely in case of cyber incidents like ransomware. This ensures that, if data is compromised, it can be restored quickly, minimizing downtime.
- Adopt multi-factor authentication (MFA): Enforce multi-factor authentication (MFA) for accessing sensitive systems or information to strengthen security. MFA adds an extra layer of protection by requiring users to verify their identity through more than one method, such as a password and a one-time code (see the brief sketch after this list).
- Deploy anti-malware tools: Use advanced anti-malware software to detect, block, and remove malicious programs. This helps protect institutional systems from viruses, ransomware, and other forms of malware that can compromise data security.
- Monitor networks using intrusion detection systems (IDS): Implement IDS to monitor network traffic and detect suspicious activity. By identifying threats in real time, institutions can respond quickly to prevent breaches and minimize potential damage.
- Conduct penetration testing: Regularly conduct penetration testing to simulate cyberattacks and assess the security of institutional networks. This proactive approach helps identify vulnerabilities before they can be exploited by actual attackers.
- Collaborate with cybersecurity firms: Partner with cybersecurity experts to benefit from specialized knowledge and advanced security solutions. Collaboration provides access to the latest technologies, threat intelligence, and best practices to enhance the institution's overall cybersecurity posture.
- Share best practices across institutions: Create forums for collaboration among educational institutions to exchange knowledge and strategies for cybersecurity. Sharing successful practices helps build a collective defense against common threats and improves security across the education sector.
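As an illustration of the multi-factor authentication point above, here is a minimal sketch using the pyotp library (an assumption for illustration; any standards-compliant TOTP/authenticator solution works similarly) showing how a time-based one-time code supplements the password check:

```python
# pip install pyotp  (assumed; any RFC 6238 TOTP library behaves similarly)
import pyotp

# Enrolment: generate a per-user secret once and store it server-side;
# the user loads it into an authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: after the password check passes, require the current 6-digit code.
submitted_code = totp.now()  # in practice, typed by the user from their app
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted: grant access")
else:
    print("Second factor rejected: deny access")
```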
Conclusion:
The increasing cyber threats to Indian educational institutions demand immediate attention and action. With vulnerabilities like data breaches, botnet activities, and outdated infrastructure, institutions must prioritize effective cybersecurity measures. By adopting proactive strategies such as regular software updates, multi-factor authentication, and incident response plans, educational institutions can mitigate risks and safeguard sensitive data. Collaborative efforts, awareness, and investment in cybersecurity will be essential to creating a secure digital environment for academia.

Introduction:
The G7 Summit is an international forum comprising France, the United States, the United Kingdom, Germany, Japan, Italy, Canada, and the European Union (EU). The annual G7 meeting was hosted by Japan in May 2023 and took place in Hiroshima. Artificial Intelligence (AI) was the major theme of this G7 summit. Key takeaways highlight that leaders focused on escalating the adoption of AI for beneficial use cases across the economy and government, and on improving governance structures to mitigate the potential risks of AI.
Need for fair and responsible use of AI:
The G7 recognises the need to work together to ensure the responsible and fair use of AI and to help establish technical standards for it. Member countries agreed to foster an open and enabling environment for the development of AI technologies and emphasised that AI regulations should be based on democratic values. The summit called for the responsible use of AI: the ministers discussed the risks involved in AI programs like ChatGPT and came up with an action plan for promoting the responsible use of AI, with human beings leading the efforts.
Further, ministers from the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the UK, the US, and the EU) met virtually on 7 September 2023 and committed to creating ‘international guiding principles applicable for all AI actors’ and a code of conduct for organisations developing ‘advanced’ AI systems.
What is the HAP (Hiroshima AI Process)?
The Hiroshima AI Process (HAP) is an effort by the G7 to determine a way forward on regulating AI and to establish trustworthy AI technical standards at the international level. The G7 agreed to create a ministerial forum to promote the fair use of AI, and the HAP provides a forum for international discussions on inclusive AI governance and interoperability, working towards a common vision and goal of trustworthy AI at the global level.
The HAP will be operating in close connection with organisations including the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
Initiated at the annual G7 Summit held in Hiroshima, Japan, the HAP is a significant step towards regulating AI and is likely to conclude by December 2023.
G7 leaders emphasized fostering an environment where trustworthy AI systems are designed, developed, and deployed for the common good worldwide. They advocated for international standards and interoperable tools for trustworthy AI that enable innovation, through a comprehensive policy framework including overall guiding principles for all AI actors in the AI ecosystem.
Stressing upon fair use of advanced technologies:
The impact and misuse of generative AI was also discussed by the G7 leaders. They stressed the risks of misinformation and disinformation in the realm of generative AI models, as these models are capable of creating synthetic content such as deepfakes. In particular, they noted that the next generation of interactive generative media will leverage targeted influence content that is highly personalized, localized, and conversational.
In the digital landscape, there is rapid advancement of technologies such as generative Artificial Intelligence (AI), deepfakes, and machine learning. Such technologies offer convenience to users in performing several tasks and are capable of assisting individuals and business entities. Since these technologies are easily accessible, cybercriminals leverage AI tools and technologies for malicious activities; hence, regulatory mechanisms at the global level are needed to ensure and advocate for the ethical, reasonable, and fair use of such advanced technologies.
Conclusion:
The G7 summit held in May 2023 advanced international discussions on inclusive AI governance and interoperability towards a common vision and goal of trustworthy AI, in line with shared democratic values. AI governance has become a global issue, and countries around the world are coming forward to advocate for the responsible and fair use of AI and to shape global AI governance and standards. It is important to establish a regulatory framework that defines AI capabilities, identifies areas prone to misuse, and sets forth reasonable technical standards while also fostering innovation, overall prioritizing data privacy, integrity, and security as these advanced technologies evolve.
References:
- https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf
- https://www.g7hiroshima.go.jp/en/summit/about/
- https://www.drishtiias.com/daily-updates/daily-news-analysis/the-hiroshima-ai-process-for-global-ai-governance
- https://www.businesstoday.in/technology/news/story/hiroshima-ai-process-g7-calls-for-adoption-of-international-technical-standards-for-ai-382121-2023-05-20

Introduction
Microsoft has unveiled its ambitious roadmap for developing a quantum supercomputer with AI features, acknowledging the transformative power of quantum computing in solving complex societal challenges. Quantum computing has the potential to revolutionise AI by enhancing its capabilities and enabling breakthroughs in different fields. This blog examines Microsoft's announcement of its plans to develop a quantum supercomputer, its potential applications, and the implications for the future of artificial intelligence (AI). It also considers the need for regulation in the realms of quantum computing and AI, the significant policies and considerations associated with these transformative technologies, and the potential benefits and challenges of their implementation.
What is Quantum Computing?
Quantum computing is an emerging field of computer science and technology that utilises principles from quantum mechanics to perform complex calculations and solve certain types of problems more efficiently than classical computers. While classical computers store and process information using bits, quantum computers use quantum bits or qubits.
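As a brief illustration of the difference (a standard textbook formulation, not specific to Microsoft's roadmap), a single qubit's state can be written as a superposition of the two classical basis states:

```latex
% A qubit is a superposition of the basis states |0> and |1>:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
% An n-qubit register spans 2^n basis states at once, which is the parallelism
% that quantum algorithms exploit.
```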
Interconnected Future
Quantum computing and artificial intelligence (AI) are two rapidly evolving fields that have the potential to revolutionise technology and reshape various industries. Quantum computing promises to significantly expand AI's capabilities beyond its current limitations. This section explores the interdependence of the two technologies, highlighting how their integration could lead to profound advancements across sectors such as healthcare, finance, and cybersecurity.
- Enhancing AI Capabilities:
Quantum computing holds the promise of significantly expanding the capabilities of AI systems. Traditional computers, based on classical physics and binary logic, struggle with complex problems due to the exponential growth of computational requirements. Quantum computing, on the other hand, leverages the principles of quantum mechanics to perform computations on quantum bits, or qubits, which can exist in multiple states simultaneously. This inherent parallelism and superposition property of qubits could potentially accelerate AI algorithms and enable more efficient processing of vast amounts of data.
- Solving Complex Problems:
The integration of quantum computing and AI has the potential to tackle complex problems that are currently beyond the reach of classical computing methods. Quantum machine learning algorithms, for example, could leverage quantum superposition and entanglement to analyse and classify large datasets more effectively. This could have significant applications in healthcare, where AI-powered quantum systems could aid in drug discovery, disease diagnosis, and personalised medicine by processing vast amounts of genomic and clinical data.
- Advancements in Finance and Optimisation:
The financial sector can benefit significantly from integrating quantum computing and AI. Quantum algorithms can be employed to optimise portfolios, improve risk analysis models, and enhance trading strategies. By harnessing the power of quantum machine learning, financial institutions can make more accurate predictions and informed decisions, leading to increased efficiency and reduced risks.
- Strengthening Cybersecurity:
Quantum computing can also play a pivotal role in bolstering cybersecurity defences. Quantum techniques can be employed to develop new cryptographic protocols that are resistant to quantum attacks. In conjunction with quantum computing, AI can further enhance cybersecurity by analysing massive amounts of network traffic and identifying potential vulnerabilities or anomalies in real time, enabling proactive threat mitigation.
- Quantum-Inspired AI:
Beyond the direct integration of quantum computing and AI, quantum-inspired algorithms are also being explored. These algorithms, designed to run on classical computers, draw inspiration from quantum principles and can improve performance in specific AI tasks. Quantum-inspired optimisation algorithms, for instance, can help solve complex optimisation problems more efficiently, enabling better resource allocation, supply chain management, and scheduling in various industries.
How Quantum Computing and AI Should Be Regulated
As quantum computing and artificial intelligence (AI) continue to advance, questions arise regarding the need for regulations to govern these technologies. There is debate surrounding the regulation of quantum computing and AI, considering the potential risks, ethical implications, and the balance between innovation and societal protection.
- Assessing Potential Risks: Quantum computing and AI bring unprecedented capabilities that can significantly impact various aspects of society. However, they also pose potential risks, such as unintended consequences, privacy breaches, and algorithmic biases. Regulation can help identify and mitigate these risks, ensuring these technologies’ responsible development and deployment.
- Ethical Implications: AI and quantum computing raise ethical concerns related to privacy, bias, accountability, and the impact on human autonomy. For AI, issues such as algorithmic fairness, transparency, and decision-making accountability must be addressed. Quantum computing, with its potential to break current encryption methods, requires regulatory measures to protect sensitive information. Ethical guidelines and regulations can provide a framework to address these concerns and promote responsible innovation.
- Balancing Innovation and Regulation: Regulating quantum computing and AI involves striking a balance between fostering innovation and protecting society’s interests. Excessive regulation could stifle technological advancements, hinder research, and impede economic growth. On the other hand, a lack of regulation may lead to the proliferation of unsafe or unethical applications. A thoughtful and adaptive regulatory approach is necessary, considering the dynamic nature of these technologies and allowing for iterative improvements based on evolving understanding and risks.
- International Collaboration: Given the global nature of quantum computing and AI, international collaboration in regulation is essential. Harmonising regulatory frameworks can avoid fragmented approaches, ensure consistency, and facilitate ethical and responsible practices across borders. Collaborative efforts can also address data privacy, security, and cross-border data flow challenges, enabling a more unified and cooperative approach towards regulation.
- Regulatory Strategies: Regulatory strategies for quantum computing and AI should adopt a multidisciplinary approach involving stakeholders from academia, industry, policymakers, and the public. Key considerations include:
- Risk-based Approach: Regulations should focus on high-risk applications while allowing low-risk experimentation and development space.
- Transparency and Explainability: AI systems should be transparent and explainable to enable accountability and address concerns about bias, discrimination, and decision-making processes.
- Privacy Protection: Regulations should safeguard individual privacy rights, especially in quantum computing, where current encryption methods may be vulnerable.
- Testing and Certification: Establishing standards for the testing and certification of AI systems can ensure their reliability, safety, and adherence to ethical principles.
- Continuous Monitoring and Adaptation: Regulatory frameworks should be dynamic, regularly reviewed, and adapted to keep pace with the evolving landscape of quantum computing and AI.
Conclusion:
Integrating quantum computing and AI holds immense potential for advancing technology across diverse domains. Quantum computing can enhance the capabilities of AI systems, enabling the solution of complex problems, accelerating data processing, and revolutionising industries such as healthcare, finance, and cybersecurity. As research and development in these fields progress, collaborative efforts among researchers, industry experts, and policymakers will be crucial in harnessing the synergies between quantum computing and AI to drive innovation and shape a transformative future. The regulation of quantum computing and AI is a complex and ongoing discussion. Striking the right balance between fostering innovation, protecting societal interests, and addressing ethical concerns is crucial. A collaborative, multidisciplinary approach to regulation, considering international cooperation, risk assessment, transparency, privacy protection, and continuous monitoring, is necessary to ensure these transformative technologies' responsible development and deployment.