# Fact Check: Viral Image of Men Riding an Elephant Next to a Tiger in Bihar is Misleading
Executive Summary:
A post on X (formerly Twitter) features an image, widely shared with misleading captions, that claims to show men riding an elephant next to a tiger in Bihar, India. The post has sparked both fascination and skepticism on social media. However, our investigation has revealed that the image is misleading: it is not a recent photograph but a picture of an incident from 2011. Always verify claims before sharing.

Claims:
An image purporting to depict men riding an elephant next to a tiger in Bihar has gone viral, implying that this astonishing event truly took place.

Fact Check:
An investigation of the viral image using reverse image search shows that it comes from an older video. The footage shows a tiger that was shot by forest guards after it turned man-eater. The tiger had killed six people and caused panic in local villages in the Ramnagar division of Uttarakhand in January 2011.
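For readers curious how such a check works in practice, the short sketch below illustrates the idea behind reverse image search at a very small scale: comparing a suspect image against an archived frame using perceptual hashing. The file names and threshold are hypothetical placeholders, and real services such as Google Images or TinEye match against indexes of billions of images rather than a single file.

```python
# Minimal illustrative sketch: check whether a viral image is perceptually
# similar to an archived frame. File names below are hypothetical placeholders.
from PIL import Image
import imagehash


def likely_same_source(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if two images are perceptually similar enough to suggest
    one is a crop or re-upload of the other."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # Subtracting two perceptual hashes gives their Hamming distance;
    # a small distance means the images are visually very close.
    return (hash_a - hash_b) <= threshold


if __name__ == "__main__":
    print(likely_same_source("viral_post.jpg", "archived_2011_frame.jpg"))
```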

Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The claim that men rode an elephant alongside a tiger in Bihar is false. The photo presented as recent actually originates from the past and does not depict a current event. Social media users should exercise caution and verify sensational claims before sharing them.
- Claim: The video shows people casually interacting with a tiger in Bihar
- Claimed On: Instagram and X (formerly Twitter)
- Fact Check: False and Misleading
Related Blogs

Introduction
Romance scams are on the rise in India: a staggering 66 percent of individuals in India have been ensnared by deceitful online dating schemes. These are not the crude attempts of yesteryear but a new breed of scams that weave traditional deceit together with cutting-edge technologies such as generative AI and deepfakes. A report by Tenable highlights this shift, noting that over 69 percent of Indians struggle to distinguish between artificial and authentic human voices, and that scammers are using celebrity impersonations and platforms like Facebook to lure victims into a false sense of security.
The Romance Scam
A report by Tenable, the exposure management company, illuminates the disturbing evolution of these romance scams. It reveals a troubling reality: AI-generated deepfakes have attained a level of sophistication where an astonishing 69 percent of Indians confess to struggling to discern between artificial and authentic human voices. This technological prowess has armed scammers with the tools to craft increasingly convincing personas, enabling them to perpetrate their nefarious acts with alarming success.
In 2023 alone, 43 percent of Indians reported falling victim to AI voice scams, with a staggering 83 percent of those targeted suffering financial loss. The scammers, like puppeteers, manipulate their digital marionettes with a deftness that is both awe-inspiring and horrifying. They have mastered the art of impersonating celebrities and fabricating personas that resonate with their targets, particularly preying on older demographics who may be more susceptible to their charms.
Social media platforms, which were once heralded as the town squares of the 21st century, have unwittingly become fertile grounds for these fraudulent activities. They lure victims into a false sense of security before the scammers orchestrate their deceitful symphonies. Chris Boyd, a staff research engineer at Tenable, issues a stern warning against the lure of private conversations, where the protective layers of security are peeled away, leaving individuals exposed to the machinations of these digital charlatans.
The Vulnerability of Individuals
The report highlights the vulnerability of certain individuals, especially those who are older, widowed, or experiencing memory loss. These individuals are systematically targeted by heartless criminals who exploit their longing for connection and companionship. The importance of scrutinising requests for money from newfound connections is underscored, as is the need for meticulous examination of photographs and videos for any signs of manipulation or deceit.
'Increasing awareness and maintaining vigilance are our strongest weapons against these heartless manipulations, safeguarding love seekers from the treacherous web of AI-enhanced deception.'
The landscape of love has been irrevocably altered by the prevalence of smartphones and the rapid proliferation of mobile internet. Finding love has morphed into a digital odyssey, with more and more Indians turning to dating apps like Tinder, Bumble, and Hinge. Yet, as with all technological advancements, there lurks a shadowy underbelly. The rapid adoption of dating sites has provided potential scammers with a veritable goldmine of opportunity.
It is not uncommon these days to hear tales of individuals who have lost their life savings to a person they met on a dating site or who have been honey-trapped and extorted by scammers on such platforms. A new study, titled 'Modern Love' and published by McAfee ahead of Valentine's Day 2024, reveals that such scams are rampant in India, with 39 percent of users reporting that their conversations with a potential love interest online turned out to be with a scammer.
The study also found that 77 percent of Indians have encountered fake profiles and photos that appear AI-generated on dating websites or apps or on social media, while 26 percent later discovered that they were engaging with AI-generated bots rather than real people. 'The possibilities of AI are endless, and unfortunately, so are the perils,' says Steve Grobman, McAfee’s Chief Technology Officer.
Steps to Safeguard
Scammers have not limited their hunting grounds to dating sites alone. A staggering 91 percent of Indians surveyed for the study reported that they, or someone they know, have been contacted by a stranger through social media or text message and began to 'chat' with them regularly. Cybercriminals exploit the vulnerability of those seeking love, engaging in long and sophisticated attempts to defraud their victims.
McAfee offers some steps to protect oneself from online romance and AI scams:
- Scrutinise any direct messages you receive from a love interest via a dating app or social media.
- Be on the lookout for consistent, AI-generated messages which often lack substance or feel generic.
- Avoid clicking on any links in messages from someone you have not met in person (see the sketch after this list).
- Perform a reverse image search of any profile pictures used by the person.
- Refrain from sending money or gifts to someone you haven’t met in person, even if they send you money first.
- Discuss your new love interest with a trusted friend. It can be easy to overlook red flags when you are hopeful and excited.
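As a small illustration of the link-screening advice above, the following sketch extracts URLs from a message and flags those whose domain appears on a blocklist. The domains and the sample message are made-up placeholders; a real check would consult a reputation service rather than a hard-coded set.

```python
# Illustrative sketch: screen links in a chat message before clicking them.
# The blocklist and the example message are hypothetical placeholders.
import re
from urllib.parse import urlparse

SUSPICIOUS_DOMAINS = {"example-prize-claim.top", "free-gift-login.xyz"}  # assumed placeholders


def extract_links(message: str) -> list[str]:
    """Pull anything that looks like an http(s) URL out of a message."""
    return re.findall(r"https?://\S+", message)


def flag_suspicious(message: str) -> list[str]:
    """Return the links whose domain appears on the blocklist."""
    flagged = []
    for link in extract_links(message):
        domain = urlparse(link).netloc.lower()
        if domain in SUSPICIOUS_DOMAINS:
            flagged.append(link)
    return flagged


if __name__ == "__main__":
    msg = "Hi dear, claim your gift here: https://example-prize-claim.top/login"
    print(flag_suspicious(msg))  # ['https://example-prize-claim.top/login']
```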
Conclusion
The path to finding love online is fraught with illusions, and only by arming oneself with knowledge and scepticism can one hope to find true connection without falling prey to the mirage of deceit. As we navigate this treacherous terrain, let us remember that the most profound connections are often those that withstand the test of time and the scrutiny of truth.
References
- https://www.businesstoday.in/technology/news/story/valentine-day-alert-deepfakes-genai-amplifying-romance-scams-in-india-warn-researchers-417245-2024-02-13
- https://www.indiatimes.com/amp/news/india/valentines-day-around-40-per-cent-indians-have-been-scammed-while-looking-for-love-online-627324.html
- https://zeenews.india.com/technology/valentine-day-deepfakes-in-romance-scams-generative-ai-in-scams-romance-scams-in-india-online-dating-scams-in-india-ai-voice-scams-in-india-cyber-security-in-india-2720589.html
- https://www.mcafee.com/en-us/consumer-corporate/newsroom/press-releases/2023/20230209.html

Introduction
The banking and finance sector worldwide is among the most vulnerable to cybersecurity attacks. Moreover, traditional threats such as DDoS attacks, ransomware, supply chain attacks, phishing, and Advanced Persistent Threats (APTs) are becoming increasingly potent with the growing adoption of AI. It is crucial for banking and financial institutions to stay ahead of the curve when it comes to their cybersecurity posture, something that is possible only through a systematic approach to security. In this context, the Reserve Bank of India’s latest Financial Stability Report (June 2025) acknowledges that cybersecurity risks are systemic to the sector, particularly the securities market, and have to be treated as such.
What the Financial Stability Report June 2025 Says
The report notes that the increasing scale of digital financial services, cloud-based architecture, and interconnected systems has expanded the cyberattack surface across sectors. It calls for building cybersecurity resilience by improving Security Operations Center (SOC) efficacy, undertaking “risk-based supervision”, implementing “zero-trust approaches”, and “AI-aware defense strategies”. It also recommends the implementation of graded monitoring systems, employing behavioral analytics for threat detection, building adequate skill through hands-on training, engaging in continuous learning and simulation-based exercises like Continuous Assessment-Based Red Teaming (CART), conducting scenario-based resilience drills, and establishing consistent incident reporting frameworks. In addition, it suggests that organizations need to adopt quantifiable benchmarks like SOC Efficacy and Cyber Capability Index to guarantee efficient governance and readiness.
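To make the idea of quantifiable benchmarks concrete, the sketch below shows one plausible shape for a Cyber Capability Index-style scorecard: a weighted average of per-domain assessment scores. The domains, weights, and example scores are assumptions for illustration only and do not reflect the RBI's actual methodology.

```python
# Illustrative sketch of a weighted "Cyber Capability Index"-style scorecard.
# Domains, weights, and scores are assumptions for demonstration only; they do
# not reflect the RBI's actual benchmark methodology.

# Assessment domains with hypothetical weights (must sum to 1.0).
WEIGHTS = {
    "governance": 0.20,
    "threat_detection": 0.25,
    "incident_response": 0.25,
    "resilience_testing": 0.15,
    "third_party_risk": 0.15,
}


def capability_index(scores: dict[str, float]) -> float:
    """Weighted average of per-domain scores (each on a 0-100 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[domain] * scores[domain] for domain in WEIGHTS)


if __name__ == "__main__":
    # Example self-assessment scores for one institution (hypothetical).
    scores = {
        "governance": 72,
        "threat_detection": 65,
        "incident_response": 80,
        "resilience_testing": 55,
        "third_party_risk": 60,
    }
    print(f"Cyber Capability Index: {capability_index(scores):.1f}")  # 67.9
```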
Implications
Firstly, even though the report doesn’t break new ground in identifying cyber risk, it does sharpen its urgency and lays the groundwork for giving more weight to cybersecurity in macroprudential supervision. In the face of emerging threats, it positions cyberattacks as a systemic financial risk that can affect India’s financial stability with the same seriousness as traditional threats like NPAs and capital inadequacy.
Secondly, by calling on regulated entities to “ensure cyber resilience”, it reflects the RBI’s dedication to values-based compliance with cybersecurity policies, where effectiveness and adaptability matter more than box-ticking. This approach caters to an organisation’s or sector’s unique nature and governance requirements and adapts to rising risks. It checks not only whether certain measures were put in place, but also whether they were effective, through constant self-assessment, scenario-based training, cyber drills, dynamic risk management, and value-driven audits. In the face of a rapidly expanding digital transactions ecosystem integrating new technologies such as AI, this approach is imperative to building cyber resilience. The RBI’s report points to exactly this need for banks and NBFCs to update their parameters for resilience.
Conclusion
While the RBI’s 2016 guidelines focus on core cybersecurity concerns, and the regulator has since issued guidelines on IT governance, outsourcing, and digital payment security, none of these explicitly codify “AI-aware” defences, “zero-trust”, or a full “risk-based supervision” mechanism. The more recent emphasis on these concepts comes from the 2025 Financial Stability Report, which uses them as forward-looking policy orientations. How the RBI chooses to operationalize these frameworks remains to be seen. Further, the RBI’s vision cannot operate in a silo: cross-sector regulators like SEBI, IRDAI, and DoT must align on cyber standards and incident reporting protocols.
In the meantime, highly vulnerable sectors like education and healthcare, which have weaker cybersecurity capabilities, can take a leaf from the RBI’s book by treating cybersecurity as a continuously evolving issue. Many institutions in these sectors are known to approach compliance as a simple checklist exercise. Institutions that take the lead in implementing zero-trust, diversifying vendor dependencies, and investing in cyber resilience will not only meet regulatory expectations but also build long-term competitive advantage.
References
- https://economictimes.indiatimes.com/news/economy/policy/adopt-risk-based-supervision-zero-trust-approach-to-curb-cyberfrauds-rbi/articleshow/122164631.cms?from=mdr-%20500
- https://paramountassure.com/blog/value-driven-cybersecurity/
- https://www.rbi.org.in/commonman/english/Scripts/Notification.aspx?Id=1721
- https://rbidocs.rbi.org.in/rdocs//PublicationReport/Pdfs/0FSRJUNE20253006258AE798B4484642AD861CC35BC2CB3D8E.PDF

Introduction
The spread of information in the quickly changing digital age presents both advantages and difficulties. The phrases "misinformation" and "disinformation" are commonly used in conversations concerning information inaccuracy. It is important to counter such prevalent threats, especially in light of how they affect countries like India, and it becomes essential to investigate the practical ramifications of misinformation, disinformation, and other prevalent digital threats. Like many other nations, India had to deal with the fallout from fraudulent internet activity in 2023, which has highlighted the critical necessity for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of a rival chatbot, Google's Bard, launched in 2023. These chatbots represent a significant breakthrough in artificial intelligence (AI): driven by Large Language Models (LLMs), they produce replies by drawing on information gathered from huge datasets. Similarly, AI image generators that make use of diffusion models and existing datasets attracted a lot of interest in 2023.
Deepfake Proliferation in 2023
The proliferation of deepfake technology in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were used to create pornographic or entertainment content. Social turmoil, political instability, and financial losses were among the outcomes, and the lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Problems with synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. These included political manipulation, identity theft, disinformation, legal and ethical concerns, security risks, difficulties with identification, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to the swaying of elections and the intensification of intercultural conflicts.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. Cybercriminals exploited weaknesses in the AePS to steal the hard-earned savings of many depositors, demonstrating the real effects of biometric fraud on those whose Aadhaar-linked data was manipulated or accessed without authorization. The use of biometric data in financial systems not only endangers individual financial stability but also raises broader questions about the security and integrity of the nation's digital payment systems.
Government strategies to counter digital threats
- The Indian Union Government has sent a warning to the country's largest social media platforms, highlighting the importance of exercising caution when spotting and responding to deepfake and false material. The advice directs intermediaries to delete reported information within 36 hours, disable access in compliance with IT Rules 2021, and act quickly against content that violates laws and regulations. The government's dedication to ensuring the safety of digital citizens was underscored by Union Minister Rajeev Chandrasekhar, who also stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has also issued an advisory to social media intermediaries directing them to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms are legally obligated to prevent the spread of misinformation and to exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were amended in 2023, and the online gaming industry is now required to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from a self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, requesting user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to guarantee ethical and open behaviour throughout the online gaming industry.
- With an emphasis on personal data protection, the government enacted the Digital Personal Data Protection Act, 2023, a new framework that aims to protect individuals' digital personal data.
- The " Cyber Swachhta Kendra " (Botnet Cleaning and Malware Analysis Centre) is a part of the Government of India's Digital India initiative under the (MeitY) to create a secure cyberspace. It uses malware research and botnet identification to tackle cybersecurity. It works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting comprehensive strategies for combating deepfakes and misleading content on their networks. YouTube prioritizes eliminating content that transgresses its regulations, decreasing the amount of questionable information that is recommended, endorsing reliable news sources, and assisting reputable creators. It uses unambiguous facts and expert consensus to thwart misrepresentation. To quickly delete information that violates policies, a mix of content reviewers and machine learning is used throughout the enforcement process, and policies are designed in partnership with external experts and creators. To improve the overall quality of information that users have access to, the platform also gives users the ability to flag material, places a strong emphasis on media literacy, and gives precedence to providing context.
Meta’s policies address different misinformation categories, aiming for a balance between expression, safety, and authenticity. Content directly contributing to imminent harm or political interference is removed, with partnerships with experts for assessment. To counter misinformation, the efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
In 2024, the vision for "Tech for Good" expands to include programs that enable people to understand the ever-complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defenses and combat dishonest practices. This entails encouraging digital literacy and equipping users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrimes. Furthermore, the focus is on promoting and showcasing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's good aspects to build a digital environment that values security, honesty, and moral behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, difficulties are presented by false information powered by artificial intelligence and the misuse of advanced technology by bad actors. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil societies and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone has a shared obligation to establish a safe online environment with the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil societies and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445