#FactCheck - AI-Generated Image Falsely Shared as Bees Attacking Shankaracharya Swami Avimukteshwaranand Saraswati
Executive Summary
During the Gau Raksha Yatra of Shankaracharya Swami Avimukteshwaranand Saraswati, bees reportedly attacked a discourse event in the Rohania area of Varanasi, Uttar Pradesh. Following the incident, a picture went viral on social media showing bees attacking Swami Avimukteshwaranand Saraswati, and several users shared it as genuine while targeting the Shankaracharya online. CyberPeace Research Wing investigated the viral image and found it to be fake: it was created using Artificial Intelligence (AI). While a bee attack did occur during Swami Avimukteshwaranand Saraswati’s discourse program, the viral image itself is fabricated.
Claim
A Facebook user named “Sanjay Chaudhary” shared the viral image on May 15, 2026, with a caption that translates to: “Shukracharya Umashankar alias Avimukteshwaranand became the target of nature’s wrath… This Kaalnemi was delivering false sermons in Rohania, Varanasi in the name of religion… The bees from a nearby hive did not like it and collectively attacked, creating chaos. Even insects and nature no longer like the opposition’s politics disguised as Sanatan Dharma. Calling Yogi Ji Aurangzeb, Akbar and butcher is not acceptable even to nature and insects.”
Post link and archive link are given below:
- https://www.facebook.com/sanjaychaudhary073/posts/pfbid02kgts8igKDwgctz3MamECMGoGfQR5aWPTdsDgLeux3pD9jwP7ADfgNpoPfHvMb9Zul
- https://perma.cc/E6SE-BAXZ

Fact-Check
To verify the viral claim, we ran open keyword searches on Google and found reports related to the incident on the YouTube channel of News18 UP Uttarakhand. A report published on May 13, 2026 stated that bees attacked the discourse event during Swami Avimukteshwaranand’s Gau Raksha Yatra in Rohania, Varanasi. The incident created panic at the venue, forcing the Swami to end his discourse midway. The channel also uploaded a YouTube Shorts video on the incident.

As part of the research, we further analyzed the viral image using AI detection tools. First, we used the tool “Sight Engine,” which indicated an 88 percent probability that the image was AI-generated.

We then examined the image using another AI detection tool called “Undetectable,” which also suggested that the photo was likely created using AI.
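Checks like these can also be scripted. The sketch below is a minimal, hypothetical example of querying an AI-image-detection service such as Sightengine and mapping the returned score to a coarse verdict. The endpoint, the `genai` model name and the `type.ai_generated` response field are assumptions drawn from Sightengine's public documentation, and the image URL and credentials are placeholders, not values from this investigation:

```python
import json
import urllib.parse
import urllib.request

SIGHTENGINE_ENDPOINT = "https://api.sightengine.com/1.0/check.json"

def classify_ai_probability(score: float, threshold: float = 0.5) -> str:
    """Map a 0-1 AI-generation score to a coarse, human-readable verdict."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie between 0 and 1")
    return "likely AI-generated" if score >= threshold else "likely authentic"

def check_image_url(image_url: str, api_user: str, api_secret: str) -> float:
    """Ask the detection API for the AI-generation score of a hosted image.

    The endpoint, 'genai' model name and response field are assumptions
    based on Sightengine's public docs; credentials are placeholders.
    """
    query = urllib.parse.urlencode({
        "url": image_url,
        "models": "genai",
        "api_user": api_user,
        "api_secret": api_secret,
    })
    with urllib.request.urlopen(f"{SIGHTENGINE_ENDPOINT}?{query}", timeout=30) as resp:
        payload = json.load(resp)
    return payload["type"]["ai_generated"]

if __name__ == "__main__":
    # Placeholder invocation: substitute a real image URL and credentials.
    score = check_image_url("https://example.com/viral.jpg", "API_USER", "API_SECRET")
    print(score, classify_ai_probability(score))
```

A score of 0.88, like the one Sight Engine reported here, would map to “likely AI-generated” under the assumed 0.5 threshold; detector scores are probabilistic, so fact-checkers typically corroborate them with a second tool, as done above.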

Conclusion
Our research found that the viral image was created using artificial intelligence tools. While bees did attack during Swami Avimukteshwaranand Saraswati’s Gau Raksha Yatra on May 13, 2026, the image circulating on social media is fabricated, not a genuine photograph of the incident.
Introduction
Bumble’s launch of its ‘Opening Move’ feature has sparked a new narrative on safety and privacy in the digital dating sphere and has drawn mixed reactions from users. It was launched against the backdrop of large-scale feedback from women stating that Bumble’s ‘message first’ policy had become tedious. In response, Bumble introduced ‘Opening Move’, whereby users can craft, or select from pre-set, questions that potential matches may answer to start the conversation at first glance. These questions are a segue into meaningful, insightful conversation from the get-go and sidestep the traditional effort of starting an engaging chat between matched users. The feature is optional: users may enable it, and it does not prevent a user from exercising the autonomy previously in place.
Innovative Approach to Conversation Starters
Many users consider this feature innovative; it not only acts as a catalyst for fluid conversation but also cultivates insightful dialogue, fostering meaningful interactions free of the constraint of superficial small talk. The ‘Opening Move’ feature also aligns with research indicating that individuals form their initial attraction within about three seconds of interaction, thereby serving as a catalyst in an individual’s decision-making during that attraction time frame.
Organizational Benefits and Data Insights
From an organisational standpoint, the feature is a unique solution to the localisation challenges faced by apps; the option of writing a personalised ‘Opening Move’ allows prompts that are culturally relevant and appropriate to a specific area. Moreover, Bumble may enhance the user experience on the platform through data analysis: responses to an ‘Opening Move’ may yield valuable insights into user preferences and patterns, such as which pre-set prompts garner more responses and how often a user-written ‘Opening Move’ succeeds in obtaining a response compared with Bumble’s pre-set prompts. A quick glance at Bumble’s privacy policy[1] shows that chats between users are not shared with third parties, further safeguarding personal privacy. However, Bumble does use chat data for its own internal purposes after removing personally identifiable information. The manner of such review and removal has not been specified, which may raise challenges depending on whether the reviewer is a human or an algorithm.
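The prompt-level analysis described above is straightforward to sketch. Assuming, purely hypothetically, an event log of (prompt identifier, whether a reply was received) pairs, per-prompt response rates could be compared like this:

```python
from collections import defaultdict

def prompt_response_rates(events):
    """Compute the fraction of 'Opening Move' sends that drew a reply.

    events: iterable of (prompt_id, got_reply) pairs, where got_reply is a bool.
    Returns {prompt_id: response_rate}. All names here are illustrative,
    not Bumble's actual data model.
    """
    sent = defaultdict(int)
    replied = defaultdict(int)
    for prompt_id, got_reply in events:
        sent[prompt_id] += 1
        if got_reply:
            replied[prompt_id] += 1
    return {p: replied[p] / sent[p] for p in sent}

# Hypothetical sample: one pre-set prompt vs one user-written prompt.
events = [
    ("preset_travel", True),
    ("preset_travel", False),
    ("custom_q1", True),
    ("custom_q1", True),
]
rates = prompt_response_rates(events)
```

Aggregates like these could then feed the comparison the text anticipates (pre-set vs user-written prompts), ideally only after the per-user identifiers have been stripped, consistent with the privacy-policy caveats above.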
However, some users perceive the feature as counterproductive to the company’s principle of ‘women make the first move’. While Bumble markets the feature as neutral ground for matched users based on the exercise of choice, some users see it as a step back into the heteronormative gender expectations that most dating apps conform to, putting the onus of the ‘first move’ on men. Many male users have complained that the feature pushes men to opt out of the dating app, and that they would most likely refrain from interacting with profiles that enable ‘Opening Move’, since the pressure to answer creatively is disproportionate to the likelihood of their response actually being entertained.[2] Coupled with female users terming the original protocol ‘too much effort’, the pre-set questions of the ‘Opening Move’ feature may actively invite users to categorise potential matches according to arbitrary questions that undermine the real-life experiences, perspectives and backgrounds of each individual.[3]
Additionally, complications are likely to arise when a malicious user sets a question that indirectly gleans personal or sensitive, identifiable information. The individual responding to such a carefully crafted conversation prompt may then be bullied or subjected to hateful slurs.
Safety and Privacy Concerns
As a corollary, the appearance of choice may translate into more challenges for women on the platform. The feature may spark an increase in the number of unsolicited, undesirable messages and images from a potential match. The most vulnerable groups remain individuals who identify as female and other sexual minorities.[4] At present, there appears to be no mechanism to proactively monitor the content of responses; the platform relies instead on user reporting. This approach may prove impractical given the potential volume of objectionable messages, necessitating a more efficient solution. It is to be noted that even when a user reports, the current redressal systems of online platforms remain lax and largely inadequate, and demonstrate ineffectiveness in addressing user concerns or grievances. This lack of proactiveness is violative of the right to redressal provided under the Digital Personal Data Protection Act, 2023. The feature may thus take away the very user autonomy Bumble originally aimed to grant, since individuals who identify as introverted, shy, soft-spoken or non-assertive may refrain from reporting harassing messages altogether, potentially due to discomfort or reluctance to engage in confrontation. Resultantly, a sharp uptake is anticipated in cases of cyberbullying, harassment and hate speech (especially vulgar communications) towards both the user and the potential match.
From an Indian legal perspective, dating apps have to adhere to the Information Technology Act, 2000 [5], the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [6] and the Digital Personal Data Protection Act, 2023, which regulate a person’s digital privacy and set standards on the kind of content an intermediary may host. An obligation is cast upon an intermediary to apprise its users of what content is not allowed on its platform, in addition to mandating intimation of the user’s digital rights. The lack of automated checks, as mentioned above, is likely to make Bumble non-compliant with these guidelines.
The optional nature of ‘Opening Move’ grants users some autonomy. However, some technical updates may enhance the user experience of this feature. Technologies like AI are an effective aid in behavioural and predictive analysis. An upgraded ‘matching’ algorithm could analyse the number of un-matches a profile receives, thereby identifying and flagging profiles with multiple lapsed matches. Additionally, a filter option in the application’s interface to hide flagged profiles would enable a user to be cautious while navigating through matches. Another possible method of weeding out malicious profiles is a peer-review system whereby a user has a single check-box for flagging a profile as most or least likely to bully or harass. Such a check-box would be devoid of any option for writing personal comments, ensuring that a binary, precise response is recorded and biased remarks are avoided.[7]
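The flagging-and-filtering idea above can be sketched in a few lines. Everything in this sketch is hypothetical: the thresholds, field names and profile record shape are assumptions for illustration, not Bumble's actual data model or algorithm:

```python
from dataclasses import dataclass

# Assumed, illustrative cut-offs; a real system would tune these empirically.
UNMATCH_THRESHOLD = 5    # "multiple lapsed matches"
PEER_FLAG_THRESHOLD = 3  # binary peer-review flags before a profile is hidden

@dataclass
class Profile:
    user_id: str
    unmatch_count: int = 0  # number of matches that un-matched this profile
    peer_flags: int = 0     # count of "likely to bully/harass" check-box flags

def should_flag(profile: Profile) -> bool:
    """Flag a profile if it has many lapsed matches or enough binary
    peer-review flags (no free-text comments, so no biased remarks)."""
    return (profile.unmatch_count >= UNMATCH_THRESHOLD
            or profile.peer_flags >= PEER_FLAG_THRESHOLD)

def filter_matches(profiles):
    """Hide flagged profiles when a user enables the proposed filter option."""
    return [p for p in profiles if not should_flag(p)]
```

Restricting the peer signal to a counter of check-box ticks, as the text proposes, keeps the input binary and auditable; the trade-off is that thresholds must be set carefully to avoid penalising profiles that are merely unpopular.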
Governance and Monitoring Mechanisms
From a governance point of view, a monitoring mechanism on the manner in which questions are crafted is critical. Systems should be designed to detect particular words, sentences and framings so as to disallow questions contrary to the national legal framework. An on-screen notification with instructions on generally acceptable conduct in conversations, reminding users to maintain cyber hygiene while chatting, is also proposed as a mandated requirement for platforms. The notification may also include guidelines on what information is safe to share, in order to safeguard user privacy. Lastly, a revised privacy policy should establish the legal basis for processing responses to ‘Opening Moves’, bringing it into compliance with national legislation such as the Digital Personal Data Protection Act, 2023.
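A first-pass version of the question-screening mechanism described above could be a simple pattern check. The deny-list below is purely illustrative; a production system would need a maintained, legally vetted lexicon plus context-aware models rather than a static regex list:

```python
import re

# Assumed, illustrative patterns that solicit personal or sensitive
# information; not an exhaustive or legally vetted list.
DISALLOWED_PATTERNS = [
    r"\bhome\s+address\b",
    r"\bbank\b",
    r"\bpassword\b",
    r"\baadhaar\b",  # national-ID-style identifiers
]

def screen_opening_move(question: str) -> bool:
    """Return True if a crafted 'Opening Move' question is allowed,
    False if it matches a pattern that gleans sensitive information."""
    lowered = question.lower()
    return not any(re.search(pattern, lowered) for pattern in DISALLOWED_PATTERNS)
```

In practice such a filter would run at the moment a user saves a custom prompt, so a disallowed question is rejected before any match ever sees it; the same hook could trigger the proposed on-screen cyber-hygiene notice.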
Conclusion
Bumble's 'Opening Move' feature marks a statement step by the company towards addressing user concerns about initiating conversations on the platform. While it has been praised for fostering more meaningful interactions, it raises not only ethical concerns but also concerns over user safety. While the feature can potentially enhance user experience, its success largely depends on Bumble's ability to navigate the complex issues associated with it. A more robust monitoring mechanism that utilises newer technology is critical to address user concerns and ensure compliance with national laws on data privacy.
Endnotes:
- [1] Bumble’s privacy policy https://bumble.com/en-us/privacy
- [2] Discussion thread, r/bumble, Reddit https://www.reddit.com/r/Bumble/comments/1cgrs0d/women_on_bumble_no_longer_have_to_make_the_first/?share_id=idm6DK7e0lgkD7ZQ2TiTq&utm_content=2&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=65068
- [3] Mcrea-Hedley, Olivia, “Love on the Apps: When did Dating Become so Political?”, 8 February 2024 https://www.service95.com/the-politics-of-dating-apps/
- [4] Gewirtz-Meydan, A., Volman-Pampanel, D., Opuda, E., & Tarshish, N. (2024). ‘Dating Apps: A New Emerging Platform for Sexual Harassment? A Scoping Review’. Trauma, Violence, & Abuse, 25(1), 752-763. https://doi.org/10.1177/15248380231162969
- [5] Information Technology Act, 2000 https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
- [6] Information Technology (Intermediary Guidelines and Digital Media Ethics) Rules 2021 https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- [7] Date Confidently: Engaging Features in a Dating App (Use Cases), Consaguous, 10 July 2023 https://www.consagous.co/blog/date-confidently-engaging-features-in-a-dating-app-use-cases

Executive Summary
A video circulating widely on social media shows a man interacting with a humanoid robot and using abusive language, after which the robot asks him to maintain politeness. Several users shared the clip claiming that the incident took place during a recent AI summit in New Delhi. The video triggered strong reactions online, with some users demanding legal action against the individual. However, research by CyberPeace found the claim to be misleading.
Claim
Social media users claimed that the viral video showing a man abusing a robot was recorded during an AI summit in New Delhi, India.

Fact Check
To verify the claim, we conducted a reverse image search of the individual seen in the video. The search led us to an Instagram post uploaded by a Pakistani account identifying the individual as Kashif Zameer.

Further keyword searches helped us locate his Instagram profile, where the same video had been uploaded on February 17, 2026. The post included hashtags such as “Dubai,” indicating the actual location of the incident. The profile also lists Lahore, Pakistan, as the user’s location and describes him as a businessman and social media personality.

To confirm the location shown in the video, we conducted additional searches using keywords such as “Dubai” and “humanoid robot.” The research revealed that the robot featured in the clip is “Ameca,” located at the Museum of the Future in Dubai.

Conclusion
The viral claim is false. The video is not related to any AI summit held in New Delhi. The incident occurred in Dubai, and the person seen in the video is not an Indian citizen.

Introduction
India is reaching a turning point in its technological development as the AI Impact Summit 2026 is held in New Delhi. Artificial Intelligence (AI) is transforming economies, labour markets, governance structures and even the grammar of public discourse; it is no longer a frontier of speculation. The challenge facing the Summit is not whether AI will change our societies (it already has) but whether inclusiveness and human dignity will serve as the foundation for this change.
India’s AI journey is defined by scale. The nation has one of the biggest user bases for cutting-edge AI systems worldwide. According to projections, AI may create millions of new technology-driven occupations by 2030 and change the nature of millions more. This is a structural reconfiguration rather than an incremental alteration, and the stakes are high for a country with a large youth population and wide socioeconomic diversity.
India’s Tryst with Artificial Intelligence
India’s tryst with AI is a developmental imperative occurring at a civilisational scale, not a performance staged for Western approval. In many international narratives, AI is still portrayed as a competition between China’s state-backed speed, Europe’s sophisticated regulations and Silicon Valley’s capital. India is far too frequently cast as a huge consumer market rather than a significant force behind the AI era. Such evaluations undervalue a nation that has already proven its capacity to implement technology at democratic scale through its digital public infrastructure. AI in India is about more than just improving algorithms; it is about giving millions more people access to social safety, healthcare, agriculture and education.
The scepticism overlooks a deeper truth: India innovates not from abundance but from urgency. India remains certain that technical advancement must be in line with social justice and inclusive growth, and history suggests that India’s greatest technological strides have often followed underestimation.
A Conclave of Contagious Ideas
India has long been a favourite underestimation of certain Western observers: a nation of 1.4 billion people, the world’s fifth largest economy, a noisy democracy with inconvenient geopolitical realities, often assessed by counterparts governing populations smaller than many of its states. Advice follows in spades, sometimes from cities that mastered the art of strategic improvisation long before they preached restraint, and sometimes with lectures on innovation, governance and order.
However, there are times when hierarchies need to be rearranged. The symbolism was hard to overlook when Ranvir Sachdeva, the youngest keynote speaker at the AI Impact Summit 2026, took the stage. “I’m here as the youngest keynote speaker at the Indian AI Impact Summit,” he said, discussing how he is connecting ancient Indian beliefs to contemporary technology and the strategies other countries are pursuing to develop AI. In that simple articulation lay a quiet rebuttal: a civilisation that once debated metaphysics under banyan trees is now debating ethics in plenary halls. History constantly demonstrates that India’s permanent address has never been underestimation.
From New Delhi to Geneva: The Global Arc of AI Governance
Now that the AI Impact Summit, 2026 is coming to an end, what’s left is not just the recollection of its size but also the form of new international dialogue. The New Delhi Declaration, a remarkable highlight of the Summit, was signed by eighty-eight nations and international organisations to support the democratic spread of AI.
The Summit also made clear the increasing complexity of the AI order. Investment pledges totalled hundreds of billions. India joined the U.S.-led Pax Silica effort. Sovereign LLMs were introduced in the country. At the same time, logistical challenges, protest disruptions and business rivalries reminded spectators that the politics of AI are inextricably linked to its promise. Although the New Delhi Declaration does not bind nations, it does represent a growing consensus that acceleration must be accompanied by governance.
The announcement that the 2027 AI Impact Summit will be held in Geneva represents a significant shift in this regard. Guy Parmelin, the President of Switzerland, described the upcoming chapter as one primarily concerned with international law and good governance, in an attempt to guarantee that the future of AI is not left entirely in the hands of powerful nations. From scale and ambition in New Delhi to normative consolidation in Europe, Geneva, a long-time hub of multilateral diplomacy, provides symbolic continuity.
Concluding Confluence
It is tempting to view the Global CyberPeace Summit (GCS), a pre-Summit event of the AI Impact Summit held at Bharat Mandapam on 10 February 2026, in isolation; in reality, the two gatherings formed a strong intellectual arc. At GCS, inclusion was not ornamental. India Signing Hands’ involvement and the purposeful emphasis on accessibility conveyed a deeper message: digital systems must be created with, not just for, those on the margins. The AI-enabled cybersecurity engagement for MSMEs underlined that resilience must start at the economic level. The talks on Technology Facilitated Gender-Based Violence (TFGBV), CSAM prevention and child safety reminded participants that technological arguments only gain significance when connected to real-world outcomes.
When Geneva takes over in 2027, the issue will not just be how AI should be regulated but also what ethical foundation that governance is built upon. New Delhi’s belief that wisdom and power must coexist may be its contribution to this developing narrative: an insistence with more substance than spectacle, and possibly the faint outline of a technological conscience.