#FactCheck - Viral Image of AIMIM President Asaduddin Owaisi Holding Lord Rama Portrait Proven Fake
Executive Summary:
An image showing AIMIM President Asaduddin Owaisi holding a portrait of the Hindu deity Lord Rama has gone viral on various social media platforms. After conducting a reverse image search, the CyberPeace Research Team found that the picture is fake. A screenshot of the Facebook post made by Asaduddin Owaisi in 2018 reveals him holding a picture of B.R. Ambedkar. The morphed photo, which shows Asaduddin Owaisi holding a picture of Lord Rama alongside a distorted message, carries a very different political connotation, since Asaduddin Owaisi is a candidate from Hyderabad in the 2024 Lok Sabha elections. This underlines the need to verify that information is genuine before sharing it, in order to curb fake news.

Claims:
AIMIM President Asaduddin Owaisi was photographed holding a painting of the Hindu god Rama, with a caption suggesting his interest in the Hindu religion.



Fact Check:
In order to investigate the posts, we ran a reverse image search. We identified a photo shared on the official Facebook page of AIMIM President Asaduddin Owaisi on 7 April 2018.

Comparing the two photos, we found that the painting Asaduddin Owaisi is holding in the original photo, posted in 2018, is of B.R. Ambedkar, whereas in the viral image it is of Lord Rama.


Hence, it was concluded that the viral image was digitally modified to spread false propaganda.
Conclusion:
The photograph of AIMIM President Asaduddin Owaisi holding a painting of Lord Rama is fake, as it has been morphed. The photo that Asaduddin Owaisi uploaded to his Facebook page on 7 April 2018 depicted him holding a picture of Bhimrao Ramji Ambedkar. That photograph was digitally altered, and a false caption was added to convey an altogether different message. This case highlights the necessity of fighting fake news that spreads widely through social media platforms, especially in the political arena.
- Claim: AIMIM President Asaduddin Owaisi was holding a painting of the Hindu god Lord Rama in his hand.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Since the inception of the Internet and social media platforms like Facebook, X (Twitter), and Instagram, governments and other stakeholders, in India and in foreign jurisdictions alike, have looked to intermediaries to assume responsibility for the content hosted on these platforms, and various legal provisions codify that responsibility. For the first time in many years, these intermediaries have come together to moderate content by setting a standard for its creators and propagators. The influencer marketing industry in India is at a crucial juncture, with its market value projected to exceed Rs. 3,375 crore by 2026. But every industry has its complications: here, a section of content creators fails to maintain standards of integrity and propagates content that raises concerns of authenticity and transparency, often violating intellectual property rights (IPR) and privacy.
As influencer marketing continues to shape digital consumption, the need for ethical and transparent content grows stronger. To address this, the India Influencer Governing Council (IIGC) has released its Code of Standards, aiming to bring accountability and structure to the fast-evolving online space.
Bringing Accountability to the Digital Fame Game
The India Influencer Governing Council (IIGC), established on 15th February 2025, was founded with the objective of empowering creators, advocating for fair policies, and promoting responsible content creation. The IIGC's Code of Standards arrives not a moment too soon: a necessary safeguard before social media devolves into a chaotic marketplace where anything and everything is up for grabs. Without effective regulation, digital platforms become a marketplace for misinformation and exploitation.
The IIGC leads the movement with clarity: the Code of Standards spans 20 sections governing key areas such as paid partnership disclosures, AI-generated personas, content safety, and financial compliance.
Highlights from the Code of Standards
- The Code exhibits a technical understanding of the content creation and influencer marketing industry. The preliminary sections advocate for accuracy, transparency, and maintaining credibility with the audience that engages with the content. The most fundamental development is the “Paid Partnership Disclosure” in Section 2 of the Code, which mandates disclosure of any material connection, such as a financial agreement or collaboration with a brand.
- Another development, which comes at a befitting hour, is the disclosure of “AI influencers”: the nature of the influencer has to be disclosed, and such influencers, whether fully virtual or partially AI-enhanced, must maintain the same standards as any human influencer.
- The code ranges across various other aspects of influencer marketing, such as expressing unpaid “Admiration” for the brand and public criticism of the brand, being free from personal bias, honouring financial agreements, non-discrimination, and various other standards that set the stage for a safe and fair digital sphere.
- The Code also requires that platform users and influencers handle sexual and sensitive content with sincere deliberation; such content should be reserved for educational and health-related contexts and must not violate community standards. The Code includes various other standards that work towards making digital platforms safer for younger generations and impressionable minds.
A Code Without Claws? Challenges in Enforcement
The biggest obstacle to effective implementation of the Code is distinguishing an honest recommendation from an undisclosed paid brand collaboration. This makes influencer marketing susceptible to manipulation, which cannot be tackled with a straitjacket formula, as it may take the form of exaggerated claims or the omission of critical information.
Another hurdle is influencers' voluntary compliance with advertising standards. Influencer marketing operates in borderless digital cyberspace, where influencers often disregard these standards to maximise their earnings and commercial motives.
The debate between self-regulation and government oversight continues, and experience tells us that overreliance on self-regulation has proven inadequate; firm regulatory oversight is imperative, given that social media platforms operate as a transnational commercial marketplace.
CyberPeace Recommendations
- Introduction of a licensing framework for influencers that fall into the “highly followed” category with high engagement, who are more likely to shape the audience’s views.
- Usage of technology to align ethical standards with influencer marketing practices, ensuring that misleading advertisements do not find a platform to deceive innocent individuals.
- Educating the audience or consumers on the internet about the ramifications of negligence and their rights in the digital marketplace. Ensuring a well-established grievance redressal mechanism via digital regulatory bodies.
- Continuous and consistent collaboration and cooperation between influencers, brands, regulators, and consumers to establish an understanding and foster transparency and a unified objective to curb deceptive advertising practices.
References
- https://iigc.org/code-of-standards/influencers/code-of-standards-v1-april.pdf
- https://legalonus.com/the-impact-of-influencer-marketing-on-consumer-rights-and-false-advertising/
- https://exhibit.social/news/india-influencer-governing-council-iigc-launched-to-shape-the-future-of-influencer-marketing/
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related mis/disinformation. Programs that teach people how to use social media responsibly and how to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on passive learning methods such as reading stories, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings, known as "gamification", could be a new and engaging way to address this challenge. Gamification engages people by making them active players rather than passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and diverse cultural contexts. AI-powered voice conversation and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Interactive quizzes and chatbot-powered games can educate users directly within the app they use most, since WhatsApp is a significant hub for false information. A game with a WhatsApp-like interface, in which players face realistic decisions about whether to ignore, fact-check, or forward viral messages, could be particularly effective in India.
- Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers tasked with verifying viral stories, using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive fake-news-spotting tasks increase awareness over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives, which partnerships with telecom companies could make feasible. Reward-based learning has increased interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities, using simulations to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
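The quiz-plus-rewards mechanics described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any existing game's implementation: the headlines, badge names, and thresholds are all invented for the example.

```python
# Hypothetical quiz items: (headline, is_real) pairs. In a production game these
# would come from a curated, regularly updated dataset of real and fabricated headlines.
QUIZ_ITEMS = [
    ("Government announces new digital literacy programme", True),
    ("Drinking hot water cures all viral infections", False),
    ("Miracle herb guarantees overnight weight loss", False),
    ("Election commission publishes official polling schedule", True),
]

# Badge thresholds (highest first), in the spirit of reward-based participation.
BADGES = [(4, "Fact-Check Champion"), (3, "Sharp Skeptic"), (1, "Rookie Verifier")]

def play_round(items, answers):
    """Score a player's real/fake judgements and award the highest badge earned."""
    score = sum(1 for (_, truth), guess in zip(items, answers) if truth == guess)
    badge = next((name for threshold, name in BADGES if score >= threshold), None)
    return score, badge
```

A player who answers `[True, False, False, True]` scores 4 and earns the top badge; the same scaffold extends naturally to streaks, levels, or mobile-data rewards.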
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could lead participants through situations tailored to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
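The "consistently challenged" idea reduces, at its simplest, to adjusting difficulty from a player's recent accuracy. The sketch below is a plain rule-based stand-in for the AI-driven personalisation discussed above; the thresholds and window size are assumptions chosen for illustration.

```python
def next_difficulty(current, recent_results, step=1, window=5):
    """Pick the next difficulty level (1-10) from a player's recent results.

    recent_results: list of booleans, True = correctly identified misinformation.
    Raise difficulty when the player is mostly right, lower it when mostly wrong,
    so the player stays consistently challenged.
    """
    recent = recent_results[-window:]
    if not recent:
        return current
    accuracy = sum(recent) / len(recent)
    if accuracy >= 0.8:          # cruising: make it harder
        return min(10, current + step)
    if accuracy <= 0.4:          # struggling: ease off
        return max(1, current - step)
    return current               # in the sweet spot: hold steady
```

An NLP-backed system would replace the boolean results with model-scored judgements, but the control loop, measure, compare, adjust, stays the same.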
Possible Opportunities
Augmented reality (AR) misinformation scavenger hunts, interactive misinformation events, and educational misinformation tournaments are all examples of games that can help fight misinformation. India can help millions of people, especially the young, think critically and combat the spread of false information by making media literacy fun and interesting. Using Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study. AI-powered bots could simulate real-time cases of misinformation and give immediate feedback, helping learners improve.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with problems that must be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: Rigorous research is needed to evaluate the efficacy of gamified interventions in changing misinformation-related behaviour, taking cultural and socio-economic contexts into account.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.

Executive Summary:
This report discloses a new cyber threat targeting internet users in the name of "Aarong Ramadan Gifts". The fraudsters imitate the popular Bangladeshi brand Aarong, known for its Bengali ethnic wear and handicrafts, and lure victims with the offer of exclusive Ramadan gifts. When users click on the link, they are taken through a fictitious sequence of quizzes, gift boxes, and social proof that could compromise their personal information and devices. By understanding how the scam works, we can educate users to exercise caution and avoid falling victim to such threats.
False Claim:
A message circulating on social media, accompanied by a link, claims that Aarong, one of the most respected brands in Bangladesh for its exquisite ethnic wear and handicrafts, is offering Ramadan gifts through an exclusive online promotion. That claim is merely the facade of the scam; its real aim is to get users to click on harmful links that may compromise their personal data and devices.

The Deceptive Journey:
- The landing page opens with a greeting and a catchy photo of an Aarong store, then encourages visitors to take a short quiz to claim the gift. This is designed to create a false impression of authenticity and trustworthiness.
- An area at the end of the page mimics a social media comment section, with users supposedly posting about the benefits they received. This technique builds the appearance of a solid base of support and many participants.
- The quiz begins with a few easy questions about the user's familiarity with Aarong and their demographics. This data is valuable for developing more sophisticated attacks and can be used to target specific victims in the future.
- After the user clicks OK, the screen displays a grid of gift boxes, and the user must make at least three attempts to obtain the reward. This commonly used approach keeps users engaged longer and increases the chances that they comply with the fraudulent scheme.
- At this point the user is instructed to share the campaign on WhatsApp, and must keep clicking the WhatsApp button until a progress bar is complete. This both spreads and perpetuates the scam, drawing in many more users.
- After completing the steps, the user is shown instructions on how to claim the prize.
The Analysis:
- The landing page and quiz are structured to maintain a false impression of genuineness and professionalism, luring victims into the fraudulent scheme. The compulsion to forward the message on WhatsApp is how the scammers draw in ever more users.
- The ultimate purpose of the scam could be to obtain personal data from users and compromise their devices, raising the risk of further cyber threats such as identity theft, financial theft, or malware installation.
- We cross-checked and found no established, credible source or official notification confirming any such offer from Aarong.
- The campaign is hosted on a third-party domain instead of the official website, which raises suspicion; moreover, the domain was registered only recently.
- The intercepted request revealed a backend connection to Baidu, a China-linked analytics service.

- Domain Name: apronicon.top
- Registry Domain ID: D20231130G10001G_13716168-top
- Registrar WHOIS Server: whois.west263[.]com
- Registrar URL: www.west263[.]com
- Updated Date: 2024-02-28T07:21:18Z
- Creation Date: 2023-11-30T03:27:17Z (Recently created)
- Registry Expiry Date: 2024-11-30T03:27:17Z
- Registrar: Chengdu west dimension digital
- Registrant State/Province: Hei Long Jiang
- Registrant Country: CN (China)
- Name Server: amos.ns.cloudflare[.]com
- Name Server: zara.ns.cloudflare[.]com
Note: The cybercriminals used Cloudflare to mask the actual IP address of the fraudulent website.
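The "recently created" red flag in the WHOIS record above is easy to check programmatically. The sketch below assumes a 180-day heuristic threshold (an arbitrary choice for illustration) and parses the ISO 8601 creation date exactly as it appears in WHOIS output.

```python
from datetime import datetime, timezone

RECENT_THRESHOLD_DAYS = 180  # assumed heuristic: domains younger than ~6 months warrant suspicion

def is_recently_registered(creation_date_str, now=None):
    """Flag a domain as recently registered from its WHOIS creation date.

    creation_date_str: ISO 8601 timestamp as found in WHOIS output,
    e.g. "2023-11-30T03:27:17Z" for apronicon.top.
    """
    created = datetime.fromisoformat(creation_date_str.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return (now - created).days < RECENT_THRESHOLD_DAYS
```

Checked against a date during the Ramadan 2024 campaign window, the scam domain's creation date of 2023-11-30 falls well inside the threshold, confirming the "recently registered" signal. Domain age alone is not proof of fraud, but combined with third-party hosting and masked IPs it strengthens the case.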
CyberPeace Advisory:
- Do not open suspicious or unsolicited messages received on social platforms. Your own discretion is your first line of defence.
- Falling prey to such scams could compromise your entire system, potentially granting unauthorized access to your microphone, camera, text messages, contacts, pictures, videos, banking applications, and more. Keep your cyber world safe against any attacks.
- Never reveal sensitive data such as login credentials or banking details to entities you have not verified as trustworthy.
- Before sharing any content or clicking on links within messages, always verify the legitimacy of the source. Protect not only yourself but also those in your digital circle.
- To confirm the truthfulness of offers and messages, consult official sources or contact the company directly. Verify the authenticity of alluring offers before taking any action.
Conclusion:
The Aarong Ramadan Gift scam is a fraudulent act that exploits victims' loyalty to a reputable brand. Understanding the mechanisms used to make the campaign look genuine can help us become more vigilant and protect our community from cyber threats. Stay aware, verify credibility, and spread awareness wherever you can, to help build a security-conscious digital space.