#FactCheck - Viral Image of AIMIM President Asaduddin Owaisi Holding Lord Rama Portrait Proven Fake
Executive Summary:
In recent times, an image showing AIMIM President Asaduddin Owaisi holding a portrait of the Hindu deity Lord Rama has gone viral on various social media platforms. After conducting a reverse image search, the CyberPeace Research Team found that the picture is fake. A screenshot of a Facebook post made by Asaduddin Owaisi in 2018 shows him holding a portrait of B.R. Ambedkar. The morphed photo, which shows him holding a picture of Lord Rama alongside a distorted message, carries entirely different connotations in the political realm, since Asaduddin Owaisi is contesting the 2024 Lok Sabha elections from Hyderabad. This underscores the need to verify that information is authentic before sharing it, in order to curb the spread of fake news.

Claims:
AIMIM party leader Asaduddin Owaisi is standing with a painting of the Hindu god Lord Rama, with a caption suggesting his interest in the Hindu religion.



Fact Check:
To investigate the posts, we ran a reverse image search. We identified a photo shared on the official Facebook page of AIMIM President Asaduddin Owaisi on 7 April 2018.

Comparing the two photos, we found that the painting Asaduddin Owaisi is holding is of B.R. Ambedkar, whereas the viral image shows Lord Rama; the original photo was posted in 2018.
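One way to supplement a reverse image search is to compare perceptual hashes of the suspected original and the viral copy. The snippet below is a minimal sketch of that idea, not the exact workflow used in this fact check; the file names are hypothetical, and it assumes the Pillow and imagehash Python libraries are installed:

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical file names for the 2018 original and the viral copy.
original_hash = imagehash.phash(Image.open("owaisi_2018_original.jpg"))
viral_hash = imagehash.phash(Image.open("viral_post.jpg"))

# Subtracting two hashes gives the Hamming distance: 0 means visually
# identical, small values suggest minor edits (resizing, compression),
# and large values suggest substantial alteration.
distance = original_hash - viral_hash
print(f"Perceptual hash distance: {distance}")
if distance > 10:  # heuristic threshold, tune for your own use case
    print("The images differ substantially - possible manipulation.")
```

A large hash distance alone does not prove morphing, but it flags image pairs that merit closer manual comparison.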


Hence, it was concluded that the viral image was digitally modified to spread false propaganda.
Conclusion:
The photograph of AIMIM President Asaduddin Owaisi holding a painting of Lord Rama is fake: it has been morphed. The photo Asaduddin Owaisi uploaded to his Facebook page on 7 April 2018 showed him holding a picture of Bhimrao Ramji Ambedkar. That photograph was digitally altered, and false captions were added to convey an altogether different message about Asaduddin Owaisi. The incident also highlights the necessity of fighting the fake news that spreads widely through social media platforms, especially in the political realm.
- Claim: AIMIM President Asaduddin Owaisi was holding a painting of the Hindu god Lord Rama in his hand.
- Claimed on: X (formerly Twitter)
- Fact Check: Fake & Misleading
Introduction
In India, children's rights with regard to the protection of their personal data are enshrined in the Digital Personal Data Protection Act, 2023 (DPDP Act), the country's newly enacted digital personal data protection law. The DPDP Act requires verifiable consent of parents or legal guardians for the processing of children's personal data; processing without such consent constitutes a violation of the Act. Under section 2(f) of the DPDP Act, a "child" means an individual who has not completed the age of eighteen years.
Section 9 under the DPDP Act, 2023
With reference to the collection of children's data, section 9 of the DPDP Act, 2023 provides that consent from parents or legal guardians is required for children below 18 years of age. The Data Fiduciary shall, before processing any personal data of a child, or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or the lawful guardian. Section 9 aims to create a safer online environment for children by limiting the exploitation of their data for commercial or other purposes. By virtue of this section, parents and guardians have greater control over their children's data and privacy, and they are empowered to choose how they manage their children's online activities and the permissions they grant to various online services.
Section 9(3) specifies that a Data Fiduciary shall not undertake tracking or behavioural monitoring of children, or targeted advertising directed at children. However, section 9(5) leaves room for exemption from this prohibition: it empowers the Central Government to exempt specific Data Fiduciaries or Data Processors from the prohibition on behavioural tracking and targeted advertising under the forthcoming DPDP Rules, which are yet to be released.
Impact on social media platforms
Social media companies have raised concerns about section 9 of the DPDP Act and the upcoming DPDP Rules. Section 9 prohibits behavioural tracking of children and targeted advertising directed at them on digital platforms. By barring intermediaries from tracking a child's internet activities and serving targeted advertising, the law aims to preserve children's privacy. However, social media corporations contend that this limitation undermines the efficacy of the very safety measures intended to safeguard young users, arguing that monitoring certain user signals, including those from minors, is necessary for those safety measures to work.
Social media companies assert that tracking teenagers' behaviour is essential for safeguarding them from predators and harmful interactions, and that a complete ban on behavioural tracking is therefore counterproductive to the government's objective of protecting children. The scope to grant exemptions leaves the door open for further advocacy on this issue. It therefore necessitates coordination between the concerned ministry and relevant stakeholders to find a balanced approach that maintains both privacy and safety for young users.
Furthermore, the impact on social media platforms extends to the user experience and to the operational costs of implementing the changes the regulations require, including significant changes to their algorithms and data-handling processes. Implementing robust age verification systems to identify young users and protect their data will be technically challenging for platforms of every scale. Ensuring that children's data is not used for targeted advertising or behavioural monitoring also requires sophisticated data management systems. Finally, the blanket ban on targeted advertising and behavioural tracking may affect the personalisation of content for young users, which may reduce their engagement with the platform.
For globally operating platforms, aligning their practices with the DPDP Act in India while also complying with data protection laws in other jurisdictions (such as the GDPR in Europe or COPPA in the US) can be complex and resource-intensive. Platforms might choose to implement uniform global policies for simplicity, which could affect their operations in regions not governed by similar laws. Competitive dynamics may also shift: smaller or niche platforms that cater specifically to children and comply with these regulations may gain a competitive edge, and there may be a drive towards developing new, compliant ways of monetising user interactions that do not rely on behavioural tracking.
CyberPeace Policy Recommendations
A balanced strategy should be adopted, one that gives due weight both to the contentions of social media companies and to the protection of children's personal information. Instead of a blanket ban, platforms can be obliged to follow and encourage openness in advertising practices, ensuring that children are not exposed to misleading or manipulative marketing techniques. Self-regulation can be implemented to support ethical behaviour, accountability, and the safety of young users' personal information through the platform's practices. Additionally, verifiable consent should be framed in a manner that is practical, with platforms having a say in designing the verification mechanism. Ultimately, the matter should be handled so that behavioural tracking and targeted advertising do not affect children's well-being, safety, or data protection in any way.
Final Words
Under section 9 of the DPDP Act, the prohibition on behavioural tracking and targeted advertising when processing children's personal data will compel social media platforms to overhaul their data collection and advertising practices to comply with stricter privacy regulations. The legislative intent behind this provision is to strengthen the protection of children's digital personal data and privacy. As children are particularly vulnerable to digital threats owing to their still-developing maturity and cognitive capacities, the protection of their privacy stands as a priority: children often do not possess the discernment and caution required to navigate the Internet safely. At the same time, a balanced approach needs to be adopted, one that maintains both privacy and safety for young users.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.firstpost.com/tech/as-govt-of-india-starts-preparing-rules-for-dpdp-act-social-media-platforms-worried-13789134.html#google_vignette
- https://www.business-standard.com/industry/news/social-media-platforms-worry-new-data-law-could-affect-child-safety-ads-124070400673_1.html

About the Global Commission on Internet Governance
The Global Commission on Internet Governance was established in January 2014 with the goal of formulating and advancing a strategic vision for the future of Internet governance. Over the two-year initiative, it carries out and supports independent research on Internet-related issues of international public policy, and it will culminate in an official commission report containing specific policy recommendations for the future of Internet governance.
The Global Commission on Internet Governance has two goals. First, it will encourage a broad and inclusive public discussion on how Internet governance should develop globally. Second, through its comprehensive policy-oriented report and the subsequent promotion of that report, it will present its findings to key stakeholders at major Internet governance events.
The Internet: exploring the world wide web and the deep web
The Internet can be thought of as a vast networking infrastructure, a network of networks. By linking millions of computers worldwide, it allows any two computers, provided both are online, to communicate with each other.
The Web uses the Hypertext Transfer Protocol, only one of the languages spoken over the Internet, to transfer data. The Internet, not the Web, also carries email, which relies on the Simple Mail Transfer Protocol, as well as file transfers via the File Transfer Protocol, Usenet newsgroups, and instant messaging. The Web, while a sizable chunk, is thus only one part of the Internet [1]. In summary, the deep Web is the portion of the Internet that is hidden from plain sight: World Wide Web content that is not available on the surface Web and cannot be reached by standard search engines. This enormous subset of the Internet is more than 500 times the size of the visible Web [1-2].
The Global Commission on Internet Governance will concentrate on four principal themes:
• Improving the legitimacy of governance, including standards and methods for regulation;
• Promoting economic innovation and expansion, including the development of infrastructure, competition laws, and vital Internet resources;
• Safeguarding online human rights, including establishing the idea of technological neutrality for rights to privacy, human rights, and freedom of expression;
• Preventing systemic risk, including setting standards for state behaviour, cooperating with law enforcement to combat cybercrime and prevent its spread, fostering confidence, and addressing disarmament-related issues.
Dark Web
The part of the deep Web that has been intentionally hidden and is unreachable through conventional Web browsers is known as the "dark Web." Dark Web sites serve as a platform for Internet users who value their anonymity, since they shield users from prying eyes and typically use encryption to thwart monitoring. The Tor network is a well-known source of content on the dark Web; accessing the anonymous Tor network requires only a special Web browser known as the Tor browser (Tor 2014). The Onion Routing (Tor) project, a technique for anonymous online communication, was first introduced by the US Naval Research Laboratory in 2002. Another network, I2P, offers many of the same features as Tor. However, I2P was designed as a network within the Internet, with traffic contained within its boundaries. Tor offers better anonymous access to the open Internet, while I2P provides a more dependable and stable "network within the network" [3].
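To make the "onion" metaphor concrete, the toy sketch below wraps a message in one encryption layer per relay and then peels the layers off in path order. It is a simplified illustration of layered encryption only, not Tor's actual protocol (real Tor uses asymmetric circuit handshakes and fixed-size cells); the relay keys here are hypothetical symmetric keys from the cryptography library:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Three hypothetical relays, each holding its own symmetric key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Encrypt for the exit relay first, then wrap outward layers,
    so the first relay's key ends up as the outermost layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def unwrap(onion: bytes, keys) -> bytes:
    """Each relay removes exactly one layer, in path order."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"hello hidden service", relay_keys)
assert unwrap(onion, relay_keys) == b"hello hidden service"
```

Because each relay can remove only its own layer, no single relay sees both the sender and the final plaintext, which is the core idea behind Tor's anonymity.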
Cybersecurity in the dark web
Cybercrime is not any different from crime in the real world; it is just executed in a new medium: "'Virtual criminality' is basically the same as the terrestrial crime with which we are familiar. To be sure, some of the manifestations are new. But a great deal of crime committed with or against computers differs only in terms of the medium. While the technology of implementation, and particularly its efficiency, may be without precedent, the crime is fundamentally familiar. It is less a question of something completely different than a recognizable crime committed in a completely different way [4]."
Dark web monitoring
The dark Web in general, and the Tor network in particular, offer a secure platform for cybercriminals to support a vast range of illegal activities: anonymous marketplaces, secure means of communication, and an untraceable, difficult-to-shut-down infrastructure for deploying malware and botnets.
As such, it has become increasingly important for security agencies to track and monitor dark Web activity, focusing today on Tor networks but possibly extending to other technologies in the near future. Because of its intricate design, the dark Web will continue to pose significant monitoring challenges. Efforts to address them should focus on the areas discussed below [5].
Hidden service directory of dark web
Both Tor and I2P rely on a domain database built on a distributed system known as a "distributed hash table," or DHT. In a DHT, nodes cooperate to store and manage portions of the database, which takes the form of a key-value store. Owing to the distributed character of the domain resolution process for hidden services, nodes inside the DHT can be positioned to track requests coming from a given domain [6].
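As a rough illustration of how a DHT spreads a key-value store across cooperating nodes, the toy class below hashes each key into an identifier space and assigns it to one node. This is a deliberately simplified sketch, not Tor's or I2P's actual hidden service directory protocol; the class name, node-assignment rule, and sample key are illustrative assumptions:

```python
import hashlib

class ToyDHT:
    """A toy distributed hash table: each node owns a slice of the
    hash space and stores only the keys that fall into its slice."""

    def __init__(self, num_nodes: int):
        self.nodes = [{} for _ in range(num_nodes)]

    def _node_for(self, key: str) -> int:
        # Hash the key into the identifier space, then map it to a node.
        digest = hashlib.sha1(key.encode()).hexdigest()
        return int(digest, 16) % len(self.nodes)

    def put(self, key: str, value: str) -> None:
        self.nodes[self._node_for(key)][key] = value

    def get(self, key: str) -> str | None:
        return self.nodes[self._node_for(key)].get(key)

dht = ToyDHT(num_nodes=8)
dht.put("exampleonionaddress.onion", "descriptor-data")
print(dht.get("exampleonionaddress.onion"))  # -> descriptor-data
```

Even in this toy model the monitoring opportunity described above is visible: whichever node a key hashes to can observe every lookup for that key.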
Conclusion
The deep Web, and especially dark Web networks like Tor (2004), offer bad actors a practical means of transacting in goods anonymously and beyond the reach of the law.
The absence of discernible activity in non-traditional dark Web networks is not evidence of their nonexistence; in keeping with the guiding philosophy of the dark Web, such activity is simply harder to identify and monitor. Critical mass is one of the market's driving forces. Operators on the dark Web seem unlikely to invest in a great degree of stealth until the repercussions of being caught become severe enough. Certain websites may go down, trade for only a short window, and then reappear, making them harder to investigate.
References
- Ciancaglini, Vincenzo, Marco Balduzzi, Max Goncharov and Robert McArdle. 2013. “Deepweb and Cybercrime: It’s Not All About TOR.” Trend Micro Research Paper. October.
- Coughlin, Con. 2014. “How Social Media Is Helping Islamic State to Spread Its Poison.” The Telegraph, November 5.
- Dahl, Julia. 2014. “Identity Theft Ensnares Millions while the Law Plays Catch Up.” CBS News, July 14.
- Dean, Matt. 2014. “Digital Currencies Fueling Crime on the Dark Side of the Internet.” Fox Business, December 18.
- Falconer, Joel. 2012. “A Journey into the Dark Corners of the Deep Web.” The Next Web, October 8.
- Gehl, Robert W. 2014. “Power/Freedom on the Dark Web: A Digital Ethnography of the Dark Web Social Network.” New Media & Society, October 15. http://nms.sagepub.com/content/early/2014/10/16/1461444814554900.full#ref-38.
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to the spread of misinformation in India. False stories travel quickly and can cause significant harm, from political propaganda to health-related misinformation. Programs that teach people how to use social media responsibly and check facts are essential, but they do not always engage people deeply. Traditional media literacy programs typically rely on passive methods such as reading articles, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings is called "gamification," and it could offer a new and engaging way to address this challenge. Gamification makes people active players rather than passive consumers of information. Research shows that interactive learning improves engagement, critical thinking, and retention. By turning fact-checking into a game, people can learn to recognise fake news in a safe setting before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and resist false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be offered in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread through different regions and diverse cultural contexts. AI-powered voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most. A game with a WhatsApp-like interface, in which players must decide whether to ignore, fact-check, or forward messages that are going viral, could be especially helpful in India; a minimal sketch of such a game loop follows.
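As a sketch of how such a game might score player decisions, consider the minimal console prototype below. The messages, labels, and scoring rules are entirely hypothetical; a real deployment would run as a chatbot on the WhatsApp Business Platform rather than in a terminal:

```python
# A minimal console prototype of a "forward, fact-check, or ignore" game.
# Messages and labels are hypothetical examples for illustration only.
MESSAGES = [
    {"text": "Forwarded: Miracle cure found, share now!", "is_false": True},
    {"text": "Election dates announced on official site.", "is_false": False},
]

score = 0
for msg in MESSAGES:
    print(f"\nNew message: {msg['text']}")
    choice = input("(f)orward, (c)heck facts, or (i)gnore? ").strip().lower()
    if choice == "c":
        score += 2  # fact-checking is always rewarded
    elif choice == "i" and msg["is_false"]:
        score += 1  # refusing to spread a false message
    elif choice == "f" and msg["is_false"]:
        score -= 2  # spreading misinformation costs points
print(f"\nFinal score: {score}")
```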
- Detecting False Information
In a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify stories that are going viral, using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive tasks for spotting fake news make people more aware of it over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards, such as badges, certificates, or even mobile data incentives, for completing misinformation challenges. Partnerships with telecom companies could make this easier to implement. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and to retain what they learn when learning includes competitive and interactive elements. Misinformation games can be incorporated into media studies classes at schools and universities to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor the learning experience in misinformation games to each player. AI-powered misinformation-detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent developments in natural language processing (NLP) enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
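As a rough sketch of the kind of text classification that could sit behind such a bot, the toy example below trains a TF-IDF plus logistic regression model on a handful of hypothetical labelled headlines using scikit-learn. Production systems, including the neural detectors discussed by Zellers et al. (2019), require far larger models and datasets:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data; a real system needs thousands of
# labelled examples per language and regular retraining.
headlines = [
    "Miracle drink cures all diseases overnight",
    "Government announces new highway project",
    "Celebrity secretly replaced by clone, insiders say",
    "Central bank holds interest rates steady",
]
labels = [1, 0, 1, 0]  # 1 = likely misinformation, 0 = likely genuine

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

probe = ["Shocking secret cure doctors don't want you to know"]
print(model.predict_proba(probe))  # class probabilities for the probe
```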
Possible Opportunities
Augmented reality (AR) misinformation scavenger hunts, interactive misinformation events, and educational misinformation tournaments are all examples of game formats that can help fight misinformation. By making media literacy fun and interesting, India can help millions of people, especially the young, think critically and resist the spread of false information. The use of Artificial Intelligence (AI) in gamified interventions against misinformation could be a fascinating area of future study: AI-powered bots could simulate real-time misinformation scenarios and give immediate feedback, helping players learn faster.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with problems that must be considered:
- Ethical Concerns: Games that imitate how fake news spreads must ensure players do not inadvertently learn how to spread false information themselves.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts poses significant challenges.
- Assessing Impact: Rigorous research methods are needed to evaluate how effectively gamified interventions change misinformation-related behaviours, keeping cultural and socio-economic contexts in view.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.