#FactCheck - Viral Image of AIMIM President Asaduddin Owaisi Holding Lord Rama Portrait Proven Fake
Executive Summary:
An image showing AIMIM President Asaduddin Owaisi holding a portrait of the Hindu deity Lord Rama has recently gone viral across social media platforms. A reverse image search by the CyberPeace Research Team found the picture to be fake. A screenshot of a Facebook post made by Asaduddin Owaisi in 2018 shows him holding a portrait of B.R. Ambedkar. The morphed photo, which replaces Ambedkar's portrait with one of Lord Rama and carries a distorted message, takes on an entirely different political connotation because Asaduddin Owaisi is contesting the 2024 Lok Sabha elections from Hyderabad. This underlines the need to verify that any piece of information is genuine before sharing it, in order to curb the spread of fake news.

Claims:
AIMIM leader Asaduddin Owaisi is standing with a painting of the Hindu god Lord Rama, with a caption suggesting his interest in the Hindu religion.



Fact Check:
To investigate the post, we ran a reverse image search and identified a photo shared on the official Facebook page of AIMIM President Asaduddin Owaisi on 7 April 2018.

Comparing the two photos, we found that in the original post, published in 2018, Asaduddin Owaisi is holding a painting of B.R. Ambedkar, whereas the viral image shows a painting of Lord Rama.


Hence, it was concluded that the viral image was digitally modified to spread false propaganda.
Conclusion:
The photograph of AIMIM President Asaduddin Owaisi holding a painting of Lord Rama is fake; it has been morphed. The photo Asaduddin Owaisi uploaded to his Facebook page on 7 April 2018 shows him holding a picture of Bhimrao Ramji Ambedkar. That photograph was digitally altered and given a false caption to convey an entirely different message about Asaduddin Owaisi. The incident also highlights the need to counter fake news, which spreads widely on social media platforms, especially in the political sphere.
- Claim: AIMIM President Asaduddin Owaisi was holding a painting of the Hindu god Lord Rama in his hand.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Executive Summary:
With AI technologies evolving rapidly in 2024, an AI-driven phishing attack on a large Indian financial institution illustrated the scale of the threat. This report documents the attack: the techniques used, the impact on the institution, the response undertaken, and the eventual outcome. The case study also examines the challenges of building better defences and raising awareness of such automated threats.
Introduction
As AI technology advances, its use in cybercrime against financial institutions worldwide has become significant. This report analyses a serious incident from early 2024, in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI's strengths in data analysis and persuasive text generation, leading to a severe compromise of the bank's internal systems.
Background
The targeted institution, one of the largest banks in India, had a strong track record of rigorous cybersecurity policies. However, AI-based methods posed new threats that its earlier security controls could not fully counter. The attackers concentrated on the bank's top executives, since compromising such individuals opens access to internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that closely mimicked internal emails exchanged between employees. Drawing on the executives' Facebook and Twitter content, blog entries, LinkedIn connections and email writing style, the AI produced highly targeted messages. Many of these emails carried official formatting, internal terminology and the CEO's writing style, which made them very convincing.
The phishing emails contained links that led recipients to a fake internal portal designed to harvest login credentials. Because of the emails' sophistication, the targeted individuals believed them to be genuine and entered their login details, giving the attackers access to the bank's network.
Impact
The attack affected the bank on every front. Several executives surrendered their credentials to the fake emails, compromising financial databases containing customer account and transaction information. The intrusion allowed the criminals to take down a number of the bank's online services, disrupting its operations and its customers for several days.
The bank also suffered a severe blow to customer trust, as the breach exposed its vulnerability to contemporary cyber threats. Beyond the immediate work of containing the breach, the institution faced a long-term reputational hit.
Technical Analysis and Findings
1. AI-Generated Phishing Emails
- The attack relied on powerful natural language processing (NLP) technology, most probably built on a large-scale transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large volumes of data, the attackers could feed them samples of conversations drawn from social networks, emails and internal communications to generate highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took prior interactions into account and wrote follow-up emails that were consistent with the earlier exchange.
- Style Mimicry: Given samples of the CEO's emails, the AI replicated the CEO's writing, reproducing elements such as tone, vocabulary and the format of the signature line.
- Adaptive Learning: The AI adapted to mistakes and feedback, tweaking the generated emails on subsequent attempts, which made detection more difficult.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this attack used spear-phishing, in which the attackers directly targeted specific individuals with tailored emails. Machine-learning-driven social engineering significantly increased the chances that particular individuals would respond to particular emails. A simple defensive check against this kind of executive impersonation is sketched after the list below.
Key Technical Features:
- Targeted Data Harvesting: Automated scrapers identified the organisation's employees and harvested their public profiles and messaging activity to craft targeted messages.
- Behavioural Analysis: The AI used recent behaviour patterns of users on social networking sites and other online platforms to predict the actions recipients were likely to take, such as clicking links or opening attachments.
- Real-Time Adjustments: Based on how recipients responded to the phishing emails, the AI adjusted the timing and content of subsequent emails.
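As one practical counter to this kind of executive impersonation, mail can be flagged when its display name matches a known executive but its address sits outside the organisation's own domain. The following is a minimal sketch using only Python's standard library; the executive names and the examplebank.in domain are hypothetical placeholders, not details from the incident.

```python
# Illustrative only: flag mail whose display name matches a known executive but
# whose address is outside the organisation's own domain, a common spear-phishing
# pattern. The executive names and the examplebank.in domain are hypothetical.
from email.utils import parseaddr

TRUSTED_DOMAIN = "examplebank.in"                   # hypothetical internal domain
EXECUTIVE_NAMES = {"priya sharma", "arjun mehta"}   # hypothetical executives

def looks_like_executive_impersonation(from_header: str) -> bool:
    """True if the display name mimics an executive but the address is external."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EXECUTIVE_NAMES and domain != TRUSTED_DOMAIN

# An external address borrowing an executive's display name is flagged; internal mail is not.
print(looks_like_executive_impersonation('"Priya Sharma" <ceo.office@secure-mail-login.com>'))  # True
print(looks_like_executive_impersonation('"Priya Sharma" <priya.sharma@examplebank.in>'))        # False
```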
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. These techniques modified the emails in ways that spam filters would not easily detect while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight changes to different aspects of each message, producing several versions of the phishing email designed to slip past different filtering algorithms.
- Polymorphic Attacks: The attack used polymorphic techniques, with the payloads behind the links changing frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: Another tactic was using AI to generate and distribute phantom domains: real websites that look legitimate but are short-lived and created specifically for the phishing attack, adding to the difficulty of detection. (A defensive check for such newly registered domains is sketched below.)
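Because phantom domains are typically registered shortly before use, domain age is a useful defensive signal. The sketch below assumes the third-party python-whois package and an arbitrary 30-day threshold; registrar data varies, so a missing creation date should be escalated rather than trusted.

```python
# Illustrative only: newly registered ("phantom") domains are a common phishing
# signal, so domain age is worth checking before trusting a link.
# Assumes the third-party python-whois package; the 30-day threshold is arbitrary.
from datetime import datetime, timedelta

import whois  # pip install python-whois

def is_recently_registered(domain: str, max_age_days: int = 30) -> bool:
    """Return True if the domain was registered within the last max_age_days."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):            # some registrars return several dates
        created = created[0]
    if not isinstance(created, datetime):
        return False                          # age unknown; escalate for manual review
    if created.tzinfo is not None:            # normalise to naive UTC for the comparison
        created = created.replace(tzinfo=None)
    return datetime.utcnow() - created < timedelta(days=max_age_days)

# Example: flag links in inbound mail whose domains are less than a month old.
print(is_recently_registered("example.com"))
```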
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI identified psychological levers, chiefly urgency and familiarity, to maximise the chance that targeted recipients would open the phishing emails.
- Multi-Layered Deception: The AI used a two-tiered approach: once a target opened the first email, a second email followed under the pretext of being a follow-up from a genuine company or person.
Response
On detecting the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the origin of the attack and block further intrusions. The bank also immediately began strengthening its security, for instance by tightening email filtering and authentication procedures.
Recognising the risks, the bank resolved to raise its overall cybersecurity posture and roll out a new organisation-wide cybersecurity awareness programme. The programme focused on making employees aware of AI-driven phishing and the need to verify a sender's identity before acting on an email.
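As one illustration of the email-filtering and authentication hardening described above, mail gateways commonly verify that a sender's domain publishes SPF and DMARC records. The sketch below is a minimal presence check, assuming the dnspython package; it does not perform full policy evaluation.

```python
# Illustrative only: a minimal presence check for SPF and DMARC records on a
# sender's domain, one building block of stricter email filtering. Assumes the
# dnspython package; it does not evaluate the policies themselves.
import dns.resolver  # pip install dnspython

def has_spf_and_dmarc(domain: str) -> dict:
    def txt_records(name: str) -> list:
        try:
            return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []

    spf = any(txt.startswith("v=spf1") for txt in txt_records(domain))
    dmarc = any(txt.startswith("v=DMARC1") for txt in txt_records(f"_dmarc.{domain}"))
    return {"spf": spf, "dmarc": dmarc}

# Example: domains missing both records deserve extra scrutiny at the mail gateway.
print(has_spf_and_dmarc("example.com"))
```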
Outcome
Although the bank was able to restore its operations after the attack without critical long-term damage, the incident raised serious concerns. The institution reported losses in the form of compensation paid to affected customers and the cost of measures to strengthen its cybersecurity. More significantly, customers and shareholders began to doubt the organisation's capacity to safeguard information in an era of advanced AI-driven cyber threats.
This case shows how important it is for financial firms to align their security plans with emerging threats. The attack is also a warning to other organisations that they are not immune to AI-assisted attacks and should take appropriate countermeasures.
Conclusion
The 2024 AI-phishing attack on an Indian bank is an indicator of what modern attackers are capable of. As AI technology progresses, so will the sophistication of cyberattacks. Financial institutions and other organisations must adopt AI-aware cybersecurity solutions to protect their systems and data.
The case also underlines the importance of training employees to recognise and avoid such attacks. Organisation-wide cybersecurity awareness, secure employee behaviour, and practices that enable staff to spot and report likely AI-driven attacks all help minimise the risk.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response tools capable of mitigating AI-enabled cyber threats in real time (an illustrative sketch appears at the end of this section).
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require strong identity verification, such as multi-factor authentication.
- Collaboration with CERT-In: Maintain continued engagement with authorities such as the Indian Computer Emergency Response Team (CERT-In) and equivalent bodies to monitor new threats and follow current recommendations.
- Public Communication Strategies: Establish effective communication plans to keep customers informed and maintain their trust, even while the organisation is responding to a cyber threat.
By implementing these measures, financial institutions can better prepare for the new threats that AI-enabled attacks pose to essential financial assets in today's complex IT environments.
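As a purely illustrative sketch of the "Enhanced AI-Based Defences" recommendation, the snippet below trains a toy text classifier to score inbound mail for phishing-style language. It assumes scikit-learn and uses a tiny invented training set; a production system would rely on a large labelled corpus plus header, URL and behavioural features.

```python
# Illustrative only: a toy "AI-based defence" that scores mail text for
# phishing-style language. Assumes scikit-learn; the training examples and
# labels below are invented and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your login credentials on the internal portal immediately",
    "Your account will be suspended, confirm your password now",
    "Quarterly budget review meeting moved to Thursday at 3 pm",
    "Please find attached the minutes of yesterday's branch meeting",
]
labels = [1, 1, 0, 0]  # 1 = phishing-style, 0 = routine internal mail (toy data)

# TF-IDF features over unigrams and bigrams feeding a logistic-regression scorer.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# A high score would trigger quarantine or human review in a real pipeline.
score = model.predict_proba(["Immediate action required: re-enter your password"])[0][1]
print(f"phishing probability: {score:.2f}")
```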

Executive Summary:
A viral post circulating on various social media platforms claims that Reliance Jio is offering a ₹700 Holi gift to its users, accompanied by a link to claim the offer. The post has gained significant traction, with many users engaging with it in good faith, believing it to be a legitimate promotional offer. However, after careful investigation, it has been confirmed that the post is a phishing scam designed to steal personal and financial information from unsuspecting users. This report examines the facts surrounding the viral claim, confirms its fraudulent nature, and provides recommendations to minimise the risk of falling victim to such scams.
Claim:
Reliance Jio is offering a ₹700 reward as part of a Holi promotional campaign, accessible through a shared link.

Fact Check:
Upon review, this claim was found to be misleading. Reliance Jio has not announced any Holi promotional offer at this time. The link being forwarded is a phishing scam designed to steal users' personal and financial details. There is no mention of the offer on Jio's official website or verified social media accounts. The URL included in the message does not belong to the official Jio domain, indicating a fake website. The site requests personal information that can then be used for cybercriminal activities. Additionally, we checked the link with ScamAdviser, which flagged it as suspicious and unsafe.
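One simple way to apply the "check the domain" advice above is to parse a link and confirm that its hostname is the official domain or a subdomain of it. The sketch below uses Python's standard library; jio.com is used as the expected official domain, and the scam URL shown is an invented example, not the actual link from the viral post.

```python
# Illustrative only: parse a link and check that its hostname is the official
# domain or one of its subdomains. jio.com is used as the expected domain and
# the scam URL is an invented example, not the actual link from the viral post.
from urllib.parse import urlparse

def belongs_to_official_domain(url: str, official_domain: str = "jio.com") -> bool:
    hostname = (urlparse(url).hostname or "").lower()
    return hostname == official_domain or hostname.endswith("." + official_domain)

print(belongs_to_official_domain("https://www.jio.com/offers"))           # True
print(belongs_to_official_domain("https://jio-holi-gift.example/claim"))  # False: lookalike domain
```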


Conclusion:
The viral post claiming that Reliance Jio is offering a ₹700 Holi gift is a phishing scam. There is no legitimate offer from Jio, and the link provided leads to a fraudulent website designed to steal personal and financial information. Users are advised not to click on the link and to report any suspicious content. Always verify promotions through official channels to protect personal data from cybercriminal activities.
- Claim: Users can claim ₹700 by participating in Jio's Holi offer.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
AI has penetrated most industries, and telecom is no exception. According to a survey by Nvidia, enhancing customer experience is the biggest AI opportunity for the telecom industry, with 35% of respondents identifying customer experience as their key AI success story. The study also found that nearly 90% of telecom companies use AI, with 48% in the piloting phase and 41% actively deploying it. Most telecom service providers (53%) agree or strongly agree that adopting AI would provide a competitive advantage. AI in telecom is primed to be the next big thing, and Google has not ignored the opportunity: it is reported that Google will soon add “AI Replies” to its Phone app’s call-screening feature.
How Does The ‘AI Call Screener’ Work?
With the busy lives people lead nowadays, Google has created a helpful tool to answer the challenge of managing incoming calls. Google Pixel smartphones now include AI-powered calling tools that can help with call screening, note-taking during important calls, filtering and declining spam, and, most importantly, ending the frustration of being on hold.
In the official Google Phone app, users can respond to a caller through “new AI-powered smart replies”. While “contextual call screen replies” are already part of the app, the new feature means users do not have to pick up the call themselves.
- With this new feature, Google Assistant will be able to respond to the call with a customised audio response.
- The Google Assistant, responding to the call, will ask the caller’s name and the purpose of the call. If they are calling about an appointment, for instance, Google will show the user suggested responses specific to that call, such as ‘Confirm’ or ‘Cancel appointment’.
Google will build on the call-screening feature by using a “multi-step, multi-turn conversational AI” to suggest replies more appropriate to the nature of the call. Google’s Gemini Nano AI model is set to power this new feature and enable it to handle phone calls and messages even if the phone is locked and respond even when the caller is silent.
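To make the idea of contextual reply suggestions concrete, the toy sketch below maps a transcribed caller utterance to a handful of suggested replies. This is not Google's implementation and uses no Google API; the keywords, intents and replies are invented purely for illustration of the concept described above.

```python
# Illustrative only: NOT Google's implementation and no Google API is used.
# A toy mapping from a transcribed caller utterance to contextual reply
# suggestions, mirroring the "Confirm" / "Cancel appointment" example above.
INTENT_REPLIES = {
    "appointment": ["Confirm", "Cancel appointment", "Ask to reschedule"],
    "delivery":    ["Leave at the door", "Call back later"],
    "other":       ["Ask for more details", "Decline call"],
}

def suggest_replies(transcript: str) -> list:
    """Return reply suggestions based on simple keyword matching (invented rules)."""
    text = transcript.lower()
    if any(word in text for word in ("appointment", "booking", "reschedule")):
        return INTENT_REPLIES["appointment"]
    if any(word in text for word in ("package", "delivery", "courier")):
        return INTENT_REPLIES["delivery"]
    return INTENT_REPLIES["other"]

print(suggest_replies("Hi, I'm calling to confirm your dental appointment on Friday."))
```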
Benefits of AI-Powered Call Screening
This AI-powered call screening feature offers multiple benefits:
- The AI feature will enhance user convenience by reducing the disruptions caused by spam calls. This will, in turn, increase productivity.
- It will increase call privacy and security by filtering high-risk calls, thereby protecting users from attempts of fraud and cyber crimes such as phishing.
- The new feature can potentially increase efficiency in business communications by screening for important calls, delegating routine inquiries and optimising customer service.
Key Policy Considerations
Adhering to transparent, ethical, and inclusive policies while anticipating regulatory changes can establish Google as a responsible innovator in AI call management. Some key considerations for AI Call Screener from a policy perspective are:
- The AI call screener will process and transcribe sensitive voice data; data-handling policies therefore need to be transparent to reassure users of compliance with applicable laws.
- Ethical use and bias mitigation remain open challenges for AI. The underlying algorithms will need to be designed to avoid bias and to understand language inclusively.
- The data the screener uses is subject to global and regional privacy regulations such as the GDPR, the DPDP Act and the CCPA, which require consent to record or transcribe calls and place user rights at the centre.
Conclusion: A Balanced Approach to AI in Telecommunications
Google’s AI Call Screener offers a glimpse into the future of automated call management, reshaping customer service and telemarketing by streamlining interactions and reducing spam. As this technology evolves, businesses may adopt similar tools, balancing customer engagement with fewer unwanted calls. AI-driven screening will also affect call centres, shifting human roles toward complex, human-centred interactions while automation handles routine calls, with potential knock-on effects on support and managerial roles. Ultimately, as AI call management grows, responsible design and transparency will be needed to ensure a seamless, beneficial experience for all users.
References
- https://resources.nvidia.com/en-us-ai-in-telco/state-of-ai-in-telco-2024-report
- https://store.google.com/intl/en/ideas/articles/pixel-call-assist-phone-screen/
- https://www.thehindu.com/sci-tech/technology/google-working-on-ai-replies-for-call-screening-feature/article68844973.ece
- https://indianexpress.com/article/technology/artificial-intelligence/google-ai-replies-call-screening-9659612/