#FactCheck - AI-Generated Image Falsely Linked to Mira–Bhayandar Bridge
Executive Summary
Mumbai’s Mira–Bhayandar bridge has recently been in the news due to its unusual design. In this context, a photograph showing a bus seemingly stuck on the bridge is going viral on social media. Some users are sharing the image with the claim that it is from the Sonpur subdivision in Bihar. However, research by CyberPeace has found that the viral image is not real. The bridge shown in the image is indeed the Mira–Bhayandar bridge, which is under discussion because its design causes it to narrow suddenly from four lanes to two. That said, the bridge is not yet operational, and the viral image showing a bus stuck on it was created using Artificial Intelligence (AI).
Claim
An Instagram user shared the viral image on January 29, 2026, with the caption: “Are Indian taxpayers happy to see that this is funded by their money?” The link, archive link, and screenshot of the post can be seen below.

Fact Check:
To verify the claim, we first conducted a Google Lens reverse image search. This led us to a post shared by X (formerly Twitter) user Manoj Arora on January 29. While the bridge structure in that image matches the viral photo, no bus is visible in the original post. This raised suspicion that the viral image had been digitally manipulated.

We then ran the viral image through the AI detection tool Hive Moderation, which flagged it as over 99% likely to be AI-generated.
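Hive Moderation's internals are not public, but one building block often paired with such classifiers in image verification is perceptual hashing, which measures whether two images share the same underlying scene despite local edits. The sketch below is purely illustrative (synthetic pixel grids stand in for the original photo and the manipulated copy; this is not the tool used in this fact-check):

```python
def average_hash(pixels, size=8):
    """Average-hash: downscale to size x size blocks, then emit one bit per
    block indicating whether it is brighter than the overall mean."""
    bh, bw = len(pixels) // size, len(pixels[0]) // size
    blocks = []
    for by in range(size):
        for bx in range(size):
            vals = [pixels[y][x]
                    for y in range(by * bh, (by + 1) * bh)
                    for x in range(bx * bw, (bx + 1) * bw)]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    bits = 0
    for b in blocks:
        bits = (bits << 1) | (b > mean)
    return bits

def hamming(h1, h2):
    """Bits that differ; a small non-zero distance suggests a locally edited copy."""
    return bin(h1 ^ h2).count("1")

# Synthetic 64x64 grayscale "photo": a horizontal brightness gradient...
original = [[x * 4 for x in range(64)] for y in range(64)]
# ...and a copy with a bright patch pasted in, standing in for an added object.
edited = [row[:] for row in original]
for y in range(24, 40):
    for x in range(8, 16):
        edited[y][x] = 255

print(hamming(average_hash(original), average_hash(edited)))  # small, non-zero
```

A distance of zero means an identical image; a small distance like this indicates the same base scene with a localised alteration, which is exactly the pattern a reverse-image match followed by manipulation would produce.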

Conclusion
CyberPeace’s research confirms that while the Mira–Bhayandar bridge is real and has been in the news due to its design, the viral image showing a bus stuck on the bridge was created using AI tools. Therefore, the image circulating on social media is misleading.

Executive Summary:
An AI-driven phishing attack on a large Indian financial institution in early 2024 illustrates the threats posed by rapidly evolving AI technologies. This case study documents the specifics of the attack: the techniques used, the ramifications for the institution, the interventions undertaken, and their results. It also examines the challenges of building better protections and raising awareness of automated threats.
Introduction
As AI technology advances, its use in cybercrime against financial institutions worldwide has grown significantly. This report analyses a serious incident from early 2024 in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI’s strengths in data analysis and persuasive text generation, leading to a severe compromise of the bank’s internal systems.
Background
The targeted institution, one of the largest banks in India, had a strong track record on cybersecurity. However, AI-based attack methods posed new threats that its existing defences could not fully counter. The attackers concentrated on the bank’s top managers, since compromising such individuals opens access to internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that were exact look-alikes of internal correspondence between employees. Drawing on Facebook and Twitter content, blog entries, LinkedIn connection histories, and the email style of the bank’s executives, the AI produced highly tailored emails. They carried official formatting, specific internal terminology, and the CEO’s writing style, which made them very convincing.
The phishing emails contained links leading to a fake internal portal designed to harvest login credentials. The emails were so sophisticated that the targeted individuals believed them to be genuine and readily entered their login details, giving the attackers access to the bank’s network.
Impact
The attack had a substantial impact on the bank. Several executives surrendered their passwords to the fake portal, compromising financial databases containing customer account and transaction information. The breach allowed the criminals to take down a number of the bank’s internet services, disrupting its operations and those of its customers for several days.
The bank also suffered a serious blow to customer trust, because the breach exposed its weakness against contemporary cyber threats. Beyond the immediate work of containing the breach, the institution faced a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used to Generate the Phishing Emails
- The attack used powerful natural language processing (NLP) technology, most probably built on a large transformer model such as GPT (Generative Pre-trained Transformer). Because such models are trained on large text corpora, the attackers could feed them samples of the executives’ conversations from social networks and emails to generate highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took prior interactions into account and wrote follow-up emails that were perfectly in line with earlier discourse.
- Style Mimicry: Given samples of the CEO’s emails, the AI replicated the CEO’s writing, extrapolating the tone, the language, and the format of the signature line.
- Adaptive Learning: The AI adapted to mistakes and feedback, tweaking the generated emails on subsequent attempts, which made it difficult to detect.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this was a spear-phishing attack in which specific people were targeted with individually tailored emails. Machine-learning algorithms supplied social engineering cues that significantly increased the chances of particular individuals responding to particular emails.
Key Technical Features:
- Targeted Data Harvesting: The attackers identified employees of the organisation and scraped their public profiles and messaging activity to tailor messages to each target.
- Behavioural Analysis: The AI used the targets’ recent behaviour patterns on social networking sites and other online platforms to forecast their likely actions, such as clicking on links or opening attachments.
- Real-Time Adjustments: When responses to a phishing email were observed, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. They modified the content of the emails so that spam filters would not easily detect them while the substance of the message was preserved.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight changes to different parts of the message to produce several versions of the phishing email, each tuned to evade a different filtering algorithm.
- Polymorphic Attacks: The phishing links used polymorphic code, meaning that the actual payloads behind the links changed frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: The attackers also used AI to generate and rotate phantom domains: real websites that appear legitimate but are short-lived and created specifically for the phishing attack, adding to the difficulty of detection.
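Defenders counter dynamic content alteration by clustering near-duplicate messages instead of matching exact strings, so that slightly reworded variants of a known lure still get caught. A minimal standard-library sketch (the messages are hypothetical, and real gateways use far richer features than raw text similarity):

```python
import difflib

def similarity(a, b):
    """Ratio in [0, 1]; near-identical phishing variants score close to 1."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

# A previously confirmed phishing message...
known_phish = "Your account will be suspended today. Verify your credentials now."

# ...and two incoming messages: a reworded variant and a benign email.
variants = [
    "Your account will be suspended tonight! Verify your credentials immediately.",
    "Quarterly all-hands meeting moved to Friday at 3pm.",
]

for msg in variants:
    flagged = similarity(known_phish, msg) > 0.75  # illustrative threshold
    print(flagged)
```

The reworded variant scores well above the threshold while the unrelated message does not, which is the property polymorphic text generation is trying (and here failing) to defeat.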
4. Exploitation of Human Vulnerabilities
The attack’s success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI identified the psychological levers, chiefly urgency and familiarity, most likely to make the targeted recipients open the phishing emails.
- Multi-Layered Deception: The AI ran a two-tiered approach: once a target opened the first email, a second one followed under the pretext of being a follow-up from a genuine company or person.
Response
On discovering the breach, the bank’s cybersecurity team sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the origin of the attack and block further intrusions. The bank also moved immediately to strengthen its security, for instance by tightening email filtering and adding authentication steps.
Recognising the risks, the bank launched a new organisation-wide cybersecurity awareness programme. It focused on educating employees about AI-driven phishing and on the necessity of verifying a sender’s identity before acting on a message.
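One of the cheapest sender-verification controls is flagging lookalike domains before a message reaches an inbox, since attacks like this one often rely on homoglyph swaps such as "l" becoming "1". A minimal standard-library sketch (the bank domain and the email are hypothetical examples, not details from the incident):

```python
from email import message_from_string

TRUSTED_DOMAINS = {"examplebank.in"}  # hypothetical internal domain

def sender_domain(raw_email):
    """Extract the domain of the From: header, lowercased."""
    addr = message_from_string(raw_email).get("From", "")
    return addr.split("@")[-1].strip(" >").lower()

def classify(domain):
    """Label a sender domain as trusted, a lookalike, or unknown."""
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # Same length with at most one differing character: a classic
        # homoglyph swap such as "l" -> "1".
        if len(domain) == len(trusted) and \
           sum(a != b for a, b in zip(domain, trusted)) <= 1:
            return "lookalike"
    return "unknown"

raw = "From: CEO <ceo@examp1ebank.in>\nSubject: Urgent wire approval\n\nLog in here."
print(classify(sender_domain(raw)))  # → lookalike
```

Production gateways combine checks like this with SPF, DKIM and DMARC validation; the point here is only that a cheap structural check already catches the lookalike domain a style-perfect AI email would otherwise slip through on.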
Outcome
Although the bank regained full functionality after the attack without lasting operational damage, serious issues were raised. The institution reported losses in the form of compensation to affected customers and the cost of new cybersecurity measures. More critically, customers and shareholders began to doubt the organisation’s capacity to safeguard information in an era of advanced AI-driven cyber threats.
This case shows how important it is for financial firms to align their security plans against emerging threats. The attack is also a message to other organisations: no one is immune from AI-assisted attacks, and proper countermeasures should be taken.
Conclusion
The 2024 AI-phishing attack on an Indian bank is an indicator of modern attackers’ capabilities. As AI technology progresses, so does the sophistication of cyberattacks. Financial institutions and other organisations must adopt adequate AI-aware cybersecurity solutions for their systems and data.
Moreover, this case raises awareness of how important it is to train employees to recognise and prevent cyberattacks. Cybersecurity awareness, secure employee behaviour, and practices that enable staff to spot and report likely AI-generated lures all help an organisation minimise the risk of such attacks.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response products capable of mitigating AI-powered cyber threats in real time.
- Employee Training Programs: All employees should undergo regular cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Access to sensitive accounts should require multi-factor authentication and tighter identity-verification procedures.
- Collaboration with CERT-In: Organisations should maintain continued engagement with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its equivalents to monitor new threats and receive validated recommendations.
- Public Communication Strategies: Effective communication plans should be in place to address customers and preserve their trust while an organisation is responding to a cyber threat.
By implementing these measures, financial institutions can be better prepared for the new threats that AI-enabled cybercrime poses to essential financial assets in today’s complex IT environments.
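On the stricter-authentication recommendation, time-based one-time passwords (TOTP, RFC 6238) are one of the most common second factors. The standard-library sketch below implements the core algorithm and checks itself against the RFC's published test vector; it is an illustration of the mechanism, not a hardened implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238's SHA-1 test secret is the ASCII string "12345678901234567890".
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59))  # → 287082 (RFC 6238 test vector, 6 digits)
```

Because the code depends on a shared secret and the current time, a phished password alone no longer grants access, which is exactly the failure mode this incident exposed.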
Introduction
In an era where organisations are increasingly interdependent through global supply chains, outsourcing and digital ecosystems, third-party risk has become one of the most vital aspects of enterprise risk management. The SolarWinds hack, the MOVEit vulnerabilities and recent software vendor attacks all serve as a reminder of the necessity to enhance Third-Party Risk Management (TPRM). As cyber risks evolve and become more sophisticated and as regulatory oversight sharpens globally, 2025 is a transformative year for the development of TPRM practices. This blog explores the top trends redefining TPRM in 2025, encompassing real-time risk scoring, AI-driven due diligence, harmonisation of regulations, integration of ESG, and a shift towards continuous monitoring. All of these trends signal a larger movement towards resilience, openness and anticipatory defence in an increasingly dependent world.
Real-Time and Continuous Monitoring becomes the Norm
The old TPRM methods entailed point-in-time assessments, typically at onboarding or annually. By 2025, organisations are shifting towards continuous, real-time monitoring of their third-party ecosystems. Advanced tools now make it possible for companies to take a real-time pulse of their vendors’ security by monitoring threat indicators, patching practices and digital footprint variations. This change has been further spurred by the growth in cyber supply chain attacks, where attackers target vendors to gain access to bigger organisations. Real-time monitoring software enables the timely detection of malicious activity, equipping organisations with a faster defence response. It also provides dynamic risk ratings instead of relying on outdated questionnaire-based scoring.
AI and Automation in Risk Assessment and Due Diligence
Manual TPRM processes aren't sustainable anymore. In 2025, AI and machine learning are reshaping the TPRM lifecycle, from onboarding and risk classification to contract review and incident handling. AI technology can now analyse massive amounts of vendor documentation and automatically raise red flags on potential issues. Natural language processing (NLP) is becoming more common for automated contract intelligence, which assists in detecting risky clauses, liability gaps and data-protection obligations. In addition, automation is increasing scalability for large organisations with hundreds or thousands of third-party relationships, eliminating human errors and compliance fatigue. However, all of this must be implemented with a strong focus on security, transparency, and ethical AI use to ensure that sensitive vendor and organisational data remains protected throughout the process.
Risk Quantification and Business Impact Mapping
Risk scoring in isolation is no longer adequate. One of the major trends for 2025 is the merging of third-party risk with business impact analysis (BIA). Organisations are using tools that associate vendors to particular business processes and assets, allowing better knowledge of how a compromise of a vendor would impact operations, customer information or financial position. This movement has resulted in increased use of risk quantification models, such as FAIR (Factor Analysis of Information Risk), which puts dollar values on risks associated with vendors. By using the language of business value, CISOs and risk officers are more effective at prioritising risks and making resource allocations.
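FAIR's top-level decomposition is simply risk = loss event frequency × loss magnitude, usually estimated as calibrated ranges and run through a Monte Carlo simulation. A minimal sketch with entirely hypothetical vendor numbers:

```python
import random
import statistics

def simulate_ale(freq, magnitude, trials=10_000, seed=7):
    """Monte Carlo sketch of FAIR's top-level model:
    risk = loss event frequency x loss magnitude.
    `freq` and `magnitude` are (low, high, mode) triples passed to
    random.triangular (note its low-high-mode argument order), standing
    in for calibrated expert estimates."""
    rng = random.Random(seed)
    losses = [
        rng.triangular(*freq) * rng.triangular(*magnitude)
        for _ in range(trials)
    ]
    return statistics.mean(losses)

# Hypothetical vendor: 0.1-0.5 breach events/year (mode 0.2),
# $0.5M-$4M loss per event (mode $1M).
ale = simulate_ale(freq=(0.1, 0.5, 0.2), magnitude=(500_000, 4_000_000, 1_000_000))
print(f"Annualised loss expectancy: ${ale:,.0f}")
```

Expressing vendor risk as an expected annual dollar loss, rather than a red/amber/green score, is what lets CISOs compare it directly against mitigation costs when prioritising.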
Environmental, Social, and Governance (ESG) enters into TPRM
As ESG keeps growing on the corporate agenda, organisations are taking TPRM beyond cybersecurity and legal risks and expanding it to incorporate ESG-related factors. In 2025, organisations evaluate whether their suppliers have ethical labour practices, sustainable supply chains, DEI (Diversity, Equity, Inclusion) metrics and climate impact disclosures. This expansion is not only a reputational concern: a third party’s ESG non-compliance can now trigger regulatory or shareholder action. ESG risk scoring software and vendor ESG audits are becoming components of onboarding and performance evaluations.
Shared Assessments and Third-Party Exchanges
With the duplication of effort by having multiple vendors respond to the same security questionnaires, the trend is moving toward shared assessments. Systems such as the SIG Questionnaire (Standardised Information Gathering) and the Global Vendor Exchange allow vendors to upload once and share with many clients. This change not only simplifies the due diligence process but also enhances data accuracy, standardisation and vendor experience. In 2025, organisations are relying more and more on industry-wide vendor assurance platforms to minimise duplication, decrease costs and maximise trust.
Incident Response and Resilience Partnerships
Another trend on the rise is bringing vendors into incident response planning. In 2025, proactive organisations treat major vendors not merely as suppliers but as resilience partners. This encompasses shared tabletop exercises, communication procedures and breach notification SLAs. With ransomware attacks increasing and cloud reliance growing, organisations now require vendor-side recovery plans along with recovery time objective (RTO) and recovery point objective (RPO) metrics. TPRM is transforming into a comprehensive resilience management function where readiness, not mere compliance, takes centre stage.
Conclusion
Third-Party Risk Management in 2025 is no longer about checklists and compliance audits; it's a dynamic, intelligence-driven and continuous process. With regulatory alignment, AI automation, real-time monitoring, ESG integration and resilience partnerships leading the way, organisations are transforming their TPRM programs to address contemporary threat landscapes. As digital ecosystems grow increasingly complex and interdependent, managing third-party risk is now essential. Early adopters who invest in tools, talent and governance will be more likely to create secure and resilient businesses for the AI era.
References
- https://finance.ec.europa.eu/publications/digital-operational-resilience-act-dora_en
- https://digital-strategy.ec.europa.eu/en/policies/nis2-directive
- https://www.meity.gov.in/data-protection-framework
- https://securityscorecard.com
- https://sharedassessments.org/sig/
- https://www.fairinstitute.org/fair-model

March 3rd 2023, New Delhi: If you have received a message containing a link asking you to download an application to avail an Income Tax refund or KYC benefits in the name of the Income Tax Department or reputed banks, beware!
CyberPeace Foundation and Autobot Infosec Private Limited, along with the academic partners under the CyberPeace Center of Excellence (CCoE), recently conducted five different studies on phishing campaigns circulating on the internet that use misleading tactics to convince users to install malicious applications on their devices. The first campaign impersonates the Income Tax Department, while the others impersonate ICICI Bank, State Bank of India, IDFC Bank and Axis Bank respectively. The phishing campaigns aim to trick users into divulging their personal and financial information.
After a detailed study, the research team found that:
- All campaigns appear to be offers from reputed entities but are hosted on third-party domains instead of the official website of the Income Tax Department or the respective banks, raising suspicion.
- The applications ask for several device access permissions, and some of them ask users to grant full control of the device. Allowing such permissions could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications.
- Cybercriminals created malicious applications using icons that closely resemble those of legitimate entities with the intention of enticing users into downloading the malicious applications.
- The applications collect users’ personal and banking information. Falling into this type of trap could lead to significant financial losses.
- While investigating the impersonated Income Tax Department application, the research team found that it sends HTTP traffic to a remote server acting as a Command and Control (CnC/C2) server for the application.
- Customers who want to avail benefits or refunds from their banks download the apps believing they will help, not always aware that an app may be fraudulent.
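The permission finding above can be made concrete: an app's AndroidManifest.xml declares the permissions it requests, and a handful of them account for most of the "full control" abuse the study describes. A simplified audit sketch (the manifest and the dangerous-permission list below are illustrative, not taken from the apps in the study):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Short, non-exhaustive list of permissions matching the risks described above.
DANGEROUS = {
    "android.permission.READ_SMS",       # intercept OTPs and bank SMS
    "android.permission.RECORD_AUDIO",   # microphone recordings
    "android.permission.CAMERA",         # camera footage
}

def risky_permissions(manifest_xml):
    """Return the dangerous permissions declared in a manifest XML string."""
    root = ET.fromstring(manifest_xml)
    declared = {el.get(ANDROID_NS + "name") for el in root.iter("uses-permission")}
    return sorted(declared & DANGEROUS)

# Hypothetical manifest resembling the fake "tax refund" apps described above.
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
  <uses-permission android:name="android.permission.RECORD_AUDIO"/>
</manifest>"""
print(risky_permissions(manifest))
```

A finance app that asks for SMS and microphone access, as this hypothetical manifest does, is exactly the pattern the advisory below tells users to reject before installing.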
“The Research highlights the importance of being vigilant while browsing the internet and not falling prey to such phishing attacks. It is crucial to be cautious when clicking on links or downloading attachments from unknown sources, as they may contain malware that can harm the device or compromise the data,” a CyberPeace spokesperson added.
In addition, in an earlier report released last month, the same research team had drawn attention to WhatsApp messages masquerading as an offer from Tanishq Jewellers, with links luring unsuspecting users with the promise of free Valentine’s Day presents making the rounds on the app.
CyberPeace Advisory:
- The research team recommends that people avoid opening such messages sent via social platforms. Always think before clicking on links or downloading attachments from unauthorised sources.
- Avoid downloading applications from third-party sources instead of the official app store. This greatly reduces the risk of downloading a malicious app, as official app stores have strict guidelines for developers and review each app before it is published.
- Even if you download the application from an authorised source, check the app’s permissions before you install it. Some malicious apps may request access to sensitive information or resources on your device. If an app is asking for too many permissions, it’s best to avoid it.
- Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
- Falling into such a trap could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications and could lead users to financial loss.
- Do not share confidential details such as credentials or banking information in response to such phishing scams.
- Never share or forward fake messages containing links on any social platform without proper verification.