#FactCheck - "Viral Video Misleadingly Claims Surrender to Indian Army, Actually Shows Bangladesh Army”
Executive Summary:
A video circulating on social media is being wrongly presented as showing lawbreakers surrendering to the Indian Army. However, our verification shows that the footage depicts a group surrendering to the Bangladesh Army and is not related to India. The claim that it involves the Indian Army is false and misleading.

Claims:
A viral video falsely claims that a group of lawbreakers is surrendering to the Indian Army, linking the footage to recent events in India.



Fact Check:
Upon receiving the viral posts, we analysed keyframes of the video using a Google Lens reverse image search. The search directed us to credible news sources in Bangladesh, which confirmed that the video was filmed during a surrender event involving criminals in Bangladesh, not India.
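For readers who want to try this kind of verification themselves, the sketch below shows one way to pull keyframes from a downloaded video so they can be uploaded manually to a reverse image search tool such as Google Lens. This is a minimal illustration, not the exact tooling used in this fact check; it assumes the OpenCV library is installed, and the file name and sampling interval are placeholders.

```python
# Illustrative keyframe extraction for manual reverse image search (e.g. Google Lens).
# Assumes OpenCV is installed (pip install opencv-python); "viral_video.mp4" is a placeholder.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: int = 5) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the saved file paths."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or unreadable file
            break
        if index % step == 0:
            out_path = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(out_path, frame)
            saved.append(out_path)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    # Each saved image can then be uploaded to a reverse image search tool for verification.
    print(extract_keyframes("viral_video.mp4"))
```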

We further verified the video by cross-referencing it with official military and news reports from India. None of these sources supported the claim that the video involved the Indian Army. Instead, the footage was traced to Bangladeshi media outlets covering the same surrender event.

No credible Indian news outlet was found to have covered the event shown in the video. The viral video was clearly taken out of context and misrepresented to mislead viewers.
Conclusion:
The viral video claiming to show lawbreakers surrendering to the Indian Army is in fact footage from Bangladesh. The CyberPeace Research Team confirms that the video is falsely attributed to India and that the claim is misleading.
- Claim: The video shows miscreants surrendering to the Indian Army.
- Claimed on: Facebook, X, YouTube
- Fact Check: False & Misleading
Related Blogs
Introduction
The "drugs in parcel" scam has resurfaced with a new face. Cybercriminals impersonating FedEx, the police and various other authorities are the bad actors behind this renewed scam, which involves pressuring victims into sending money and divulging private information in order to escape fictitious legal repercussions.
Modus operandi
The modus operandi of this scam usually begins with a fraudster calling the victim while posing as a FedEx representative. The caller claims that a package in the victim's name has been found to contain illegal goods such as jewellery, narcotics or other contraband. The victim, by now frightened and apprehensive, is then put on a video call with another scammer posing as a police officer. This "fake officer" asks the victim to keep the matter confidential while it is being investigated.
After the call, the victim receives falsified paperwork purportedly from the CBI and RBI stating that an arrest warrant has been issued. Once the victim is entirely under their sway, the scammers claim that the victim's Aadhaar has been used to carry out the unlawful activity. They then ask the victim to submit their bank account details and Aadhaar data "for investigation". Subsequently, the scammers instruct the victim to transfer funds to a bank account for "RBI validation". Believing this to be genuine and hoping to clear their name, victims send the money to the fraudsters.
Recent incidents:
In the most recent instance of the "drug-in-parcel" scam, an IT professional in Pune was defrauded of Rs 27.9 lakh by online con artists posing as members of the Mumbai Police's Cyber Crime Cell. The victim filed a First Information Report (FIR) at the police station, stating that on November 11, 2023, the complainant received a call from a fraudster posing as a Mumbai Police Cyber Crime Cell officer. The scammer falsely claimed to have discovered illegal narcotics, an expired passport and an SBI card in a package addressed to the complainant and sent from Mumbai to Taiwan. To avoid arrest in the fabricated drug case, the fraudster coerced the complainant into providing bank account information under the guise of "verification". The victim, fearing legal consequences, transferred Rs 27,98,776 in ten online transactions to two separate bank accounts as instructed. Upon realising the deception, the complainant reported the incident to the police, leading to an investigation.
In another such incident, in October 2023, the victim received a bogus identity card online from scammers who had phoned him. Claiming to have seized narcotics in a shipment sent from Mumbai to Thailand under his name, the fraudster persuaded the victim to wire money to a bank account in order to "clear the case" and obtain a "no-objection certificate (NOC)". The fraudsters threatened to arrest the victim for mailing the narcotics package if the money was not paid.
Furthermore, in August 2023, fraudsters posing as police officers and courier company executives defrauded a 25-year-old advertising student of Rs 53 lakh. They told her that narcotics had been discovered in a package sent to Taiwan in her name and extorted money under the guise of helping her avoid legal action, including arrest. According to the police, callers posing as police officers threatened to arrest her and forced her to make up to 34 transactions totalling Rs 53.63 lakh from her and her mother's bank accounts to various bank accounts.
Measures to protect oneself from such scams
Call Verification:
- Be sure to always confirm the legitimacy of unexpected calls, particularly those purporting to be from law enforcement or delivery services. Make use of official contact information obtained from reliable sources to confirm the information presented.
Confidentiality:
- Use caution while disclosing personal information online or over the phone, particularly Aadhaar and bank account information. In general, legitimate authorities don't ask for private information in this way.
Official Documentation:
- Request official documents via the appropriate means. Make sure that any documents—such as arrest warrants or other government documents—are authentic by getting in touch with the relevant authorities.
No Haste in Transactions:
- Do not respond hastily to requests for money or quick fixes. Creating a sense of urgency is a common tactic used by scammers to coerce victims into acting quickly.
Knowledge and Awareness:
- Stay informed about common fraud schemes. Keep up with the most recent tactics employed by online fraudsters to avoid falling for new scam iterations.
Report Suspicious Activity:
- Notify the local police or other appropriate authorities of any suspicious calls or activities. Reports received in a timely manner can help investigations and shield others from falling for the same fraud.
2FA:
- Enable two-factor authentication (2FA) wherever you can to add an extra layer of protection to online accounts and transactions. This can lessen the chance of unauthorised access.
Cybersecurity Software:
- To defend against malware, phishing attempts, and other online risks, install and update reputable antivirus and anti-malware software on a regular basis.
Educate Friends and Family:
- Inform friends and family about typical scams and how to avoid falling victim to fraud. A safer online environment can be achieved through increased collective knowledge.
Be Skeptical:
- If anything looks strange or too good to be true, it most often is. Trust your instincts and verify the information before acting.
By taking these precautions and exercising caution, people may lessen their vulnerability to scams and safeguard their money and personal data from online fraudsters.
Conclusion:
Protective measures against such scams include verifying calls, maintaining confidentiality, checking official documents, transacting cautiously and staying informed. Using cybersecurity software, turning on two-factor authentication and reporting suspicious activity are also essential in stopping these types of fraud. Raising awareness and working together are key to making the internet a safer place and resisting the activities of cybercriminals.
References:
- https://indianexpress.com/article/cities/pune/pune-cybercrime-drug-in-parcel-cyber-scam-it-duping-9058298/#:~:text=In%20August%20this%20year%2C%20a,avoiding%20legal%20action%20including%20arrest.
- https://www.the420.in/pune-it-professional-duped-of-rs-27-9-lakh-in-drug-in-parcel-scam/
- https://www.newindianexpress.com/states/tamil-nadu/2023/oct/16/the-return-of-drugs-in-parcel-scam-2624323.html
- https://timesofindia.indiatimes.com/city/hyderabad/2-techies-fall-prey-to-drug-parcel-scam/articleshow/102786234.cms
Introduction
Social media has emerged as a leading source of communication and information, and its relevance during natural disasters cannot be ignored: governments and disaster-relief organisations rely on it as a tool for instantly disseminating aid- and relief-related resources and communications. During disasters, social media has become a primary source through which affected populations access information on relief resources, and community forums offering aid as well as official government channels have enabled efficient and timely administration of relief initiatives.
However, given the nature of social media, the risk of misinformation during natural disasters has also emerged as a primary concern, one that severely hampers aid administration. Disaster-related disinformation networks run sensationalised influence campaigns against communities at their most vulnerable. Victims seeking reliable resources during natural calamities may instead encounter such campaigns and experience delayed access, or no access at all, to necessary healthcare, significantly impacting their recovery and survival. This delay can lead to worsening medical conditions and an increased death toll among those affected by the disaster. Victims may also lack clear information on the appropriate agencies to seek assistance from, causing confusion and delays in receiving help.
Misinformation Threat Landscape during Natural Disasters
During the 2018 floods in Kerala, a fake video about water leakage from the Mullaperiyar Dam created panic among citizens and negatively impacted rescue operations. Similarly, in 2017, reports emerged claiming that Hurricane Irma had displaced sharks onto a Florida highway; similar stories, accompanied by the same image, resurfaced following Hurricanes Harvey and Florence. Beyond the immediate panic such false reports cause, a disaster-affected nation may face international criticism and fail to receive necessary support due to its perceived inability to manage the crisis effectively. This lack of confidence from the global community can further exacerbate the challenges the nation faces, leaving it more vulnerable and isolated in its time of need.
The spread of misinformation through social media severely hinders the administration of aid and relief operations during natural disasters: it undermines first responders' efforts to counteract and reduce the spread of rumours and false information, and it erodes public trust in government, media and non-governmental organisations (NGOs), which are often the first point of contact for both victims and officials due to their familiarity with the region and the community. In Moldova, it was noted that foreign influence operations exploited an ongoing drought to create divisions between the semi-autonomous regions of Transnistria and Gagauzia and the central government in Chisinau, with news coverage critical of the government leveraging economic and energy insecurities to incite civil unrest in an already unstable region. Additionally, first responders may struggle to locate victims and assist them to safety, complicating rescue operations. The inability to efficiently find and evacuate those in need can result in prolonged exposure to dangerous conditions and a higher risk of injury or death.
International aid from other countries could also be impeded, affecting the overall relief effort; without timely and coordinated support from the global community, the disaster response may be insufficient, leaving many needs unmet. Misinformation can likewise delay military assistance, reducing the effectiveness of rescue and relief operations. Military assistance often plays a crucial role in disaster response, and any delay can hinder efforts to provide immediate and large-scale aid.
Misinformation also leads to relief resources being allocated to unaffected areas, which in turn affects aid for regions in actual need. Following the April 2015 earthquake in Nepal, a Facebook post claimed that 300 houses in Dhading needed aid. Shared over 1,000 times, it reached around 350,000 people within 48 hours. The originator aimed to seek help for Ward #4's villagers via social media, and since the average Facebook user has around 350 contacts, the message was widely viewed. However, the need had already been reported a week earlier on quakemap.org, a crisis-mapping database managed by Kathmandu Living Labs. Helping Hands, a humanitarian group, was notified on May 7, and by May 11 Ward #4 had received essential food and shelter. The re-sharing and sensationalisation of outdated information could have wasted relief efforts, since critical resources would have been redirected to a region that had already been secured.
Policy Recommendations
Perhaps the most important step in combating misinformation during natural disasters is increasing public education and ensuring the rapid, widespread dissemination of early warnings. This was best witnessed in November 1970, when a tropical cyclone combined with a high tide struck southeastern Bangladesh, leaving more than 300,000 people dead and 1.3 million homeless. In May 1985, when a comparable cyclone and storm surge hit the same area, local dissemination of disaster warnings was much improved and people were better prepared to respond. The loss of life, while still high at about 10,000, was roughly 3% of that in 1970. On a similar note, when a devastating cyclone struck the same area of Bangladesh in May 1994, fewer than 1,000 people died. In India, the 1977 cyclone in Andhra Pradesh killed 10,000 people, but a similar storm in the same area 13 years later killed only 910. The dramatic difference in mortality was owed to a new early-warning system connected with radio stations that alerted people in low-lying areas.
Additionally, location-based filtering of social media during disasters is considered another best practice for curbing misinformation. However, agencies should be aware that this method may miss local information from devices without geolocation enabled: a 2012 Georgia Tech study found that less than 1.4 percent of Twitter content is geolocated, and a study of Hurricane Sandy data by Humanity Road and Arizona State University indicated a significant decline in geolocation data during weather events.
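To make that limitation concrete, the sketch below filters a handful of mock posts by a geographic bounding box: any post without latitude and longitude is dropped, which is precisely how genuinely local reports from devices with geolocation disabled get missed. The post structure, field names and coordinates are illustrative assumptions, not any platform's real API.

```python
# A minimal sketch of location-based filtering of social media posts during a disaster.
# The Post structure and bounding box are illustrative assumptions, not a real platform API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    lat: Optional[float] = None  # many posts carry no geolocation at all
    lon: Optional[float] = None

def in_disaster_area(post: Post, south: float, north: float,
                     west: float, east: float) -> bool:
    """Keep only geotagged posts that fall inside the monitoring bounding box."""
    if post.lat is None or post.lon is None:
        return False  # ungeotagged posts are silently missed: the key limitation noted above
    return south <= post.lat <= north and west <= post.lon <= east

posts = [
    Post("Road to the relief camp is flooded", lat=9.59, lon=76.52),
    Post("Need drinking water near the school"),             # no geolocation, so it is filtered out
    Post("Unrelated post from far away", lat=48.85, lon=2.35),
]

# Rough bounding box around the affected district (illustrative coordinates).
local = [p for p in posts if in_disaster_area(p, south=9.0, north=10.5, west=76.0, east=77.5)]
print([p.text for p in local])  # only the first, geotagged, in-area post survives
```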
Alternatively, agencies should publish frequent updates to promote transparency and control the message. In emergency management and disaster recovery, digital volunteers (trusted agents who provide online support) can assist overwhelmed on-site personnel by managing the vast volume of social media data. Trained digital volunteers help direct affected individuals to critical resources and disseminate reliable information.
Enhancing the quality of communication requires double-verifying information to eliminate ambiguity and reduce the impact of misinformation, rumours and false information. This approach helps prevent alert fatigue and "cry wolf" scenarios by ensuring that only accurate, relevant information is disseminated. Prioritising ground truth over assumptions and swiftly releasing verified information, or at least acknowledging the situation, can bolster an agency's credibility and allow it to collaborate effectively with truth amplifiers. Prebunking and debunking are also effective ways to counter misinformation and build the cognitive defences needed to recognise red flags. Additionally, evaluating the relevance of various social media information is crucial for maintaining clear and effective communication.
References
- https://www.nature.com/articles/s41598-023-40399-9#:~:text=Moreover%2C%20misinformation%20can%20create%20unnecessary,impacting%20the%20rescue%20operations29.
- https://www.redcross.ca/blog/2023/5/why-misinformation-is-dangerous-especially-during-disasters
- https://www.soas.ac.uk/about/blog/disinformation-during-natural-disasters-emerging-vulnerability
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
Introduction
Artificial Intelligence (AI)-driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern warfare and has simultaneously impacted many other spheres of a technology-driven world. Nations often prioritise defence for significant investment, supporting its growth and modernisation, and AI has become a prime area of investment and development for technological superiority in defence forces. India's focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is "autonomy": the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Ethical concerns around AWS have been raised as the most prominent issue by many states, international organisations, civil society groups and even many distinguished figures.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the central ethical dilemma surrounding AWS. A major concern is the lack of human oversight, which raises questions about accountability: what if an AWS malfunctions or violates international law, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are also critical concerns, as targets become reduced to mere data points. The impact on operators' moral judgment and empathy is troubling as well, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes at simulating human emotions like compassion, empathy or altruism; the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is the use of a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system, which would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a "human-in-the-loop" system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples would include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue because of the ethical, legal, and security concerns it raises. Many ongoing efforts at the international level seek to regulate such weapons. One such example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international law, such as the Geneva Conventions, offers legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as different nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare also grows, with lethal decisions potentially being made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law, and setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/