#FactCheck: Old images of US sailors falsely linked to ongoing Iran tensions
Executive Summary
After Donald Trump said that US Navy ships would soon begin escorting tankers through the Strait of Hormuz, several old images resurfaced on social media with claims that they show American sailors recently captured by Iran amid the ongoing Middle East tensions. Research by CyberPeace found that the viral posts are misleading. The images being circulated are nearly a decade old and have no connection to the ongoing situation in the Middle East.
Claim:
Posts circulating on Facebook alleged that Iran had captured 10 US Navy personnel — nine men and one woman — and detained them at a military base on Farsi Island. The caption further claimed that the incident was reported by Iranian official Ali Larijani and denied by Donald Trump.
https://www.facebook.com/photo/?fbid=1381610870661566&set=pcb.1381611363994850

Fact Check
A reverse image search revealed that the viral images are not recent. They were published as early as January 13, 2016, by ABC News in a report titled “Iran Releases 10 Navy Sailors Held After Drifting Into Iranian Waters.”

Further checks showed that the same images were distributed by AFP, with credits to Sepah News, the media wing of Iran’s Revolutionary Guards.

Context
The images relate to a 2016 incident in which two US Navy patrol boats accidentally entered Iranian waters. The crew was detained and taken to Farsi Island. Iran later released the sailors after determining that the intrusion was unintentional and that there was no hostile intent.
Conclusion
The viral posts are misleading. The images being shared are nearly a decade old and unrelated to the ongoing situation in the Middle East.
Related Blogs

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, particularly in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have built huge user bases by hosting user-generated content, that is, anything a visitor posts to a website or social media page.
The legal and ethical landscape surrounding misinformation is dependent on creating a fine balance between freedom of speech and expression while protecting public interests, such as truthfulness and social stability. This blog is focused on examining the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The deliberate spread of false information is closely interwoven with the analysis of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news benefit from social networks that let them reach more people, and from technology that speeds up distribution and makes it harder to distinguish fabricated stories from genuine news.
Multiple challenges emerge that are unique to social media platforms regulating misinformation while balancing freedom of speech and expression and user engagement. The scale at which content is created and published, the different regulatory standards, and moderating misinformation without infringing on freedom of expression complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, which shape public opinion, electoral outcomes, and market behaviours, underscore the urgent need for effective regulation; the cost of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. This principle is embodied in Section 230 of the US Communications Decency Act (CDA) and Section 79 of India's Information Technology Act. These provisions have played a pivotal role in facilitating the growth and development of the internet, though the legal frameworks governing misinformation around the world are still in their nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It further allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulation has been observed in recent times. An example is the enactment of the European Union's Digital Services Act of 2022. The Act requires companies with at least 45 million monthly users to create systems to control the spread of misinformation, hate speech and terrorist propaganda, among other things. If they fail to comply, they risk penalties of up to 6% of global annual revenue or even a ban in EU countries.
Challenges and Risks for Platforms
Platforms face multiple challenges and risks surrounding user-generated misinformation.
- Moderating user-generated misinformation is a major challenge, primarily because of the sheer volume of content and the speed at which it is generated. Failures expose platforms to legal liability, operational costs and reputational risks.
- Platforms face potential backlash for both over-moderation and under-moderation: the former can be seen as censorship, while the latter can be seen as insufficient governance that fails to protect users' rights, including their privacy.
- Another challenge is technical: AI and algorithmic moderation still struggle to detect nuanced misinformation. This underscores the need for human oversight, particularly as AI-generated content adds to the volume of misinformation to be sifted through.
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
A balance between protecting free speech and safeguarding public interest is needed to navigate the legal risks that user-generated misinformation poses. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm

Executive Summary
A video clip bearing the logo of News18 is being widely shared on social media with the claim that a serving Indian Army brigadier and his son were attacked in Delhi by an RSS-supporting mob for criticising the government over “Operation Sindoor.” The clip features an anchor allegedly explaining the motive behind the assault. However, research by the CyberPeace Research Wing found the claim to be false. The viral video has been digitally manipulated, with its audio altered to include misleading information.
Claim
An X user (@Mohammad776157) shared a video clip from Network18 on April 13, claiming that a serving Indian Army brigadier and his son were attacked in Delhi by an RSS-supporting mob for criticising the government over “Operation Sindoor.”
- https://x.com/Mohammad776157/status/2043691737609347166?s=20
- https://archive.ph/5EpbJ

To verify the claim, we extracted multiple keyframes from the viral video using the InVid tool and conducted reverse image searches via Google Lens. The same clip was found circulating across several social media platforms with similar claims.
- https://www.facebook.com/reel/2397972117364665
- https://www.instagram.com/reels/DXE4FFdjcnq/
- https://archive.ph/hjG3b
- https://archive.ph/9IkTY
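Reverse image search services generally match pictures by perceptual similarity rather than exact bytes. Purely as an illustration of that idea, the sketch below implements a minimal average-hash (aHash) comparison in plain Python; it is not the algorithm used by Google Lens or InVid, and the image data is fabricated for the example.

```python
# Minimal average-hash (aHash) sketch: reduce a grayscale image to an
# 8x8 grid, threshold each cell at the mean, and compare the resulting
# bit strings by Hamming distance. Illustrative only -- real
# reverse-image-search systems use far more robust features.

def average_hash(pixels, hash_size=8):
    """pixels: 2D list of grayscale values (0-255). Returns a 64-bit string."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    blocks = []
    for by in range(hash_size):
        for bx in range(hash_size):
            # Average each hash_size x hash_size block of the image.
            total = sum(pixels[y][x]
                        for y in range(by * bh, (by + 1) * bh)
                        for x in range(bx * bw, (bx + 1) * bw))
            blocks.append(total / (bh * bw))
    mean = sum(blocks) / len(blocks)
    return "".join("1" if b > mean else "0" for b in blocks)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hash strings."""
    return sum(a != b for a, b in zip(h1, h2))

# A contrived 64x64 "image": a bright square on a dark background.
img = [[200 if 16 <= y < 48 and 16 <= x < 48 else 30
        for x in range(64)] for y in range(64)]
# A re-encoded copy, uniformly brightened as recompression might do.
copy = [[min(255, p + 20) for p in row] for row in img]

print(hamming(average_hash(img), average_hash(copy)))  # prints 0: near-duplicates match
```

Because the hash captures only coarse brightness structure, the brightened copy produces an identical hash, which is why resurfaced images can be traced back to their 2016 originals even after recompression or minor edits.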
Fact Check
Since the video carried the News18 logo, we examined the outlet’s official social media handles. We found the original video on its X account, where the visuals matched the viral clip. However, a detailed analysis of the original footage showed that the anchor never stated that the brigadier and his son were attacked for criticising the government over “Operation Sindoor.”
In the authentic version, the anchor reported that the assault took place in Delhi’s Vasant Enclave after the brigadier objected to two individuals consuming alcohol inside a car parked outside his residence. This clearly indicates that the audio in the viral clip was tampered with to insert a false narrative.

For further verification, we extracted the audio segment from the viral clip and analysed it using Resemble AI. The tool indicated that the portion describing the motive behind the attack had been digitally manipulated.
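Resemble AI's detector is proprietary, so nothing about its internals is shown here. Purely to illustrate the general idea of flagging an anomalous audio segment, the toy sketch below compares short-window RMS energy across a synthetic waveform and flags windows that deviate sharply from the rest; the signal, window size and threshold are all contrived for the example, and real deepfake-audio detectors rely on learned features rather than simple energy statistics.

```python
import math

# Toy anomaly check: compute RMS energy over fixed-size windows and flag
# any window whose energy differs from the median by a large factor.
# Illustrative only -- not how commercial audio-forensics tools work.

def rms(window):
    return math.sqrt(sum(s * s for s in window) / len(window))

def flag_anomalies(samples, window_size=100, factor=3.0):
    windows = [samples[i:i + window_size]
               for i in range(0, len(samples) - window_size + 1, window_size)]
    energies = [rms(w) for w in windows]
    median = sorted(energies)[len(energies) // 2]
    return [i for i, e in enumerate(energies)
            if e > median * factor or e < median / factor]

# Synthetic "audio": a quiet 440 Hz tone with one loud spliced-in segment.
tone = [0.1 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(1000)]
spliced = tone[:400] + [5 * s for s in tone[400:500]] + tone[500:]

print(flag_anomalies(spliced))  # prints [4]: only the spliced window stands out
```

The point of the sketch is simply that a tampered segment often differs statistically from its surroundings; production tools apply the same intuition with far more sophisticated acoustic features.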

Conclusion
The viral claim is false. The video has been altered by modifying its audio to mislead viewers. In reality, the assault was not related to “Operation Sindoor” but occurred after the brigadier objected to public drinking near his residence.

Introduction
US President Biden has taken a significant step by signing a key executive order to manage the risks posed by artificial intelligence (AI). The order, signed on 30 October 2023, sets rules for a rapidly growing technology that holds great potential but also poses risks. It requires the developers of the most powerful AI models to share their safety test results with the government before releasing their products to the public. It also calls for standards for the ethical use of AI and for detecting and labelling AI-generated content. As AI rapidly advances, it poses risks such as displacing human workers, spreading misinformation and stealing people's data. The White House has made clear that this is not just America's problem: the US needs to work with the world to set standards and ensure the responsible use of AI. It is also urging Congress to pass comprehensive privacy legislation. The order includes new safety guidelines for AI developers, standards for disclosing AI-generated content and requirements for federal agencies that use AI. The White House describes it as the strongest action any government has taken on AI safety and security. Separately, India has reported its biggest-ever data breach, in which the data of 815 million Indians was leaked from the Indian Council of Medical Research (ICMR), the country's premier medical research institution.
Key highlights of the presidential order
The presidential order requires developers to share safety test results. It focuses on developing standards, tools and tests to ensure safe AI. It aims to protect Americans from AI-enabled fraud, safeguard their privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, guard against the risk of AI being used to engineer dangerous material, and provide guidelines for detecting and labelling AI-generated content, establishing overall standards for AI safety and security.
Online content authentication and labelling
The Biden administration has asked the Department of Commerce to set guidelines for authenticating content coming from the government, so that the American people can trust official government documents. Alongside content authentication, the order addresses the labelling of AI-generated content, making it possible to distinguish an authentic piece of content from something that has been manipulated or generated using AI.
ICMR Breach
On 31 October 2023, an American intelligence and cybersecurity agency flagged the biggest-ever Indian data breach, putting the data of 81.5 crore (815 million) Indians at risk of making its way to dark-web markets. The agency reported that a threat actor known as 'pwn001' shared a thread on Breach Forums, which bills itself as the 'premier Databreach discussion and leaks forum', claiming a breach affecting 81.5 crore Indians. ICMR has not yet issued an official statement, but the government has been informed that the Central Bureau of Investigation (CBI) will take on the investigation and apprehend the cybercriminals behind the attack. In a post on X (formerly Twitter), 'pwn001' claimed that Aadhaar and passport information, along with personal data such as names, phone numbers and addresses, had been compromised, and that the data was extracted from the COVID-19 test records of citizens registered with ICMR. This poses a serious threat to Indian netizens, exposing them to cybercrime from anywhere in the world.
Conclusion
The US presidential order on AI is a move towards making artificial intelligence safe and secure. This is a major step by the Biden administration, one that stands to protect both Americans and the world from the considerable dangers of AI. The order requires developing standards, tools and tests to ensure AI safety. The US administration will work with allies and global partners, including India, to develop a strong international framework to govern the development and use of AI and ensure its responsible use. With the passing of legislation such as the Digital Personal Data Protection Act, 2023, it is pertinent that the Indian government works towards creating precautionary and preventive measures to protect Indian data. As cyber laws continue to evolve, we need to keep an eye on emerging technologies and update our digital routines and hygiene to stay safe and secure.
References
- https://m.dailyhunt.in/news/india/english/lokmattimes+english-epaper-lokmaten/biden+signs+landmark+executive+order+to+manage+ai+risks-newsid-n551950866?sm=Y
- https://www.hindustantimes.com/technology/in-indias-biggest-data-breach-personal-information-of-81-5-crore-people-leaked-101698719306335-amp.html?utm_campaign=fullarticle&utm_medium=referral&utm_source=inshorts