#FactCheck: Fake Claim That the US Used Indian Airspace to Attack Iran
Executive Summary:
An online claim alleging that U.S. bombers used Indian airspace to strike Iran has been widely circulated, particularly on Pakistani social media. However, official briefings from the U.S. Department of Defense and visuals shared by the Pentagon confirm that the bombers flew over Lebanon, Syria, and Iraq. Indian authorities have also refuted the claim, and the Press Information Bureau (PIB) has issued a fact-check dismissing it as false. The available evidence clearly indicates that Indian airspace was not involved in the operation.
Claim:
Various Pakistani social media users [archived here and here] have alleged that U.S. bombers used Indian airspace to carry out airstrikes on Iran. One widely circulated post claimed, “CONFIRMED: Indian airspace was used by U.S. forces to strike Iran. New Delhi’s quiet complicity now places it on the wrong side of history. Iran will not forget.”

Fact Check:
Contrary to viral social media claims, official details from U.S. authorities confirm that American B-2 bombers used a Middle Eastern flight path, specifically flying over Lebanon, Syria, and Iraq, to reach Iran during Operation Midnight Hammer.

The Pentagon released visuals and unclassified briefings showing this route, with Joint Chiefs of Staff Chair Gen. Dan Caine explaining that the bombers coordinated with support aircraft over the Middle East in a highly synchronized operation.

Additionally, Indian authorities have denied any involvement, and India’s Press Information Bureau (PIB) issued a fact-check debunking the false narrative that Indian airspace was used.

Conclusion:
In conclusion, official U.S. briefings and visuals confirm that B-2 bombers flew over the Middle East, not India, to strike Iran. Both the Pentagon and Indian authorities have denied any use of Indian airspace, and the Press Information Bureau has labeled the viral claims as false.
- Claim: The US used Indian airspace to attack Iran
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
A photo circulating on social media, ostensibly showing a phalanx of modified tractors, is being misrepresented as part of the 'Delhi Chalo' farmers' protest narrative. Amid the recent swirl of misinformation surrounding the protest, the image has been making the rounds on social media platforms, falsely tethered to the ongoing demonstrations. Accompanied by a headline suggesting a mechanical metamorphosis to resist police barricades, it was allegedly published by a news agency. However, beneath the surface of this viral phenomenon lies a more complex and fabricated reality.
The Movement
The 'Delhi Chalo' movement, a clarion call that resonated with thousands of farmers from the fertile plains of Punjab, the verdant fields of Haryana, and the sprawling expanses of Uttar Pradesh, has been a testament to the agrarian community's demand for assured crop prices and legal guarantees for the Minimum Support Price (MSP). The protest, which has seen the fortification of borders and the chaos at the Punjab-Haryana border on February 13, 2024, has become a crucible for the farmers' unyielding spirit.
Yet, amidst this backdrop of civil demonstration and discourse, a nefarious narrative of misinformation has taken root. The viral image, which has been shared with the fervour of wildfire, was accompanied by a screenshot of an article allegedly published by the news agency. This article, dated February 11, 2024, quoted an anonymous official who claimed that intelligence agencies had alerted the police to the protesters' plans to outfit tractors with hydraulic tools. The implication was clear: these machines had been transformed into battering rams against the bulwark of law enforcement.
The Pursuit of Truth
However, the India TV Fact Check team, in their relentless pursuit of truth, unearthed that the viral photo of these so-called modified tractors is nothing but a chimerical creation, a figment of artificial intelligence. Visual discrepancies betrayed its AI-generated nature.
This is not the first time that misinformation has loomed over the farmers' protest. Previous instances, including a viral video of a modified tractor, have been debunked by the same fact-checking team. These efforts are a bulwark against the tide of false narratives that seek to muddy the waters of public understanding.
The claim that the photo depicted modified tractors intended for use in the ‘Delhi Chalo’ farmers' protest rally in Delhi on February 13, 2024, was a mirage.
The Fact Check
OpIndia, in their article, clarified that the photo used was a representative image created by AI and not a real photograph. To further scrutinize this viral photo, the HIVE AI detector tool was employed, indicating a 99.4% likelihood of the image being AI-generated. Thus, the claim made in the post was misleading.
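For readers curious how such screening can be automated, below is a minimal Python sketch of submitting an image to an AI-content detection service over HTTP. The endpoint URL, field names, and response shape are hypothetical placeholders rather than Hive's actual API; the vendor's own documentation defines the real interface.

```python
# Minimal sketch: screen an image with an AI-content detector over HTTP.
# The URL, field names, and response keys below are hypothetical
# placeholders, not Hive's real API; consult the vendor's docs.
import requests

def ai_likelihood(image_path: str, api_key: str) -> float:
    """Return the detector's estimated probability that the image is AI-generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://detector.example.com/v1/classify",  # hypothetical endpoint
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Hypothetical response shape: {"ai_generated_probability": 0.994}
    return resp.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = ai_likelihood("viral_tractors.jpg", "YOUR_API_KEY")
    print(f"Likelihood AI-generated: {score:.1%}")  # e.g. 99.4%
```

A score near 1.0, like the 99.4% figure reported for the tractor photo, is a strong signal, but fact-checkers typically pair automated detection with manual inspection of visual discrepancies.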
The viral photo claiming that farmers had modified their tractors to avoid tear gas shells and remove barricades put up by the police during the rally was a digital illusion. The internet has become a fertile ground for the rapid spread of misinformation, reaching millions in an instant. Social media, with its complex algorithms, amplifies this spread, as any interaction, even those intended to debunk false information, inadvertently increases its reach. This phenomenon is exacerbated by 'echo chambers,' where users are exposed to a homogenous stream of content that reinforces their pre-existing beliefs, making it difficult to encounter and consider alternative perspectives.
Conclusion
The viral image depicting modified tractors for the ‘Delhi Chalo’ farmers' protest rally was a digital fabrication, a testament to the power of AI in creating convincing yet false narratives. As we navigate the labyrinth of information in the digital era, it is imperative to remain vigilant, to question the veracity of what we see and hear, and to rely on the diligent work of fact-checkers in discerning the truth. The mirage of modified machines serves as a stark reminder of the potency of misinformation and the importance of critical thinking in the age of artificial intelligence.
References
- https://www.indiatvnews.com/fact-check/fact-check-ai-generated-tractor-photo-misrepresented-delhi-chalo-farmers-protest-narrative-msp-police-barricades-punjab-haryana-uttar-pradesh-2024-02-15-917010
- https://factly.in/this-viral-image-depicting-modified-tractors-for-the-delhi-chalo-farmers-protest-rally-is-created-using-ai/

Introduction
As technology develops, scams develop with it, and AI voice cloning schemes are one such issue that has recently come to light. Scammers have embraced AI, and their methods and plans for deceiving people have altered accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. This article looks at the dangers and risks associated with AI voice cloning fraud, how scammers operate, and how one might protect oneself.
What is Deepfake?
“Deepfake” refers to fake or altered audio, video, and film produced with artificial intelligence (AI) that can pass for the real thing. The words “deep learning” and “fake” are combined to get the name “deepfake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something they never did, in audio or visual form; deepfakes impersonating the American President are among the best-known examples. Voice impersonation technology can be used maliciously, such as in deepfake voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities and mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations. This is particularly risky in legal processes, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, and every organisation and the general public must be informed of this risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people, giving rise to a new scam: the “deepfake voice scam”. To trick the victim into handing over money or private information, the con artist assumes another person’s identity using a faked voice. Here are some guidelines to help you spot such scams and keep away from them:
- Steer clear of telemarketing calls: One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: If someone phones you claiming to be a person you know, pay special attention to their voice. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t seem right can signal a deepfake voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, verify who is calling. When in doubt, ask for their name, job title, and employer, then do some research to be sure they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies or organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scammer.
- Report any suspicious activities: Inform the appropriate authorities if you think you’ve fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it may also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions to protect themselves as the technology develops and deepfake schemes become harder to detect and prevent. Additionally, ongoing research is needed to develop efficient techniques to identify and control the risks related to this technology. We must deploy AI appropriately and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.

The World Wide Web was created as a portal for communication, to connect people from far away. It started with electronic mail, which gave way to instant messaging and let people converse and interact from afar in real time. Now, the new paradigm is the Internet of Things: machines communicating with one another. One can wear a gadget that unlocks the front door upon arrival at home and messages the air conditioner so that it switches on. This is IoT.
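To make the “message the air conditioner” idea concrete, here is a minimal Python sketch using MQTT, a lightweight publish/subscribe protocol widely used in IoT. The article itself does not name a protocol, so this is purely illustrative, and the broker address and topic are hypothetical stand-ins for whatever a real smart-home hub would define.

```python
# Minimal sketch: a wearable-side script announcing "I'm home" over MQTT
# so a subscribed air conditioner can switch itself on. Requires the
# paho-mqtt package; the broker address and topic are hypothetical.
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x needs mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.home.local", 1883)  # hypothetical home hub broker
client.loop_start()  # background network thread to push the message out

info = client.publish("home/aircon/power", payload="on", qos=1)
info.wait_for_publish()  # block until the command has actually been sent

client.loop_stop()
client.disconnect()
```

On the appliance side, a matching client would subscribe to the same topic and toggle the compressor when a message arrives; the broker in the middle is what lets devices talk without knowing each other’s addresses.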
WHAT EXACTLY IS IoT?
The term ‘Internet of Things’ was coined in 1999 by Kevin Ashton, a computer scientist who put Radio Frequency Identification (RFID) chips on products in order to track them in the supply chain while he worked at Procter & Gamble (P&G). After the launch of the iPhone in 2007, there were soon more connected devices than people on the planet.
Fast forward to today and we live in a more connected world than ever. So much so that even our handheld devices and household appliances can now connect and communicate through a vast network that has been built so that data can be transferred and received between devices. There are currently more IoT devices than users in the world, and according to the WEF’s report on the State of the Connected World, by 2025 there will be more than 40 billion such devices that will record data so it can be analyzed.
IoT finds use in many parts of our lives. It has helped businesses streamline their operations, reduce costs, and improve productivity. IoT also helped during the Covid-19 pandemic, with devices that could help with contact tracing and wearables that could be used for health monitoring. All of these devices are able to gather, store and share data so that it can be analyzed. The information is gathered according to rules set by the people who build these systems.
APPLICATION OF IoT
IoT is used by both consumers and the industry.
Some widely used examples of CIoT (Consumer IoT) are wearables like health and fitness trackers, smart rings with near-field communication (NFC), and smartwatches, which gather a lot of personal data. Smart clothing with sensors on it can monitor the wearer’s vital signs, and there is even smart jewelry that can monitor sleeping patterns and stress levels.
With the advent of virtual and augmented reality, the gaming industry can now make the experience even more immersive and engrossing. Smart glasses and headsets are used, along with armbands fitted with sensors that can detect the movement of arms and replicate the movement in the game.
At home, there are smart TVs, security cameras, smart bulbs, home control devices, and other IoT-enabled ‘smart’ appliances like coffee makers that can be turned on through an app or scheduled for a particular time in the morning to double as an alarm. There are also voice-command assistants like Alexa and Siri, which work with software written by manufacturers that can understand simple instructions.
Industrial IoT (IIoT) mainly uses connected machines for the purposes of synchronization, efficiency, and cost-cutting. For example, smart factories gather and analyze data as the work is being done. Sensors are also used in agriculture to check soil moisture levels, and these then automatically run the irrigation system without the need for human intervention.
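As an illustration of that last point, here is a minimal Python sketch of the sensor-driven irrigation loop described above. The read_moisture() and set_pump() helpers are hypothetical stand-ins for real hardware drivers (an ADC read and a relay or GPIO toggle, for instance), and the threshold is an arbitrary example value.

```python
# Minimal sketch: soil-moisture-driven irrigation with no human in the loop.
# Hardware access is stubbed out; a real controller would talk to an ADC
# and a pump relay instead.
import random
import time

MOISTURE_THRESHOLD = 30.0  # percent; below this the soil counts as dry (example value)

def read_moisture() -> float:
    """Hypothetical sensor read, stubbed with a random value for illustration."""
    return random.uniform(10.0, 60.0)

def set_pump(on: bool) -> None:
    """Hypothetical actuator call; a real device might toggle a GPIO pin."""
    print("pump", "ON" if on else "OFF")

for _ in range(3):  # a real controller would loop indefinitely
    set_pump(read_moisture() < MOISTURE_THRESHOLD)
    time.sleep(1)  # real deployments would poll far less often
```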
Statistics
- The IoT device market is poised to reach $1.4 trillion by 2027, according to Fortune Business Insights.
- The number of cellular IoT connections is expected to reach 3.5 billion by 2023. (Forbes)
- The amount of data generated by IoT devices is expected to reach 73.1 ZB (zettabytes) by 2025.
- 94% of retailers agree that the benefits of implementing IoT outweigh the risk.
- 55% of companies believe that 3rd party IoT providers should have to comply with IoT security and privacy regulations.
- 53% of all users acknowledge that wearable devices will be vulnerable to data breaches and viruses.
- Companies could invest up to 15 trillion dollars in IoT by 2025 (Gigabit)
CONCERNS AND SOLUTIONS
- Two of the biggest concerns with IoT devices are the privacy of users and the devices being secure in order to prevent attacks by bad actors. This makes knowledge of how these things work absolutely imperative.
- It is worth noting that these devices typically pair through an app with a central hub, like a smartphone, which acts as a gateway. If a hacker were to target an IoT device, the smartphone it pairs with could be compromised as well.
- With technology like smart television sets that have cameras and microphones, the major concern is that hackers could take over the functioning of the television, as these are often not adequately secured by the manufacturer.
- A hacker could control the camera and cyberstalk the victim, so it is very important to become familiar with the features of a device and ensure it is well protected from unauthorized usage. Even simple measures help, like keeping the camera covered when it is not being used.
- There is also the concern that, since IoT devices gather and share data without human intervention, they could be transmitting data that the user does not want to share. This is true of health trackers: users who wear heart and blood pressure monitors may have their data sent to an insurance company, which may then decide to raise the premium on their life insurance based on the data it receives.
- IoT devices often keep functioning as normal even if they have been compromised. Most devices do not log an attack or alert the user, and changes like higher power or bandwidth usage go unnoticed after the attack, so it is very important to make sure the device is properly protected (a simple traffic-monitoring sketch follows this list).
- It is also important to keep the software of the device updated as vulnerabilities are found in the code and fixes are provided by the manufacturer. Some IoT devices, however, lack the capability to be patched and are therefore permanently ‘at risk’.
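One low-effort way to notice the “higher bandwidth usage” symptom mentioned above is to watch how much traffic each device on the home network generates. Below is a minimal Python sketch using the scapy packet library; it is an illustrative baseline check, not a substitute for proper device security, and it needs packet-capture privileges on the machine running it.

```python
# Minimal sketch: tally per-device network traffic so an unusually chatty
# IoT gadget stands out. Requires scapy and packet-capture privileges.
from collections import Counter

from scapy.all import Ether, sniff

bytes_by_mac: Counter = Counter()

def tally(pkt) -> None:
    """Credit each captured frame's size to its source MAC address."""
    if Ether in pkt:
        bytes_by_mac[pkt[Ether].src] += len(pkt)

# Capture for 60 seconds, then list the heaviest talkers; a device near
# the top that is normally quiet deserves a closer look.
sniff(prn=tally, store=False, timeout=60)
for mac, nbytes in bytes_by_mac.most_common(5):
    print(f"{mac}: {nbytes} bytes")
```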
CONCLUSION
We now inhabit a world made up of nodes that talk to each other and get things done. Users can harmonize their devices so that everything runs like a tandem bike, completely in sync with all other parts. But while we make use of all the benefits, it is also very important to understand what we are using, how it functions, and how to tackle issues should they come up. This matters all the more because once people get used to IoT, it will be that much more difficult to give up the comfort and ease that these systems provide, so it makes sense to be prepared for any eventuality. A lot of the time, good and sensible usage alone can keep devices safe and services intact. But users should stay alert to any issues, because forewarned is forearmed.