#FactCheck: Viral AI video claims Iran has destroyed Israel in an airstrike
Executive Summary:
A video circulating on social media claims to show the aftermath of Iran's missile strikes on Israel. The footage depicts destruction, damaged infrastructure, and panic among civilians. After conducting digital verification, visual inspection, and frame-by-frame analysis, we have determined that the video is fake: it consists of AI-generated clips and is not related to any real incident.

Claim:
The viral video claims that a missile strike launched by Iran resulted in the destruction of parts of Israel. The footage is presented as current and depicts significant destruction of buildings and widespread chaos in the streets.

FACT CHECK:
We examined the viral video to determine whether it was AI-generated. We broke the video into individual still frames, and on close examination several frames showed odd-shaped visual features, abnormal body proportions, and flickering movements that do not occur in real footage. We then ran several still frames through reverse image search tools to check whether they had appeared before. The results revealed that several clips in the video had surfaced previously, in separate and unrelated contexts, indicating that they are neither recent nor original.
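For readers who want to reproduce the first step of this process, the sketch below shows one common way to split a video into still frames using Python and OpenCV. The filename and the one-frame-per-second sampling interval are illustrative assumptions, not details of the actual viral clip.

```python
# Minimal sketch: extract still frames from a video for manual inspection
# and reverse image search. Requires OpenCV (pip install opencv-python).
# "viral_clip.mp4" and the sampling interval are hypothetical.
import os
import cv2

VIDEO_PATH = "viral_clip.mp4"  # hypothetical filename
OUTPUT_DIR = "frames"

os.makedirs(OUTPUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
step = max(1, int(fps))                  # roughly one frame per second

frame_idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        cv2.imwrite(os.path.join(OUTPUT_DIR, f"frame_{frame_idx:06d}.jpg"), frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} frames to {OUTPUT_DIR}/")
```

Each saved frame can then be inspected closely for visual anomalies or uploaded to a reverse image search service to check whether it has appeared online before.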

While examining the Instagram profile that posted the video, we noticed that the account frequently shares visually dramatic content that appears digitally created. Many earlier posts from the same page include unrealistic scenes, such as wrecked aircraft in desolate areas or buildings collapsing in unnatural ways. In the current video, for instance, the fighter jets shown have extra wings, a configuration that is not technically or aerodynamically possible. The profile’s bio, which reads "Resistance of Artificial Intelligence," suggests that the page intentionally focuses on sharing AI-generated or fictional content.

We also ran the viral post through Tenorshare.AI for deepfake detection, and the result indicated the content was 94% likely to be AI-generated. Taken together, these findings establish that the video is synthetic and unrelated to any event in Israel, debunking a false narrative propagated on social media.

Conclusion:
Our research found that the video is fake: it contains AI-generated imagery and is not related to any real missile strike or destruction in Israel. The content appears designed to fuel panic and misinformation amid already heightened geopolitical tension. We urge viewers not to share this unverified material and to rely on trusted sources. During sensitive international developments, the dissemination of fake imagery can spread fear, confusion, and misinformation on a global scale.
- Claim: Real Footage of Iran’s Missile Strikes on Israel
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
On 28th October 2024, the Department of Telecommunications notified an amendment to the Flight and Maritime Connectivity Rules, 2018 (FMCR 2018).
Rule 9 of the principal rules in FMCR 2018 stated:
“Restrictions–(1) The IFMC service provider shall provide the operation of mobile communication services in aircraft at minimum height of 3000 meters in Indian airspace to avoid interference with terrestrial mobile networks. (2) Internet services through Wi-Fi in aircraft shall be made available when electronic devices are permitted to be used only in airplane mode.”
In 2022, an amendment was made to the form attached to the Rules for obtaining authorisation to provide IFMC services.
Subsequently, the 2024 amendment substitutes sub-rule (2), namely:
“(2) Notwithstanding the minimum height in Indian airspace referred to in sub-rule (1), internet services through Wi-Fi in aircraft shall be made available when electronic devices are permitted to be used in the aircraft.”
Highlights of the Amendment
These rules govern the use of Wi-Fi on airplanes and ships within or above India and Indian territorial waters through In-Flight and Maritime Connectivity (IFMC) services, which IFMC service providers are responsible for establishing and maintaining.
Airplanes are equipped with antennas, onboard servers, and routers that connect to signals received from ground towers via Direct Air-to-Ground Communications (DA2GC) or through satellites. The DA2GC system offers connectivity through various communication methods, supporting services like in-flight internet access and mobile multimedia. Licensed IFMC service providers must adhere to standards set by international organisations such as the International Telecommunication Union (ITU), the European Telecommunications Standards Institute (ETSI), and the Institute of Electrical and Electronics Engineers (IEEE), or by international forums like the 3rd Generation Partnership Project (3GPP), to offer in-flight connectivity. Providers using Indian or foreign satellite systems must obtain approval from the Department of Space.
The IFMC service provider must operate mobile communication services on aircraft at a minimum altitude of 3,000 meters within Indian airspace to prevent interference with terrestrial mobile networks. However, under the amendment, Wi-Fi access can be enabled at any point during the flight when device use is permitted, not just after reaching 3,000 meters. This flexibility is intended to let passengers connect to Wi-Fi earlier in the flight. The amendment aims to ensure that passengers can access the internet while maintaining the safety standards critical to in-flight communication systems.
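To make the before-and-after logic of the amendment concrete, here is a small illustrative sketch (not an official specification) encoding the two rules: mobile communication services remain gated by altitude, while Wi-Fi now depends only on whether electronic devices are permitted.

```python
# Illustrative sketch of the amended FMCR logic. The function names and
# example values are assumptions for illustration, not an official spec.

MIN_ALTITUDE_METERS = 3000  # Rule 9(1): floor for mobile communication services

def mobile_services_allowed(altitude_m: float) -> bool:
    """Mobile communication services still require the 3,000-meter floor."""
    return altitude_m >= MIN_ALTITUDE_METERS

def wifi_allowed(devices_permitted: bool) -> bool:
    """Amended Rule 9(2): Wi-Fi tracks device permission, not altitude."""
    return devices_permitted

# Example: shortly after take-off at 1,500 meters with devices permitted,
# Wi-Fi may be offered even though mobile services may not.
print(mobile_services_allowed(1500.0))  # False
print(wifi_allowed(True))               # True
```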
Implications
- Increased Data Security Needs: Robust cybersecurity measures will be needed to guard against potential threats and data breaches.
- Increased Costs: Airlines will have to bear the initial costs of installing antennae. Since airfare pricing in India is market-driven and largely unregulated, these costs might find their way into ticket prices, making flight tickets more expensive.
- Interference Management: Stakeholders can determine, and communicate to passengers, a framework specifying the conditions under which Wi-Fi must be switched off to avoid interference with terrestrial communication systems.
- Enhanced Connectivity Infrastructure: Airlines may need to invest in better in-flight connectivity infrastructure to handle increased network traffic as more passengers access Wi-Fi at lower altitudes and for longer durations.
Conclusion
The Flight and Maritime Connectivity (Amendment) Rules, 2024, enhance passenger convenience and align India with global standards for in-flight connectivity while complying with international safety protocols. Access to the internet during flights and at sea provides valuable real-time information, enhances safety, and offers access to health support during aviation and maritime operations. However, new challenges, including the need for robust cybersecurity measures, cost implications for airlines and passengers, and the management of interference with terrestrial networks, will have to be addressed through a collaborative approach between airlines, IFMC providers, and regulatory authorities.
Sources
- https://dot.gov.in/sites/default/files/2018_12_17%20AS%20IFMC_2.pdf?download=1
- https://dot.gov.in/sites/default/files/Amendment%20dated%2004112024%20in%20flight%20and%20maritime%20connectivity%20rules%202018%20to%20IFMC%20Service%20Provider.pdf
- https://www.t-mobile.com/dialed-in/wireless/how-does-airplane-wifi-work
- https://tec.gov.in/public/pdf/Studypaper/DA2GC_Paper%2008-10-2020%20v2.pdf
- https://www.indiatoday.in/india/story/wifi-use-flights-no-longer-linked-altitude-now-subject-permission-2628118-2024-11-05
- https://pib.gov.in/Pressreleaseshare.aspx?PRID=1843408

Introduction
The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person’s facial features, voice, or actions with a level of fidelity that the average individual may be unable to distinguish from the real thing. The technology’s capacity for creative solutions is genuinely beneficial. However, its misuse has escalated rapidly in recent years, threatening privacy and dignity and facilitating the creation of disinformation and misinformation. Its real-world consequences include the manipulation of elections, threats to national security, and the erosion of trust in society.
Why India Needs Deepfake Regulation
Deepfake regulation is urgently needed in India, as evidenced by the recent Rashmika Mandanna incident, in which a deepfake video of the actress created a scandal throughout the country. In one of the first deepfake cases in India to draw nationwide attention, her face was superimposed on the body of another woman in a viral video that fooled many viewers and created outrage among those who were deceived by it. The incident even led law enforcement agencies to issue public warnings about the dangers of manipulated media.
This was not an isolated incident; many influencers, actors, leaders, and ordinary people have fallen victim to deepfake pornography, deepfake speech scams, financial fraud, and other malicious uses of deepfake technology. The rapid proliferation of deepfake technology is outpacing lawmakers' efforts to regulate it. In this regard, an individual MP introduced a Private Member's Bill in the Lok Sabha during its Winter Session. Even though such Bills have historically had a low rate of success in being passed into law, they give the government an opportunity to take notice of and respond to emerging issues. In fact, Private Member's Bills have been the catalyst for government action on many important matters and have provided an avenue for parliamentary discussion and future policy creation. The introduction of this Bill shows that Parliament acknowledges digital deepfakes to be a significant concern and, therefore, in need of a legislative framework to combat them.
Key Features Proposed by the New Deepfake Regulation Bill
The proposed legislation aims to create a strong legal structure around the creation, distribution, and use of deepfake content in India. Its four core proposals are:
1. Prior Consent Requirement: Individuals must give written approval before deepfake media incorporating their digital representations, including their faces, images, likenesses, and voices, is produced or distributed. This aims to protect women, celebrities, minors, and everyday citizens against the use of their identities to harm them or their reputations or to harass them through deepfakes.
2. Penalties for Malicious Deepfakes: The Bill prescribes serious criminal consequences for creating or sharing deepfake media, particularly when it is intended to defame, harass, impersonate, deceive, or manipulate another person. It also addresses the use of deepfakes for financial fraud, political misinformation, and election interference, as well as other types of explicit AI-generated media.
3. Establishment of a Deepfake Task Force: A dedicated task force would examine the potential impact of deepfakes on national security, elections, public order, public safety, and privacy. It would work with academic institutions, AI research labs, and technology companies to create advanced deepfake-detection tools and establish best practices for the safe and responsible use of generative AI.
4. Creation of a Deepfake Detection and Awareness Fund: The fund would support the development of deepfake-detection tools, build the capacity of law enforcement agencies to investigate cybercrime, promote public awareness of deepfakes through national campaigns, and finance research on AI safety and misinformation.
How Other Countries Are Handling Deepfakes
1. United States
Many states in the United States, including California and Texas, have enacted laws prohibiting politically deceptive deepfakes during elections. The federal government is also developing regulations requiring AI-generated content to be clearly labelled, and social media platforms are being encouraged to require users to disclose deepfakes.
2. United Kingdom
In the United Kingdom, it is illegal to create or distribute intimate deepfake images without consent; violators face jail time. The Online Safety Act emphasises the accountability of digital media providers by requiring them to identify, eliminate, and avert harmful synthetic content, which makes their role in curating safe environments all the more important.
3. European Union:
The EU has enacted the EU AI Act, which governs the use of deepfakes by requiring an explicit label to be affixed to any AI-generated content. The absence of a label would subject an offending party to potentially severe regulatory consequences; therefore, any platform wishing to do business in the EU should evaluate the risks associated with deepfakes and adhere strictly to the EU's guidelines for transparency regarding manipulated media.
4. China:
China has among the most rigorous deepfake regulations in the world. All AI-manipulated media must carry a visible watermark, users must authenticate their identities before being allowed to use advanced AI tools, and online platforms are legally required to take proactive measures to identify and remove synthetic material from circulation.
Conclusion
Deepfake technology has the potential to be one of the greatest, and most dangerous, innovations of AI. There is much to learn from incidents such as the one involving Rashmika Mandanna, as well as from the global proliferation of deepfake abuse, which demonstrates how easily truth can be altered in the digital realm. India's new Private Member's Bill seeks to provide a comprehensive framework to address these abuses, built on prior consent, effective penalties, technical preparedness, and public education and awareness. With other nations moving towards greater regulation of AI technology, proposals such as this chart a course for India to become a leader in responsible digital governance.
References
- https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
- https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
- https://www.bbc.com/news/world-asia-india-67305557
- https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
- https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/
- https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk
- https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/
- https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves into critical issues surrounding child safety on these widespread platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While this practice skirts the line of legality, it prompts concern because of its potential appeal to adults and the inappropriate interactions associated with it. It is a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
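As an illustration of the kind of pre-ranking check described above, the sketch below filters flagged candidates out of a recommendation pipeline before ranking. The data structures and flag names are hypothetical assumptions for illustration, not a description of Meta's actual systems.

```python
# Illustrative sketch of a pre-ranking safety gate in a recommendation
# pipeline. Candidate fields and flag labels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    account_id: str
    relevance: float                     # score from the personalisation model
    safety_flags: set = field(default_factory=set)

BLOCKED_FLAGS = {"minor_sexualisation", "policy_violation"}  # hypothetical labels

def recommend(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Drop flagged candidates before ranking, so a high relevance score
    can never outweigh a safety violation."""
    safe = [c for c in candidates if not (c.safety_flags & BLOCKED_FLAGS)]
    return sorted(safe, key=lambda c: c.relevance, reverse=True)[:k]

# Example: the high-relevance but flagged account is excluded outright.
pool = [
    Candidate("acct_a", 0.95, {"policy_violation"}),
    Candidate("acct_b", 0.80),
]
print([c.account_id for c in recommend(pool)])  # ['acct_b']
```

Placing the filter ahead of ranking, rather than merely down-weighting flagged content, ensures that personalisation can never resurface accounts that violate safety policy.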
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurface, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough action against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos that are eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions: prioritising financial gain over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
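As a sketch of what such an eligibility gate might look like, the example below checks a creator account against a few plausible criteria before monetisation is enabled. The specific thresholds and field names are assumptions for illustration, not Meta's actual policy.

```python
# Illustrative sketch of a monetisation eligibility gate. Criteria,
# thresholds, and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class CreatorAccount:
    account_age_days: int
    identity_verified: bool
    content_reviewed: bool  # policy review of recent content has passed
    prior_violations: int

def monetisation_eligible(acct: CreatorAccount) -> bool:
    """Every criterion must pass before subscriptions or gifts are enabled."""
    return (
        acct.account_age_days >= 90  # hypothetical minimum account age
        and acct.identity_verified
        and acct.content_reviewed
        and acct.prior_violations == 0
    )

print(monetisation_eligible(CreatorAccount(120, True, True, 0)))   # True
print(monetisation_eligible(CreatorAccount(120, True, False, 0)))  # False
```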
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company