#FactCheck: Viral Video Claiming Pakistan Shot Down an Indian Air Force MiG-29 Fighter Jet
Executive Summary
Claims circulating on social media allege that an Indian Air Force MiG-29 fighter jet was shot down by Pakistani forces during "Operation Sindoor," with a viral video shared as evidence of the crash. These assertions have been refuted: no credible evidence supports the downing of an Indian aircraft as described, the Indian Air Force has not confirmed any such event, and the claim appears to be misinformation.

Claim
A rumor circulating on social media suggests that an Indian Air Force MiG-29 fighter jet was shot down by the Pakistan Air Force during "Operation Sindoor." The claim is accompanied by a video purported to show the wreckage of the aircraft.

Fact Check
The social media posts falsely claim that the Pakistan Air Force shot down an Indian Air Force MiG-29 during "Operation Sindoor." This claim has been confirmed to be untrue. The video being circulated is not related to any recent IAF operations and has previously appeared in unrelated contexts. The content being shared is misleading and does not reflect any verified incident involving the Indian Air Force.

By extracting key frames from the video and performing reverse image searches, we traced the footage to its original source: it was first published in 2024 and appears in news reports from The Hindu and the Times of India.
Those reports describe a MiG-29 fighter jet of the Indian Air Force (IAF) that crashed near Barmer, Rajasthan, on the evening of September 2, 2024, while on a routine training mission; the pilot ejected safely and escaped unscathed. The circulating video therefore shows a 2024 accident, not a shoot-down, and the claim is an attempt to spread misinformation.
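For readers who want to replicate this kind of verification, the sketch below shows one way to pull key frames from a clip and compare them against a known reference photo using perceptual hashes. It is a minimal illustration rather than our exact workflow; the file names, frame interval, and hash-distance threshold are assumptions, and the OpenCV, Pillow, and imagehash packages must be installed.

```python
# Minimal sketch: extract key frames from a viral clip and compare them
# against a reference photo from earlier news coverage using perceptual hashes.
# "viral_clip.mp4" and "barmer_crash_2024.jpg" are hypothetical file names.
import cv2
import imagehash
from PIL import Image

def extract_keyframes(video_path, every_n=30):
    """Grab every Nth frame from the clip for reverse-image comparison."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            # OpenCV returns BGR arrays; convert to RGB before handing to PIL.
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        idx += 1
    cap.release()
    return frames

def matches_reference(frame, reference_img, max_distance=8):
    """Small Hamming distance between perceptual hashes suggests the same scene."""
    return imagehash.phash(frame) - imagehash.phash(reference_img) <= max_distance

keyframes = extract_keyframes("viral_clip.mp4")
reference = Image.open("barmer_crash_2024.jpg")  # photo from the 2024 news reports
print(any(matches_reference(f, reference) for f in keyframes))
```

If any key frame is a near-duplicate of the reference photo, that is strong evidence the "new" video simply reuses old footage.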

Conclusion
The claims regarding the downing of an Indian Air Force MiG-29 during "Operation Sindoor" are unfounded and lack credible verification. The video being circulated is outdated and unrelated to current IAF operations. There has been no official confirmation of such an incident, and the narrative appears to be misleading. People are advised to rely on verified sources for accurate information regarding defence matters.
- Claim: Pakistan shot down an Indian Air Force MiG-29 fighter jet
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
A viral image circulating on social media is claimed to show a natural optical illusion in Epirus, Greece. Fact-checking found that the image is in fact an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion tool. The CyberPeace Research Team traced it through a reverse image search and analysed it with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon in Epirus, Greece, is false, as no evidence of such an optical illusion in the region was found.

Claims:
The viral image circulating on social media is claimed to depict a natural optical illusion in Epirus, Greece. Users are sharing it on X (formerly Twitter), YouTube, and Facebook, and it is spreading rapidly across social media.

Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran the image through synthetic-media detection: the Hive AI detection tool rated it 100% likely to be AI-generated. We then performed a reverse image search to trace the image's source and found similar posts linking to an Instagram account, hamidreza.edalatnia, whose creator publishes visuals in the same style.

Searching that account, we confirmed that the viral image was created by this artist.

The photo was posted on 10 December 2023, and the artist stated that the image was generated with the Stable Diffusion AI tool. Hence, the claim that the viral image shows a natural optical illusion in Epirus, Greece, is misleading.
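As a rough illustration of the detection step, the sketch below uploads an image to an AI-content detection service and reads back a confidence score. The endpoint URL, API token, and response field names are placeholders and not Hive's actual API; the real request format should be taken from the provider's documentation.

```python
# Hypothetical sketch of querying an AI-content detection service over HTTP.
# DETECTION_URL, API_TOKEN, and the response field are placeholders, not the real Hive API.
import requests

DETECTION_URL = "https://api.example-detector.com/v1/classify"  # placeholder endpoint
API_TOKEN = "YOUR_API_TOKEN"  # placeholder credential

def ai_generated_score(image_path):
    """Upload an image and return the service's AI-generation confidence (0-1)."""
    with open(image_path, "rb") as fh:
        response = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"image": fh},
            timeout=30,
        )
    response.raise_for_status()
    return response.json().get("ai_generated_probability")  # placeholder field name

print(ai_generated_score("viral_epirus_image.jpg"))
```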
Conclusion:
The image claimed to show a natural optical illusion in Epirus, Greece, is not genuine. It is an AI-generated artwork created by Hamidreza Edalatnia, an artist from Iran, using the Stable Diffusion tool. The claim is therefore false.

The European Union (EU) has made trailblazing efforts on data protection and privacy, producing the most comprehensive and detailed regulation of its kind, the General Data Protection Regulation (GDPR). While countries worldwide continue to grapple with framing their own laws, the EU is already taking on tech giants and focusing on the road ahead. Its contentious history with Meta makes the launch of Meta's AI assistant in the EU a complex process, shaped by stringent data privacy regulations, ongoing debates over copyright, and ethical AI practices. This development matters because the EU and Meta have clashed before, with fines and pushback against Meta's services over GDPR compliance, antitrust concerns (targeted ads and Facebook Marketplace activities), and content moderation with respect to the spread of misinformation.
Privacy and Data Protection Concerns
A significant part of operating Large Language Models (LLMs) is training them on a large repository of data from which they can draw answers. If a model cannot find relevant information, or a request falls outside the scope it was trained to handle, it will still attempt to respond, but with reduced accuracy. Meta's initial plan to train its AI models on publicly available content from adult users in the EU met a setback from privacy regulators. The Irish Data Protection Commission (DPC), acting as Meta's lead privacy regulator in Europe, raised concerns and requested a delay in the rollout so it could assess compliance with the GDPR. The DPC has raised similar concerns about Grok, X's AI tool, to assess whether EU users' data was lawfully processed for training it.
In response, Meta delayed the feature's release for around a year, agreed to exclude private messages and data from users under the age of 18, and implemented an opt-out mechanism for users who do not wish their public data to be used for AI training. This approach aligns with GDPR requirements, which mandate a clear legal basis for processing personal data, such as explicit consent or demonstrated legitimate interest, along with the option to withdraw consent later if the user wishes. The version currently available in the EU is a text-based assistant that cannot generate images but can help with brainstorming, planning, and answering queries using web-based information. Meta has, however, assured users that it will expand and explore further AI features as it continues to cooperate with regulators.
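As a simplified illustration of what such an opt-out mechanism implies on the data side, the sketch below filters posts so that private messages, content from users under 18, and content from opted-out users never reach a training corpus. The record fields and the opt-out registry are hypothetical; Meta's actual pipeline is not public.

```python
# Hypothetical sketch of an opt-out filter for AI training data.
# Field names and the opt-out registry are illustrative; the real pipeline is not public.
from dataclasses import dataclass

@dataclass
class Post:
    user_id: str
    user_age: int
    is_private_message: bool
    text: str

opted_out_users = {"user_42"}  # populated from the platform's objection/opt-out forms

def eligible_for_training(post: Post) -> bool:
    """Exclude minors' data, private messages, and posts from opted-out users."""
    if post.user_age < 18:
        return False
    if post.is_private_message:
        return False
    if post.user_id in opted_out_users:
        return False
    return True

posts = [
    Post("user_42", 34, False, "Public post from an opted-out user"),
    Post("user_7", 16, False, "Public post from a minor"),
    Post("user_9", 29, False, "Public post from a consenting adult"),
]
training_corpus = [p.text for p in posts if eligible_for_training(p)]
print(training_corpus)  # only the third post survives the filter
```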
Regulatory Environment and Strategic Decisions
The EU's regulatory landscape, characterised by the GDPR and the forthcoming AI Act, presents challenges for tech companies like Meta. Citing the "unpredictable nature" of EU regulations, Meta has decided not to release its multimodal Llama AI model—capable of processing text, images, audio, and video—in the EU. This decision underscores the tension between innovation and regulatory compliance, as companies navigate the complexities of deploying advanced AI technologies within strict legal frameworks.
Implications and Future Outlook
Meta's experience highlights the broader challenges faced by AI developers operating in jurisdictions with robust data protection laws. The most critical issue for now is striking a balance between leveraging user data for AI advancement and respecting individual privacy rights. As the EU continues to refine its regulatory approach to AI, companies need to adapt their strategies to ensure compliance while fostering innovation. Stringent measures and regular assessment also help hold big tech companies accountable as they build products for profit as well as for the public.
References:
- https://thehackernews.com/2025/04/meta-resumes-eu-ai-training-using.html
- https://www.thehindu.com/sci-tech/technology/meta-to-train-ai-models-on-european-users-public-data/article69451271.ece
- https://about.fb.com/news/2025/04/making-ai-work-harder-for-europeans/
- https://www.theregister.com/2025/04/15/meta_resume_ai_training_eu_user_posts/
- https://noyb.eu/en/twitters-ai-plans-hit-9-more-gdpr-complaints
- https://www.businesstoday.in/technology/news/story/meta-ai-finally-comes-to-europe-after-a-year-long-delay-but-with-some-limitations-468809-2025-03-21
- https://www.bloomberg.com/news/articles/2025-02-13/meta-opens-facebook-marketplace-to-rivals-in-eu-antitrust-clash
- https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html#:~:text=Many%20civil%20society%20groups%20and,million%20for%20a%20data%20leak.
- https://ec.europa.eu/commission/presscorner/detail/en/ip_24_5801
- https://www.thehindu.com/sci-tech/technology/european-union-accuses-facebook-owner-meta-of-breaking-digital-rules-with-paid-ad-free-option/article68358039.ece
- https://www.theregister.com/2025/04/14/ireland_investigation_into_x/
- https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations?utm_source=chatgpt.com
- https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu?utm_source=chatgpt.com

Introduction
Online dating platforms have become a common way for individuals to connect in today’s digital age. For many in the LGBTQ+ community, especially in environments where offline meeting spaces are limited, these platforms offer a way to find companionship and support. However, alongside these opportunities come serious risks. Users are increasingly being targeted by cybercrimes such as blackmail, sextortion, identity theft, and online harassment. These incidents often go unreported due to stigma and concerns about privacy. The impact of such crimes can be both emotional and financial, highlighting the need for greater awareness and digital safety.
Cybercrime On LGBTQ+ Dating Apps: A Threat Landscape
According to the NCRB's 2022 report, cybercrime rose by 24.4%, although data specific to the queer community is not available. Cybercrimes targeting LGBTQ+ users are often highly organised and predatory. In several Indian cities, gangs actively monitor dating platforms for potential victims, especially young queer people and those who are discreet about their identity. Once contact is established, perpetrators follow a standard playbook: building false trust, coaxing private exchanges, and then gradually escalating to blackmail and financial exploitation. Many queer victims are blackmailed with threats of exposure to families or workplaces, sometimes by criminals posing as police officers demanding bribes. Fear of stigma and insensitive policing discourages reporting, and cybercriminal gangs exploit these gaps on dating apps. Despite some arrests, under-reporting persists, and activists are calling for stronger platform safety measures.
Types of Cybercrimes Against the Queer Community on Dating Apps
- Romance scam or “Lonely hearts scam”: Scammers build trust with false stories (military, doctors, NGO workers) and quickly express strong romantic interest. They later request money, claiming emergencies. They often try to create multiple accounts to avoid profile bans.
- Sugar daddy scam: The fraudster offers money or an allowance in exchange for chatting, sending photos, or other interactions. They usually promise a specific amount and insist on uncommon payment channels. After claiming they will send a large sum, they often invent a story such as: "My last sugar baby cheated me, so you must first send me a small amount to prove you are trustworthy." This is simply a trick to get you to send money first.
- Sextortion / Blackmail scam: Scammers record explicit chats or pretend to be underage, then threaten exposure unless you pay. Some target discreet users. Never send explicit content or pay blackmailers.
- Investment Scams: Scammers posing as traders or bankers convince victims to invest in fake opportunities. Some "flip" small amounts to build trust, then disappear with larger sums. Real investors won’t approach you on dating apps. Don’t share financial info or transfer money.
- Pay-Before-You-Meet scam: Scammer demands upfront payment (gift cards, gas money, membership fees) before meeting, then vanishes. Never pay anyone before meeting in person.
- Security app registration scam: Scammers ask you to register on fake "security apps" to steal your info, claiming it ensures your safety. Research apps before registering. Be wary of quick link requests.
- The Verification code scam: Scammers trick you into giving them SMS verification codes, allowing them to hijack your accounts. Never share verification codes with anyone.
- Third-party app links: Mass spam messages with suspicious links that steal info or infect devices. Don’t click suspicious links or “Google me” messages.
- Support message scam: Messages pretending to be from application support, offering prizes or fake shows to lure you to malicious sites.
Platform Accountability & Challenges
Online dating platforms in India are characterised by weak grievance redressal, poor takedown of abusive profiles, and limited moderation. Most platforms appoint grievance officers or offer an in-app complaint portal, but complaints often go unanswered or receive only automated, AI-generated responses. This highlights the gap between policy and enforcement on the ground.
Abusive or fake profiles, often used for scams, hate crimes, and outing LGBTQ+ individuals, remain active long after being reported. In India, organised extortion gangs have exploited such profiles to lure, assault, rob, and blackmail queer men. Moderation teams often struggle with backlogs and lack the resources needed to handle even the most serious complaints.
Although platforms offer privacy settings and restricted profile visibility, moderation practices in India remain weak, leaving large segments of users vulnerable to impersonation, catfishing, and fraud. Pseudonymisation can help protect vulnerable communities, but without robust, privacy-respecting verification systems it is difficult to distinguish authentic users from malicious actors.
Many LGBTQ+ individuals prefer to keep their identity confidential, while others are more open about it; in either case, the data an individual shares with an online dating platform must be vigilantly protected. The Digital Personal Data Protection Act, 2023, mandates the protection of personal data. Section 8(4) provides: "A Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the rules made thereunder." Accordingly, digital platforms collecting such data should adopt the necessary technical and organisational measures to comply with data protection law.
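As one concrete example of a technical measure in this spirit, the sketch below pseudonymises user identifiers with a keyed hash (HMAC) before they are stored in moderation or analytics records, so a leaked record cannot be linked back to a profile without the secret key. The field names and key handling are illustrative assumptions, not a requirement of the Act or the practice of any particular platform.

```python
# Minimal sketch of pseudonymising user identifiers with a keyed hash (HMAC-SHA256).
# The secret-key handling and record fields are illustrative assumptions.
import hmac
import hashlib

SECRET_KEY = b"load-this-from-a-secrets-manager"  # never hard-code keys in production

def pseudonymise(user_id: str) -> str:
    """Return a stable pseudonym that cannot be reversed without SECRET_KEY."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# A moderation record that avoids storing raw profile identifiers.
report = {
    "reporter": pseudonymise("profile_1234"),
    "reported_profile": pseudonymise("profile_9876"),
    "reason": "suspected extortion attempt",
}
print(report)
```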
Recommendations
The Supreme Court has been proactive in this regard through decisions such as Navtej Singh Johar v. Union of India, which decriminalised consensual same-sex relationships; Justice K.S. Puttaswamy (Retd.) v. Union of India, which recognised the right to privacy as a fundamental right; and, most recently, its 2025 affirmation of the right to digital access. However, more robust legal frameworks are still required to protect LGBTQ+ people online.
There is a need for a dedicated commission or an empowered LGBTQ+ cell. Just as the National Commission for Women (NCW) works to safeguard the rights of women, a similar body could address community-specific issues, including cybercrime, privacy violations, and discrimination on digital platforms, and serve as an institutional link between victims, digital platforms, the government, and the police. Dating platforms must also enhance their security features and grievance mechanisms to safeguard users.
Best Practices
Scammers profile and target individuals based on what they are seeking, whether love, sex, money, or companionship. Avoid financial transactions prompted by matches, such as paying to sign up for third-party platforms or services. Scammers may also try to create accounts in other people's names and use them to access dating platforms and harm legitimate users. Be cautious about sharing sensitive information, such as private images, contact details, or addresses, as scammers can use it for threats and blackmail. Stay smart, stay cyber safe.
References
- https://www.hindustantimes.com/htcity/cinema/16yearold-queer-child-pranshu-dies-by-suicide-due-to-bullying-did-we-fail-as-a-society-mental-health-expert-opines-101701172202794.html#google_vignette
- https://www.ijsr.net/archive/v11i6/SR22617213031.pdf
- https://help.grindr.com/hc/en-us/articles/1500009328241-Scam-awareness-guide
- http://meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://mib.gov.in/sites/default/files/2024-02/IT%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20English.pdf