#FactCheck: IAF Shivangi Singh was captured by Pakistan army after her Rafale fighter jet was shot down
Executive Summary:
False information spread on social media claiming that Flight Lieutenant Shivangi Singh, India’s first female Rafale pilot, had been captured by Pakistan during “Operation Sindoor”. The claim is untrue and baseless: no credible or official confirmation supports it, and Singh is confirmed to be safe and actively serving. The rumor, likely originating from unverified sources, sparked public concern and underscored the serious threat fake news poses to national security.
Claim:
An X user posted an image with the caption “Initial image released of a female Indian Shivani singh Rafale pilot shot down in Pakistan”. The post falsely claimed that Flight Lieutenant Shivangi Singh had been captured and that her Rafale aircraft had been shot down by Pakistan.


Fact Check:
A reverse image search led us to an Instagram post about two Indian Air Force pilots, Wing Commander Tejpal (50) and trainee Bhoomika (28), who had ejected from a Kiran jet trainer during a routine training sortie from Bengaluru before it crashed near Bhogapuram village in Karnataka. The aircraft exploded on impact, but both pilots were later found alive, though injured and exhausted.
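For readers who want to reproduce this kind of check programmatically, one common first step is perceptual hashing: if a viral image is a recirculated copy of an older photo, its perceptual hash will be very close to the original’s even after resizing or re-compression. The following is a minimal sketch using the Python Pillow and imagehash libraries; the file names are hypothetical placeholders, not the actual images from this case.

```python
# Minimal sketch: test whether a viral image is a recirculated copy of an
# older photo using perceptual hashing. Requires: pip install pillow imagehash
# The file paths below are hypothetical placeholders.
from PIL import Image
import imagehash

def images_likely_match(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if two images are perceptually similar.

    pHash is robust to rescaling and mild re-compression, so a small
    Hamming distance suggests one image is a copy of the other.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # subtraction gives Hamming distance

if __name__ == "__main__":
    if images_likely_match("viral_post.jpg", "archived_original.jpg"):
        print("Likely the same underlying image (recirculated photo).")
    else:
        print("No perceptual match; the images appear distinct.")
```

A distance threshold of 8 out of 64 hash bits is a common starting point; lowering it reduces false matches at the cost of missing heavily edited copies.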

We also found a YouTube channel hosting the same video; it dates from an earlier, unrelated incident and does not show what the viral post claimed.

Conclusion:
The false claims that Flight Lieutenant Shivangi Singh was captured by Pakistan and that her Rafale jet was shot down have been debunked. The image used was unrelated and showed IAF pilots from a separate training incident, and several media outlets confirmed that the circulating video made no mention of Ms. Singh’s arrest. This highlights the dangers of misinformation, especially where national security is concerned. Verifying facts through credible sources and avoiding the spread of unverified content are essential to maintaining public trust and protecting the reputation of those serving in the armed forces.
- Claim: Flight Lieutenant Shivangi Singh was captured by Pakistan and her Rafale jet was shot down
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
As technology develops, scams evolve with it, and AI voice cloning schemes are one such issue that has recently come to light. Scammers have moved forward with AI, and their methods for deceiving and defrauding people have altered accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to conduct fraud, dupe a person into giving up crucial information, or even impersonate them for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or film produced with artificial intelligence (AI) that can pass for the real thing. The name combines “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone saying or doing something they never did; widely circulated voice impersonations of the American President are a well-known example. Deep voice impersonation can be used maliciously, such as in voice fraud or to disseminate false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone over the phone and pressure victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative, using the cloned voice to convey a false sense of familiarity and urgency, earn the victim’s trust, and raise the likelihood that they will fall for the hoax. Such frauds are increasing in frequency as deepfake technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or institutions and mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters craft audio recordings that impersonate genuine messages from organisations or people the victim trusts; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, and both organisations and the general public must be informed of the risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a cloned voice to trick the victim into handing over money or private information. How can you protect yourself? Here are some guidelines to help you spot these scams and keep away from them:
- Be wary of unsolicited calls: One of the most common tactics of deepfake voice con artists is the unsolicited phone call, typically while posing as bank personnel or government officials.
- Listen closely to the voice: Pay special attention to the voice of anyone who phones you claiming to be someone you know. Are there peculiar pauses or inflections in their speech? Anything that does not seem right may signal a deepfake voice fraud.
- Verify the caller’s identity: Verifying who is calling is crucial to avoid falling for a deepfake voice scam. When in doubt, ask for the caller’s name, job title, and employer, then do some research to confirm they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal details such as your Aadhaar number, bank account information, or passwords over the phone. Legitimate companies and organisations never request personal or financial information this way; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities, which may include your bank, credit card company, local police station, or the nearest cyber cell. Reporting the fraud could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. It can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, but it can equally be used for harm, such as voice fraud and impersonation to fabricate stories. As the technology develops and deepfakes become harder to detect, users must be aware of the hazard and take the necessary precautions to protect themselves. Ongoing research into efficient techniques for identifying and controlling the associated risks is also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.
Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to "create compelling online scams" over the festive period. Meanwhile, 57% believe phishing emails and texts will be more credible, and 31% believe it will be harder to determine whether messages from merchants or delivery services are genuine. The study, conducted in September 2023 in the United States, Australia, India, the United Kingdom, France, Germany, and Japan, yielded 7,100 responses. Worries about AI may lead some people to cut back on their online shopping; 19% of those surveyed stated they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
Cybercriminals are expected to use powerful artificial intelligence tools to manipulate social media in 2024. Because these tools make it possible to create realistic images, videos, and audio, social platforms become goldmines for scammers. Anticipate the exploitation of celebrity and influencer identities by cybercriminals.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the negative turn cyberbullying could take in 2024 with the use of deepfake technology. This cutting-edge technique is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims' privacy, identity, and wellbeing.
Beyond sharing false information, cyberbullies can alter public photographs and re-share edited versions, exacerbating the harm done to children and their families. The report warns that these increasingly convincing fake images and words can cause serious, long-lasting damage, impairing victims' identity, privacy, and overall happiness.
Evolvement of GenAI Fraud in 2023
Persistent frauds and fake emails are not going away. People in general have become rather adept at recognizing the ones in wide circulation, but as scams grow more precise, for example using AI-generated audio to mimic a loved one's distress call or referencing details highly personal to the target, users need to be far more cautious. The rise of generative AI adds a new wrinkle, as hackers can use these systems to refine their attacks:
- Write messages more skillfully to deceive consumers into sending sensitive information, clicking on a link, or uploading a file.
- Recreate emails and business websites as realistically as possible so that nothing arouses the victim's suspicion.
- Clone people's faces and voices, producing deepfaked sounds or images that the target audience cannot detect; a problem with the potential to greatly amplify schemes like CEO fraud.
- Hold conversations and respond to victims efficiently, now that generative AIs can chat convincingly.
- Conduct psychological manipulation campaigns more quickly, at lower cost, and with greater sophistication, making them harder to detect. Generative AI already on the market can write text, clone voices, generate images, and program websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is also making cybercriminals more dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing emails and smishing texts. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the worldwide buzz around the 2024 Olympic Games will make it an ideal time for scams. Con artists will exploit fans' excitement, targeting those eager to purchase tickets, arrange travel, obtain exclusive content, and take part in giveaways. During this prominent event, vigilance is essential to protect one's personal records and financial data.
McAfee's Own Bot to Help Users Screen Potential Scams and Authenticate Messages
McAfee is developing precisely this kind of technology. It is important to emphasize that solving the problem is a continuous process: bad actors use AI too, and one trick scammers can pull is to treat the ruses consumers fall for as training data for their own algorithms. Con artists can thus deploy these tools, test them on large user bases, and improve them over time.
Conclusion
The McAfee report finds that 88% of American consumers are concerned about AI-driven internet frauds targeting them around the holidays. Social networking poses a growing threat to users' privacy, with hackers expected to exploit AI capabilities in 2024 and use deepfake technology to exacerbate harassment. By mimicking voices and faces for intricate schemes, generative AI enables more complex fraud. Charity fraud is expected to surge, affecting both social and financial well-being, and the 2024 Olympic Games could serve as a haven for scammers. The creation of McAfee's screening bot illustrates the ongoing struggle against evolving AI threats and underscores the need for continuous adaptation and better user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/#:~:text=McAfee%20Corp.%2C%20a%20global%20leader,and%20increasingly%20sophisticated%20cyber%20scams.
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/

Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been assessed as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports substantiate or confirm such a meeting. The image is likely a digitally generated picture designed to mislead viewers.
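As a purely illustrative aside, querying an AI-content-detection service of this kind generally amounts to uploading the image and reading back a confidence score. The sketch below uses Python's requests library against a hypothetical endpoint; the URL, authentication header, and response fields are assumptions made for illustration and do not reflect Hive Moderation's documented API, which should be consulted for the real request format.

```python
# Hypothetical sketch of submitting an image to an AI-generated-content
# detection service. The endpoint URL, auth scheme, and response schema
# below are illustrative assumptions, NOT Hive Moderation's documented API.
# Requires: pip install requests
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                # placeholder credential

def ai_generation_score(path: str) -> float:
    """Upload an image and return the service's AI-generation confidence (0-1)."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated_score": 0.97}
    return response.json()["ai_generated_score"]

if __name__ == "__main__":
    score = ai_generation_score("viral_image.jpg")
    print(f"AI-generation confidence: {score:.2f}")
```

A score near 1.0 from such a service is an indicator, not proof; detector output should be combined with provenance checks like the watermark and source review described below.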

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research shows that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark point to digital generation. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading