#FactCheck - False Claim of Hindu Sadhvi Marrying Muslim Man Debunked
Executive Summary:
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap appears in several other photos on Acharya's Instagram account, where Acharya expressed gratitude for the support of the Muslim community. Thus, the image claiming to show a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.

Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.


Fact Check:
Upon receiving the posts, we reverse-searched the image to look for credible sources. We found the photo posted by Balmukund Acharya Hathoj Dham on his Facebook page on 6 December 2023.

This photo was digitally altered and posted on social media to mislead viewers. We also found several other photos featuring the same man in the skullcap.

We also checked the viral image for signs of AI fabrication, using a detection tool named “content@scale” AI Image Detection. The tool found the image to be 95% AI-manipulated.

For further validation, we checked with another detection tool, the “isitai” image detector, which found the image to contain 38.50% AI content. Together, these results indicate that the image is manipulated and does not support the claim made. Hence, the viral image is fake and misleading.
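Detection tools aside, when a suspected original is found through reverse search, one quick programmatic sanity check is perceptual hashing, which scores how visually similar two images are even after recompression or resizing. The sketch below is purely illustrative (it is not one of the tools used in this fact check): it implements the standard average-hash technique on plain 2D grayscale grids so it needs no external libraries.

```python
def average_hash(pixels, hash_size=8):
    """Compute an average hash from a 2D grid of grayscale values (0-255).

    The image is shrunk to hash_size x hash_size by block averaging, then
    each cell is compared to the overall mean: 1 if brighter, 0 if darker.
    Visually similar images yield hashes with a small Hamming distance.
    """
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            block = [pixels[y][x]
                     for y in range(i * bh, (i + 1) * bh)
                     for x in range(j * bw, (j + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(h1, h2):
    """Number of differing hash bits; low values suggest near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Two 16x16 synthetic "images": a bright square on a dark background, and
# the same scene with mild noise (standing in for a recompressed copy).
base = [[200 if 4 <= y < 12 and 4 <= x < 12 else 30 for x in range(16)]
        for y in range(16)]
noisy = [[min(255, v + (x + y) % 5) for x, v in enumerate(row)]
         for y, row in enumerate(base)]
print(hamming(average_hash(base), average_hash(noisy)))  # → 0 (near-duplicates)
```

A Hamming distance near zero suggests two images share the same underlying scene; a large distance between a viral image and its supposed original is one more signal that the viral copy has been altered.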

Conclusion:
The lack of any credible source and the detection of AI manipulation in the image show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. It has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
- Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
- Claimed on: X (formerly Twitter)
- Fact Check: Fake & Misleading

Executive Summary:
Amid the ongoing conflict between the United States, Israel, and Iran, a video circulating widely on social media claims to show American soldiers kneeling and surrendering to Iranian forces. In the clip, several soldiers appear to be sitting on their knees in front of armed personnel, creating the impression that they have been captured on the battlefield.
The video is being shared with the claim that the Iranian military has taken US soldiers prisoner during the war.
However, research by the CyberPeace team found that the claim is false. The viral clip is not authentic and was generated using artificial intelligence. There is no credible evidence that American soldiers have been captured by Iranian forces.
Claim
A Facebook user named “News Tick” shared the video on March 12, 2026, claiming that Iran had released footage of captured US soldiers. In the clip, the soldiers can be seen kneeling while armed personnel stand around them, giving the scene a highly dramatic appearance.

Fact Check
To verify the claim, we first searched the internet using relevant keywords. We found no credible reports from reputable news organizations confirming that US soldiers had been captured by Iran during the conflict. A closer examination of the video revealed several visual inconsistencies. The weapons carried by the soldiers appear unclear and oddly shaped. Additionally, the background looks unusually blurred and overly dramatic. The lighting and textures in the footage also appear artificial—common indicators of AI-generated visuals.
To confirm this suspicion, we analyzed the clip using multiple AI detection tools. The tool Hive Moderation indicated a 99% probability that the video was created using artificial intelligence.

Further analysis using Sightengine also suggested that the video was likely AI-generated, estimating an 80% probability of AI creation.
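Commercial detectors such as those above score whole clips, but some of the visual inconsistencies noted earlier can also be probed with simple frame-level heuristics. The sketch below is an illustrative example, not how Hive Moderation or Sightengine work: it measures frame-to-frame pixel change on grayscale frames, where abrupt spikes between otherwise steady frames can hint at splices or synthetic flicker.

```python
def mean_abs_diff(f1, f2):
    """Average per-pixel absolute difference between two grayscale frames."""
    total = sum(abs(a - b) for r1, r2 in zip(f1, f2) for a, b in zip(r1, r2))
    return total / (len(f1) * len(f1[0]))

def flicker_scores(frames):
    """Frame-to-frame change scores. An abrupt spike between otherwise
    similar frames is one crude hint of spliced or synthetic footage."""
    return [mean_abs_diff(a, b) for a, b in zip(frames, frames[1:])]

# Synthetic 4x4 clip: three nearly identical frames, then an abrupt jump.
steady = [[50] * 4 for _ in range(4)]
jump = [[200] * 4 for _ in range(4)]
scores = flicker_scores([steady, steady, steady, jump])
print(scores)  # → [0.0, 0.0, 150.0]; the spike flags the inconsistent frame
```

On real footage one would first extract and downscale frames; this heuristic only flags candidates for human review and is no substitute for dedicated detection models.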

Conclusion
Our research shows that the viral video claiming to depict American soldiers surrendering and being captured by Iranian forces is fake. The footage has been generated using AI and does not represent a real incident.

Introduction
Over the last few years, several public data breaches in Venezuela have revealed a lack of cohesion and progress in its data privacy system, leaving many people susceptible to fraud, identity theft, and long-term online harm. These breaches make clear that when organizations fail to adequately protect data, whether through cybersecurity failures or weak legal protections, the consequences can ripple through an entire system and harm everyone within it.
Among the more notable breaches are the Movistar Venezuela data breach of 2025 and the Cashea app data leak from earlier this year. Each example demonstrates how the absence of an adequate privacy regulatory scheme can worsen the consequences of a data breach.
The Movistar Breach: A Regulatory Warning (2025)
Venezuelan digital rights group VE Sin Filtro published a report in late April 2025 documenting a database exposed on the open internet that contained personal information belonging to over 3.2 million Movistar customers. The breach exposed personal and confidential data of Venezuelan citizens, including national identification numbers, full names, city of residence, and phone numbers, which could be exploited for identity theft, SIM-swap fraud, and targeted scams.
A significant issue in this case was that Movistar neither disclosed the breach publicly nor contacted affected customers at the time. This points to a major gap in sanctions and other means of enforcing security obligations. In countries with GDPR-style regulations, such an incident would trigger a full investigation and possible fines against those responsible; in Venezuela, there is still no such accountability.
Cashea App Leak: A 2026 Data Shock
A second alleged data breach came to light in February 2026, involving Cashea, a Venezuelan buy-now-pay-later (BNPL) fintech app that is heavily used domestically. Reports circulated that threat actors were offering a database believed to hold more than 79 million transaction records, well beyond the Movistar breach in both scale and sensitivity.
According to reports, the leaked data included:
- Bank account details and payment methods
- Merchant profiles and internal business identifiers
- Detailed transaction histories with names, national ID numbers, timestamps, and installment data
This level of exposure goes far beyond basic identifiers. Financial transaction histories combined with personal identifiers enable sophisticated fraud, targeted social engineering, and long-term misuse of financial identities. As with the Movistar breach, no official acknowledgment or notification was issued by Cashea at the time of reporting, again underscoring Venezuela’s weak enforcement environment.
Why These Breaches Matter: The Legal Dimension
These incidents point to a deeper problem in how Venezuela has structured its data protection framework. The Venezuelan Constitution recognises the principles of data protection and privacy, but these rights exist only in theory: they lack implementing legislation, procedural clarity, and institutional enforcement.
Constitutional Basis of Data Protection
The Supreme Tribunal of Justice (TSJ) has held that the core principles of data protection are found in the Venezuelan Constitution. As interpreted in the TSJ's 2011 ruling, Article 28 gives individuals the right to know what data the state holds about them and how it is used, and to correct or delete harmful data. Article 60 protects individuals' privacy and restricts excessive data collection by the state.
The Constitutional Chamber also put into place additional guiding principles for how to protect personal data, including:
- The data subject must give prior informed and revocable consent.
- The purpose for which the data is collected must be specified and only the minimum amount of information necessary can be collected.
- The data collected must be accurate and of good quality.
- There are confidentiality obligations for third parties regarding the use of the data.
- It is the government's responsibility to put into place procedures and mechanisms to monitor compliance with the data protection laws.
- There are civil, criminal and administrative liabilities for individuals and legal entities that violate the data protection laws.
However, in a civil law system, court rulings are usually only persuasive rather than binding precedent, and even constitutional rulings cannot be fully implemented until enabling legislation is passed.
Absence of a Comprehensive Data Protection Law
In contrast to the European Union's GDPR (General Data Protection Regulation), the United States' sectoral approach, and emerging Latin American regimes such as those in Brazil, Chile, and Colombia, Venezuela has no standalone data protection law. This absence creates several forms of legal uncertainty:
- No defined data controller or processor obligations
- No standardized lawful bases for processing
- No clear breach notification timelines
- No independent data protection authority
- No procedural pathway for individuals to seek redress
As a result, data protection in Venezuela is not treated as an independent legal discipline but instead becomes derivative, arising incidentally within constitutional litigation or sector-specific disputes.
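For contrast with the missing notification timelines noted above, GDPR Article 33 obliges controllers to notify the supervisory authority within 72 hours of becoming aware of a breach. The helper below is a purely illustrative sketch of what such a statutory deadline means in practice; the dates are hypothetical and do not correspond to any actual filing.

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33 requires notifying the supervisory authority within
# 72 hours of becoming aware of a breach. Venezuela currently has no
# comparable statutory deadline; this helper is purely illustrative.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest time by which the regulator must be notified."""
    return became_aware + NOTIFICATION_WINDOW

def is_overdue(became_aware: datetime, now: datetime) -> bool:
    """True once the notification window has lapsed."""
    return now > notification_deadline(became_aware)

# Hypothetical example: awareness on 25 April 2025, 09:00 UTC.
aware = datetime(2025, 4, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # → 2025-04-28 09:00:00+00:00
```

A concrete, enforceable deadline like this is exactly the kind of procedural rule Venezuela's framework lacks: without one, companies such as Movistar and Cashea face no legal clock at all.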
Regulatory Fragmentation and Institutional Weakness
Since the TSJ's 2011 rulings, regulatory action has been piecemeal rather than systematic, taken case by case as incidents arise. The National Cybersecurity Council, established in 2024, supports the development of cybersecurity infrastructure but has no defined powers to enforce privacy.
This creates a fragmented institutional landscape where:
- Authorities lack clear jurisdiction over privacy violations
- Companies face minimal compliance guidance
- Individuals struggle to understand or enforce their rights
The Movistar and Cashea incidents highlight how this fragmentation translates into practical impunity following major data exposures.
What’s Next? A Legal Opportunity for Reform
The repercussions of insufficient data protection safeguards extend beyond the damage to any one person's privacy:
- Loss of trust in both financial and digital services
- Heightened likelihood of financial fraud and crime
- Reluctance of foreign companies to do business with Venezuela's platforms
- Long-term damage to the reputation of domestic companies
- Possible exclusion from cross-border data flows, as other jurisdictions restrict transfers to countries without adequate privacy enforcement
In a digital economy that increasingly requires robust data protection to function successfully, a lack of action to create strong protections will cause a significant economic impact.
Conclusion
Major data breaches such as those at Movistar in 2025 and Cashea in 2026 show that constitutional privacy rights alone are insufficient without an enforceable legal framework. Privacy must move from principle to law, backed by institutions, procedures, and accountability, so that users' privacy is actually protected.
In today's interconnected global digital economy, the absence of regulation leaves people vulnerable. If Venezuela hopes to protect its citizens, create an innovation-friendly environment, and compete in the global market, it must implement comprehensive data privacy reforms as soon as possible.
REFERENCES
- https://iapp.org/news/a/venezuela-data-breach-highlights-scattered-privacy-regulation
- https://www.apolocybersecurity.com/en/blog-posts/ciberataque-a-movistar-que-ha-pasado-a-quien-afecta-y-como-proteger-tus-datos
- https://darknetsearch.com/knowledge/news/en/cashea-app-data-leak-79m-records-exposed-in-venezuela/
- https://www.binance.com/en-IN/square/post/294369884695410

Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to "create compelling online scams" over the festive period. Meanwhile, 31% believe it will be more difficult to determine whether messages from merchants or delivery services are genuine, and 57% believe phishing emails and texts will become more credible. The study, conducted in September 2023 across the United States, Australia, India, the United Kingdom, France, Germany, and Japan, gathered 7,100 responses. Worries about AI may lead some people to cut back on online shopping; 19% of those surveyed said they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
Cybercriminals are expected to exploit powerful AI capabilities on social media in 2024. These platforms become goldmines for scammers because AI tools make it easy to create realistic fake images, videos, and audio. Expect cybercriminals to exploit the identities of influencers and celebrities.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the turn cyberbullying could take in 2024 through deepfake technology. These cutting-edge tools are freely accessible to young people, who can use them to produce eerily convincing synthetic content that compromises victims' privacy, identity, and wellbeing.

Beyond spreading false information, cyberbullies can alter public photos and re-share manipulated versions, compounding the harm done to children and their families. The report warns that these increasingly severe deceptive images and posts can cause serious, long-lasting damage to victims' identity, privacy, and overall happiness.
Evolution of GenAI Fraud in 2023
Persistent frauds and fake emails are not going away. People have generally become adept at recognizing the most widely used ones, but users should be far more cautious as scams become more precise, for instance by using AI-generated audio that sounds like a loved one's distress call, or content highly personalized to the target. The rise of generative AI adds a new wrinkle, as attackers can use these systems to refine their attacks:
- Writing messages more skillfully to deceive consumers into sending sensitive information, clicking a link, or uploading a file.
- Recreating emails and business websites as realistically as possible, so as not to arouse suspicion in the minds of the targets.
- Cloning people's faces and voices to create deepfakes of audio or images that are undetectable to the target audience, a capability with serious implications for schemes like CEO fraud.
- Holding conversations and responding to victims efficiently, now that generative AIs are capable of sustained dialogue.
- Conducting psychological manipulation campaigns faster, more cheaply, and with greater sophistication, making them harder to detect. Generative AI already on the market can write text, clone voices, generate images, and program websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is also making cybercriminals more dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will face more frequent attacks, particularly audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from major events, and the global buzz around the 2024 Olympic Games will make it an ideal time for scams. Con artists will exploit fans' excitement, targeting those eager to purchase tickets, arrange travel, obtain exclusive content, and enter giveaways. Vigilance is essential during this high-profile event to protect personal records and financial data.
McAfee's Own Bot to Help Users Screen Potential Scam Messages
McAfee is developing exactly this kind of technology. It is important to emphasize that solving the problem is a continuous process: bad actors also use AI, and one trick scammers can pull is to use the ruses consumers fall for as training data for more advanced algorithms. Con artists can deploy these tools, test them on large user bases, and improve them over time.
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven online scams targeting them around the holidays. Social media poses a growing threat to users' privacy, with hackers expected to exploit AI capabilities and deepfake technology to exacerbate harassment in 2024. Generative AI enables complex fraud by mimicking voices and faces for intricate schemes. Charity fraud is expected to rise, and the 2024 Olympic Games could serve as a haven for scammers. McAfee's development of a screening bot highlights the ongoing struggle against evolving AI threats, and the need for continuous adaptation and greater user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/#:~:text=McAfee%20Corp.%2C%20a%20global%20leader,and%20increasingly%20sophisticated%20cyber%20scams.
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/