#FactCheck: False Claim About Indian Flag Hoisted in Balochistan Amid Operation Sindoor
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical situation in South Asia. Our research reveals that the video has been misrepresented: it actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
Our research into the viral video makes it clear that the claim is misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. This search led us to an earlier social media post that clearly shows the event taking place in Surat, Gujarat, not Balochistan.
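Reverse image search itself runs through external services, but the frame-matching idea behind it can be illustrated in code. The Python sketch below is a hypothetical illustration, not our actual workflow: it samples frames from a clip and compares their perceptual hashes against a candidate source photo (the filenames and distance threshold are assumptions).

```python
# A minimal, hypothetical sketch of frame matching with perceptual hashes.
# Filenames and the distance threshold are illustrative assumptions, not
# the actual files or tools used in this fact check.
import cv2                # pip install opencv-python
import imagehash          # pip install ImageHash
from PIL import Image

def frame_hashes(video_path: str, every_n: int = 30):
    """Yield a perceptual hash for every n-th frame of the video."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            yield imagehash.phash(Image.fromarray(rgb))
        idx += 1
    cap.release()

reference = imagehash.phash(Image.open("surat_event_photo.jpg"))
# A Hamming distance of 10 or less on a 64-bit pHash is a common
# rule of thumb for "visually the same scene".
matches = [h for h in frame_hashes("viral_clip.mp4") if h - reference <= 10]
print(f"{len(matches)} sampled frames closely match the reference photo")
```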

In the original clip, a music band performs in the middle of a crowd while people hold Indian flags and enjoy the event. The setting, the language on signboards, and the festive atmosphere all confirm that this is a celebratory event in India. A second photo of the same event, taken from a different angle, further corroborates this finding.

However, some individuals intent on spreading false information shared this video out of context, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan. By attaching a fabricated narrative, they turned a local celebration into a political stunt, a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from a Tiranga Yatra held in Surat, Gujarat, India. Social media users are urged to verify the authenticity and source of content before sharing it, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In a world where social media shapes public perception and AI-generated content blurs the line between fact and fiction, mis/disinformation has become a national cybersecurity threat. Today, disinformation campaigns are engineered for effect, driving political manipulation, interference in public health, financial fraud, and even communal violence. India, with its 900+ million internet users, is especially susceptible to this online distortion. The advent of deepfakes, AI-generated text, and hyper-personalised propaganda has made disinformation more plausible and harder to identify than ever.
What is Misinformation?
Misinformation is false or inaccurate information provided without intent to deceive. Disinformation, on the other hand, is content intentionally designed to mislead and created and disseminated to harm or manipulate. Both are responsible for what experts have termed an "infodemic", overwhelming people with a deluge of false information that hinders their ability to make decisions.
Examples of impactful mis/disinformation are:
- COVID-19 vaccine conspiracy theories (e.g., infertility or microchips)
- Election-related false news (e.g., so-called EVM hacking claims)
- Social disinformation (e.g., manipulated videos of riots)
- Financial scams (e.g., bogus UPI cashbacks or RBI refund plans)
How Misinformation Spreads
Misinformation goes viral because of both technology design and human psychology. Social media platforms such as Facebook, X (formerly Twitter), Instagram, and WhatsApp are designed to amplify messages that elicit strong emotional reactions, which are usually polarising, sensationalist, or fear-mongering posts. As a result, falsehoods attract far more attention and engagement than authentic facts, prioritising virality over truth.
Another major consideration is the misuse of generative AI and deep fakes. Applications like ChatGPT, Midjourney, and ElevenLabs can be used to generate highly convincing fake news stories, audio recordings, or videos imitating public figures. These synthetic media assets are increasingly being misused by bad actors for political impersonation, propagating fabricated news reports, and even carrying out voice-based scams.
Added to this danger are coordinated disinformation campaigns, commonly run by foreign or domestic actors with specific political or ideological objectives. These campaigns employ social media bot networks, deceptive hashtags, and fabricated images to sway public opinion, especially during politically sensitive events such as elections, protests, or foreign conflicts. They are usually automated through bots and meme-driven propaganda, which makes them scalable and difficult to trace.
Why Misinformation is Dangerous
Mis/disinformation is a significant threat to democratic stability, public health, and personal security. Perhaps its most pernicious effect is the erosion of public trust: left unchecked, it destroys confidence in core institutions such as the media, the judiciary, and the electoral system. This erosion of public trust can destabilise democracies and heighten political polarisation.
In India, false information has had terrible real-world consequences, especially in inciting violence. Misleading WhatsApp messages about child kidnappers have resulted in mob lynchings in rural areas, manipulated religious videos have sparked communal riots, and false terror warnings have created public panic.
The COVID-19 pandemic also showed how misinformation can be lethal. False claims about vaccine safety, miracle cures, and the origin of the virus led to mass vaccine hesitancy, the use of dangerous treatments, and even avoidable deaths.
Aside from health and safety, mis/disinformation has also been used in financial scams. Cybercriminals exploit public fear and curiosity by promoting false investment opportunities, phishing URLs, and impersonation scams. Victims are tricked into sharing confidential information or transferring money through seemingly official government or bank websites, leading to losses in crypto Ponzi schemes, UPI frauds, and more.
India’s Response to Misinformation
- PIB Fact Check Unit
The Press Information Bureau (PIB) operates a fact-checking service to debunk viral false information, particularly about government policies. In three years, the unit has identified more than 1,500 instances of misinformation across media.
- Indian Cybercrime Coordination Centre (I4C)
Working under the Ministry of Home Affairs (MHA), I4C collaborates with social media platforms to identify the sources of viral misinformation. Citizens can report misleading content through the 1930 helpline or the cybercrime.gov.in portal.
- IT Rules (the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, updated as on 06.04.2023)
The Information Technology (Intermediary Guidelines) Rules were updated to empower the government on the following aspects:
- Removal of unlawful content
- Platform accountability
- Detection Tools
Certain detection tools act as shields, assisting fact-checkers and enforcement bodies to do the following (a toy sketch of the network-tracking idea follows this list):
- Identify synthetic voice and video scams through technical measures.
- Track misinformation networks.
- Label manipulated media in real-time.
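To make the network-tracking idea concrete, here is a toy Python sketch under stated assumptions (synthetic posts, made-up account names, a 5-minute co-posting window): it links accounts that push identical hashtag sets at nearly the same time and surfaces dense clusters for human review.

```python
# A toy sketch of misinformation-network tracking: link accounts that post
# identical hashtag sets within a short window, then surface dense clusters.
# All account names, hashtags, and the 5-minute window are made-up assumptions.
from collections import defaultdict
from itertools import combinations

import networkx as nx  # pip install networkx

posts = [
    {"user": "acct_1", "hashtags": frozenset({"#tagA", "#tagB"}), "minute": 0},
    {"user": "acct_2", "hashtags": frozenset({"#tagA", "#tagB"}), "minute": 1},
    {"user": "acct_3", "hashtags": frozenset({"#tagA", "#tagB"}), "minute": 1},
    {"user": "acct_4", "hashtags": frozenset({"#tagC"}), "minute": 5},
]

# Bucket posts by (hashtag set, 5-minute window) and connect co-posters.
graph = nx.Graph()
buckets = defaultdict(list)
for post in posts:
    buckets[(post["hashtags"], post["minute"] // 5)].append(post["user"])
for users in buckets.values():
    graph.add_edges_from(combinations(set(users), 2))

# Connected clusters of 3+ accounts are candidates for manual botnet review.
for cluster in nx.connected_components(graph):
    if len(cluster) >= 3:
        print("possible coordinated cluster:", sorted(cluster))
```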
CyberPeace View: Solutions for a Misinformation-Resilient Bharat
- Scale Digital Literacy
"Think Before You Share" programs for rural schools to teach students to check sources, identify clickbait, and not reshare fake news.
- Platform Accountability
Technology platforms need to:
- Flag manipulated media.
- Offer algorithmic transparency.
- Mark AI-created media.
- Provide localised fact-checking across diverse Indian languages.
- Community-Led Verification
Establish WhatsApp and Telegram "Fact Check Hubs" headed by expert organisations, industry experts, journalists, and digital volunteers who can flag fake content at the grassroots level.
- Legal Framework for Deepfakes
Formulate targeted legislation under the Bharatiya Nyaya Sanhita (BNS) and other relevant laws to criminalise the malicious use of deepfakes and synthetic media for:
- Electoral manipulation.
- Defamation.
- Financial scams.
- AI Counter-Misinformation Infrastructure
Invest in public-sector AI models trained specifically to identify the following (a toy burst-detection sketch follows this list):
- Coordinated disinformation patterns.
- Botnet-driven hashtag campaigns.
- Real-time viral fake news bursts.
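As noted above, here is a toy Python sketch of burst detection under stated assumptions (a synthetic share-count series, an 8-minute baseline window, a z-score threshold of 3): it flags minutes where a claim's spread suddenly outpaces its recent baseline.

```python
# A toy sketch of viral-burst detection: flag minutes where the share count
# of a claim jumps far above its trailing baseline. The counts, window size,
# and z-score threshold are all illustrative assumptions.
import statistics

shares_per_minute = [4, 5, 3, 6, 5, 4, 7, 5, 6, 48, 95, 130]

WINDOW = 8   # trailing minutes used as the baseline
for t in range(WINDOW, len(shares_per_minute)):
    baseline = shares_per_minute[t - WINDOW:t]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    z = (shares_per_minute[t] - mean) / stdev
    if z > 3:  # more than 3 standard deviations above the baseline
        print(f"minute {t}: {shares_per_minute[t]} shares (z={z:.1f}) - possible burst")
```

A production system would run such checks continuously over streaming data and combine them with content and network signals before raising an alert.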
Conclusion
Mis/disinformation is more than just a content issue; it is a public health, cybersecurity, and democratic stability challenge. As India becomes increasingly digitally empowered, building a secure, informed, and resilient information ecosystem is no longer a choice but an imperative. Fighting misinformation demands a whole-of-society effort combining AI innovation, public education, regulatory reform, and platform responsibility. The danger is real, but so is the opportunity to guide the world toward a fact-first, trust-based digital age. It is time to act.
References
- https://www.pib.gov.in/factcheck.aspx
- https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf
- https://www.cyberpeace.org
- https://www.bbc.com/news/topics/cezwr3d2085t
- https://www.logically.ai
- https://www.altnews.in

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-synced videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, and Murf, which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the sector most targeted by deepfake-enabled fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: AI allows cybercriminals to deploy malware that adapts in real time, carrying out phishing attacks and inundating targets with greater speed and variation. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can be designed to overwhelm with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will have to adopt AI-based cybersecurity for detection and response, as generative AI becomes increasingly essential in combating hackers and breaches.
- Enhancing Multi-Factor Authentication (MFA): With improving image- and voice-generation/manipulation technologies, enhanced authentication measures such as token-based or other hardware-based authentication, abnormal-behaviour detection, multi-device push notifications, and geolocation verification can strengthen prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented; a minimal sketch of one such measure follows this list.
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and other emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
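As a minimal sketch of one authentication measure named in the list above, the following Python example uses the pyotp library to verify a time-based one-time password (TOTP), a second factor that a cloned voice or face alone cannot supply (the account name and issuer are illustrative).

```python
# A minimal sketch of one hardening measure from the list above: time-based
# one-time passwords (TOTP) as a second factor that a cloned voice or face
# cannot supply on its own. Uses the pyotp library; names are illustrative.
import pyotp  # pip install pyotp

secret = pyotp.random_base32()       # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app during enrolment.
print(totp.provisioning_uri(name="user@bank.example", issuer_name="DemoBank"))

code = totp.now()                    # what the user's authenticator displays
assert totp.verify(code)             # server-side check; 30-second time step
print("second factor accepted")
```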
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training, to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images had appeared on the dark web. The UK’s National Crime Agency records around 800 arrests a month for online threats to children and estimates that 840,000 adults are potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery; it will form part of the Crime and Policing Bill when it comes before Parliament in the coming weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulation, requiring developers to integrate safeguards against misuse, while licensing frameworks may enforce ethical AI standards and restrict access to synthetic-media tools. Given the cross-border nature of AI-generated abuse, enforcement will necessitate global cooperation among platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report