#FactCheck: False Claim About Indian Flag Hoisted in Balochistan amid the success of Operation Sindoor
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical scenario in South Asia. Our research reveals that the video is misrepresented and actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
After researching the viral video, it became clear that the claim was misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. This search led us to an earlier social media post which clearly shows the event taking place in Surat, Gujarat, not Balochistan.
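The workflow described above, sampling still frames from a video and feeding each into a reverse image search, can be partly automated. Below is a minimal, illustrative Python sketch; the function name and the two-second sampling interval are our own assumptions, not part of the original fact-check. It computes which frame indices to screenshot, and the saved frames can then be uploaded to any reverse image search service.

```python
def sample_frame_indices(total_frames: int, fps: float, every_seconds: float = 2.0) -> list[int]:
    """Return evenly spaced frame indices to screenshot for a reverse image search.

    The 2-second default interval is an assumption; short clips may need
    denser sampling to catch distinctive signboards or banners.
    """
    step = max(1, round(fps * every_seconds))
    return list(range(0, total_frames, step))

# Example: a 10-second clip at 30 fps yields five screenshots.
print(sample_frame_indices(300, 30.0))  # [0, 60, 120, 180, 240]
```

With a library such as OpenCV, each returned index can be seeked to and saved as an image before the manual reverse-search step.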

In the original clip, a music band performs in the middle of a crowd, with people holding Indian flags and enjoying the event. The setting, the language on the signboards, and the festive atmosphere all confirm that this is a patriotic celebration in India. A second photo we found, taken from a different angle, further corroborates this.

However, some individuals shared this video out of context with the intention of spreading false information, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan. This fake narrative turned a local celebration into a political stunt, a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from a Tiranga Yatra held in Surat, Gujarat, India. Social media users are urged to verify the authenticity and source of content before sharing, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. This promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the public indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The stakes are considerable: the market size of AI in India alone is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards to manage the potential threats posed by manipulative chatbots in the context of the 2024 general elections.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. Transparency ensures that people know they are interacting with an automated process.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
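The opt-in and opt-out controls described above can be made concrete. The following Python sketch is purely illustrative (the class and method names are our own assumptions): it shows a default-deny consent registry that a chatbot operator would consult before collecting data for any given purpose.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str   # e.g. "advertising", "political_profiling" (illustrative labels)
    granted: bool = False
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentRegistry:
    """Default-deny consent store: no record means no consent was ever given."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def opt_in(self, user_id: str, purpose: str) -> None:
        self._records[(user_id, purpose)] = ConsentRecord(user_id, purpose, granted=True)

    def opt_out(self, user_id: str, purpose: str) -> None:
        self._records[(user_id, purpose)] = ConsentRecord(user_id, purpose, granted=False)

    def may_collect(self, user_id: str, purpose: str) -> bool:
        # Collection is allowed only if an explicit, still-valid opt-in exists.
        rec = self._records.get((user_id, purpose))
        return rec is not None and rec.granted
```

The design choice worth noting is the default-deny stance: absence of a record is treated the same as an explicit opt-out, which is what "informed consent before collection" requires.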
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collection. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
Fake identities should be avoided at all costs. Chatbots should not impersonate people or political figures, as doing so can result in manipulation and false information.
Impartiality is also essential. Bots should not advocate for or take part in political activities that give preference to one political party over another; impartiality and equity are crucial in every interaction.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn't have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MEITY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has presented principles for responsible AI, which include equality, inclusivity, safety, privacy, transparency, accountability, dependability and the protection of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

Introduction
In a major policy shift aimed at synchronizing India's fight against cyber-enabled financial crimes, the government has taken a landmark step by bringing the Indian Cyber Crime Coordination Centre (I4C) under the ambit of the Prevention of Money Laundering Act (PMLA). In a notification published in the official gazette on 25th April 2025, the Department of Revenue, Ministry of Finance, included the I4C under Section 66 of the Prevention of Money Laundering Act, 2002 (hereinafter referred to as “PMLA”). The step is a significant attempt to resolve the asynchronous approach of the various government agencies responsible for preventing cyber and financial crimes (the Enforcement Directorate (ED), State Police, CBI, CERT-In and RBI), each of which often possesses key information the others lack. As it is aptly put, "When criminals sprint and the administration strolls, the finish line is lost.”
The gazetted notification dated 25th April, 2025, read as follows:
“In exercise of the powers conferred by clause (ii) of sub-section (1) of section 66 of the Prevention of Money-laundering Act, 2002 (15 of 2003), the Central Government, on being satisfied that it is necessary in the public interest to do so, hereby makes the following further amendment in the notification of the Government of India, in the Ministry of Finance, Department of Revenue, published in the Gazette of India, Extraordinary, Part II, section 3, sub-section (i) vide number G.S.R. 381(E), dated the 27th June, 2006, namely:- In the said notification, after serial number (26) and the entry relating thereto, the following serial number and entry shall be inserted, namely:— “(27) Indian Cyber Crime Coordination Centre (I4C).”.
Outrunning Crime: Strengthening Enforcement through Rapid Coordination
The use of cyberspace to commit sophisticated financial and white-collar crimes is a criminal crossover that no one was looking forward to. The disenchanted reality of today’s world is that the internet is used for as much bad as it is for good, and it has now entered the financial domain, facilitating various financial crimes. Money laundering is a financial crime covering all processes or activities connected with the concealment, possession, acquisition, or use of proceeds of crime while projecting them as untainted money. The offence involves an intricate web and trail of financial transactions that is hard to track at the best of times; with the advent of the internet, the transactions are often digital, and the absence of crucial information hampers the evidentiary chain. With this new step, the Enforcement Directorate (ED) can now make headway in investigations through information exchange with the I4C under the PMLA, removing the obstacles that existed before this notification.
Impact
The Finance Ministry's decision has to be seen against the backdrop of a rapid global increase in sophisticated financial crimes. By formally empowering the I4C to share and receive information with the Enforcement Directorate under the PMLA, the government acknowledges the blurred lines between conventional financial crime and cybercrime. It strengthens India’s financial surveillance at a time when money laundering and cyber fraud are increasingly two sides of the same coin. The impact can be assessed from the following capabilities the decision enables:
- Quicker detection of money laundering conducted online
- Money trail tracking in real time across online platforms
- Rapid freeze of cryptocurrency wallets or assets obtained fraudulently
Another important aspect of this decision is that it serves as a signal that India is finally equipping itself and treating cyber-enabled financial crimes with the gravitas that is the need of the hour. This decision creates a two-way intelligence flow between cybercrime detection units and financial enforcement agencies.
Conclusion
To counter the fragmented approach to handling cyber-enabled white-collar crimes and money laundering, the Indian government has fortified its legal and enforcement framework by extending the PMLA’s reach to the Indian Cyber Crime Coordination Centre (I4C). The deliberations that led up to this notification are crucial if India's cybercrime framework is to be on par with those of other countries. Although India has come a long way in designing a robust cybercrime intelligence structure, that structure will remain ineffective as long as its agencies work in isolation. The current decision should therefore be only the beginning of a more comprehensive policy evolution. The government must go further and devise a mechanism to track digital footprints, incorporating a real-time red-flag mechanism for digital transactions suspected of being linked to laundering or fraud.
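A real-time red-flag mechanism of the kind suggested above could, in its simplest rule-based form, look like the Python sketch below. The transaction fields, thresholds, and rules here are purely illustrative assumptions of ours; actual anti-money-laundering rules would come from FIU-IND and RBI guidance, not from this sketch.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    txn_id: str
    amount_inr: float
    channel: str              # e.g. "upi", "crypto", "netbanking" (illustrative)
    dest_is_new_account: bool

# Illustrative threshold only; real limits are set by regulators.
HIGH_VALUE_INR = 1_000_000

def red_flags(txn: Txn) -> list[str]:
    """Return the list of red flags raised by a single transaction."""
    flags = []
    if txn.amount_inr >= HIGH_VALUE_INR:
        flags.append("high_value")
    if txn.channel == "crypto":
        flags.append("crypto_channel")
    if txn.dest_is_new_account and txn.amount_inr >= HIGH_VALUE_INR / 10:
        flags.append("large_transfer_to_new_account")
    return flags
```

In practice such rules would run inside payment pipelines, with flagged transactions routed to analysts or, as the notification now enables, shared between the I4C and the ED.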

Introduction
Data protection has been a critical aspect of advocacy and governance across the world. Data fuels our cyber-ecosystem and underpins the era of emerging technologies; all industries and sectors now depend on user data. Governments across the world have been deliberating on how to address the legality of data protection and privacy. The Indian government has seen various draft bills and policies focusing on data protection over the years, and the contemporary bill is the Digital Personal Data Protection Bill, 2023, which was tabled in the Lok Sabha (the Lower House of Parliament) on 3 August 2023 for discussion and parliamentary assent.
What is DPDP, 2023?
The goal of the comprehensive Digital Personal Data Protection Bill, 2023 is to establish a framework for the protection of personal data in India. The bill acknowledges the significance of protecting personal data and seeks to strike a balance between the necessity of processing personal data for legitimate purposes and the right of individuals to have their personal data protected. The bill defines a number of crucial expressions and concepts associated with the protection of personal data, including “data fiduciary,” “data principal,” and “sensitive personal data.” It also sets out the duties of data fiduciaries, including the need to establish suitable security measures to preserve personal data and to secure data principals’ consent before processing their personal information. The bill also creates the Data Protection Board of India, which would implement its requirements and ensure data fiduciaries’ compliance. The Board will have the authority to look into grievances, give directions, and impose sanctions for non-compliance.
Key Features of the Bill
The bill tabled at the parliament has the following key features:
- The 2023 bill imposes reasonable obligations on data fiduciaries and data processors to safeguard digital personal data.
- Under the 2023 bill, a new Data Protection Board is established, which will ensure compliance, remedies and penalties.
- Under the new bill, the Board has been entrusted with powers equivalent to those of a civil court, such as the power to take cognisance of personal data breaches, investigate complaints, and impose penalties. Additionally, the Board can issue directions to ensure compliance with the act.
- The 2023 bill also secures more rights for individuals and establishes a balance between user protection and growing innovation.
- The bill creates a transparent and accountable data governance framework by giving more rights to individuals.
- The bill incorporates business-friendly provisions by removing criminal penalties for non-compliance and facilitating international data transfers.
- The new 2023 bill balances out fundamental privacy rights and puts reasonable limitations on those rights.
- The new Data Protection Board will carefully examine instances of non-compliance and impose penalties on non-compliers.
- The bill does not expressly clarify the compensation to be granted to the Data Principal in case of a data breach.
- Under the 2023 bill, deemed consent reappears in a new form as ‘legitimate uses’, covering conditions relating to the sovereignty and integrity of India.
- There is an introduction of a negative list, which restricts cross-border data transfers.
The bill also makes special provisions for the processing of children’s personal data, acknowledging the significance of protecting children’s privacy, and highlights the rights of data principals, including the right to access their personal information, the right to have incorrect information corrected, and the right to be forgotten.
Drive4CyberPeace
A campaign was undertaken by CyberPeace to gain a critical understanding of what people understand about Data privacy and protection in India. The 4-month long campaign led to a pan-India interaction with netizens from different areas and backgrounds. The thoughts and opinions of the netizens were understood and collated in the form of a whitepaper which was, in turn, presented to Parliamentarians and government officials. The whitepaper laid the foundation of the recommendations submitted to the Ministry of Electronics and Information Technology as part of the stakeholder consultation.
Conclusion
Overall, the Digital Personal Data Protection Bill of 2023 is an important step towards safeguarding Indian citizens’ privacy and personal data. It creates a regulatory agency to guarantee compliance and enforcement and offers a thorough framework for data protection. The law includes special measures for the protection of sensitive personal data and the personal data of children and acknowledges the significance of striking a balance between the right to privacy and the necessity of data processing.