#FactCheck: False Social Media Claim That Six Army Personnel Were Killed in a Retaliatory Attack by ULFA in Myanmar
Executive Summary:
A widely circulated social media claim alleges that six Assam Rifles soldiers were killed in a retaliatory attack by the Myanmar-based United Liberation Front of Asom (Independent), or ULFA (I). The post included a photograph of coffins draped in Indian flags, presented as those of the soldiers killed in the alleged incident. Although the post was widely shared, our fact-check confirms that the photograph is old and unrelated, and that no credible report indicates any such incident took place. The claim is therefore false and misleading.

Claim:
Social media users claimed that the banned militant outfit ULFA (I) killed six Assam Rifles personnel in retaliation for an alleged drone and missile strike by Indian forces on its camp in Myanmar, with captions such as "Six Indian Army Assam Rifles soldiers have reportedly been killed in a retaliatory attack by the Myanmar-based ULFA group." The claim was accompanied by a viral image of coffins of Indian soldiers, which added emotional weight and perceived authenticity to the narrative.

Fact Check:
We began our research with a reverse image search of the photograph of flag-draped coffins shared with the viral claim and traced it to August 2013. A report in The Washington Post confirms that the image is from a past incident in which five Indian Army soldiers were killed by Pakistani intruders in Poonch, Jammu and Kashmir, on August 6, 2013.

Further, neither The Hindu nor India Today has reported the death of six Assam Rifles personnel. ULFA (I) did, however, issue a statement dated July 13, 2025, claiming that three of its leaders had been killed in a drone strike by Indian forces.

A Shutterstock listing of the same photograph further confirms that the coffin image is old and does not depict any recent action involving the United Liberation Front of Asom (ULFA).

The Indian Army denied the claim, with Defence PRO Lt Col Mahendra Rawat telling reporters there were "no inputs" of such an operation. Assam Chief Minister Himanta Biswa Sarma likewise denied that any cross-border military action had taken place. The viral claim is therefore false and misleading.

Conclusion:
The assertion that ULFA (I) killed six Assam Rifles soldiers in a retaliatory strike is incorrect. The viral image used in these posts dates to a 2013 incident in Jammu & Kashmir and has no connection to the present claim. There are no verified reports of any such killings, and both the Indian Army and the Assam government have categorically denied conducting or knowing of any cross-border operation. This false narrative serves only to incite fear and spread misinformation, and it should be ignored.
- Claim: Report confirms the death of six Assam Rifles personnel in an ULFA-led attack.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
The Central Board of Secondary Education (CBSE) has issued a warning to students about fake social media accounts spreading false information in its name. The board has cautioned students against trusting information from these accounts and has released a list of 30 fake handles. It has expressed concern that these handles mislead students and parents by spreading false information under the CBSE's name and logo, and has clarified that it is not responsible for information disseminated from these fake accounts.
The Central Board of Secondary Education (CBSE), a venerable institution in the realm of Indian education, has found itself ensnared in the web of cyber duplicity. Impersonation attacks, a sinister facet of cybercrime, have burgeoned, prompting the Board to adopt a vigilant stance against the proliferation of counterfeit social media handles that masquerade under its esteemed name and emblem.
The CBSE has revealed a list of approximately 30 spurious handles that have been sowing seeds of disinformation across the social media landscape. These digital doppelgängers, cloaked in the Board's identity, have been identified and exposed. The Board's official beacon in this murky sea of falsehoods is the verified handle '@cbseindia29', a lighthouse guiding the public to the shores of authentic information.
This unfolding narrative signifies the Board's unwavering commitment to tackle the scourge of misinformation and to fortify the bulwarks safeguarding the sanctity of its official communications. By spotlighting the rampant growth of fake social media personas, the CBSE endeavors to shield the public from the detrimental effects of misleading information and to preserve the trust vested in its official channels.
CBSE Impersonator Accounts
The list of identified malefactors, parading under the CBSE banner, serves as a stark admonition to the public to exercise discernment while navigating the treacherous waters of social media platforms. The CBSE has initiated appropriate legal manoeuvres against these unauthorised entities to stymie their dissemination of fallacious narratives.
The Board has previously unfurled comprehensive details concerning the impending board examinations for both Class 10 and Class 12 in the year 2024. These academic assessments are slated to commence from February 15 to April 2, 2024, with a uniform start time of 10:30 AM (IST) across all designated dates.
The CBSE has made it unequivocally clear that there are nefarious entities lurking in the shadows of social media, masquerading in the guise of the CBSE. It has implored students and the general public not to be ensnared by the siren songs emanating from these fraudulent accounts and has also unfurled a list of these imposters. The Board's warning is a beacon of caution, illuminating the path for students as they navigate the digital expanse with the impending commencement of the CBSE Class X and XII exams.
Sounding The Alarm
The Central Board of Secondary Education (CBSE) has sounded the alarm, issuing an advisory to schools, students, and their guardians about the existence of fake social media platform handles that brandish the board’s logo and mislead the academic community. The board has identified about 30 such accounts on the microblogging site 'X' (formerly known as Twitter) that misuse the CBSE logo and acronym, sowing confusion and disarray.
The board is in the process of taking appropriate action against these deceptive entities. CBSE has also stated that it bears no responsibility for any information disseminated by any other source that unlawfully appropriates its name and logo on social media platforms.
Sources reveal that these impostors post false information on various updates, including admissions and exam schedules. After receiving complaints about such accounts on 'X', the CBSE issued the advisory and has initiated action against those operating these accounts, sources said.
The Brute Nature of Impersonation
In the contemporary digital epoch, cybersecurity has ascended to a position of critical importance. It is the bulwark that ensures the sanctity of computer networks is maintained and that computer systems are not marked as prey by cyber predators. Cyberattacks are insidious stratagems executed with the intent of expropriating, manipulating, or annihilating authenticated user or organizational data. It is imperative that cyberattacks be mitigated at their roots so that users and organizations utilizing internet services can navigate the digital domain with a sense of safety and security. Knowledge about cyberattacks thus plays a pivotal role in educating cyber users about the diverse types of cyber threats and the preventive measures to counteract them.
Impersonation Attacks are a vicious form of cyberattack, characterised by the malicious intent to extract confidential information. These attacks revolve around a process where cyber attackers eschew the use of malware or bots to perpetrate their crimes, instead wielding the potent tactic of social engineering. The attacker meticulously researches and harvests information about the legitimate user through platforms such as social media and then exploits this information to impersonate or masquerade as the original, legitimate user.
The threats posed by Impersonation Attacks are particularly insidious because they demand immediate action, pressuring the victim to act before distinguishing the authenticated user from the impersonator. Such attacks are perilous because the impersonated user holds rights to private information. They are typically executed by exploiting a resemblance to the original user's identity, such as an email ID that differs only minutely from the legitimate one, which distinguishes this technique from ordinary phishing. The addresses are so similar that, without close attention, the differences are easily overlooked; moreover, they appear correct at a glance, as they generally contain no spelling errors.
Strategies to Prevent
To prevent Impersonation Attacks, the following strategies can be employed:
- Proper security mechanisms help identify malicious emails and thereby filter spamming email addresses on a regular basis.
- Double-checking sensitive information is crucial, especially when important data or funds need to be transferred. It is vital to ensure that the data is transferred to a legitimate user by cross-verifying the email address.
- Ensuring organizational-level security is paramount. Organizations should have specific domain names assigned to them, which can help employees and users distinguish their identity from that of cyber attackers.
- Protection of User Identity is essential. Employees must not publicly share their private identities, which can be exploited by attackers to impersonate their presence within the organization.
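The strategy of filtering lookalike sender addresses can be sketched in code. The following is a minimal, illustrative example, not part of any specific mail-security product: the trusted-domain list, the distance threshold, and the function names are all assumptions made for demonstration.

```python
# Sketch: flagging lookalike sender domains with edit distance.
# TRUSTED_DOMAINS and the max_distance threshold are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"example.com"}  # hypothetical organisation domain

def is_suspicious(sender: str, max_distance: int = 2) -> bool:
    """Flag senders whose domain is close to, but not exactly, a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(domain, t) <= max_distance for t in TRUSTED_DOMAINS)

print(is_suspicious("hr@examp1e.com"))  # lookalike: digit '1' replaces 'l'
print(is_suspicious("hr@example.com"))  # exact trusted domain
```

A production filter would combine such distance checks with authentication signals like SPF, DKIM, and DMARC rather than rely on string similarity alone.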
Conclusion
The CBSE's struggle against the masquerade of misinformation is a reminder of the vigilance required to safeguard the legitimacy of our digital interactions. As we navigate the complex and uncharted terrain of the internet, let us arm ourselves with the knowledge and discernment necessary to unmask these digital charlatans and uphold the sanctity of truth.
References
- https://timesofindia.indiatimes.com/city/ahmedabad/cbse-warns-against-misuse-of-its-name-by-fake-social-media-handles/articleshow/107644422.cms
- https://www.timesnownews.com/education/cbse-releases-list-of-fake-social-media-handles-asks-not-to-follow-article-107632266
- https://www.etvbharat.com/en/!bharat/cbse-public-advisory-enn24021205856

Introduction
The ongoing debate on whether AI scaling has hit a wall has been reignited by the underwhelming response to OpenAI's GPT-5. AI scaling laws, which describe how machine learning models perform better with increased training data, model parameters, and computational resources, have guided the rapid progress of Large Language Models (LLMs) so far. But many AI researchers suggest that further improvements in LLMs will require computational costs that are orders of magnitude higher, which the returns may not justify. The question, then, is whether scaling remains a viable path or whether the field must explore new approaches. This is not just a tech issue but a profound innovation challenge for countries like India, charting their own AI course.
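The diminishing returns at the heart of this debate follow directly from the power-law shape of scaling curves. A minimal numerical sketch, using a toy loss curve L(N) = a * N^(-alpha) with made-up constants (not fitted values from any published scaling-law study), shows why each doubling of model size buys a smaller absolute improvement:

```python
# Sketch: diminishing returns under a power-law scaling curve.
# The constants a and alpha are illustrative, not empirically fitted.

def loss(n_params: float, a: float = 10.0, alpha: float = 0.076) -> float:
    """Toy power-law loss curve L(N) = a * N^(-alpha)."""
    return a * n_params ** -alpha

sizes = [1e9 * 2**k for k in range(5)]  # 1B, 2B, 4B, 8B, 16B parameters
losses = [loss(n) for n in sizes]
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

for n, g in zip(sizes[1:], gains):
    print(f"doubling to {n:.0e} params improves loss by {g:.4f}")
```

Because the curve is a power law, each doubling shrinks the loss by the same *factor*, so the absolute gain per doubling decays geometrically while the compute bill doubles, which is the economic crux of the scaling-wall argument.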
The Scaling Wall: Gaps and Innovation Opportunities
Escalating costs, data scarcity, and diminishing gains mean that simply building larger AI models may no longer guarantee breakthroughs. In such a scenario, LLM developers will have to refine new approaches to training these models, for example, by diversifying data types and redefining training techniques.
This global challenge has a bearing on India’s AI ambitions. For India, where compute and data resources are relatively scarce, this scaling slowdown poses both a challenge and an opportunity. While the India AI Mission embodies smart priorities such as democratising compute resources and developing local datasets, looming scaling challenges could prove a roadblock. Realising these ambitions requires strong input from research and academia, and improved coordination between policymakers and startups. The scaling wall highlights systemic innovation gaps where sustained support is needed, not only in hardware but also in talent development, safety research, and efficient model design.
Way Forward
To truly harness AI’s transformative power, India must prioritise policy actions and ecosystem shifts that support smarter, safer, and context-rich research through the following measures:
- Driving Efficiency and Compute Innovation: Instead of relying on brute-force scaling, India should invest in research and startups working on efficient architectures, energy-conscious training methods, and compute optimisation.
- Investing in Multimodal and Diverse Data: While indigenous datasets are being developed under the India AI Mission through AI Kosha, they must be ethically sourced from speech, images, video, sensor data, and regional content, apart from text, to enable context-rich AI models truly tailored to Indian needs.
- Addressing Core Problems for Trustworthy AI: LLMs from all major developers, including OpenAI, xAI (Grok), and DeepSeek, suffer from unreliability, hallucinations, and biases, since they are primarily built by scaling up datasets and parameters, an approach with inherent limitations. India should invest in capabilities to solve these issues and design more trustworthy LLMs.
- Supporting Talent Development and Training: Despite its substantial AI talent pool, India faces an impending demand-supply gap. It will need to launch national programs and incentives to upskill engineers, researchers, and students in advanced AI skills such as model efficiency, safety, interpretability, and new training paradigms.
Conclusion
The AI scaling wall debate is a reminder that the future of LLMs will depend not on ever-larger models but on smarter, safer, and more sustainable innovation. A new generation of AI is approaching us, and India can help shape its future. The country’s AI Mission and startup ecosystem are well-positioned to lead this shift by focusing on localised needs, efficient technologies, and inclusive growth, if implemented effectively. How India approaches this new set of challenges and translates its ambitions into action, however, remains to be seen.
References
- https://blogs.nvidia.com/blog/ai-scaling-laws/
- https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall
- https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
- https://indiaai.gov.in/
- https://www.deloitte.com/in/en/about/press-room/bridging-the-ai-talent-gap-to-boost-indias-tech-and-economic-impact-deloitte-nasscom-report.html

Introduction
The Digital Personal Data Protection (DPDP) Act 2023 of India is a significant transition for privacy legislation in this age of digital data. A key element of this new law is a requirement for organisations to have appropriate, user-friendly consent mechanisms in place for their customers so that collection, use or removal of an individual's personal data occurs in a clear and compliant manner. As a means of putting this requirement into practice, the Ministry of Electronics and Information Technology (MeitY) issued a comprehensive Business Requirements Document (BRD) in June 2025 to guide organizations, as well as Consent Managers, on how to create a Consent Management System (CMS). This document establishes the technical and functional framework by which organizations and individuals (Data Principals) will exercise control over the way their data is gathered, used and removed.
Understanding the BRD and Its Purpose
The BRD is an optional guide created as part of the "Code for Consent" programme run by MeitY in India. Its purpose is to provide guidance to startups, digital platforms, and other enterprises on how to create a technology system that supports the management of user consent per the requirements of the DPDP Act. Although the contents of the BRD do not carry any legal weight, it lays out a clear path for organisations to create their own consent mechanisms using best practices that align with the DPDP Act's principles of transparency, accountability, and purpose limitation.
The goal is threefold:
- Enable complete consent lifecycle management from collection to withdrawal.
- Empower individuals to manage their consents actively and transparently.
- Support data fiduciaries and processors with an interoperable system that ensures compliance.
Key Components of the Consent Management System
The BRD proposes the development of a modular Consent Management System (CMS) that provides users with secure APIs and user-friendly interfaces. This system will allow for a variety of features and modules, including:
- Consent Lifecycle Management – consent should be specific, informed and tied to an explicit purpose. The CMS will manage the collection, validation, renewal, updates and withdrawal of consent. Each transaction of consent will create a tamper-proof “consent artifact,” which will include the timestamp of creation as well as an ID identifying the purpose for which it was given.
- User Dashboard – A user will be able to view and modify the status of their active, expired or withdrawn consent and revoke access at any time via the multilingual user-friendly interface. This would make the system accessible to people from different regions and cultures.
- Notification Engine – The CMS will automatically notify users, fiduciaries and processors of any action taken with respect to consent, in order to ensure real-time updates and accountability.
- Grievance Redress Mechanism – The CMS will include a complaints mechanism that allows users to submit complaints related to the misuse of consent or the denial of their rights. This will enable tracking of the complaint resolution status, and will allow for escalation if necessary.
- Audit and Logging – As part of the CMS's internal controls for compliance and regulatory purposes, the CMS must maintain an immutable record of every instance of consent for auditing and regulatory review. The records must be encrypted, time-stamped, and linked permanently to a user and purpose ID.
- Cookie Consent Management – A separate module will enable users to manage cookie consent for websites separately from any other consents.
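The tamper-proof consent artifact and immutable audit trail described above can be approximated with a hash-chained log, where each record embeds the hash of its predecessor so that altering any past entry invalidates the chain. This is a minimal sketch under assumed field names (user_id, purpose_id, action), not the BRD's prescribed artifact format:

```python
import hashlib
import json
import time

# Sketch: a hash-chained consent log. Field names are illustrative
# assumptions, not BRD-mandated.

def make_artifact(prev_hash: str, user_id: str, purpose_id: str, action: str) -> dict:
    """Create a consent artifact linked to the previous record's hash."""
    record = {
        "user_id": user_id,
        "purpose_id": purpose_id,
        "action": action,          # e.g. "given", "renewed", "withdrawn"
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(log: list[dict]) -> bool:
    """Recompute each hash and check linkage to the previous record."""
    prev = "genesis"
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = [make_artifact("genesis", "user-1", "purpose-42", "given")]
log.append(make_artifact(log[-1]["hash"], "user-1", "purpose-42", "withdrawn"))
print(verify_chain(log))      # chain intact
log[0]["action"] = "renewed"  # tamper with history
print(verify_chain(log))      # chain now broken
```

A real deployment would additionally sign the records and replicate the log, since a hash chain alone only detects tampering by someone without the ability to rewrite the entire chain.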
Roles and Responsibilities
The BRD identifies the various stakeholders involved and their associated responsibilities.
- Data Principals (Users): The user has full authority to give, withhold, amend, or revoke their consent for the use of their personal data, at any time.
- Data Fiduciaries (Companies): Companies must collect the Data Principal's consent for each specific purpose and may begin processing personal data only after validating that consent through the CMS. They must also provide Data Principals with the required notices and notifications, and resolve their complaints.
- Data Processors: Data Processors must strictly adhere to the consent stated in the CMS, and Data Processors may only process personal data on behalf of the Data Fiduciary.
- Consent Managers: The Consent Managers are independent entities that are registered with the Data Protection Board. They are responsible for administering the CMS, allowing users to manage their consent across different platforms.
This layered structure ensures transparency and shared responsibility for the consent ecosystem.
Technical Specifications and Security
The following principles of the DPDP Act must be followed to remain compliant with the DPDP Act.
- End-to-End Encryption: All exchanges of data with users must be encrypted using TLS 1.3 at a minimum.
- API-First Approach: Secure APIs will be exposed so that external systems can validate, withdraw, and update consent.
- Interoperability/Accessibility: The CMS must support multiple languages (e.g. Hindi, Tamil) and be usable across various types of mobile devices and by users of different abilities.
- Data Retention Policy: The CMS should also include automatic deletion of consent data (when the consent has expired or has been withdrawn) in order to maintain compliance with data retention limits.
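The automatic-deletion requirement in the retention policy can be sketched as a periodic sweep over consent records. The record structure, field names, and statuses below are illustrative assumptions, not a format defined by the BRD:

```python
from datetime import datetime, timedelta, timezone

# Sketch: purging expired or withdrawn consent records to honour
# retention limits. Record shape is an illustrative assumption.

def purge_consents(records: list[dict], now: datetime) -> list[dict]:
    """Keep only consents that are still active and unexpired."""
    kept = []
    for rec in records:
        if rec["status"] == "withdrawn":
            continue                 # withdrawn consent: schedule deletion
        if rec["expires_at"] <= now:
            continue                 # expired consent: schedule deletion
        kept.append(rec)
    return kept

now = datetime(2025, 7, 1, tzinfo=timezone.utc)
records = [
    {"id": "c1", "status": "active", "expires_at": now + timedelta(days=30)},
    {"id": "c2", "status": "withdrawn", "expires_at": now + timedelta(days=30)},
    {"id": "c3", "status": "active", "expires_at": now - timedelta(days=1)},
]
print([r["id"] for r in purge_consents(records, now)])  # only "c1" survives
```

In practice such a sweep would run on a schedule and also trigger deletion of the underlying personal data held by the fiduciary, not just the consent record itself.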
Legal Relevance and Timelines
While the BRD itself is not enforceable, it is directly aligned with the upcoming enforcement of the DPDP Act, 2023. The Act was passed in August 2023 but is expected to come into effect in stages, once officially notified by the central government. Draft implementation rules, including those defining the role of Consent Managers, were released for public consultation in early 2025.
For businesses, the BRD serves as an early compliance tool—offering both a conceptual roadmap and technical framework to prepare before the law is enforced. Legal experts have described it as a critical resource for aligning data governance systems with emerging regulatory expectations.
Implications for Businesses
Organizations that collect and process user data will be required to overhaul their consent workflows:
- No blanket consents: Every data processing activity must have explicit, separate consent.
- Granular audit logs: Companies must maintain tamper-proof logs for every consent action.
- Integration readiness: Enterprises need to integrate their platforms with third-party or in-house CMS platforms via the specified APIs.
- Grievance redress and user support: Systems must be in place to handle complaints and withdrawal requests in a timely, verifiable manner.
Failing to comply once the DPDP Act is in force may expose companies to penalties, reputational damage, and potential regulatory action.
Conclusion
India's BRD on Consent Management is a forward-looking initiative that lays out a technological framework for an essential component of the DPDP Act: user consent. Although not a legally binding document, it gives companies the discipline needed to prepare. As data protection grows in importance, building consent mechanisms around security, transparency, and the needs of the user is no longer just a regulatory requirement but a prerequisite for trust. Now is the time for businesses to establish or adopt CMS solutions that support this objective, so they are better equipped for the future of data governance in India.
References
- https://d38ibwa0xdgwxx.cloudfront.net/whatsnew-docs/8d5409f5-d26c-4697-b10e-5f6fb2d583ef.pdf
- https://ssrana.in/articles/ministry-releases-business-requirement-document-for-consent-management-under-the-dpdp-act-2023/
- https://dpo-india.com/Blogs/consent-dpdpa/
- https://corporate.cyrilamarchandblogs.com/2025/06/the-ghost-in-the-machine-the-recent-business-requirement-document-on-consent/
- https://www.mondaq.com/india/privacy-protection/1660964/analysis-of-the-business-requirement-document-for-consent-management-system