#FactCheck: False Social Media Claim That Six Army Personnel Were Killed in a Retaliatory Attack by ULFA in Myanmar
Executive Summary:
A widely circulated claim on social media states that six soldiers of the Assam Rifles were killed in a retaliatory attack carried out by a Myanmar-based breakaway faction of the United Liberation Front of Asom (Independent), or ULFA (I). The posts included a photograph of coffins draped in Indian flags, presented as showing the soldiers killed in the alleged attack. Although the post was widely shared, the fact-check confirms that the photograph is old and unrelated, and that no trustworthy reports indicate any such incident took place. The claim is therefore false and misleading.

Claim:
Social media users claimed that the banned militant outfit ULFA (I) killed six Assam Rifles personnel in retaliation for an alleged drone and missile strike by Indian forces on its camp in Myanmar, with captions such as “Six Indian Army Assam Rifles soldiers have reportedly been killed in a retaliatory attack by the Myanmar-based ULFA group.” The claim was accompanied by a viral photograph showing coffins of Indian soldiers, which added emotional weight and perceived authenticity to the narrative.

Fact Check:
We began our research with a reverse image search of the photograph of flag-draped coffins shared with the viral claim. The image can be traced back to August 2013: a report in The Washington Post confirms that it is from a past incident in which five Indian Army soldiers were killed by Pakistani intruders in Poonch, Jammu and Kashmir, on August 6, 2013.

Moreover, The Hindu and India Today offered no confirmation of the deaths of six Assam Rifles personnel. ULFA (I) did, however, issue a statement dated July 13, 2025, claiming that three of its leaders had been killed in a drone strike by Indian forces.

A Shutterstock listing of the same photograph further confirms that the coffin image is old and does not depict any recent action involving the United Liberation Front of Asom (ULFA).

The Indian Army denied the claim, with Defence PRO Lt Col Mahendra Rawat telling reporters there were "no inputs" of any such operation. Assam Chief Minister Himanta Biswa Sarma likewise denied that any cross-border military action had taken place. The viral claim is therefore false and misleading.

Conclusion:
The assertion that ULFA (I) killed six Assam Rifles soldiers in a retaliatory strike is incorrect. The viral image used in these posts is from a 2013 incident in Jammu & Kashmir and has no relevance to the present. There have been no verified reports of any such killings, and both the Indian Army and the Assam government have categorically denied conducting or knowing of any cross-border operation. The false narrative now circulating serves only to incite fear and spread misinformation; readers should disregard it.
- Claim: Report confirms the death of six Assam Rifles personnel in a ULFA-led attack.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
In the wake of the SpyLoan scandal, more than a dozen malicious loan apps were found to have been downloaded onto Android phones from the Google Play Store. The true number is likely significantly higher, however, because the apps are also available on third-party marketplaces and questionable websites.
Unmasking the Scam
When a user borrows money, these predatory lending applications capture large quantities of information from the user's smartphone, which is then used to blackmail and coerce them into repaying the loan at exorbitant interest rates. While the loan amount is disbursed, the apps request sensitive permissions granting access to the camera, contacts, messages, call logs, images, Wi-Fi network details, calendar entries, and other personal information, all of which is then sent to the loan sharks' servers.
Researchers have disclosed details of the applications used by loan sharks to mislead consumers, as well as the numerous techniques used to circumvent some of the restrictions imposed by the Play Store. The malware is often built with an appealing user interface and promises simple, rapid access to cash, but on punishing high-interest repayment conditions. The revelation of the SpyLoan scandal has triggered an immediate response from law enforcement agencies worldwide: to protect millions of users from falling victim to malicious loan apps, it has become extremely important to unmask the culprits and dismantle the cybercriminal network.
Apps banned: here is the list of apps removed from the Google Play Store:
- AA Kredit: इंस्टेंट लोन ऐप (com.aa.kredit.android)
- Amor Cash: Préstamos Sin Buró (com.amorcash.credito.prestamo)
- Oro Préstamo – Efectivo rápido (com.app.lo.go)
- Cashwow (com.cashwow.cow.eg)
- CrediBus Préstamos de crédito (com.dinero.profin.prestamo.credito.credit.credibus.loan.efectivo.cash)
- ยืมด้วยความมั่นใจ – ยืมด่วน (com.flashloan.wsft)
- PréstamosCrédito – GuayabaCash (com.guayaba.cash.okredito.mx.tala)
- Préstamos De Crédito-YumiCash (com.loan.cash.credit.tala.prestmo.fast.branch.mextamo)
- Go Crédito – de confianza (com.mlo.xango)
- Instantáneo Préstamo (com.mmp.optima)
- Cartera grande (com.mxolp.postloan)
- Rápido Crédito (com.okey.prestamo)
- Finupp Lending (com.shuiyiwenhua.gl)
- 4S Cash (com.swefjjghs.weejteop)
- TrueNaira – Online Loan (com.truenaira.cashloan.moneycredit)
- EasyCash (king.credit.ng)
- สินเชื่อปลอดภัย – สะดวก (com.sc.safe.credit)
Risks with several dimensions
SpyLoan apps violate Google's Financial Services policy by unilaterally shortening the repayment period for personal loans to a few days or another arbitrary time frame. The operators also threaten users with public embarrassment and exposure if they do not comply with these unreasonable demands.
Furthermore, the privacy policies presented by SpyLoan apps are misleading. While ostensibly reasonable justifications are given for obtaining certain permissions, the underlying practices are highly intrusive. For instance, camera permission is ostensibly required to upload photos for Know Your Customer (KYC) purposes, and access to the user's calendar is ostensibly required to schedule payment dates and reminders. Both of these permissions are dangerous and can seriously infringe on users' privacy.
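The intrusive-permission pattern described above can be illustrated with a minimal screening sketch. This is not part of any official tooling: the red-flag list and the example app's permission set are assumptions for illustration, using standard Android permission identifiers.

```python
# Permissions a legitimate lending app rarely needs; a loan app
# requesting several of these is a red flag (illustrative list only).
RED_FLAGS = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
    "android.permission.READ_CALENDAR",
    "android.permission.READ_EXTERNAL_STORAGE",
}

def flag_intrusive(requested):
    """Return the sorted subset of requested permissions that are red flags."""
    return sorted(RED_FLAGS & set(requested))

# Permissions requested by a hypothetical loan app:
suspicious = flag_intrusive([
    "android.permission.CAMERA",         # plausibly justified for KYC photos
    "android.permission.READ_CONTACTS",  # rarely justified for lending
    "android.permission.READ_SMS",
])
print(suspicious)
```

As the SpyLoan case shows, a superficially plausible justification (KYC photos, payment reminders) can accompany each permission individually; it is the combination of data-harvesting permissions that should raise suspicion.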
Prosecution Strategies and Legal Framework
Law enforcement agencies and legal authorities have initiated prosecution strategies against the individuals involved in the SpyLoan scandal. This multifaceted approach involves international agreements and the exploration of innovative legal avenues. Agencies need to collaborate with international counterparts on cybercrime, leveraging existing legal frameworks against digital fraud. Furthermore, the cross-border nature of the SpyLoan operation requires a strong legal framework for exchanging information, handling extradition requests, and pursuing legal action across multiple jurisdictions.
Legal Protections for Victims: Seeking Compensation and Restitution
As the legal battle unfolds in the aftermath of the SpyLoan scam, the focus shifts towards the victims, who suffered financial losses through these fraudulent apps. Beyond prosecuting the culprits, the pursuit of justice should include legal safeguards for victims. Existing consumer protection laws serve as a crucial shield for SpyLoan victims, as they are designed to safeguard individuals against unfair practices.
Challenges in legal representation
As the legal pursuit of justice in the SpyLoan scam progresses, it encounters challenges that demand careful navigation and strategic solutions. One of the primary obstacles lies in jurisdictional complexity: it is difficult to determine which jurisdiction holds authority and to achieve a unified approach to prosecuting offenders in different regions, even with the combined efforts of various government agencies.
Concealing the digital identities
A major challenge is the anonymity afforded by the digital realm, which makes it difficult to identify and apprehend the perpetrators of the scam. The scammers conceal their identities, making it hard for law enforcement agencies to attribute actions to specific individuals. This challenge can be overcome through joint efforts by international agencies, advanced digital forensics, and cutting-edge technology to unmask the scammers.
Technological challenges
The nature of cyber threats and crime patterns changes day by day as technology advances, which poses a challenge for legal authorities. Scammers continually exploit new vulnerabilities, making it essential for law enforcement agencies to stay a step ahead, and this requires continuous training in cybercrime investigation and cybersecurity.
Shaping the policies to prevent future fraud
As the scam unfolds, it has become important to empower users through sustained awareness campaigns. App developers, for their part, need to take a transparent approach towards users.
Conclusion
It is vital to shape policies that prevent future cyber fraud through a multifaceted approach. Proposals for legislative amendments, international collaboration, accountability measures, technological protections, and public awareness programmes all contribute to the creation of a legal framework that is proactive, flexible, and robust against cybercriminals' shifting techniques. The legal system is at the forefront of this effort, playing a critical role in developing regulations that will protect the digital landscape for years to come.
Safeguarding against spyware threats like SpyLoan requires vigilance and adherence to best practices. Users should exclusively download apps from official sources, meticulously verify the authenticity of offerings, scrutinize reviews, and carefully assess permissions before installation.

Introduction
So it's that time of year when you feel bright and excited to start the year with new resolutions; your goals could be anything from going to the gym to learning new skills and being more productive. But with cybercrime on the rise, you must also be smart and take your New Year cyber resolutions seriously. Yes, you heard it right: it's a new year and a new you, but the same hackers are back with more advanced threats. This year, make a cyber resolution to stay secure and smart, and follow the best cyber safety tips for 2025 and beyond.
Best Cyber Security Tips For You
So while making your cyber resolutions for 2025, remember that hackers have resolutions too, so you have to make yours better! CyberPeace has curated a list of great tips and cyber hygiene practices you must follow in 2025:
- Be Aware Of Your Digital Rights: Netizens should be aware of their rights in the digital space. It's important to know where to report issues, how to raise concerns with platforms, and what rights are available to you under applicable IT and Data Protection laws. And as we often say, sharing is caring, so make sure to discuss and share your knowledge of digital rights with your family, peers, and circle. Not only will this help raise awareness, but you’ll also learn from their experiences, collectively empowering yourselves. After all, a well-informed online community is a happy one.
- Awareness Is Your First Line Of Defence: Awareness serves as the first line of defence, especially in light of the lessons learned from 2024, where new forms of cybercrimes have emerged with serious consequences. Scams like digital arrests, romance frauds, lottery scams, and investment scams have become more prevalent. As we move into 2025, remember that sophisticated cyber scams require equally advanced strategies to stay protected. As cybercrimes evolve and become more complex, it's crucial to stay updated with specific strategies and hygiene tips to defend yourself. Build your first line of defence by being aware of these growing scams, and say goodbye to the manipulative tactics used by cyber crooks.
- Customise Social Media Profile And Privacy Settings: With the rising misuse of advanced technologies such as deepfakes, it's crucial to share access to your profile only with people you trust and know. Customise your social media profile settings to your convenience, such as who can add you, who can see your uploaded pictures and stories, and who can comment on your posts. Tailor these settings to suit your needs and preferences, ensuring a safer digital environment for yourself.
- Be Cautious: Choose wisely, just because an online deal seems exciting doesn’t mean it’s legitimate. A single click could have devastating consequences. Not every link leads to a secure website; it could be a malware or phishing attempt. Be cautious and follow basic cyber hygiene tips, such as only visiting websites with a padlock symbol, a secure connection, and the 'HTTPS' status in the URL.
- Don’t Let Fake News Fake You Out: Online misinformation and disinformation have sparked serious concern due to their widespread proliferation. That’s why it’s crucial to 'Spot The Lies Before They Spot You.' Exercise due care and caution when consuming, sharing, or forwarding any online information. Always verify it from trusted sources, recognize the red flags of misleading claims, and contribute to creating a truthful online information landscape.
- Turn the Tables on Cybercriminals: It is crucial to know the proper reporting channels for cybercrimes, including specific reporting methods based on the type of issue. For example, ‘unsolicited commercial communications’ can be reported on the Chakshu portal by the government. Unauthorized electronic transactions can be reported to the RBI toll-free number at 14440, while women can report incidents to the National Commission for Women. If you encounter issues on a platform, you can reach out to the platform's grievance officer. All types of cybercrimes can be reported through the National Cyber Crime Reporting Portal (cybercrime.gov.in) and the helpline at 1930. It’s essential to be aware of the right authorities and reporting mechanisms, so if something goes wrong in your digital experience, you can take action, turn the tables on cybercrooks, and stay informed about official grievances and reporting channels.
- Log Out, Chill Out: The increased use of technology can have far-reaching consequences that are often overlooked, such as procrastination, stress, anxiety, and eye strain (also known as digital eye strain or computer vision syndrome). Sometimes, it’s essential to switch off the digital curtains. This is where a ‘Digital Detox’ comes in, offering a chance to recharge and reset. We’re all aware of how our devices and phones influence our daily lives, shaping our behaviours, decisions, and lifestyles from morning until night, even impacting our sleep. Taking time to unplug can provide a much-needed psychological and physical boost. Practicing a digital detox at regular suitable intervals, such as twice a month, can help restore balance, reduce stress, and improve overall well-being.
Final Words & the Idea of ‘Tech for Good’
Remember that we are in a technological era, and these technologies were created for our ease and convenience. Bad actors pose certain challenges, but countering them starts with you. Technology, while carrying risks, also brings tremendous benefits to society. We encourage you to promote the responsible and ethical use of technology; the vision of ‘Tech for Good’ will have to be expanded into a larger picture. Do not engage in behaviour online that you would not ordinarily engage in offline; the online environment is just as real and has far-reaching effects. Use technology for good, and encourage ethical and responsible behaviour in online communities. The emphasis should be on making technology a safer environment for everyone and combating dishonest practices.
Effective strategies for preventing cybercrime and dishonest practices require cooperation and effort from citizens, government agencies, and technology businesses. We intend to employ technology's good aspects to build a digital environment that values security, honesty, and moral behaviour while promoting innovation and connectedness. In 2025, together we can build a cyber-safe, resilient society.

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) entered into force 20 days after publication, on 1 August 2024, setting harmonized rules across all 27 EU Member States. It amends key regulations and directives to ensure a robust framework for AI technologies. The Act, a set of EU rules governing AI that had been in development for two years, takes a phased approach to implementation: various deadlines apply between now and full application, and enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights: biometric categorization, untargeted scraping of faces, and emotion-recognition systems are banned in the workplace and schools, as are social scoring systems. It also prohibits the use of predictive policing tools in some instances.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
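As a simple illustration of this "whichever is higher" rule for the top penalty tier, the following sketch uses hypothetical turnover figures; it is not an official calculation method.

```python
def max_penalty_eur(worldwide_annual_turnover_eur):
    """Top-tier AI Act penalty ceiling: EUR 35 million or 7% of total
    worldwide annual turnover, whichever is higher."""
    return max(35_000_000, worldwide_annual_turnover_eur * 7 / 100)

# For a hypothetical company with EUR 200M turnover, the EUR 35M floor
# applies; at EUR 1B turnover, the 7% share (EUR 70M) exceeds the floor.
print(max_penalty_eur(200_000_000))    # 35000000
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```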
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits Unacceptable risks such as social scoring systems and manipulative AI. The regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: under the Act, developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place such systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country. The same applies to third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-license GPAI models only need to comply with the copyright requirement and publish the training-content summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. It will cover but not necessarily be limited to the obligations, particularly the relevant information to include in technical documentation for authorities and downstream providers, identification of the type and nature of systemic risks and their sources, and the modalities of risk management accounting for specific challenges in addressing risks due to the way they may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers, and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the transparency rules for general-purpose AI systems 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply, prohibiting certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
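All of the dates above derive from the 1 August 2024 entry-into-force date plus the month offsets stated in the Act; a small sketch to reproduce the timeline:

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months (day of month preserved;
    safe here because every milestone falls on the 1st)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {  # month offsets from entry into force
    "Prohibitions on unacceptable-risk AI": 6,
    "Codes of Practice": 9,
    "GPAI transparency rules": 12,
    "Act fully applicable (except Art. 6(1))": 24,
    "Article 6(1) high-risk obligations": 36,
}

for name, offset in MILESTONES.items():
    print(add_months(ENTRY_INTO_FORCE, offset), name)
```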
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. It applies to AI systems and GPAI models, and it adopts a risk-based approach to AI governance, creating a tiered categorization of potential risks into four levels, unacceptable, high, limited, and low, with stiff penalties for non-compliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for GPAI models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and significantly affect the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide