#FactCheck - A manipulated image showing Indian cricketer Virat Kohli allegedly watching Rahul Gandhi's media briefing on his mobile phone has been widely shared online.
Executive Summary:
A fake photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone with no trace of Gandhi. The incident is claimed to have happened on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image, making it go viral.

Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media users shared it to suggest Kohli's interest in politics. The photo was shared on various platforms, including some online news websites.




Fact Check:
After we came across the viral image posted by social media users, we ran a reverse image search on it. This led us to the original image, posted by an Instagram account named virat__.forever_ on March 21.
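As an aside on how such searches work under the hood: reverse image search engines typically index pictures by a perceptual hash, so a lightly edited copy of a photo still hashes close to the original. The sketch below is purely illustrative (not the tool used in this fact check) and uses a tiny hypothetical 4x4 grayscale grid in place of a real image; a minimal "average hash" looks like this:

```python
def average_hash(pixels):
    """Compute a simple average hash from a 2D grid of grayscale values (0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image-wide average.
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 "images": an original and a lightly edited copy.
original = [[10, 200, 30, 220], [15, 210, 25, 215],
            [12, 205, 28, 225], [11, 198, 33, 230]]
edited   = [[10, 200, 30, 220], [15, 210, 25, 215],
            [12, 205, 28, 225], [11, 198, 33, 255]]  # one region brightened

d = hamming_distance(average_hash(original), average_hash(edited))
print(d)  # a small distance flags the pair as near-duplicates
```

Because the hash summarizes overall brightness structure rather than exact pixels, a manipulated image (like the one in this story) can be matched back to its source even after edits.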

The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”

Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image appeared only after the original image was posted on March 21.

Therefore, it is apparent that the viral image was created by altering the original image shared on March 21.
Conclusion:
To sum up, the viral image is an altered version of the original. The original image's caption shows cricketer Virat Kohli relaxing before a Jio advertisement shoot, not watching any politician's interview. This shows that in the age of social media, where false information can spread quickly, critical thinking and fact-checking are more important than ever. It is crucial to verify whether something is real before sharing it, to avoid spreading false stories.
Introduction
Artificial Intelligence (AI) driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern technology-driven warfare and has simultaneously impacted many spheres in a technology-driven world. Nations often prioritise defence for significant investments, supporting its growth and modernisation. AI has become a prime area of investment and development for technological superiority in defence forces. India’s focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapons Systems (AWS) is "autonomy": the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Many states, international organisations, civil society groups and distinguished figures have raised ethical concerns as the most prominent issue surrounding AWS.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes at simulating human emotions such as compassion, empathy, or altruism: the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is a 'human-in-the-loop' or 'human-on-the-loop' semi-autonomous system, which would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a 'human-in-the-loop' system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
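The distinction between the two oversight models can be reduced to a simple control-flow question: does silence from the operator block the action, or permit it? The sketch below is a conceptual illustration only, with hypothetical names and no real weapon-system API assumed:

```python
from enum import Enum

class Mode(Enum):
    HUMAN_IN_THE_LOOP = "in"   # operator must approve each engagement
    HUMAN_ON_THE_LOOP = "on"   # system acts unless the operator vetoes in time

def decide(mode, operator_approved=False, operator_vetoed=False):
    """Return True if an engagement may proceed under the given oversight model."""
    if mode is Mode.HUMAN_IN_THE_LOOP:
        # No action without explicit human approval: silence blocks.
        return operator_approved
    if mode is Mode.HUMAN_ON_THE_LOOP:
        # Action proceeds by default, but a timely human veto halts it: silence permits.
        return not operator_vetoed
    return False

# In-the-loop: no approval means no engagement.
print(decide(Mode.HUMAN_IN_THE_LOOP))                        # False
# On-the-loop: the system proceeds unless vetoed.
print(decide(Mode.HUMAN_ON_THE_LOOP))                        # True
print(decide(Mode.HUMAN_ON_THE_LOOP, operator_vetoed=True))  # False
```

The ethical weight of the choice sits in those defaults: a human-on-the-loop design places the burden on the operator to intervene in time, while a human-in-the-loop design keeps lethal action gated on an affirmative human decision.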
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue due to the ethical, legal, and security concerns it raises. Several international efforts to regulate such weapons are under discussion. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India among the active participants, debate the limits of AI in warfare. However, existing international laws, such as the Geneva Conventions, offer legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law. Setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India's defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
● https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/

On March 02, 2023, the Biden-Harris Administration unveiled the National Cybersecurity Strategy to ensure that all Americans can enjoy the advantages of a secure digital environment. In this pivotal decade, the United States will reimagine cyberspace as a tool to achieve our goals in a way that is consistent with our values. These values include a commitment to economic security and prosperity, respect for human rights and fundamental freedoms, faith in our democracy and its institutions, and a commitment to creating a fair and diverse society. This goal cannot be achieved without a dramatic reorganisation of the United States' cyberspace responsibilities, roles, and resources.
Vision and Aim
Today's rapidly evolving world demands a more planned, organised, and well-resourced approach to cyber defence. State and non-state actors alike are launching creative new initiatives to challenge the United States. At the same time, new avenues for innovation are opening up as next-generation technologies reach maturity and digital interdependencies expand. The Strategy therefore lays out a plan to counter these dangers and protect the digital future. Putting it into effect can safeguard investments in infrastructure, clean energy, and the re-shoring of American industry.
The USA will create its digital environment by:
- Defensible, where cyber defence is easier, more effective, and cheaper than attack.
- Resilient, where the impacts of cyberattacks and operator mistakes are neither lasting nor widespread.
- Values-aligned, where our most cherished values shape, and are in turn reinforced by, our digital world.
Already, the National Security Strategy, Executive Order 14028 (Improving the Nation's Cybersecurity), National Security Memorandum 5 (Improving Cybersecurity for Critical Infrastructure Control Systems), M-22-09 (Moving the U.S. Government Toward Zero-Trust Cybersecurity Principles), and National Security Memorandum 10 (Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems) have all been issued to help secure cyberspace and our digital ecosystem. The Strategy builds upon these efforts by acknowledging that the Internet serves not as an end in itself but as a means to an end: the achievement of our highest ideals.
There are five key points that constitute the National Cybersecurity Strategy:
1. Defend Critical Infrastructure –
Defend critical infrastructure by, among other things: (i) enacting cybersecurity regulations to secure essential infrastructure; (ii) boosting public-private sector collaboration; (iii) integrating federal cybersecurity centres; (iv) updating federal incident response plans and processes; and (v) modernising federal systems in accordance with zero-trust principles.
2. Disrupt and Dismantle Threat Actors
Disrupt and dismantle threat actors, including by: (i) integrating military, diplomatic, information, financial, intelligence, and law enforcement competence; (ii) strengthening public-private sector collaborations; (iii) increasing the speed and scale of sharing intelligence and victim information; (iv) preventing the abuse of U.S.-based infrastructure; and (v) increasing disruption campaigns and other endeavours against ransomware operators.
3. Shape Market Forces to Drive Security and Resilience
The federal government can help shape market forces that drive security and resilience by: (i) supporting legislative efforts to limit organisations' ability to collect, use, transfer, and maintain personal information, and providing strong protections for sensitive data (such as geolocation and health data); (ii) boosting IoT device security via federal research, development, sourcing, and risk-management efforts, along with IoT security labelling programs; (iii) instituting legislation establishing standards for the security of IoT devices; (iv) strengthening cybersecurity contract standards for government suppliers; (v) studying a federal cyber insurance framework; and (vi) using federal grants and other incentives to invest in efforts to secure critical infrastructure.
4. Invest in a Resilient Future
Invest in a resilient future by: (i) securing the Internet's underlying infrastructure; (ii) funding federal cybersecurity R&D in areas like artificial intelligence, cloud computing, telecommunications, and data analytics used in critical infrastructure; (iii) migrating vulnerable public networks and systems to quantum-resistant cryptography; (iv) investing in hardware and software systems that strengthen the resiliency, safety, and security of these areas; (v) enhancing and expanding the nation's cyber workforce; and (vi) investing in verifiable, strong digital identity solutions that promote security, interoperability, and accessibility.
5. Forge International Partnerships to Pursue Shared Goals
The United States should work with other countries to advance common interests by: (i) forming international coalitions to counter threats to the digital ecosystem; (ii) increasing the scope of U.S. assistance to allies and partners in strengthening cybersecurity; (iii) forming international coalitions to reinforce global norms of responsible state behaviour; and (iv) securing global supply chains for information, communications, and operational technologies.
Conclusion:
The Strategy is the result of months of work by the Office of the National Cyber Director ("ONCD"), the primary cybersecurity policy and strategy advisor to President Biden, which also coordinates cybersecurity engagement with business and international partners. The National Security Council will oversee the Strategy's implementation through ONCD and the Office of Management and Budget.
In conclusion, the Biden administration's National Cybersecurity Strategy lays out an ambitious goal for American cybersecurity to be accomplished by the end of the decade. The administration aims to shift tasks and responsibilities to the organisations best positioned to safeguard systems and software, and to encourage incentives for long-term investment in cybersecurity to build a more cyber-secure future.
It is impossible to assess the cyber strategy in a vacuum. It is critical to consider previous efforts and acknowledge those that remain to be made. The implementation specifics for several aspects of the approach are left to a yet-to-be-written plan.
Given these difficulties, it would be easy to voice some pessimism at this stage regarding the effort that lies ahead. Yet the Biden administration has established a forward-looking vision for cybersecurity, with novel projects that could fundamentally alter how the United States handles and maintains cybersecurity. By outlining this robust plan, the administration has raised the bar for cybersecurity in a way that will be challenging for succeeding administrations to abandon. It has also alerted Congress to areas where it will need to act.
References:
- https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/
- https://www.huntonprivacyblog.com/2023/03/02/white-house-releases-national-cybersecurity-strategy/
- https://www.lawfareblog.com/biden-harris-administration-releases-new-national-cybersecurity-strategy
Introduction
On 18th November 2024, the Competition Commission of India (CCI) imposed a ₹213 crore penalty on Meta for abusing its dominant position in internet-based messaging through WhatsApp and in online display advertising. The order was passed against abuse of dominance by Meta and relates to WhatsApp's 2021 Privacy Policy. The CCI considers Meta a dominant player in internet-based messaging through WhatsApp and in online display advertising. WhatsApp's 2021 privacy policy update undermined users' ability to opt out of having their data shared with the group's social media platform Facebook. The CCI directed WhatsApp not to share user data collected on its platform with other Meta companies or products for advertising purposes for five years.
CCI Contentions
The regulator contended that, for purposes other than advertising, WhatsApp's policy should include a detailed explanation of the user data shared with other Meta group companies or products, specifying the purpose. It also stated that sharing user data collected on WhatsApp with other Meta companies or products for purposes other than providing WhatsApp services should not be a condition for users to access WhatsApp services in India. The CCI order is significant as it upholds user consent as a key principle in the functioning of social media giants, similar to measures taken in some other markets.
Meta’s Stance
WhatsApp's parent company Meta has expressed its disagreement with the Competition Commission of India's (CCI) decision to impose a ₹213 crore penalty on it over users' privacy concerns. Meta clarified that the 2021 update did not change the privacy of people's personal messages and was offered as a choice for users at the time. It also ensured that no one would have their account deleted or lose WhatsApp functionality because of the update.
Meta clarified that the update was about introducing optional business features on WhatsApp and providing further transparency about how it collects data. The company stated that WhatsApp has been incredibly valuable to people and businesses, enabling organisations and government institutions to deliver citizen services through COVID and beyond and supporting small businesses, all of which further the Indian economy. Meta plans to find a path forward that allows it to continue providing the experiences that "people and businesses have come to expect" from it. The CCI issued cease-and-desist directions and directed Meta and WhatsApp to implement certain behavioural remedies within a defined timeline.
The competition watchdog noted that WhatsApp's 2021 policy update made it mandatory for users to accept the new terms, including data sharing with Meta, and removed the earlier option to opt out, which was categorised as an "unfair condition" under the Competition Act. It further noted that WhatsApp's sharing of users' business transaction information with Meta gave the group entities an unfair advantage over competing platforms.
CyberPeace Outlook
The 2021 policy update by WhatsApp mandated data sharing with other Meta group companies, removing the opt-out option and compelling users to accept the terms to continue using the platform. This policy undermined user autonomy and was deemed an abuse of Meta's dominant market position, violating Section 4(2)(a)(i) of the Competition Act, as noted by the CCI.
The CCI’s ruling requires WhatsApp to offer all users in India, including those who had accepted the 2021 update, the ability to manage their data-sharing preferences through a clear and prominent opt-out option within the app. This decision underscores the importance of user choice, informed consent, and transparency in digital data policies.
By addressing the coercive nature of the policy, the CCI ruling establishes a significant legal precedent for safeguarding user privacy and promoting fair competition. It highlights the growing acknowledgement of privacy as a fundamental right and reinforces the accountability of tech giants to respect user autonomy and market fairness. The directive mandates that data sharing within the Meta ecosystem must be based on user consent, with the option to decline such sharing without losing access to essential services.
References