#FactCheck: AI-generated video claims Pakistan launched a cross-border airstrike on India's Udhampur Airbase
Executive Summary:
A social media video claims that Pakistan's JF-17 fighter jets destroyed India's Udhampur Air Force Station. Official sources confirm that the Udhampur base remains fully operational, and our research shows that the video was produced with artificial intelligence. The incident highlights the growing problem of AI-driven disinformation in the digital age.

Claim:
A viral video alleges that Pakistan's JF-17 fighter jets successfully destroyed the Udhampur Air Force Base in India. The footage shows aircraft engulfed in flames, accompanied by narration claiming the base's destruction during recent cross-border hostilities.

Fact Check:
A recent viral video claiming that Pakistani JF-17 fighter jets destroyed the Udhampur Air Force Station has been shown to be completely untrue. A thorough analysis using AI detection tools such as Hive Moderation conclusively identified the video's audio and visuals as AI-generated: Hive Moderation found synthetic elements in the footage, confirming that the images were fabricated to deceive viewers. The Press Information Bureau (PIB) of India has further refuted the claim, clearly stating that the Udhampur Airbase remains fully operational and has not been the scene of any such attack.
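Detection tools of this kind typically score individual frames for the likelihood of being synthetic, and a verdict is then aggregated across the clip. The sketch below illustrates one plausible aggregation rule; the 0.5 threshold, the majority-vote logic, and the score format are assumptions made for illustration, not Hive Moderation's actual method.

```python
# Illustrative aggregation of per-frame synthetic-media confidence scores,
# as an AI-content detector might report them. The threshold and the
# majority-vote rule are assumptions for this sketch.
def aggregate_verdict(frame_scores, threshold=0.5):
    """Return (verdict, mean score) from per-frame AI-generation scores in [0, 1]."""
    if not frame_scores:
        raise ValueError("no frame scores supplied")
    flagged = sum(1 for s in frame_scores if s >= threshold)
    mean_score = sum(frame_scores) / len(frame_scores)
    # Flag the clip if most sampled frames look synthetic.
    verdict = "likely_ai_generated" if flagged > len(frame_scores) / 2 else "likely_authentic"
    return verdict, round(mean_score, 3)

# Example: scores a detector might return for five sampled frames.
print(aggregate_verdict([0.91, 0.88, 0.97, 0.45, 0.79]))
```

In practice a single high-scoring frame is weak evidence; aggregating over many frames, as above, is what makes such tools useful for fact-checking.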

The deliberate misattribution of this video to a military attack underscores a growing concern highlighted by our analysis of recent disinformation campaigns: AI-generated content is being weaponized to spread misinformation and incite panic.
Conclusion:
The claim that Pakistan's JF-17 fighter jets destroyed the Udhampur Air Force Station is untrue; it rests on an AI-generated video that misrepresents unrelated footage. According to official sources, the Udhampur base remains intact and fully functional. This incident underscores how crucial it is to verify information against reliable sources, particularly during periods of elevated geopolitical tension.
- Claim: Recent video footage shows destruction caused by Pakistani jets at the Udhampur Airbase.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In the digital era, where technology is growing rapidly, Artificial Intelligence (AI) has been making its way into every corner of the world. Technology and innovation move hand in hand, and such innovation is once again in the limelight with a groundbreaking initiative known as “Project GR00T”, announced by the AI-chip leader Nvidia. At the core of the project is the fusion of AI and robotics: humanoid robots capable of understanding natural language and learning from the physical environment by observing human actions and skills. Project GR00T aims to assist humans in diverse sectors such as healthcare.
These humanoid robots are built on NVIDIA's Thor system-on-chip (SoC). Thor powers the robots' intelligence; the chip is designed to handle complex tasks and ensure safe, natural interaction between humans and robots. However, big questions arise about the ethical considerations of privacy, autonomy and the possible replacement of human workers.
Brief Analysis
Nvidia has announced Project GR00T, or Generalist Robot 00 Technology, which aims to create AI-powered humanoid robots with human-like understanding and movement. The project is part of Nvidia's efforts to drive breakthroughs in robotics and embodied AI, which can interact with and learn from a physical environment. The robots built on this platform are designed to understand natural language and emulate movements by observing human actions, such as coordination, dexterity, and other skills.
The model has been trained on NVIDIA GPU-accelerated simulation, enabling the robots to learn from human demonstrations with imitation learning and from the robotics platform NVIDIA Isaac Lab for reinforcement learning. This multimodal AI system acts as the mind for humanoid robots, allowing them to learn new skills and interact with the real world. Leading names in robotics, such as Figure, Boston Dynamics, Apptronik, Agility Robotics, Sanctuary AI, and Unitree, are reported to have collaborated with Nvidia to leverage GR00T.
Nvidia has also updated Isaac with Isaac Manipulator and Isaac Perceptor, which add multi-camera 3D vision. The company also unveiled a new computer, Jetson Thor, to aid humanoid robots based on NVIDIA's SoC, which is designed to handle complex tasks and ensure a safe and natural interaction between humans and robots.
Although humanoid robots taking over hazardous and repetitive tasks raises concerns about job losses, many argue that such robots will aid humans and make their lives more comfortable rather than replace them.
Policy Recommendations
The Nvidia project marks a significant development in AI robotics, presenting immense potential alongside ethical challenges critical to the overall development and smooth assimilation of AI-driven technology in society. To ensure that assimilation, a comprehensive policy framework must be put in place. This includes:
- Human First Policy - Emphasis should be on augmentation rather than replacement. The authorities must focus on research and development (R&D) of applications that augment human capabilities, enhance working conditions, and contribute to societal growth.
- Proper Ethical Guidelines - Guidelines stressing human safety, autonomy and privacy should be established. These norms must include consent for data collection, fair use of AI in decision making and proper protocols for data security.
- Deployment of Inclusive Technology - Access to AI Driven Robotics tech should be made available to diverse sectors of society. It is imperative to address potential algorithm bias and design flaws to avoid discrimination and promote inclusivity.
- Proper Regulatory Frameworks - It is crucial to establish regulatory frameworks to govern the smooth deployment and operation of AI-driven tech. The framework must include certification for safety and standards, frequent audits and liability protocols to address accidents.
- Training Initiatives - Educational programs should be introduced to train the workforce for integrating AI driven robotics and their proper handling. Upskilling of the workforce should be the top priority of corporations to ensure effective integration of AI Robotics.
- Collaborative Research Initiatives - AI and emerging technologies have a profound impact on the trajectory of human development. It is imperative to foster collaboration among governments, industry and academia to drive innovation in AI robotics responsibly and undertake collaborative initiatives to mitigate and address technical, societal, legal and ethical issues posed by AI Robots.
Conclusion
On the whole, Project GR00T is a significant leap in the advancement of robotic technology and paves the way for a future where robots can integrate seamlessly into various aspects of human lives.
References
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-nvidias-project-gr00t-impact-robotics-9225089/
- https://medium.com/paper-explanation/understanding-nvidias-project-groot-762d4246b76d
- https://www.techradar.com/pro/nvidias-project-groot-brings-the-human-robot-future-a-significant-step-closer
- https://www.barrons.com/livecoverage/nvidia-gtc-ai-conference/card/nvidia-announces-ai-model-for-humanoid-robot-development-BwT9fewMyD6XbuBrEDSp

Introduction
Charity and donation scams have continued to persist and are amplified in the digital era, where messages spread rapidly through WhatsApp, emails, and social media. These fraudulent schemes involve threat actors impersonating legitimate charities, government appeals, or social causes to solicit funds. Apart from targeting the general public, they also impact entities such as reputable tech firms and national institutions. Victims are tricked into transferring money or sharing personal information, often under the guise of urgent humanitarian efforts or causes.
A recent incident involves a fake WhatsApp message claiming to be from the Indian Ministry of Defence. The message urged users to donate to a fund for “modernising the Indian Army.” The government later confirmed this message was entirely fabricated and part of a larger scam. It emphasised that no such appeal had been issued by the Ministry, and urged citizens to verify such claims through official government portals before responding.
Tech Industry and Donation-Related Scams
Large corporations are also falling prey. According to media reports, an American IT company recently terminated around 700 Indian employees after uncovering a donation-related fraud. At least 200 of them were reportedly involved in a scheme linked to Telugu organisations in the US. The scam echoed a similar situation that had previously affected Apple, where Indian employees were fired after being implicated in donation fraud tied to the Telugu Association of North America (TANA). Investigations revealed that employees had made questionable donations to these groups in exchange for benefits such as visa support or employment favours.
Common People Targeted
While organisational scandals grab the headlines, the common man remains equally, if not more, vulnerable. In a recent incident, a man lost over ₹1 lakh after clicking on a WhatsApp link asking for donations to a charity. Once he engaged with the link, the fraudsters manipulated him into making repeated transfers under various pretexts, ranging from processing fees to refund-related transactions, a classic social engineering play. Scammers typically employ the same set of tactics, using urgency, emotional appeal, and impersonation of credible platforms to deceive people.
Cautionary Steps
CyberPeace recommends adopting a cautious and informed approach when making charitable donations, especially online. Here are some key safety measures to follow:
- Verify Before You Donate - Always double-check the legitimacy of donation appeals. Use official government portals or the official charities' websites, and be wary of unfamiliar phone numbers, email addresses, or WhatsApp forwards asking for money.
- Avoid Clicking on Suspicious Links - Never click on links or download attachments from unknown or unverified sources. These could be phishing links or malware designed to steal your data or access your bank accounts.
- Be Sceptical of Urgency - Scammers bank on creating a false sense of urgency to pressure their victims into donating quickly. Take the time to evaluate before responding.
- Use Secure Payment Channels - Make donations only through platforms that are secure, trusted, and verified, such as official UPI handles and government-backed portals (like PM CARES or Bharat Kosh).
- Report Suspected Fraud - If you receive suspicious messages or fall victim to a scam, report it to the cybercrime authorities via cybercrime.gov.in or the 1930 helpline, or to the local police; prompt reporting can prevent further fraud.
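The "verify before you donate" and "avoid suspicious links" steps above can be partially automated. The sketch below checks a donation link against a small allowlist of official domains; the allowlist contents and the HTTPS-only rule are illustrative assumptions, and any real list should be confirmed against official government portals.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official donation domains; verify the real
# list against government sources before relying on it.
TRUSTED_DONATION_DOMAINS = {"pmcares.gov.in", "bharatkosh.gov.in"}

def is_trusted_donation_link(url: str) -> bool:
    """Return True only if the link uses HTTPS and an exactly matching official domain."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    # Exact-match comparison rejects look-alike hosts such as
    # "pmcares.gov.in.donate-now.xyz", a common phishing trick.
    return parsed.scheme == "https" and host in TRUSTED_DONATION_DOMAINS

print(is_trusted_donation_link("https://pmcares.gov.in/en/"))           # True
print(is_trusted_donation_link("http://pmcares.gov.in.donate-now.xyz")) # False
```

Note that an allowlist check complements, but does not replace, the human judgment steps above: a scam message can still carry a legitimate-looking narrative around a fraudulent payment request.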
Conclusion
Charity should never come at the cost of trust and safety. While donating to a good cause is noble, doing it mindfully is essential in today’s scam-prone environment. Always remember: a little caution today can save a lot tomorrow.
References
- https://economictimes.indiatimes.com/news/defence/misleading-message-circulating-on-whatsapp-related-to-donation-for-armys-modernisation-govt/articleshow/120672806.cms?from=mdr
- https://timesofindia.indiatimes.com/technology/tech-news/american-company-sacks-700-of-these-200-in-donation-scam-related-to-telugu-organisations-similar-to-firing-at-apple/articleshow/120075189.cms
- https://timesofindia.indiatimes.com/city/hyderabad/apple-fires-some-indians-over-donation-fraud-tana-under-scrutiny/articleshow/117034457.cms
- https://www.indiatoday.in/technology/news/story/man-gets-link-for-donation-and-charity-on-whatsapp-loses-over-rs-1-lakh-after-clicking-on-it-2688616-2025-03-04
Introduction
In India, children's rights with regard to the protection of their personal data are enshrined in the Digital Personal Data Protection (DPDP) Act, 2023, India's newly enacted digital personal data protection law. The DPDP Act requires verifiable consent of parents or legal guardians for the processing of children's personal data; processing without such consent constitutes a violation of the Act. Under Section 2(f) of the DPDP Act, a “child” means an individual who has not completed the age of eighteen years.
Section 9 of the DPDP Act, 2023
With reference to the collection of children's data, Section 9 of the DPDP Act, 2023 provides that for children below 18 years of age, consent from parents or legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or the lawful guardian. Section 9 aims to create a safer online environment for children by limiting the exploitation of their data for commercial or other purposes. By virtue of this section, parents and guardians gain more control over their children's data and privacy and are empowered to make choices about how they manage their children's online activities and the permissions they grant to various online services.
Section 9(3) specifies that a Data Fiduciary shall not undertake tracking or behavioural monitoring of children, or targeted advertising directed at children. However, Section 9(5) leaves room for exemption from this prohibition: it empowers the Central Government to notify exemptions for specific data fiduciaries or data processors from the behavioural-tracking and targeted-advertising prohibition under the future DPDP Rules, which are yet to be released.
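The consent and prohibition rules in Section 9 can be pictured as a simple gate in a platform's data-processing pipeline. The sketch below is a minimal illustration under assumed data shapes; the actual verification mechanisms will be defined by the DPDP Rules, and the purpose labels here are hypothetical.

```python
# Minimal sketch of a Section 9 style consent gate. The data shapes and
# purpose labels are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class DataPrincipal:
    age: int
    guardian_consent_verified: bool = False

def may_process(user: DataPrincipal, purpose: str) -> bool:
    """Apply the Section 9 rules: guardian consent for under-18s, and a
    blanket bar on tracking/targeted ads directed at children."""
    if user.age < 18:
        # Section 9(3): no behavioural monitoring or targeted advertising
        # directed at children, regardless of any consent obtained.
        if purpose in {"behavioural_tracking", "targeted_advertising"}:
            return False
        # Section 9(1): verifiable parental/guardian consent is required
        # for any other processing of a child's personal data.
        return user.guardian_consent_verified
    return True

child = DataPrincipal(age=15, guardian_consent_verified=True)
print(may_process(child, "account_creation"))       # True
print(may_process(child, "targeted_advertising"))   # False
```

The key design point the sketch captures is that the Section 9(3) prohibition is absolute for children: guardian consent unlocks ordinary processing, but not tracking or targeted advertising.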
Impact on social media platforms
Social media companies have raised concerns about Section 9 of the DPDP Act and the upcoming DPDP Rules. Section 9 prohibits behavioural tracking of children and targeted advertising directed at them on digital platforms. By barring intermediaries from tracking a child's internet activities and serving targeted advertising, the law aims to preserve children's privacy. However, social media corporations contend that this limitation undermines the efficacy of the very safety measures intended to safeguard young users, arguing that monitoring specific user signals, including those from minors, is necessary for those measures to work.
Social media companies assert that tracking teenagers' behaviour is essential for safeguarding them from predators and harmful interactions, and that a complete ban on behavioural tracking is therefore counterproductive to the government's objective of protecting children. The scope to grant exemptions leaves the door open for further advocacy on this issue, which necessitates coordination between the concerned ministry and relevant stakeholders to find a balanced approach that maintains both privacy and safety for young users.
Furthermore, the impact on social media platforms also extends to the user experience and the operational costs required to implement the functioning of the changes created by regulations. This also involves significant changes to their algorithms and data-handling processes. Implementing robust age verification systems to identify young users and protect their data will also be a technically challenging step for the various scales of platforms. Ensuring that children’s data is not used for targeted advertising or behavioural monitoring also requires sophisticated data management systems. The blanket ban on targeted advertising and behavioural tracking may also affect the personalisation of content for young users, which may reduce their engagement with the platform.
For globally operating platforms, aligning their practices with the DPDP Act in India while also complying with data protection laws in other countries (such as the GDPR in Europe or COPPA in the US) can be complex and resource-intensive. Platforms might choose to implement uniform global policies for simplicity, which could affect their operations in regions not governed by similar laws. Competitive dynamics may also shift: smaller or niche platforms that cater specifically to children and comply with these regulations may gain a competitive edge, and there may be a drive towards developing new, compliant ways of monetising user interactions that do not rely on behavioural tracking.
CyberPeace Policy Recommendations
A balanced strategy should be adopted, one that gives weight both to the contentions of social media companies and to the protection of children's personal information. Instead of a blanket ban, platforms could be obliged to maintain openness in their advertising practices, ensuring that children are not exposed to misleading or manipulative marketing techniques. Self-regulation mechanisms can be implemented to support ethical behaviour, accountability, and the safety of young users' personal information in platform practices. Additionally, verifiable consent should be examined and implemented in a practical manner, with platforms having a say in designing the verification mechanism. Ultimately, the matter should be handled so that behavioural tracking and targeted advertising do not affect children's well-being, safety or data protection in any way.
Final Words
Under Section 9 of the DPDP Act, the prohibition of behavioural tracking and targeted advertising when processing children's personal data will compel social media platforms to overhaul their data collection and advertising practices, ensuring compliance with stricter privacy regulations. The legislative intent behind this provision is to strengthen the protection of children's digital personal data and privacy. Children are particularly vulnerable to digital threats because of their still-evolving maturity and cognitive capacities, so protecting their privacy stands as a priority: they simply do not yet possess the discernment and caution required to navigate the Internet safely. At the same time, a balanced approach needs to be adopted, one that maintains both ‘privacy’ and ‘safety’ for young users.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.firstpost.com/tech/as-govt-of-india-starts-preparing-rules-for-dpdp-act-social-media-platforms-worried-13789134.html#google_vignette
- https://www.business-standard.com/industry/news/social-media-platforms-worry-new-data-law-could-affect-child-safety-ads-124070400673_1.html