#FactCheck: An image shows Sunita Williams with Trump and Elon Musk post her space return.
Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been verified as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports substantiate or confirm such a meeting. The image is likely a digitally altered post designed to mislead viewers.

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark suggest digital manipulation. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
The rapid advancement of artificial intelligence (AI) technology has sparked intense debates and concerns about its potential impact on humanity. Sam Altman, CEO of the AI research laboratory OpenAI and widely known as the father of the AI chatbot ChatGPT, holds a complex position, recognising both the existential risks AI poses and its potential benefits. On a world tour to raise awareness about AI risks, Altman advocates for global cooperation to establish responsible guidelines for AI development. As the technology advances, developing sophisticated AI systems raises many ethical questions, including whether they will ultimately save or destroy humanity.
Addressing Concerns
Altman engages with various stakeholders, including protesters who voice concerns about the race toward artificial general intelligence (AGI). Critics argue that focusing on safety rather than pushing AGI development would be a more responsible approach. Altman acknowledges the importance of safety progress but believes capability progress is necessary to ensure safety. He advocates for a global regulatory framework similar to the International Atomic Energy Agency, which would coordinate research efforts, establish safety standards, monitor computing power dedicated to AI training, and possibly restrict specific approaches.
Risks of AI Systems
While AI holds tremendous promise, it also presents risks that must be carefully considered. One of the major concerns is the development of artificial general intelligence (AGI) without sufficient safety precautions. AGI systems with unchecked capabilities could potentially pose existential risks to humanity if they surpass human intelligence and become difficult to control. These risks include the concentration of power, misuse of technology, and potential for unintended consequences.
There are also fears surrounding AI systems’ impact on employment. As machines become more intelligent and capable of performing complex tasks, there is a risk that many jobs will become obsolete. This could lead to widespread unemployment and economic instability if steps are not taken to prepare for this shift in the labour market.
While these risks are certainly cause for concern, it is important to remember that AI systems also have tremendous potential to do good in the world. By carefully designing these technologies with ethics and human values in mind, we can mitigate many of the risks while still reaping the benefits of this exciting new frontier in technology.

Open AI Systems and Chatbots
Open AI systems like ChatGPT and chatbots have gained popularity due to their ability to engage in natural language conversations. However, they also come with risks. The reliance on large-scale training data can lead to biases, misinformation, and unethical use of AI. Ensuring the safety and responsible development of open AI systems is essential to mitigating potential harm and maintaining public trust.
The Need for Global Cooperation
Sam Altman and other tech leaders emphasise the need for global cooperation to address the risks associated with AI development. They advocate for establishing a global regulatory framework for superintelligence. Superintelligence refers to AGI operating at an exceptionally advanced level, capable of solving complex problems that have eluded human comprehension. Such a framework would coordinate research efforts, enforce safety standards, monitor computing power, and potentially restrict specific approaches. International collaboration is essential to ensure responsible and beneficial AI development while minimising the risks of misuse or unintended consequences.
Can AI Systems Make the World a Better Place: Benefits of AI Systems
AI systems hold many benefits that can greatly improve human life. One of the most significant advantages of AI is its ability to process large amounts of data at a rapid pace. In industries such as healthcare, this has allowed for faster diagnoses and more effective treatments. Another benefit of AI systems is their capacity to learn and adapt over time. This allows for more personalised experiences in areas such as customer service, where AI-powered chatbots can provide tailored solutions based on an individual’s needs. Additionally, AI can potentially increase efficiency in various industries, from manufacturing to transportation. By automating repetitive tasks, human workers can focus on higher-level tasks that require creativity and problem-solving skills. Overall, the benefits of AI systems are numerous and promising for improving human life in various ways.
We must remember the impact of AI on education. It has already started to show its potential by providing personalised learning experiences for students at all levels. With the help of AI-driven systems like intelligent tutoring systems (ITS), adaptive learning technologies (ALT), and educational chatbots, students can learn at their own pace without feeling overwhelmed or left behind.
While there are certain risks associated with the development of AI systems, there are also numerous opportunities for them to make our world a better place. By harnessing the power of these technologies for good, we can create a brighter future for ourselves and generations to come.

Conclusion
The AI revolution presents both extraordinary opportunities and significant challenges for humanity. The benefits of AI, when developed responsibly, have the potential to uplift societies, improve quality of life, and address long-standing global issues. However, the risks associated with AGI demand careful attention and international cooperation. Governments, researchers, and industry leaders must work together to establish guidelines, safety measures, and ethical standards to navigate the path toward AI systems that serve humanity’s best interests and safeguard against potential risks. By taking a balanced approach, we can strive for a future where AI systems save humanity rather than destroy it.

Executive Summary:
A viral video claiming to show Israelis pleading with Iran to "stop the war" is not authentic. As per our research, the footage is AI-generated, created using tools like Google’s Veo, and is not evidence of a real protest. The video features unnatural visuals and errors typical of AI fabrication. It is part of a broader wave of misinformation surrounding the Israel-Iran conflict, in which AI-generated content is widely used to manipulate public opinion. This incident underscores the growing challenge of distinguishing real events from digital fabrications in global conflicts and highlights the importance of media literacy and fact-checking.
Claim:
An X-verified user with the handle "Iran, stop the war, we are sorry" posted a video featuring people holding placards and the Israeli flag. The caption suggests that Israeli citizens are calling for peace and expressing remorse, stating, "Stop the war with Iran! We apologize! The people of Israel want peace." The user further claims that Israel, having allegedly initiated the conflict by attacking Iran, is now seeking reconciliation.

Fact Check:
The bottom-right corner of the video displays a "VEO" watermark, suggesting it was generated using Google's AI tool, VEO 3. The video exhibits several noticeable inconsistencies such as robotic, unnatural speech, a lack of human gestures, and unclear text on the placards. Additionally, in one frame, a person wearing a blue T-shirt is seen holding nothing, while in the next frame, an Israeli flag suddenly appears in their hand, indicating possible AI-generated glitches.

We further analyzed the video using the AI detection tool Hive Moderation, which returned a 99% probability that the video was generated using artificial intelligence. To validate this finding, we examined a keyframe from the video separately, which likewise returned a 99% probability of being AI-generated. These results strongly indicate that the video is not authentic and was most likely created using advanced AI tools.
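Detection services of this kind typically report a per-class probability score, and the fact-check treats scores at or above a chosen threshold as a strong signal. As an illustration only (the response schema below is a hypothetical stand-in, not Hive Moderation's actual API), a score report can be reduced to a verdict like this:

```python
import json

# Hypothetical response shape -- real detection tools return their own
# schema; this sample mirrors the 99% "ai_generated" score cited above.
SAMPLE_RESPONSE = json.dumps({
    "input": "video_keyframe.jpg",
    "classes": [
        {"class": "ai_generated", "score": 0.99},
        {"class": "not_ai_generated", "score": 0.01},
    ],
})

def likely_ai_generated(response_json: str, threshold: float = 0.90) -> bool:
    """Return True when the 'ai_generated' score meets the threshold."""
    data = json.loads(response_json)
    scores = {c["class"]: c["score"] for c in data["classes"]}
    return scores.get("ai_generated", 0.0) >= threshold

if __name__ == "__main__":
    print(likely_ai_generated(SAMPLE_RESPONSE))  # a 0.99 score clears the 0.90 bar
```

A threshold-based verdict like this is only one input to a fact-check; watermarks, visual glitches, and the absence of corroborating reports carry independent weight.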

Conclusion:
The video is highly likely to be AI-generated, as indicated by the VEO watermark, visual inconsistencies, and a 99% probability from HIVE Moderation. This highlights the importance of verifying content before sharing, as misleading AI-generated media can easily spread false narratives.
- Claim: AI generated video of Israelis saying "Stop the War, Iran We are Sorry".
- Claimed On: Social Media
- Fact Check: AI-Generated and Misleading
Starting on 16th February 2025, Google changed its advertisement platform program policy: organizations that use its advertising services are now permitted to employ device fingerprinting techniques to track their users. Originally announced on 18th December 2024, this rule change has sparked yet another debate regarding privacy and profits.
The Issue
Fingerprinting is a technique that collects information about a user’s device and browser, ultimately enabling the creation of a profile of the user. Data procured in this manner is not limited to targeting advertisements: private entities and even government organizations can use it to identify individuals who access their services. If customization details such as language settings and a user’s screen size are collected, combining them with data points like browser type, time zone, battery status, and even IP address makes it far easier to identify an individual.
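The profiling step described above can be sketched in a few lines: the collected signals are serialized in a canonical order and hashed into a stable identifier. This is a minimal illustration, and the attribute names and values below are assumptions, not any actual tracker's schema:

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Combine observable browser/device signals into a stable identifier."""
    # Sorting the keys makes serialization canonical, so the same set of
    # signals always produces the same hash.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Signals of the kind described above (illustrative values).
device_a = {
    "language": "en-IN",
    "screen": "1920x1080",
    "browser": "Chrome 121",
    "timezone": "Asia/Kolkata",
    "battery_api": True,
}

if __name__ == "__main__":
    print(browser_fingerprint(device_a)[:12])
```

Because the identifier depends only on device characteristics, clearing cookies or browsing data leaves it unchanged, while a shift in even one signal (say, the time zone) yields a different fingerprint.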
What makes this technique contentious at the moment is the lack of awareness regarding the information being collected from the user and the inability to opt out once permissions are granted.
This is unlike Google’s standard system of data collection through permission requests, such as accepting website cookies—small text files sent to the browser when a user visits a particular website. While contextual and first-party cookies limit data collection to enhance user experience, third-party cookies enable the display of irrelevant advertisements while users browse different platforms. Due to this functionality, companies can engage in targeted advertising.
This issue has been addressed in laws like the General Data Protection Regulation (GDPR) of the European Union (EU) and the Digital Personal Data Protection (DPDP) Act, 2023 (India), which mandate strict rules and regulations regarding advertising, data collection, and consent, among other things. One of the major requirements in both laws is obtaining clear, unambiguous consent. This also includes the option to opt out of previously granted permissions for cookies.
However, in the case of fingerprinting, the mechanism of data collection relies on signals that users cannot easily erase. While clearing all data from the browser or refusing cookies might seem like appropriate steps to take, they do not prevent tracking through fingerprinting, as users can still be identified using system details that a website has already collected. This applies to all IoT products as well. People usually do not frequently change the devices they use, and once a system is identified, there are no available options to stop tracking, as fingerprinting relies on device characteristics rather than data-collecting text files that could otherwise be blocked.
Google’s Changing Stance
According to Statista, Google’s revenue is largely made up of the advertisement services it provides (amounting to 264.59 billion U.S. dollars in 2024). Any change in its advertisement program policies draws significant attention due to its economic impact.
In 2019, Google claimed in a blog post that fingerprinting was a technique that “subverts user choice and is wrong.” It is in this context that the recent policy shift comes as a surprise. In response, the ICO (Information Commissioner’s Office), the UK’s data privacy watchdog, has stated that this change is irresponsible. Google, however, is eager to have further discussions with the ICO regarding the policy change.
Conclusion
The debate regarding privacy in targeted advertising has been ongoing for quite some time. Concerns about digital data collection and storage have led to new and evolving laws that mandate strict fines for non-compliance.
Google’s shift in policy raises pressing concerns about user privacy and transparency. Fingerprinting, unlike cookies, offers no opt-out mechanism, leaving users vulnerable to continuous tracking without consent. This move contradicts Google’s previous stance and challenges global regulations like the GDPR and DPDP Act, which emphasize clear user consent.
With regulators like the ICO expressing disapproval, the debate between corporate profits and individual privacy intensifies. As digital footprints become harder to erase, users, lawmakers, and watchdogs must scrutinize such changes to ensure that innovation does not come at the cost of fundamental privacy rights.
References
- https://www.techradar.com/pro/security/profit-over-privacy-google-gives-advertisers-more-personal-info-in-major-fingerprinting-u-turn
- https://www.ccn.com/news/technology/googles-new-fingerprinting-policy-sparks-privacy-backlash-as-ads-become-harder-to-avoid/
- https://www.emarketer.com/content/google-pivot-digital-fingerprinting-enable-better-cross-device-measurement
- https://www.lewissilkin.com/insights/2025/01/16/google-adopts-new-stance-on-device-fingerprinting-102ju7b
- https://www.lewissilkin.com/insights/2025/01/16/ico-consults-on-storage-and-access-cookies-guidance-102ju62
- https://www.bbc.com/news/articles/cm21g0052dno
- https://www.techradar.com/features/browser-fingerprinting-explained
- https://fingerprint.com/blog/canvas-fingerprinting/
- https://www.statista.com/statistics/266206/googles-annual-global-revenue/#:~:text=In%20the%20most%20recently%20reported,billion%20U.S.%20dollars%20in%202024