#FactCheck - AI-Generated Image Falsely Linked to Kotdwar Shop Controversy
Executive Summary
A dispute recently emerged in Kotdwar, Uttarakhand, over the name of a shop. During the controversy, a local youth, Deepak Kumar, came forward in support of the shopkeeper. The incident subsequently became a subject of discussion on social media, with users expressing varied reactions. Meanwhile, a photo began circulating on social media showing a burqa-clad woman presenting a bouquet to Deepak Kumar. The image is being shared with the claim that All India Majlis-e-Ittehadul Muslimeen (AIMIM)’s women’s president, Rubina, welcomed “Mohammad Deepak Kumar” by presenting him with a bouquet. However, research conducted by CyberPeace found the viral claim to be false: users are sharing an AI-generated image with a misleading claim.
Claim:
On the social media platform Instagram, a user shared the viral image, claiming that AIMIM’s women’s president Rubina welcomed “Mohammad Deepak Kumar” by presenting him with a bouquet. The link to the post, its archived version, and a screenshot are provided below.

Fact Check:
A close examination of the viral image revealed inconsistencies that raised suspicion it could be AI-generated. To verify its authenticity, the image was analysed using the AI detection tool Hive Moderation, which indicated a 96 percent probability that the image was AI-generated.

In the next stage of the research, the image was also analysed using another AI detection tool, Wasit AI, which likewise identified it as AI-generated.
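For readers who want to reproduce this kind of check programmatically, the sketch below shows how an image could be submitted to an AI-detection service over HTTP and its reported probability read back. The endpoint URL, API key, request fields, and response field are hypothetical placeholders, not the actual Hive Moderation or Wasit AI APIs.

```python
# Hedged sketch: submit an image to a generic AI-detection service and read back a score.
# The endpoint, credential, and response schema below are hypothetical placeholders.
import requests

API_URL = "https://example-detector.invalid/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                  # placeholder credential

def check_image(path: str) -> float:
    """Upload an image and return the service's reported probability that it is AI-generated."""
    with open(path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    return response.json().get("ai_generated_probability", 0.0)

if __name__ == "__main__":
    score = check_image("viral_image.jpg")  # hypothetical file name
    print(f"Probability AI-generated: {score:.0%}")
```

Tool-reported scores such as the 96 percent figure above are probabilistic indicators rather than proof, which is why the image was cross-checked with a second detector before a conclusion was drawn.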

Conclusion
The research establishes that users are circulating an AI-generated image with a misleading claim linking it to the Kotdwar controversy.

The World Economic Forum’s Global Risks Report identified AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024, selected by 53% of respondents in a survey conducted in September 2023. Artificial intelligence is automating the creation of fake news at a rate that far outpaces fact-checking, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 elections of the European Union, Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s AI interface ‘CoPilot’ were inaccurate one-third of the time when queried about election data. This underscores the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information that is created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve genuine acceptance or fulfil its positive potential, because there is widespread cynicism about the technology, and rightly so. General public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries; a prominent recent example is the fintech sector, such as the UK’s Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to test and refine solutions for regulating the misinformation that AI technologies create. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in developing anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and about the role of regulatory sandboxes, helping to manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions

Introduction
Deepfakes are an artificial intelligence (AI) technology that employs deep learning to generate realistic-looking but phoney videos or images. The algorithms analyse large volumes of data and discover patterns in order to produce compelling and realistic results. Deepfakes use this technology to modify videos or photos so they appear to involve events or persons that never happened or existed. The procedure begins with gathering large volumes of visual and auditory data about the target individual, usually obtained from publicly accessible sources such as social media or public appearances. This data is then used to train a deep-learning model to resemble the target of the deepfake.
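To make the idea of training a model on a target's face concrete, the sketch below shows the shared-encoder, two-decoder autoencoder layout commonly associated with face-swap deepfakes. The layer sizes, the 64x64 resolution, and the use of PyTorch are illustrative assumptions for this explanation, not the implementation of any particular tool.

```python
# Illustrative sketch of a face-swap autoencoder: one shared encoder, two person-specific decoders.
# Sizes, resolution, and framework choice are assumptions made for this example only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),  # compact "face code" shared by both identities
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct faces of person A
decoder_b = Decoder()  # would be trained to reconstruct faces of person B

# After training, feeding a face of person A through decoder_b yields person B's face
# with A's pose and expression (the "swap"). Here we simply check that the shapes run.
fake = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
print(fake.shape)  # torch.Size([1, 3, 64, 64])
```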
Recent Cases of Deepfakes
In an unusual turn of events, a man from northern China became the victim of a sophisticated deep fake technology. This incident has heightened concerns about using artificial intelligence (AI) tools to aid financial crimes, putting authorities and the general public on high alert.
During a video conversation, a scammer successfully impersonated the victim’s close friend using AI-powered face-swapping technology. The scammer duped the unwary victim into transferring 4.3 million yuan (nearly Rs 5 crore). The fraud occurred in Baotou, China.
AI ‘deep fakes’ of innocent images fuel spike in sextortion scams
Artificial intelligence-generated “deepfakes” are fuelling sextortion scams the way dry brush feeds a wildfire. According to the FBI, nationally reported sextortion cases rose by 322% between February 2022 and February 2023, with a notable spike since April attributed to AI-doctored photographs. The FBI warns that innocent photographs or videos posted on social media or sent in private messages can be distorted into sexually explicit, AI-generated visuals that are “true-to-life” and practically impossible to distinguish from real images. Predators, often located in other countries, use these doctored AI images against juveniles to extort money from them or their families or to obtain actual sexually graphic images.
Deepfake Applications
- Lensa AI.
- Deepfakes Web.
- Reface.
- MyHeritage.
- DeepFaceLab.
- Deep Art.
- Face Swap Live.
- FaceApp.
Deepfake examples
There are numerous high-profile deepfake examples. One well-known deepfake video was released by actor and director Jordan Peele, who combined actual footage of Barack Obama with his own impersonation of Obama to convey a warning about deepfake videos.
Another video shows Facebook CEO Mark Zuckerberg discussing how Facebook ‘controls the future’ with stolen user data; it circulated most notably on Instagram. The original footage comes from a speech he delivered on Russian election meddling, and only 21 seconds of that address were used to create the fake. However, the vocal impersonation fell short of Jordan Peele’s Obama and gave the fake away.
The dark side of AI-Generated Misinformation
- AI-generated misinformation distorts the truth, making it difficult to distinguish fact from fiction.
- People can often unmask AI content by looking for inconsistencies and the lack of a human touch.
- AI content detection technologies can detect and neutralise disinformation, preventing it from spreading.
Safeguards against Deepfakes
Technology is not the only way to guard against deepfake videos. Good fundamental security practices are incredibly effective for combating deepfakes. For example, incorporating automatic checks into any mechanism for disbursing payments could have prevented numerous deepfake and related frauds. You might also:
- Keep regular backups, which safeguard your data from ransomware and allow you to restore damaged data.
- Use different, strong passwords for different accounts, so that a breach of one network or service does not compromise the others. You do not want someone who gets into your Facebook account to be able to access your other accounts as well.
- To secure your home network, laptop, and smartphone against cyber dangers, use a good security package such as Kaspersky Total Security. This bundle includes anti-virus software, a VPN to prevent compromised Wi-Fi connections, and webcam security.
What is the future of Deepfakes?
Deepfake technology is constantly evolving. Deepfake videos were easy to spot two years ago because of their clumsy movement and the fact that the simulated figure never seemed to blink. However, the most recent generation of fake videos has evolved and adapted.
There are currently approximately 15,000 Deepfake videos available online. Some are just for fun, while others attempt to sway your opinion. But now that it only takes a day or two to make a new Deepfake, that number could rise rapidly.
Conclusion
The distinction between authentic and fake content will undoubtedly become more challenging to identify as technology advances. As a result, experts feel it should not be up to individuals to discover deep fakes in the wild. “The responsibility should be on the developers, toolmakers, and tech companies to create invisible watermarks and signal what the source of that image is,” they stated. Several startups are also working on approaches for detecting deep fakes.

Introduction:
Welcome to the second edition of our Digital Forensics blog series. In our previous blog we discussed what digital forensics is, the process it follows, the tools it uses, and the challenges faced in the field. We also looked at what the future holds for digital forensics in the current scenario. Today, we will explore the differences between three similar-sounding terms that vary significantly in practice: copying, cloning and imaging.
In digital forensics, the preservation and analysis of electronic evidence are critical for investigations and legal proceedings. Replicating data and devices without compromising the integrity of the original evidence is one of the fundamental tasks in this domain.
Three primary techniques are used for this purpose: copying, cloning, and imaging. Each technique has its own strengths and is applied according to the needs of the investigation.
In this blog, we will examine the differences between copying, cloning and imaging. We will talk about the importance of each technique, their applications and why imaging is considered the best for forensic investigations.
Copying
Copying means duplicating data or files from one location to another, typically using standard copy commands. When dealing with evidence, however, a plain copy is rarely sufficient, because a standard copy can alter metadata and miss hidden or deleted data (illustrated in the sketch after the list below).
The characteristics of copying include:
- Speed: Copying is simpler and faster compared to cloning or imaging.
- Risk: The risk involved in copying is that metadata might be altered and not all the data might be captured.
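A minimal Python sketch of the metadata problem, assuming two hypothetical file names: a standard copy reproduces the file content (the hashes match), but the modification timestamp generally changes, which is exactly the kind of alteration forensic work must avoid.

```python
# Small illustration (not a forensic procedure): a plain copy preserves content but not metadata.
import os
import shutil
import hashlib

def sha256(path):
    """Hash a file's content so two copies can be compared byte-for-byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

src, dst = "evidence.txt", "evidence_copy.txt"  # hypothetical file names

shutil.copy(src, dst)  # copies content and permission bits, but not timestamps

print(sha256(src) == sha256(dst))                       # True: the content matches...
print(os.stat(src).st_mtime == os.stat(dst).st_mtime)   # usually False: metadata has changed
```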
Cloning
Cloning is the process of transferring the entire contents of a hard drive or storage device onto another storage device. The cloning process captures the active data as well as the unallocated space and hidden partitions, thereby preserving the whole structure of the original device. Cloning is generally performed at the sector level of the device, and a clone can be used as a working copy of a device (a minimal sketch of a sector-level copy follows the list below).
Characteristics of cloning:
- Bit-for-bit replication: Cloning preserves the exact content and the whole structure of the original device.
- Use cases: Cloning is used when the original device must be kept intact for further examination or legal proceedings.
- Time-consuming: Cloning usually takes longer than simple copying since it involves full, detailed replication, though the duration depends on factors such as the size of the storage device, the speed of the devices involved, and the cloning method.
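Conceptually, a sector-level copy is just a loop that reads and writes fixed-size blocks until the source is exhausted, as in the hedged sketch below. The device paths are hypothetical, and real cloning is performed with hardware write blockers and validated forensic tools rather than ad-hoc scripts.

```python
# Hedged sketch of a sector-level (bit-for-bit) copy loop; device paths are hypothetical.
BLOCK_SIZE = 512 * 1024  # read in multiples of the 512-byte sector size

def clone(source_path: str, target_path: str) -> int:
    """Copy every block from source to target, returning the number of bytes written."""
    written = 0
    with open(source_path, "rb") as src, open(target_path, "wb") as dst:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            dst.write(block)  # every sector is carried over, including unallocated space
            written += len(block)
    return written

# clone("/dev/sdb", "/dev/sdc")  # hypothetical source and destination devices
```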
Imaging
Imaging is the process of creating a forensic image of a storage device. A forensic image is a bit-for-bit replica of every piece of data on the source device, including allocated space, unallocated space, and slack space.
The image is then used for analysis and investigation, leaving the original evidence untouched. Unlike clones, which can serve as working copies, forensic images are intended for analysis and investigation and are not used as working copies of a device.
Characteristics of Imaging:
- Integrity: Imaging ensures the integrity and authenticity of the evidence produced (see the hashing sketch after this list).
- Flexibility: A forensic image can be mounted as a virtual drive, allowing the data to be analysed without affecting the original evidence.
- Metadata: Imaging captures the metadata associated with the data, supporting forensic analysis.
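The integrity point is usually enforced with cryptographic hashes: the source is hashed while the image is written, and the finished image file is hashed again so the two values can be compared and recorded. The sketch below assumes hypothetical paths and raw (uncompressed) output; real investigations use validated imaging tools and formats such as E01.

```python
# Hedged sketch: create a raw image of a source and verify it with SHA-256 hashes.
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB read chunks

def image_and_hash(source_path: str, image_path: str) -> str:
    """Write a raw image of the source and return the SHA-256 of the data as it was read."""
    src_hash = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        for block in iter(lambda: src.read(BLOCK_SIZE), b""):
            src_hash.update(block)
            img.write(block)
    return src_hash.hexdigest()

def hash_file(path: str) -> str:
    """Re-hash the finished image so it can be compared against the acquisition hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(BLOCK_SIZE), b""):
            h.update(block)
    return h.hexdigest()

# acquisition_hash = image_and_hash("/dev/sdb", "evidence.img")  # hypothetical paths
# assert acquisition_hash == hash_file("evidence.img")           # matching hashes prove integrity
```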
Key Differences
- Purpose: Copying is suited to everyday use but not to forensic investigations that require data integrity; cloning and imaging are designed for forensic preservation.
- Depth of Replication: Cloning and imaging capture the entire storage device, including hidden, unallocated, and deleted data, whereas copying may miss crucial forensic data.
- Data Integrity: Imaging and cloning preserve the integrity of the original evidence, making them suitable for legal and forensic use, which is a critical aspect of forensic investigations.
- Forensic Soundness: Imaging is considered the best approach in digital forensics due to its comprehensive and non-invasive nature.
- Output: Cloning generally goes from one hard disk to another, whereas imaging creates a compressed file that contains a snapshot of the entire hard drive or specific partitions.
Conclusion
Copying, cloning, and imaging all involve duplicating data or storage devices, but with significant differences that matter in digital forensics. For forensic investigations, imaging is the most commonly selected approach because it correctly preserves the state of the evidence for analysis or legal use. It is therefore essential for forensic investigators to understand these differences so that they can rely on genuine, uncontaminated digital evidence in their investigations and legal arguments.