#FactCheck – Debunked: Dhoni's Viral Picture Misinterpreted as Political Support
Executive Summary:
The picture that went viral with the false claim that Dhoni was supporting the Congress party actually shows him celebrating Chennai Super Kings' milestone of 6 million followers on X (formerly known as Twitter) in 2020. Dhoni's gesture was misinterpreted by many, which resulted in the spread of false information. The CyberPeace Research team conducted an in-depth investigation into the photo's origins and confirmed its real context through a reverse image search, noting that news outlets and CSK's official social media channels had shared it. The case illustrates the value of fact verification and the role of authentic information in countering the fake news epidemic.

Claims:
An image of former Indian cricket captain Mahendra Singh Dhoni was shared with the claim that he was urging people to vote for the Congress party. In the picture, Dhoni wears the Chennai Super Kings (CSK) jersey, shows his open right palm and gestures the number 'one' with his left index finger. In reality, he was celebrating Chennai Super Kings' milestone achievement on X (formerly Twitter) in 2020. Many people are sharing the misinterpretation, knowingly or unknowingly, across social media platforms.



Fact Check:
After receiving the post, we ran a reverse image search and found a news article published by NDTV. According to the news outlet, the photos show Dhoni and his teammates celebrating CSK's milestone of reaching six million followers on X (formerly known as Twitter).

The image is attributed to a tweet from @chennaiipl, so we looked into the official account of Chennai Super Kings on X (formerly known as Twitter). And voila! We found the exact post, published on X on 5 October 2020.

Additionally, we found a video posted on CSK's X handle featuring other cricketers celebrating the six-million-follower milestone and thanking the audience for their support. It, too, was posted on 5 October 2020, with the caption: “Chennai Super #SixerOnTwitter! A big thanks to all the super fans for each and every bouquet and brickbat throughout the last decade. All the #yellove to you. #WhistlePodu”

It is therefore clear that the claim attached to the viral image, that MS Dhoni is supporting the Congress party, is false and misleading.
Conclusion:
The claim circulating online that a picture shows Mahendra Singh Dhoni supporting the Congress party has been proven untrue. The photograph actually shows Dhoni celebrating Chennai Super Kings reaching six million followers on social media in 2020. This highlights the need to verify the facts of any news circulating online.
- Claim: A photo allegedly depicting former Indian cricket captain Mahendra Singh Dhoni encouraging people to support the Congress party in elections surfaced online.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
The mysteries of the universe have been a subject of human curiosity for thousands of years. Astrophysicists work continuously to unravel them, and with today's technology this is becoming increasingly achievable. Recently, with the help of Artificial Intelligence (AI), scientists have probed the depths of the cosmos: AI has revealed the secret equation that properly “weighs” galaxy clusters. This groundbreaking discovery not only sheds light on the formation and behaviour of these clusters but also marks a turning point in the investigation of the cosmos. In a separate effort, scientists and AI have together uncovered an astounding 430,000 galaxies strewn throughout the universe. The large haul includes 30,000 ring galaxies, considered the most unusual of all galaxy forms. These discoveries are the first results of the "GALAXY CRUISE" citizen science initiative, in which 10,000 volunteers sifted through data from the Subaru Telescope. After training the AI on 20,000 human-classified galaxies, scientists set it loose on 700,000 galaxies from the Subaru data.
Brief Analysis
A group of astronomers from the National Astronomical Observatory of Japan (NAOJ) has successfully applied AI to ultra-wide field-of-view images captured by the Subaru Telescope. The researchers achieved a high accuracy rate in finding and classifying spiral galaxies, and the technique is being used alongside citizen science for further discoveries.
Astronomers are increasingly using AI to analyse and clean raw astronomical images for scientific research. This involves feeding photos of galaxies into neural network algorithms, which can identify patterns in real data more quickly and with fewer errors than manual classification. These networks have numerous interconnected nodes and can recognise patterns; such algorithms are now about 98% accurate in categorising galaxies.
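To give a flavour of the kind of pattern-recognition pipeline described above, here is a minimal sketch of a small convolutional network that labels galaxy image cutouts as, say, spiral or non-spiral. It is not the NAOJ team's actual model; the architecture, image size, class names and synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch (not the NAOJ pipeline): a tiny CNN that labels galaxy
# image cutouts as "spiral" vs "non-spiral". Image size, architecture and
# the synthetic training data are illustrative assumptions only.
import torch
import torch.nn as nn

class GalaxyClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = GalaxyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for real, human-labelled 64x64 RGB cutouts.
images = torch.rand(8, 3, 64, 64)      # batch of fake cutouts
labels = torch.randint(0, 2, (8,))     # 0 = non-spiral, 1 = spiral

for _ in range(5):                     # tiny illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```

In a real survey pipeline, the human-classified galaxies from volunteers would take the place of the random tensors above, and the trained network would then label the millions of remaining cutouts automatically.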
Another application of AI is exploring the nature of the universe, particularly dark matter and dark energy, which together make up over 95% of the universe's energy content. The quantity and evolution of these components have significant implications for everything from how galaxies are arranged to how the universe expands.
AI is capable of analysing massive amounts of data, and the training data for dark matter and dark energy studies comes from complex computer simulations. A neural network is fed these simulation results to learn how the parameters of the universe change, allowing cosmologists to then point the trained network at actual observational data.
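The sketch below illustrates that simulate-then-infer idea in miniature: a small network learns to map a simulated summary statistic (a fake "power spectrum") back to two illustrative cosmological parameters. The data, network size and parameter names are assumptions for illustration, not the actual pipelines cosmologists use.

```python
# Illustrative sketch only: regress cosmology-like parameters (a matter-density-like
# and a clustering-amplitude-like number) from a simulated summary statistic.
# The "simulations" here are random stand-ins for real N-body outputs.
import torch
import torch.nn as nn

n_sims, n_bins = 256, 50
true_params = torch.rand(n_sims, 2)              # the parameters each simulation was run with
bins = torch.linspace(0.1, 1.0, n_bins)
# Fake "power spectra" whose shape depends (noisily) on the parameters.
spectra = true_params[:, :1] * bins.pow(-1) + true_params[:, 1:] + 0.01 * torch.randn(n_sims, n_bins)

net = nn.Sequential(nn.Linear(n_bins, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(200):                             # train on the simulation suite
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(spectra), true_params)
    loss.backward()
    opt.step()

# Once trained on simulations, the same network can be pointed at a real
# (observed) spectrum to estimate the parameters of our own universe.
observed = spectra[0:1]                          # stand-in for real survey data
print("estimated parameters:", net(observed).detach())
```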
These methods are becoming increasingly important as astronomical observatories generate enormous amounts of data. The Vera C. Rubin Observatory, for example, is expected to produce high-resolution images of the sky from over 60 petabytes of raw data, and AI-assisted computers are being utilised to process it.
Data annotation techniques for training neural networks range from simple tagging to more advanced types such as image classification, which assigns a single label to an image to describe it as a whole. Still more advanced methods, such as semantic segmentation, group the pixels of an image into clusters and give each cluster a label.
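To make that distinction concrete, the small sketch below (using made-up arrays, not real survey data) contrasts a whole-image classification label with a per-pixel semantic segmentation mask; the class names are hypothetical.

```python
# Illustrative contrast between annotation types (made-up data, no real survey images).
import numpy as np

image = np.random.rand(8, 8, 3)       # a tiny stand-in for one sky cutout

# 1. Image classification: one label describes the whole image.
classification_label = "ring_galaxy"

# 2. Semantic segmentation: every pixel gets a class id
#    (0 = background, 1 = galaxy, 2 = foreground star).
segmentation_mask = np.zeros((8, 8), dtype=np.int64)
segmentation_mask[2:6, 2:6] = 1       # pixels belonging to the galaxy
segmentation_mask[0, 7] = 2           # a stray foreground star

print("whole-image label:", classification_label)
print("pixels per class:", np.bincount(segmentation_mask.ravel()))
```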
In this way, AI is becoming a crucial tool for space exploration, enabling the processing and analysis of vast amounts of data and deepening our understanding of the universe. However, clear policy guidelines and the ethical use of technology should be prioritised while harnessing the true potential of this contemporary technology.
Policy Recommendation
- Real-Time Data Sharing and Collaboration - Effective policies and frameworks should be established to promote real-time data sharing among astronomers, AI developers and research institutes. Open access to astronomical data should be encouraged to facilitate better innovation and bolster the application of AI in space exploration.
- Ethical AI Use - Proper guidelines and a well-structured ethical framework can facilitate judicious AI use in space exploration. The framework can play a critical role in addressing AI issues pertaining to data privacy, AI Algorithm bias and transparent decision-making processes involving AI-based tech.
- Investing in Research and Development (R&D) in the AI sector - Governments and corporate giants should prioritise AI R&D in space technology and exploration, for example by funding initiatives focused on developing AI algorithms for processing astronomical data, optimising telescope operations and detecting celestial bodies.
- Citizen Science and Public Engagement - Promotion of citizen science initiatives can better leverage AI tools by involving the public in astronomical research. A prominent example is the SETI@home program (Search for Extraterrestrial Intelligence); similar outreach can educate and engage citizens in AI-enabled discovery programs such as identifying exoplanets, classifying galaxies and searching for life beyond Earth by detecting anomalies in radio signals.
- Education and Training - Training programs should be implemented to educate astronomers in AI techniques and the intricacies of data science. There is a need to foster collaboration between AI experts, data scientists and astronomers to harness the full potential of AI in space exploration.
- Bolster Computing Infrastructure - Authorities should ensure that proper computing infrastructure is in place to facilitate the application of AI in astronomy. This calls for greater investment in high-performance computing systems to process the large volumes of data involved and to run the AI models that analyse astronomical data.
Conclusion
AI has seen expansive growth in the field of space exploration. As discussed, its multifaceted use cases include discovering new galaxies, classifying celestial objects and analysing the changing parameters of outer space. Nevertheless, to fully harness its potential, robust policy and regulatory initiatives are required to bolster real-time data sharing, not just within the scientific community but also between nations. Other policy considerations include investment in research, promotion of citizen science initiatives, and education and funding for astronomers. A critical aspect is improving key computing infrastructure, which is crucial for processing the vast amounts of data generated by astronomical observatories.
References
- https://mindy-support.com/news-post/astronomers-are-using-ai-to-make-discoveries/
- https://www.space.com/citizen-scientists-artificial-intelligence-galaxy-discovery
- https://www.sciencedaily.com/releases/2024/03/240325114118.htm
- https://phys.org/news/2023-03-artificial-intelligence-secret-equation-galaxy.html
- https://www.space.com/astronomy-research-ai-future

Introduction
Digital arrests are a form of scam that involves the digital restraint of individuals. The restraint can range from restricting access to accounts and digital platforms, to measures preventing further digital activity, to being confined to a video call and monitored through it. Typically, these scams target vulnerable individuals who are unfamiliar with digital fraud tactics, making them more susceptible to manipulation. Victims are often accused of serious crimes such as drug trafficking, money laundering or document forgery, and the scammers scare them into believing either that their identities were used to commit these crimes or that they committed the crimes themselves. The recent uptick in such scams in India highlights the growing concern.
The Legality of Digital Arrests in India
There is no legal provision for law enforcement to conduct ‘arrests’ via video calls or online monitoring. If you receive such a call, it is a clear scam. In fact, the recently enacted criminal laws contain no provision for law enforcement agencies to conduct a digital arrest; the law only provides for the service of summons and the conduct of proceedings in electronic mode.
The Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023 provides for summons to be served electronically under Section 63. The section defines the form of a summons and states that every summons served electronically shall be encrypted and bear the image of the seal of the Court or a digital signature. Further, under Section 532 of the BNSS, trials and proceedings may be held in electronic mode, through electronic communication or the use of audio-video electronic means.
Modus Operandi
In a digital arrest scam, the scammer connects with the victim via video call (WhatsApp, Skype, etc.) over their alleged involvement in crimes (financial fraud, drug trafficking, etc.) on bogus charges. The victim is intimidated with the threat of imminent arrest and told that, until the arresting officers reach them, they must remain on the call, stay under digital surveillance and not contact anyone during the ongoing ‘investigation’.
During this period, the scammers collect information from the victim to ‘confirm their identity’ and create the impression that multiple senior officials are investigating the case thoroughly. By this time the victim, scared out of their wits, sits through the ‘arrest’, and it is then that the scammers, posing as law enforcement officials, suggest that the arrest can be avoided by paying a certain amount in ‘fines’ to accounts they specify. The monitoring and surveillance continue until the victim transfers money to the accounts provided by the scammers. These are the common manipulation tactics used in digital arrest fraud.
Recent Digital Arrest Cases
- Recently, a 35-year-old NBCC official was duped of Rs 55 lakh in a ‘digital arrest’ scam. Posing as customs officials, the fraudsters claimed her details were linked to intercepted illegal items and a pending arrest. They kept her on video calls and convinced her to transfer Rs 55 lakh to avoid money laundering charges. After the transfer, the scammers vanished. A police investigation traced the funds to a fake company, leading to the arrest of several suspects.
- In another recent case, a neurologist was duped of Rs 2.81 crore in a ‘digital arrest’ scam. Fraudsters claimed her phone number and Aadhaar were linked to accounts used to transfer funds to an individual. Under pressure, she was convinced to undergo “verification” and made multiple transactions over two days, with the scammers threatening legal consequences for money laundering if she did not comply. A police investigation is ongoing, and her immense financial loss highlights the severity of this cybercrime.
- In yet another case, the victim was duped of Rs 7.67 crore in a prolonged ‘digital arrest’ scam that stretched over three months. Fraudsters posing as TRAI officials claimed there were complaints against her phone number and threatened to suspend it, alleging illegal use of another number linked to her Aadhaar. Pressured and manipulated through video calls, she was coerced into transferring large sums, even taking out an Rs 80 lakh loan. The case is under investigation as authorities pursue the cybercriminals behind the massive fraud.
Best Practices
- Do not panic when you receive a call delivering sudden, unexpected news. Scammers thrive on the panic they create.
- Do not share personal details such as your Aadhaar number or PAN number with unknown or suspicious entities, and be equally cautious with financial information such as credit card numbers, OTPs and passwords.
- If someone contacts you claiming to be a government official, always verify their identity by reaching the concerned agency through its official channels.
- Report and block any fraudulent communications you receive and mark them as spam. This also warns other users who see the caller ID flagged as fraud or spam.
- If you have been defrauded, report it to the authorities so that action can be taken and the fraudsters can be apprehended.
- Do not transfer any money as ‘fines’ or ‘dues’ to the accounts that such calls or messages point you to.
- In case of any threat, issue or discrepancy, file a complaint at cybercrime.gov.in or call the helpline number 1930. You can also seek assistance from the CyberPeace helpline at +91 9570000066.
References:
- https://www.cyberpeace.org/resources/blogs/digital-arrest-fraud
- https://www.business-standard.com/india-news/what-is-digital-house-arrest-find-out-how-to-avoid-this-new-scam-124052400799_1.html
- https://www.the420.in/ias-ips-officers-major-generals-doctors-and-professors-fall-victim-to-digital-arrest-losing-crores-stay-alert-read-5-real-cases-inside/
- https://indianexpress.com/article/cities/delhi/senior-nbcc-official-duped-in-case-of-digital-arrest-3-arrested-delhi-police-9588418/#:~:text=Of%20the%20duped%20amount%2C%20Rs,a%20Delhi%20police%20officer%20said (case study 1)
- https://timesofindia.indiatimes.com/city/lucknow/lucknow-sgpgims-professor-duped-of-rs-2-81-crore-in-digital-arrest-scam/articleshow/112521530.cms (case study 2)
- https://timesofindia.indiatimes.com/city/jaipur/bits-prof-duped-of-7-67cr-cops-want-cbi-probe-in-case/articleshow/109514200.cms (case study 3)
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation at scale. These technologies can create manipulative audio and video content, spread political propaganda, defame individuals, and incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order: it can affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation includes expanding digital literacy, strengthening platforms' detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's capabilities for generating content have grown exponentially in recent years. Legitimate uses of AI often take a backseat, resulting in the exploitation of content that already exists on the internet. A prime example is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly damaging consequences. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. The most common examples of misinformation relate to elections, public health and communal issues. These topics share a common factor: they evoke strong emotions, so such content can go viral very quickly and influence social behaviour, to the extent that it may lead to social unrest, political instability and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any country, and even more so in India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI has entered the realm of autonomous content production and language creation, which is closely linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with platforms and content, and has highlighted the need to rethink traditional journalistic principles.
We have seen a number of examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? The safe and ethical use of this technology therefore needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, combined with the massive amount of data generated, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already complex issue, as AI-generated misinformation can be tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies therefore need to cater to this multilingual population.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that specifically addresses AI-generated content. It should include stricter penalties for the originators and spreaders of fake content, in proportion to its consequences, and should establish clear, concise guidelines for social media platforms to ensure proactive detection and removal of AI-generated misinformation.
- Investing in AI-driven tools for customised, real-time detection and flagging of misinformation. Such tools can help identify deepfakes, manipulated images and other forms of AI-generated content (a minimal sketch of this idea follows this list).
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions and government agencies to develop solutions for combating misinformation.
- Rolling out digital literacy programs that empower individuals by training them to evaluate online content. Educational programs in schools and communities can teach critical thinking and media literacy skills, enabling people to better discern real content from fake.
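As referenced in the second recommendation above, the sketch below shows one simple way an AI-assisted flagging tool might work: a TF-IDF text classifier trained on a handful of hand-labelled example headlines. The toy data, labels and threshold are assumptions for illustration only; production systems would use far larger datasets and more sophisticated models (for example, transformer-based classifiers and dedicated deepfake detectors).

```python
# Toy sketch of an AI-assisted misinformation flagger (illustrative only):
# a TF-IDF + logistic regression classifier over hand-labelled headlines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = previously fact-checked as false, 0 = credible.
headlines = [
    "Cricketer's gesture proves he endorsed a political party",
    "Election commission announces official polling dates",
    "Miracle cure eliminates all disease overnight, doctors stunned",
    "Health ministry publishes updated vaccination schedule",
    "Leaked audio shows celebrity urging voters to boycott polls",
    "Weather department issues heavy rainfall warning for the coast",
]
labels = [1, 0, 1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Flag new posts whose predicted probability of being misinformation is high;
# flagged items would then go to human fact-checkers rather than being auto-removed.
new_posts = ["Viral photo shows captain campaigning for a party"]
score = model.predict_proba(new_posts)[0, 1]
print(f"misinformation score: {score:.2f}",
      "-> flag for human review" if score > 0.5 else "-> likely fine")
```

The key design point is that such tools should prioritise routing suspect content to human reviewers, keeping automated removal as a last resort.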
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks are growing in step with the nation's rapid technological development. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Indian policymakers need to rise to the challenge of AI-generated misinformation and counteract it by developing comprehensive strategies that focus not only on regulation and technological innovation but also on public education. AI technologies are misused by bad actors to create hyper-realistic fake content, including deepfakes and fabricated news stories, that can be extremely hard to distinguish from the truth. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defence frameworks and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62