#FactCheck - Debunked: AI-Generated Image Circulating as April Solar Eclipse Snapshot
Executive Summary:
An image circulating on social media as a photograph of the April 8 total solar eclipse was in fact generated by AI and is not a real picture of the astronomical event. Despite claims of its authenticity, CyberPeace's analysis showed that the image was produced with AI image-generation tools. The April 8 total solar eclipse was visible only from those parts of North America that lay in the path of totality, with partial visibility elsewhere; NASA live-streamed the eclipse for viewers outside that path. The spread of false information about rare celestial events underscores the need to rely on trustworthy sources such as NASA for accurate information.
Claims:
An image making the rounds on social networks purports to be a real photograph of the solar eclipse of April 8.




Fact Check:
After receiving the claim, we first ran a keyword search to check whether NASA had posted any image resembling the viral photo, or any celestial event it might depict, on its official social media accounts or website. The April 8 total eclipse was visible only from certain parts of North America that lay in the eclipse's path; the sky above Mazatlan, Mexico, was the first to witness it, and a partial eclipse was visible to those outside the path of totality.
Next, we ran the image through Hive Moderation's AI image-detection tool, which assessed it as 99.2% likely to be AI-generated.

Following that, we applied another AI image-detection tool, Isitai, which assessed the image as 96.16% likely to be AI-generated.

Based on these AI detection tools, we concluded that the claims made by various social media users are fake and misleading: the viral image is AI-generated, not a real photograph.
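As a rough illustration of why two independent detector scores strengthen the verdict, the reported probabilities can be fused with a naive-Bayes combination. This rests on a simplifying assumption that the detectors' errors are independent, and the `fuse` helper below is purely illustrative, not part of either tool:

```python
def fuse(p1: float, p2: float, prior: float = 0.5) -> float:
    """Naive-Bayes fusion of two detector probabilities.

    Assumes (hypothetically) that the two detectors err independently,
    and combines their likelihood ratios with a neutral 50% prior.
    """
    odds = (prior / (1 - prior)) * (p1 / (1 - p1)) * (p2 / (1 - p2))
    return odds / (1 + odds)

# The two scores reported above: 99.2% (Hive) and 96.16% (Isitai).
combined = fuse(0.992, 0.9616)
print(f"{combined:.4f}")  # → 0.9997
```

Under the independence assumption, two strong but imperfect detectors agreeing pushes the combined confidence well above either individual score.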
Conclusion:
Hence, the picture circulating on the internet as a real photo of the April 8 eclipse is AI-generated. Despite claims to the contrary, our analysis showed that it was created with an AI image-generation algorithm. The total eclipse was not visible everywhere in North America, but only along the eclipse path, with partial visibility elsewhere. AI detection tools allowed us to establish that the image is fake. When discussing rare celestial phenomena, it is important to rely on information from trusted sources such as NASA.
- Claim: A viral image of a solar eclipse claimed to be a real photograph of the celestial event on April 8
- Claimed on: X, Facebook, Instagram, website
- Fact Check: Fake & Misleading
Related Blogs
Introduction
The fast-paced development of technology and the widespread use of social media platforms have led to the rapid dissemination of misinformation, characterised by wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making its spread harder to control within vast, interconnected networks. Algorithms judge content primarily by one metric: user engagement. To serve you what you are most likely to enjoy, algorithms and search engines surface the items they deem most relevant. This process was originally designed to cut through clutter and deliver the best information, but because of the viral nature of information and user interactions, it can unknowingly spread misinformation widely.
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation because emotionally charged content triggers strong reactions, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, which leads to the promotion of emotionally charged misinformation. They also prioritize content with viral potential, so false or misleading content can spread faster than corrections or factual reporting.
Additionally, platforms amplify popular content, spreading it faster by presenting it to more users. Fact-checking efforts struggle to keep pace: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish real people from organized networks of troll farms or bots that propagate false information. The result is a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through their networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process creates "echo chambers", where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
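The feedback loop described above can be sketched with a toy simulation. All post names, probabilities, and exposure weights below are hypothetical, not any platform's real ranking code: posts are ranked solely by accumulated engagement, and the emotionally charged post attracts engagement at a higher rate, so it climbs to and stays at the top of the feed regardless of accuracy.

```python
import random

random.seed(42)

# Toy posts: an emotionally charged false claim vs. neutral factual content.
# Engagement probabilities are illustrative assumptions, not measured values.
posts = [
    {"id": "sensational-claim", "engage_prob": 0.30, "engagement": 0},
    {"id": "factual-report",    "engage_prob": 0.10, "engagement": 0},
]

def rank_feed(posts):
    """Engagement-only ranking: the sole metric this toy algorithm optimises."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Simulate rounds of users browsing the feed. Top-ranked posts get more views,
# more views yield more engagement, and more engagement boosts future rank.
for _ in range(1000):
    feed = rank_feed(posts)
    for position, post in enumerate(feed):
        views = 10 // (position + 1)  # the top slot receives far more exposure
        for _ in range(views):
            if random.random() < post["engage_prob"]:
                post["engagement"] += 1

for post in rank_feed(posts):
    print(post["id"], post["engagement"])
```

Running the sketch shows the sensational post accumulating several times the engagement of the factual one: the ranking rule never considers accuracy, so emotional pull alone decides visibility.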
Moreover, social networks and their sheer size and complexity today exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it—such as by inspecting messages or URLs for false information—can be computationally challenging and inefficient. The extensive amount of content that is shared daily means that misinformation can be propagated far quicker than it can get fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create an environment in which misinformation thrives, highlighting the need to counter it through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps towards regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation and protect the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. 'Intermediaries' are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content enables users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)
Introduction
The Central Electricity Authority (CEA) has released the Draft Central Electricity Authority (Cyber Security in Power Sector) Regulations, 2024, inviting comments from stakeholders, including the general public, to be submitted by 10 September 2024. The new regulations are intended to make India's power sector more cyber-resilient and responsive to emerging cyber threats, and to safeguard the nation's power infrastructure.
Key Highlights of the CEA’s New (Cyber Security in Power Sector) Regulations, 2024
- The Central Electricity Authority has framed the 'Cyber Security in Power Sector Regulations, 2024' in exercise of the powers conferred by sub-section (1) of section 177 of the Electricity Act, 2003, to make regulations for measures relating to cyber security in the power sector.
- The scope of the regulations entails that they will apply to all Responsible Entities, Regional Power Committees, the Appropriate Commission, the Appropriate Government, Associated Power Sector Government Organizations, Training Institutes recognized by the Authority, the Authority itself, and Vendors.
- One key aspect of the proposed regulations is the establishment of a dedicated Computer Security Incident Response Team (CSIRT) for the power sector. This team will coordinate a unified cyber defence strategy across the sector, establish security frameworks, and serve as the main agency for incident response and recovery. The CSIRT will also be responsible for developing Standard Operating Procedures (SOPs), security policies, and best practices for incident response activities in consultation with CERT-In and NCIIPC. The detailed roles and responsibilities of the CSIRT are outlined under Chapter 2 of the regulations.
- All responsible entities in the power sector, as covered by the scope of the regulations, are mandated to appoint a Chief Information Security Officer (CISO) and an alternate CISO, both of whom must be Indian nationals and senior management employees. The regulations specify that these officers must report directly to the CEO/Head of the Responsible Entity, emphasizing the critical role of the CISO in safeguarding the nation's power sector assets.
- All Responsible Entities shall establish an Information Security Division (ISD) dedicated to cyber security, headed by the CISO and operational around the clock. The schedule under the regulations sets the minimum workforce for an ISD at four officers, including the CISO, plus four officers/officials for shift operations; sufficient workforce and infrastructure support shall be ensured for the ISD. The detailed functions and responsibilities of the ISD are outlined under Chapter 5, Regulation 10. Furthermore, the ISD shall be staffed by a sufficient number of officers holding valid certificates of completion of domain-specific cyber security courses.
- The regulations oblige entities to have a defined, documented, and maintained Cyber Security Policy approved by the Board or Head of the entity, as well as a Cyber Crisis Management Plan (CCMP) approved by higher management.
- As regards upskilling and empowerment, the regulations advocate organising periodic cyber security awareness programmes and cyber security exercises, including mock drills and tabletop exercises.
CyberPeace Policy Outlook
CyberPeace Policy & Advocacy Vertical has submitted detailed recommendations on the proposed 'Cyber Security in Power Sector Regulations, 2024' to the Central Electricity Authority, Government of India. We advised on various aspects of the regulations, including their harmonisation with existing rules issued by CERT-In and NCIIPC, since it needs to be clarified which set of guidelines will prevail in case of any discrepancy. We also advised incorporating or modifying specific provisions of the regulations for a more robust framework, and emphasised legal mandates and penalties for non-compliance, so that the regulations not only act as guiding principles but also provide stringent measures in case of non-compliance.
Introduction
In the age of digitalization, Android phones have a vast user base. Our phones have become integral to daily life: making online payments, booking cabs, playing online games, booking movie and show tickets, conducting business, social networking, emailing, and communicating. The Internet is easily accessible and offers convenient services, and people download a wide variety of apps onto their Android devices. But alongside this convenience, the growing digital landscape has produced new threats and vulnerabilities, and fraudsters exploit them to target users. Recently, Android users have faced various online scams, such as AI-based scams, deepfake scams, malware, spyware, and malicious links leading to financial fraud, viruses, privacy breaches, and data leakage. Android devices are generally more exposed to such vulnerabilities than iOS devices, although both platforms work to provide a safer digital space and iOS offers more built-in security features. Either way, we have to play our part and be careful: there are safety measures users can adopt to stay safe in the growing digital age.
User Responsibility:
Law enforcement agencies report a growing number of complaints involving malware used to compromise Android mobile devices. Both Android and iOS have security mechanisms in place, but cybersecurity experts emphasize that users must actively safeguard their devices from evolving online threats. In this era of evolving cyber threats, caution, vigilance, and personal responsibility for digital security are paramount.
Being aware of evolving scams
- Deepfake scams: Deepfake is an AI-based technology capable of creating realistic images and videos that are in fact produced by machine-learning algorithms. Because the technology is easily accessible, fraudsters misuse it to commit cyber crimes, deceiving and scamming people with manipulated audio, images, or video that look very realistic but are fake.
- Voice cloning: Audio can be deepfaked too. A voice clone closely resembles a real person's voice but is generated by deepfake technology. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp: he received a video call from a person claiming to be his former colleague, and the scammer, using AI deepfake technology to impersonate the colleague's face, asked for financial help of 40,000.
- Stalkerware or spyware: Stalkerware, also referred to as spyware, is one of the most serious threats to digital safety and personal information. It is malicious software secretly installed on your device without your consent or knowledge in order to track your activities and exploit your data: monitoring you and recording sensitive information such as passwords, text messages, GPS location, and call history, and accessing your photos and videos. Cybercriminals and stalkers use it to gain unauthorised access to someone's phone.
Best practices or Cyber security tips:
- Keep your software up to date: Turn on automatic software updates for your device and make sure your mobile apps are up to date.
- Using strong passwords: Use strong passwords on your lock/unlock and on important apps on your mobile device.
- Using 2FA or multi-factor authentication: Two-factor or multi-factor authentication provides an extra layer of security.
- Be cautious before clicking on any link and downloading any app or file: Users are often led to click on malicious online links. Scammers may present such links through false advertisements on social media platforms, payment processes for online purchases, or phone text messages. Through these links, victims are led either to phishing sites that harvest personal data or to downloads of harmful Android Package Kit (APK) files, which are used to distribute and install apps on Android phones.
- Secure Payments: Do not open any malicious links. Always make payments from secure and trusted payment apps. Use strong passwords for your payment apps as well. And secure your banking credentials.
- Safe browsing: Pay due care and attention while clicking on any link and downloading content. Ignore the links or attachments of suspicious emails which are from an unknown sender.
- Do not download third-party apps: Using an APK file to download a third-party app to an Android device is commonly known as sideloading. Be cautious and avoid downloading apps from third-party or dubious sites. Doing so may lead to the installation of malware in the device, which in turn may result in confidential and sensitive data such as banking credentials being stolen. Always download apps only from the official app store.
- App permissions: Review app permissions and grant only those necessary to use the app.
- Do not bypass security measures: Android offers more flexibility in the mobile operating system and in mobile settings. For example, sideloading of apps is disabled by default, and alerts are also in place to warn users. However, an unwitting user who may not truly understand the warnings may simply grant permission to an app to bypass the default setting.
- Monitoring: Regularly monitor your devices and system logs for security check-ups and for detecting any suspicious activity.
- Reporting online scams: A powerful resource available to victims of cybercrime is the National Cyber Crime Reporting Portal, equipped with a 24x7 helpline number, 1930. This portal serves as a centralized platform for reporting cybercrimes, including financial fraud.
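The two-factor authentication recommended above usually relies on time-based one-time passwords, as generated by authenticator apps. A minimal sketch of the underlying TOTP algorithm (RFC 6238, built on HOTP from RFC 4226) is shown below; real apps also handle details omitted here, such as base32 secret decoding and clock-drift tolerance:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP computed over the current 30-second time window."""
    if for_time is None:
        for_time = time.time()
    return hotp(secret, int(for_time) // step, digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890" at t = 59 s
# yields the 8-digit SHA-1 TOTP 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the code depends on the current 30-second window, a stolen password alone is not enough: an attacker would also need the shared secret stored on the user's device.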
Conclusion:
The era of digitalisation has transformed our lives, with Android phones becoming an integral part of our daily routines. While these devices offer convenience, they also expose us to online threats and vulnerabilities: deepfake-based scams, voice clones, spyware, malware, and malicious links that can lead to significant financial losses and privacy breaches, to which Android devices can be especially susceptible. By staying aware of emerging scams and taking proactive steps, we can safeguard our digital lives. Our mobile devices are valuable assets, but they are also targets for cybercriminals. By taking personal responsibility for our digital security and following these best practices, we can navigate the digital landscape with confidence, keeping our Android phones powerful tools for convenience and connection while protecting our data and privacy.