#FactCheck - Viral image circulating on social media depicts a natural optical illusion from Epirus, Greece.
Executive Summary:
A viral image circulating on social media is claimed to depict a natural optical illusion from Epirus, Greece. However, upon fact-checking, it was found that the image is an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion AI tool. The CyberPeace Research Team traced it through a reverse image search and analysed it with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon from Epirus, Greece, is false, as no evidence of such an optical illusion in the region was found.
Claims:
The viral image circulating on social media is claimed to depict a natural optical illusion from Epirus, Greece. Users are sharing it on X (formerly Twitter), YouTube, and Facebook, and it is spreading rapidly across social media.
Similar Posts:
Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran a synthetic media check: the Hive AI detection tool rated the image as 100% likely to be AI generated. We then looked for the source of the image through a reverse image search, which led to similar posts linking to an Instagram account, hamidreza.edalatnia, whose creator posts visuals in the same style.
We searched the account and confirmed that the viral image was created by this person.
The photo was posted on 10th December 2023, and the creator mentioned that the image was generated using the Stable Diffusion AI tool. Hence, the claim that the viral image shows a natural optical illusion from Epirus, Greece, is misleading.
Conclusion:
The image claimed to show a natural optical illusion in Epirus, Greece, is not genuine. It is an AI-generated artwork created by Hamidreza Edalatnia, an artist from Iran, using the Stable Diffusion tool. Hence, the claim is false.
Related Blogs
Misinformation is a scourge in the digital world, making the most mundane experiences fraught with risk. The threat is considerably heightened in conflict settings, especially in the modern era, where geographical borders blur and civilians and conflict actors alike can take to the online realm to discuss, and influence, conflict events. Propaganda can complicate the narrative and distract from the humanitarian crises affecting civilians, while also posing a serious threat to security operations and law and order efforts. Sensationalised reports of casualties and manipulated portrayals of military actions contribute to a cycle of violence and suffering.
A 2023 study conducted by MIT found that the mere thought of sharing news on social media reduced people's ability to judge whether a story was true or false; the urge to share outweighed the consideration of accuracy. Cross-border misinformation has become a critical issue in today's interconnected world, driven by the rise of digital communication platforms. To combat it effectively, coordinated international policy frameworks and cooperation between governments, platforms, and global institutions are needed.
The Global Nature of Misinformation
Cross-border misinformation is false or misleading information that spreads across countries. Creators located outside a country's borders, amplifying content through social media and digital platforms, are a key source of such misinformation. It can interfere with elections, create serious misconceptions about health concerns, as witnessed during the COVID-19 pandemic, or even fuel military conflicts.
The primary challenge in countering cross-border misinformation is the difference in national policies, legal frameworks and platform governance rules across jurisdictions. Examining existing international frameworks, such as cybersecurity treaties and the data-sharing agreements used for financial crimes, might help in addressing cross-border misinformation effectively. By adapting these approaches to the digital information ecosystem, nations could strengthen their collective response to the spread of misinformation across borders. Global institutions like the United Nations, or regional bodies like the EU and ASEAN, can work together to set a unified response and uniform international standards for regulation dealing specifically with misinformation.
Current National and Regional Efforts
Many countries have taken action to deal with misinformation within their borders. Some examples include:
- The EU’s Digital Services Act has been instrumental in regulating online intermediaries and platforms including marketplaces, social networks, content-sharing platforms, app stores, etc. The legislation aims to prevent illegal and harmful activities online and the spread of disinformation.
- The primary legislation that governs cyberspace in India is the IT Act of 2000 and its corresponding rules (IT Rules, 2023), which impose strict requirements on social media platforms to counter misinformation and enable traceability of the creator responsible for originating it. Platforms have to conduct due diligence, failing which they risk losing their safe harbour protection. The recently enacted DPDP Act of 2023 indirectly addresses the misuse of personal data that can contribute to the creation and spread of misinformation. The proposed Digital India Act is also expected to focus on "user harms" specific to the online world.
- In the U.S., the right to editorial discretion and Section 230 of the Communications Decency Act place the responsibility for regulating misinformation on private actors such as social media platforms. The US government has not created a specific framework addressing misinformation and has instead encouraged social media platforms to adopt voluntary, independent policies to regulate misinformation on their services.
The common gap area across these policies is the absence of a standardised, global framework for addressing cross-border misinformation which results in uneven enforcement and dependence on national regulations.
Key Challenges in Achieving International Cooperation
Some of the key challenges identified in achieving international cooperation to address cross-border misinformation are as follows:
- Geopolitical tensions, stemming from differences in political systems, priorities, and trust between countries, hinder attempts to cooperate and create universal regulation.
- The diversity in approaches to internet governance and freedom of speech across countries complicates matters further.
- Technical and legal obstacles around sovereignty, jurisdiction and enforcement further complicate the monitoring and removal of cross-border misinformation.
CyberPeace Recommendations
- The UN Global Principles for Information Integrity: Recommendations for Multi-stakeholder Action, unveiled on 24 June 2024, are a welcome step towards addressing cross-border misinformation. They can act as a stepping stone for developing a framework for international cooperation on misinformation, drawing inspiration from other successful models such as climate change agreements and the international criminal law framework.
- Collaborations like public-private partnerships between government, tech companies and civil societies can help enhance transparency, data sharing and accountability in tackling cross-border misinformation.
- Engaging in capacity building and technology transfers in less developed countries would help to create a global front against misinformation.
Conclusion
We are in an era where misinformation knows no borders and the need for international cooperation has never been more urgent. Global democracies are exploring solutions, both regulatory and legislative, to limit the spread of misinformation; however, these fragmented efforts fall short of addressing the global scale of the problem. Establishing a standardised, international framework, backed by multilateral bodies like the UN and regional alliances, can foster accountability and facilitate shared resources in this fight. Through collaborative action, transparent regulations, and support for developing nations, the world can create a united front to curb misinformation and protect democratic values, ensuring information integrity across borders.
References
- https://economics.mit.edu/sites/default/files/2023-10/A%20Model%20of%20Online%20Misinformation.pdf
- https://www.indiatoday.in/global/story/in-the-crosshairs-manufacturing-consent-and-the-erosion-of-public-trust-2620734-2024-10-21
- https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/
- https://www.article19.org/resources/un-article-19-global-principles-for-information-integrity/
Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to "create compelling online scams" over the festive period. Meanwhile, 31% believe it will become harder to determine whether messages from merchants or delivery services are genuine, and 57% believe phishing emails and texts will become more credible. The study, conducted in September 2023 in the United States, Australia, India, the United Kingdom, France, Germany, and Japan, yielded 7,100 responses. Some people may cut back on their online shopping as a result of their worries about AI; 19% of those surveyed said they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
Cybercriminals are expected to use powerful artificial intelligence capabilities to manipulate social media in 2024. These platforms become goldmines for scammers because AI makes it possible to create realistic images, videos, and audio. Expect cybercriminals to exploit the identities of influencers and other popular figures.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the negative turn cyberbullying could take in 2024 with the use of deepfake technology. This cutting-edge technique is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims' privacy, identity, and well-being.
In addition to spreading false information, cyberbullies can alter public photographs and re-share doctored versions, exacerbating the harm done to children and their families. The study warns that deepfake technology will likely make online harassment more severe: with this sophisticated tool, young people can generate frighteningly accurate synthetic content rather than merely using it for fun. Such deceptive images and words can cause serious, long-lasting harm to children and their families, impairing their identity, privacy, and overall well-being.
Evolution of GenAI Fraud in 2023
Persistent frauds and fake emails are not going away. People in general have become rather adept at recognising the ones that are used extensively. But if these become more precise, for instance by using AI-generated audio to mimic a loved one's distress call, or by including information that is highly personal to the target, users need to be far more cautious. The rise of generative AI adds a new wrinkle, as hackers can use these systems to refine their attacks:
- Write communications more skilfully in order to deceive consumers into sending sensitive information, clicking on a link, or uploading a file.
- Recreate emails and business websites as realistically as possible so as not to arouse suspicion in the minds of the victims.
- Clone people's faces and voices, creating deepfake audio or images that are undetectable to the target audience, a capability that can greatly amplify schemes such as CEO fraud.
- Hold conversations and respond to victims efficiently, which generative AIs can now do convincingly.
- Conduct psychological manipulation campaigns more quickly, at lower cost, and with greater sophistication, making them harder to detect. Generative AI already available on the market can write texts, clone voices, generate images and build websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is making cybercriminals increasingly dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the worldwide buzz surrounding the 2024 Olympic Games will make it an ideal time for scams. Con artists will take advantage of customers' excitement by targeting fans eager to purchase tickets, arrange travel, obtain exclusive content, and take part in giveaways. During this prominent event, vigilance is essential to protect one's personal records and financial data.
Development of McAfee's Own Bot to Help Users Screen and Authenticate the Messages They Receive
McAfee is developing precisely this kind of technology. It is important to emphasise that solving the issue is a continuous process: bad actors also manipulate AI, and one trick con artists can pull off is to use the ways consumers fall for various ruses as training data for more advanced algorithms. They can thus deploy these tools, test them on large user bases, and improve them over time.
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven internet fraud targeting them around the holidays. Social networking poses a growing threat to users' privacy. In 2024, hackers hope to take advantage of AI capabilities and use deepfake technology to exacerbate harassment. By mimicking voices and faces for intricate schemes, generative AI enables more complex fraud. The surge in charity fraud has both social and financial consequences, and the 2024 Olympic Games could serve as a haven for scammers. The creation of McAfee's screening bot underlines the ongoing struggle against evolving AI threats and the need for continuous adaptation and better user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/#:~:text=McAfee%20Corp.%2C%20a%20global%20leader,and%20increasingly%20sophisticated%20cyber%20scams.
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/
Introduction
Citizens are using technology to their advantage, and the resulting upskilling among the population is driving innovation in India. As we go deeper into cyberspace, we must maintain our cyber security efficiently and effectively. When bad actors use technology to their advantage, the victim often suffers data loss or financial loss. In this blog, we shine a light on two new forms of cyber attack causing havoc among the innocent: the "Daam" malware and a new malicious app.
Daam Botnet
Since 2021, the Daam Android botnet has been used to gain unauthorised access to targeted devices, and cybercriminals use it to carry out a range of destructive actions. Using the botnet's APK binding service, threat actors can combine malicious code with a legitimate application. Its functions include keylogging, ransomware, recording VoIP calls, runtime code execution, collecting browser history, recording incoming calls, stealing PII data, opening phishing URLs, capturing photos, stealing clipboard data, and toggling WiFi and mobile data. The botnet tracks user activity through the Accessibility Service and stores recorded keystrokes, together with the name of the application package, in a database. It also contains a ransomware module that encrypts and decrypts data on the infected device using the AES algorithm.
Additionally, the botnet uses the Accessibility Service to monitor the VoIP calling features of social media apps such as WhatsApp, Skype, and Telegram. When a user engages with these features, the malware begins recording audio.
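For context, the AES capability described above is the same standard symmetric-encryption primitive available in ordinary developer libraries; the cipher itself is not exotic. The minimal sketch below, written in Python against the widely used cryptography package, is purely a benign illustration of symmetric encryption and decryption with a shared key; it is not the malware's code, and the key handling shown is an assumption for demonstration only.

```python
# Benign illustration of AES-based symmetric encryption, NOT the malware's code.
# Fernet (from the "cryptography" package) uses AES-128 in CBC mode with an HMAC.
from cryptography.fernet import Fernet

# Generate a random symmetric key; whoever holds this key can decrypt the data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"example file contents"
token = cipher.encrypt(plaintext)   # ciphertext, useless without the key
recovered = cipher.decrypt(token)   # only possible with the original key

assert recovered == plaintext
```

The point of the illustration is simply that whoever controls the key controls access to the data, which is why ransomware built on such primitives is effective and why backups remain the practical defence.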
The Malware
CERT-In, the national nodal agency that responds to computer security incidents, states that Daam is bundled with various Android APK files to gain access to a phone. It is distributed through third-party websites, and it encrypts files on the infected phone using the AES encryption algorithm.
The malware is reported to be able to access call recordings and contacts, gain access to the camera, change passwords, take screenshots, steal SMS messages, download and upload files, and perform a variety of other actions.
Safeguards and Guidelines by Cert-In
CERT-In has released guidelines for combating this malware, issued in the public interest. The recommendations are as follows:
- Only download apps from official app stores to limit the risk of potentially harmful apps.
- Before downloading an app, always read the details and user reviews, and only grant permissions that are relevant to the app's purpose.
- Install Android updates solely from Android device vendors as they become available.
- Avoid visiting untrustworthy websites or clicking on untrustworthy links.
- Install anti-virus and anti-spyware software and keep it up to date.
- Be cautious of mobile numbers that do not look like genuine, regular mobile numbers.
- Conduct sufficient research before clicking on a link supplied in a message.
- Only click on URLs that clearly display the website domain; avoid shortened URLs, particularly those using bit.ly and tinyurl.
- Use secure browsing technologies and the filtering tools in antivirus, firewall, and filtering services.
- Before providing sensitive information, look for authentic encryption certificates by checking for the lock icon in your browser's URL bar (a simple programmatic check for shortened URLs and certificate validity is sketched after this list).
- Any 'strange' activity in a user's bank account must be reported immediately to the bank concerned.
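To make two of the recommendations above concrete, the minimal Python sketch below flags links that use common URL shorteners and checks whether an HTTPS host presents a certificate that validates against the system's trusted root authorities. The shortener list and the example URL are illustrative assumptions, not part of the CERT-In advisory.

```python
# Minimal sketch: flag shortened URLs and verify an HTTPS certificate.
# The shortener list and example URL below are illustrative assumptions.
import socket
import ssl
from urllib.parse import urlparse

KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

def is_shortened(url: str) -> bool:
    """Return True if the URL's host is a well-known link shortener."""
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_SHORTENERS

def has_valid_certificate(url: str, timeout: float = 5.0) -> bool:
    """Return True if the HTTPS host presents a certificate that
    validates against the system's trusted certificate authorities."""
    host = urlparse(url).hostname
    context = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

url = "https://example.com/login"   # illustrative URL only
print("shortened:", is_shortened(url))
print("valid certificate:", has_valid_certificate(url))
```

A check like this only confirms that the connection is encrypted and the certificate chain is trusted; it does not prove the site is legitimate, so the other recommendations above still apply.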
New Malicious App
In remote parts of Jharkhand, a new malicious application has been circulating among people on the pretext of a bank account closure. Bad actors have long used messaging platforms like WhatsApp and Telegram to circulate malicious links among unaware and uneducated people to dupe them of their hard-earned money.
They send an ordinary-looking message on WhatsApp or Telegram claiming that the user has a bank account with ICICI Bank and that, due to an irregularity with the credentials, the account is being deactivated. They then ask the user to update their PAN card to reactivate the account by uploading it through an application. This app is in fact malicious: it harvests the user's personal credentials and shares them with the bad actors via text message, allowing them to bypass the bank's two-factor authentication and drain money from the account. The Jharkhand Police cyber cells have registered numerous FIRs pertaining to this type of cybercrime and are conducting full-scale investigations to apprehend the criminals.
Conclusion
Malware and phishing attacks have gained momentum in recent years and have become major contributors to the country's tally of cybercrimes. The Daam malware is one example brought to light by timely action from CERT-In, but many such malware strains are still deployed by bad actors, and as netizens we need to follow best practices to keep such criminals at bay. Phishing crimes are often perpetrated by exploiting vulnerabilities and social engineering, so raising awareness is the need of the hour to safeguard the population at large.