Digitally Altered Photo of Rowan Atkinson Circulates on Social Media
Executive Summary:
A photo claiming to show Rowan Atkinson, the actor famous for playing Mr. Bean, lying sick in bed is circulating on social media. However, this claim is false. The image is a digitally altered picture of Mr. Barry Balderstone from Bollington, England, who died in October 2019 from advanced Parkinson’s disease. Reverse image searches and news reports confirm that the original photo is of Barry, not Rowan Atkinson. Furthermore, there are no reports of Atkinson being ill; he was recently seen attending the 2024 British Grand Prix. Thus, the viral claim is baseless and misleading.

Claims:
A viral photo shows Rowan Atkinson, aka Mr. Bean, lying on a bed in a sick condition.



Fact Check:
When we received the posts, we first ran a keyword search based on the claim, but found no credible reports supporting it. We did, however, find an interview video showing Rowan Atkinson attending the F1 race on July 7, 2024.

We then reverse-searched the viral image and found a news report containing a photo closely resembling the viral one; the T-shirt appears identical in both images.
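Reverse image search engines generally compare compact perceptual fingerprints rather than raw pixels, so a face swap that leaves the clothing and background intact still produces a near-match. A minimal sketch of one such fingerprint, an average hash, in Python (the 8x8 grayscale grids and the idea of flagging a small bit distance are illustrative assumptions, not the exact algorithm any particular search engine uses):

```python
def average_hash(pixels):
    """Hash an 8x8 grayscale grid: each bit is 1 if the pixel
    is brighter than the grid's mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the two
    images share the same underlying photo."""
    return sum(a != b for a, b in zip(h1, h2))

# Two illustrative "images": the second copies the first but
# alters a 2x2 corner region, mimicking a localized face swap.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
altered = [row[:] for row in original]
for r in range(2):
    for c in range(2):
        altered[r][c] = 255  # brighten the swapped region

d = hamming_distance(average_hash(original), average_hash(altered))
# Only a handful of the 64 bits differ, so the images still match.
print(d)
```

Real services use far more robust fingerprints, but the principle is the same: a small edit to one region barely moves the hash, which is why the altered photo still led back to the original news report.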

The man in the photo is Barry Balderstone, a civil engineer from Bollington, England, who died in October 2019 of advanced Parkinson’s disease. According to the news report, Barry suffered from several illnesses, and his application for extensive healthcare reimbursement was rejected by the East Cheshire Clinical Commissioning Group.
Taking a cue from this, we then analyzed the image in an AI image detection tool named TrueMedia. The tool found the image to be AI-manipulated: the original photo was altered by replacing the man’s face with that of Rowan Atkinson, aka Mr. Bean.



Hence, it is clear that the viral image claiming to show Rowan Atkinson bedridden is fake and misleading. Netizens should verify content before sharing it on the internet.
Conclusion:
Therefore, it can be summarized that the photo claiming to show Rowan Atkinson in a sick state is fake and was created by manipulating another man’s image. The original photo features Barry Balderstone, who was diagnosed with advanced Parkinson’s disease and subsequently died in 2019. In fact, Rowan Atkinson appeared perfectly healthy recently at the 2024 British Grand Prix. People should check the authenticity of content before sharing it, so as to avoid spreading misinformation.
- Claim: A viral photo of Rowan Atkinson, aka Mr. Bean, lying on a bed in a sick condition.
- Claimed on: X, Facebook
- Fact Check: Fake & Misleading
Related Blogs

Executive Summary:
Old footage of Indian cricketer Virat Kohli celebrating Ganesh Chaturthi in September 2023 is being promoted as footage of him at the Ram Mandir inauguration. The video surfaced with the false claim that it shows him at the Ram Mandir consecration ceremony in Ayodhya on January 22. The Hindi newspaper Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar also carried the now-viral video in their January 23, 2024 editions, amplifying the false claim. After thorough investigation, we found that the video is old and shows the cricketer attending a Ganesh Chaturthi celebration.
Claims:
Many social media posts, including those from news outlets such as Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar, claim to show Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22. Investigation found that the video actually shows him attending Ganesh Chaturthi celebrations in September 2023.



The caption of the Dainik Bhaskar e-paper reads, “क्रिकेटर विराट कोहली भी नजर आए” (“Cricketer Virat Kohli was also spotted”).
Fact Check:
The CyberPeace Research Team ran a reverse image search on the video and found several earlier results showing the same black outfit. Among them, a Bollywood entertainment Instagram profile named Bollywood Society had shared the same video on September 20, 2023, with the caption, “Virat Kohli snapped for Ganapaati Darshan”.

Taking a cue from this, we did a keyword search with the information we had and found an article by the Free Press Journal. It reported that Virat Kohli had visited the residence of Shiv Sena leader Rahul Kanal to seek the blessings of Lord Ganpati. The viral video and the claim made by the news outlets are therefore false and misleading.
Conclusion:
The viral videos and news reports are based on old footage of Virat Kohli attending Ganesh Chaturthi in 2023, not the recent Ram Mandir Pran Pratishtha ceremony. We also found no confirmation that Virat Kohli attended the event in Ayodhya on January 22. Hence, we found this claim to be fake.
- Claim: Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22
- Claimed on: YouTube, X
- Fact Check: Fake

Introduction
In the labyrinthine world of cybersecurity, a new spectre has emerged from the digital ether, casting a long shadow over the seemingly impregnable orchards of Apple's macOS. This phantom, known as SpectralBlur, is a backdoor so cunningly crafted that it remained shrouded in the obscurity of cyberspace, undetected by the vigilant eyes of antivirus software until its recent unmasking. The discovery of SpectralBlur is not just a tale of technological intrigue but a narrative that weaves together the threads of geopolitical manoeuvring, the relentless pursuit of digital supremacy, and the ever-evolving landscape of cyber warfare.
SpectralBlur, a term that conjures images of ghostly interference and elusive threats, is indeed a fitting moniker for this new macOS backdoor threat. Cybersecurity researchers have peeled back the layers of the digital onion to reveal a moderately capable backdoor that can upload and download files, execute shell commands, update its configuration, delete files, and enter states of hibernation or sleep, all at the behest of a remote command-and-control server. Greg Lesnewich, a security researcher whose name has become synonymous with the relentless pursuit of digital malefactors, has shed light on this new threat that overlaps with a known malware family attributed to the enigmatic North Korean threat actors.
SpectralBlur similar to Lazarus Group’s KANDYKORN
The malware shares its DNA with KANDYKORN, also known as SockRacket, an advanced implant that functions as a remote access trojan capable of taking control of a compromised host. It is a digital puppeteer, pulling the strings of infected systems with a malevolent grace. The KANDYKORN activity also intersects with another campaign orchestrated by the Lazarus sub-group known as BlueNoroff, or TA444, which culminates in the deployment of a backdoor referred to as RustBucket and a late-stage payload dubbed ObjCShellz.
Recently, the threat actor has been observed combining disparate pieces of these two infection chains, leveraging RustBucket droppers to deliver KANDYKORN. This latest finding is another sign that North Korean threat actors are increasingly setting their sights on macOS to infiltrate high-value targets, particularly those within the cryptocurrency and blockchain industries. 'TA444 keeps running fast and furious with these new macOS malware families,' Lesnewich remarked, painting a picture of a relentless adversary in the digital realm.
Patrick Wardle, a security researcher whose insights into the inner workings of SpectralBlur have further illuminated the threat landscape, noted that the Mach-O binary was uploaded to the VirusTotal malware scanning service in August 2023 from Colombia. The functional similarities between KANDYKORN and SpectralBlur raise the possibility that they were built by different developers working from the same requirements. What makes the malware stand out are its attempts to hinder analysis and evade detection, and its use of grantpt to set up a pseudo-terminal and execute shell commands received from the C2 server.
The disclosure comes as 21 new malware families designed to target macOS systems, including ransomware, information stealers, remote access trojans, and nation-state-backed malware, were discovered in 2023, up from 13 identified in 2022. 'With the continued growth and popularity of macOS (especially in the enterprise!), 2024 will surely bring a bevvy of new macOS malware,' Wardle noted, his words a harbinger of the digital storms on the horizon.
Hackers are beefing up their efforts to go after the best MacBooks as security researchers have discovered a brand new macOS backdoor which appears to have ties to another recently identified Mac malware strain. As reported by Security Week, this new Mac malware has been dubbed SpectralBlur and although it was uploaded to VirusTotal back in August of last year, it remained undetected by the best antivirus software until it recently caught the attention of Proofpoint’s Greg Lesnewich.
Lesnewich explained that SpectralBlur has similar capabilities to other backdoors as it can upload and download files, delete files and hibernate or sleep when given commands from a hacker-controlled command-and-control (C2) server. What is surprising about this new Mac malware strain though is that it shares similarities to the KandyKorn macOS backdoor which was created by the infamous North Korean hacking group Lazarus.
Just like SpectralBlur, KandyKorn is designed to evade detection while giving the hackers behind it the ability to monitor and control infected Macs. Although different, the two strains appear to be built to the same requirements. Once installed on a vulnerable Mac, SpectralBlur executes a function that encrypts and decrypts its network traffic to help it avoid detection. It can also erase files after opening them, overwriting the data they contain with zeros.
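The self-erasure behaviour described above, overwriting a file with zeros before removing it, can be sketched in a few lines of Python. This is a generic illustration of the pattern (the same technique legitimate secure-delete tools use), not SpectralBlur's actual code:

```python
import os
import tempfile

def zero_overwrite_and_delete(path):
    """Replace a file's on-disk contents with zero bytes, flush the
    change to disk, then remove the file, so the original data is
    not left behind for trivial recovery."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Demonstration on a throwaway temp file.
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"example data to be wiped")
zero_overwrite_and_delete(tmp)
print(os.path.exists(tmp))
```

Simply deleting a file usually leaves its bytes on disk until they are reused, which is why wiping before deletion makes forensic recovery of the original contents much harder.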
How to keep your Apple computers safe from hackers
As with the best iPhones, keeping your Mac up to date is the easiest and most important way to keep it safe from hackers. Hackers often prey on users who haven’t updated their devices to the latest software as they can exploit unpatched vulnerabilities and security flaws.
Checking whether you're running the latest macOS version is quite easy. Just click on the Apple logo in the top left corner of your screen, head to System Preferences (System Settings on macOS Ventura and later) and then click on Software Update. If you need a bit more help, check out our guide on how to update a Mac for more detailed instructions with pictures.
Even though your Mac has its own built-in malware scanner from Apple called XProtect, you should consider using one of the best Mac antivirus software solutions for additional protection. Paid antivirus software is often updated more frequently, and you often also get access to other extras to help keep you safe online, like a password manager or a VPN.
Besides updating your Mac frequently and using antivirus software, you must be careful online. This means sticking to trusted online retailers, carefully checking the URLs of the websites you visit and avoiding opening links and attachments sent to you via email or social media from people you don’t know. Likewise, you should also learn how to spot a phishing scam to know which emails you want to delete right away.
Conclusion
The thing about hackers and other cybercriminals is that they are constantly evolving their tactics and attack methods. This helps them avoid detection and allows them to devise brand-new ways to trick ordinary people. With the surge we saw in Mac malware last year, though, Apple will likely be working on beefing up XProtect and macOS to better defend against these new threats.
References
- https://www.scmagazine.com/news/new-macos-malware-spectralblur-idd-as-north-korean-backdoor
- https://www.tomsguide.com/news/this-new-macos-backdoor-lets-hackers-take-over-your-mac-remotely-how-to-stay-safe
- https://thehackernews.com/2024/01/spectralblur-new-macos-backdoor-threat.html

Introduction
In the era of digital trust and technological innovation, artificial intelligence has added a new dimension to how people communicate and how they create and consume content. Like any powerful tool, however, AI can be misused to terrible effect. A recent dark example is a cybercrime in Brazil: a sophisticated online scam that used deepfake technology to impersonate celebrities of global stature, including supermodel Gisele Bündchen, in misleading Instagram ads. Having raked in millions of reais, this crime underscores the concern that AI-generated content has landed squarely in criminals' hands.
Scam in Motion
Brazil's federal police say the scheme has been in circulation since 2024, with ads made to appear highly genuine using AI-generated video and images. The ads showed Gisele Bündchen and other celebrities endorsing skincare products, promotional giveaways, or time-limited discounts. Victims were tricked into making small payments, mostly under 100 reais (about $19), for fake products, or were lured into paying "shipping costs" for prizes that never arrived.
The criminals scaled the scheme by keeping each individual loss small, a tactic investigators dubbed "statistical immunity." Because victims lost only a few dollars each, most never bothered to file a complaint, giving the crooks room to keep operating. Over time, authorities estimate the group collected over 20 million reais ($3.9 million) through this elaborate con.
The scam came to light when a victim reported that an Instagram advertisement featuring a deepfake video of Gisele Bündchen was false. The video, which showed her apparently recommending a skincare company, was convincingly produced. Further investigation uncovered a whole network of deceptive social media pages, payment gateways, and laundering channels spread across five Brazilian states.
The Role of AI and Deepfakes in Modern Fraud
This is one of the first large-scale cases in Brazil in which AI-generated deepfakes were used to perpetrate financial fraud. Deepfake technology, driven by machine learning algorithms, can realistically mimic human appearance and speech, and has become increasingly accessible and sophisticated. What once required expertise and significant computing resources now requires only an online tool or app.
Deepfakes give criminals a psychological advantage: audiences are more willing to accept an ad as genuine when they see a familiar and trusted face, a celebrity known for integrity and success. The human brain is wired to trust certain visual cues, and deepfakes exploit this cognitive bias. Unlike phishing emails riddled with spelling and grammatical errors, deepfake videos are immersive, emotional, and visually convincing.
This is the growing terrain of AI-enabled misinformation. From financial scams to political propaganda, manipulated media is eroding trust in the digital ecosystem.
Legalities and Platform Accountability
The Brazilian government has taken a proactive stance on the issue. In June 2025, the country's Supreme Court held that social media platforms can be held liable for failing to expeditiously remove criminal content, even in the absence of a formal court order. That judgment will go a long way towards establishing platform accountability in Brazil, and potentially worldwide, as other jurisdictions adopt processes to deal with AI-generated fraud.
Meta, the parent company of Instagram, has said its policies forbid "ads that deceptively use public figures to scam people." Meta says it uses advanced detection mechanisms, trained review teams, and user tools for reporting violations. The persistence of such scams, however, shows that enforcement still lags the pace and scale of AI-based deception.
Why These Scams Succeed
These AI-powered scams succeed for several reasons:
- Trust Through Familiarity: People tend to believe content fronted by a familiar, trusted face.
- Micro-Fraud: Keeping each victim's loss small keeps the number of complaints low.
- Speed of Content Creation: Criminals use AI tools to generate new ads faster than platforms can detect and remove them.
- Cross-Platform Propagation: Once a deepfake ad gains traction, it is reshared across other social networks, compounding the problem.
- Lack of Public Awareness: Most users still cannot recognize manipulated media, especially high-quality deepfakes.
Wider Implications on Cybersecurity and Society
The Brazilian case is but a microcosm of a much bigger problem. As deepfake technology evolves, AI-generated deception threatens not only individuals but also institutions, markets, and democratic systems. From investment scams and fake charities to synthetic identities used in corporate fraud, the possibilities for abuse are endless.
Moreover, as cybercriminals adopt generative AI, law enforcement faces new obstacles in attribution, evidence validation, and digital forensics. Distinguishing the authentic from the manipulated now requires forensic AI models of their own, triggering a rising technological arms race between attackers and defenders.
Protecting Citizens from AI-Powered Scams
Public awareness remains the best defence against such scams. Gisele Bündchen's team has encouraged the public to verify any advertisement through official brand or celebrity channels before engaging with it. Consumers should be wary of offers that appear "too good to be true" and double-check a URL's authenticity before sharing any kind of personal information.
Individually, a few simple practices go a long way towards reducing the risk:
- Verify an advertisement's origin before clicking or sharing it
- Never share any monetary or sensitive personal information through an unverifiable link
- Enable two-factor authentication on all your social accounts
- Periodically check transaction history for any unusual activity
- Report any deepfake or fraudulent advertisement immediately to the platform or cybercrime authorities
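The "verify the origin" and "unverifiable link" checks above can be partly automated with a domain allow-list. A minimal Python sketch (the allow-list entries are hypothetical examples; a production check would also need a public-suffix list to handle multi-part TLDs such as .co.uk):

```python
from urllib.parse import urlparse

# Hypothetical allow-list of official registrable domains.
OFFICIAL_DOMAINS = {"instagram.com", "facebook.com"}

def is_official_link(url):
    """Return True only if the URL's registrable domain (its last two
    host labels) is on the allow-list. This catches look-alike hosts
    such as 'instagram.com.promo-win.xyz', whose real registrable
    domain is 'promo-win.xyz', not 'instagram.com'."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return len(parts) >= 2 and ".".join(parts[-2:]) in OFFICIAL_DOMAINS

print(is_official_link("https://www.instagram.com/ads/123"))
print(is_official_link("https://instagram.com.promo-win.xyz/claim"))
```

The key design point is comparing the registrable domain rather than doing a substring match: scammers routinely embed a trusted brand's name in the left-hand labels of a host they control.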
Collaboration between governments and technology companies will be the way ahead. Investing in AI-based detection systems, cooperating on international law enforcement, and building digital literacy programs will help stem this rising tide of synthetic media scams.
Conclusion
The Gisele Bündchen deepfake case in Brazil is a clarion call for citizens and legislators alike. It shows how cybercrime has evolved to profit from the very AI technologies once hailed for innovation and creativity. In the new digital frontier society is now entering, the line between authenticity and manipulation grows thinner with each passing day.
Keeping the public safe in this environment will certainly require strong cybersecurity measures, but it will demand equal measures of vigilance, awareness, and ethical responsibility. Deepfakes are not only a technology problem but a societal one, calling for global cooperation, media literacy, and accountability at every level of the digital ecosystem.