#FactCheck - Brazil Explosion Video Falsely Linked to Attack on Israeli Defence Minister
Executive Summary
Amid reports of a two-week ceasefire announced on April 8, 2026, between the United States and Iran, a video showing a sudden explosion inside a building has gone viral on social media. The clip shows a fire brigade vehicle stationed outside a structure, with people entering the premises moments before a blast occurs. Social media users are sharing the video with claims that Iran carried out an attack on Israeli Defence Minister Yoav Gallant, alleging that the building shown is linked to Israel’s defence ministry.
However, research by CyberPeace has found the claim to be false. The viral video is not recent and has no connection to Israel or any ongoing conflict.
Claim
A Facebook user shared the video on April 3, 2026, claiming that Iran had attacked Israeli Defence Minister Yoav Gallant and severely damaged a building associated with him.

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. This led us to the same video, posted by an X account named Fernanda Melchionna on December 31, 2025.
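The keyframe step above can be sketched in code. The helper below is our illustration, not CyberPeace's exact workflow: it picks evenly spaced frame indices from a clip, which can then be saved as images and fed to a reverse image search engine such as Google Images or TinEye.

```python
# Illustrative sketch (an assumption, not the fact-checkers' exact tooling):
# sample evenly spaced frames from a video for reverse image search.

def keyframe_indices(total_frames: int, n_keyframes: int = 5) -> list:
    """Pick up to n_keyframes evenly spaced frame indices from a video."""
    if total_frames <= 0 or n_keyframes <= 0:
        return []
    step = max(total_frames // n_keyframes, 1)
    return list(range(0, total_frames, step))[:n_keyframes]

# With OpenCV (pip install opencv-python), the indices could be used like:
#   cap = cv2.VideoCapture("viral_clip.mp4")          # hypothetical file name
#   total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
#   for i in keyframe_indices(total):
#       cap.set(cv2.CAP_PROP_POS_FRAMES, i)
#       ok, frame = cap.read()
#       if ok:
#           cv2.imwrite(f"keyframe_{i}.png", frame)   # upload for reverse search
```

Evenly spaced sampling is a simple heuristic; shot-boundary detection would pick more distinctive frames, but for a short viral clip a handful of spaced frames is usually enough to get a reverse-search hit.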

According to the available information, the video is from Santana do Livramento, where a major fire broke out in a supermarket. Further keyword searches led us to a report published on December 31, 2025, by the Brazilian news website GZH (gaúcha zh clicrbs). The report stated that a fire had erupted in a supermarket in Santana do Livramento, and firefighters had reached the spot to control the blaze. During the operation, an explosion occurred, leaving around 17 people injured. The injured were later taken to a hospital.

We also found the same video uploaded on the YouTube channel Terra Brasil on January 1, 2026, further confirming its origin and timeline.

Conclusion
The viral claim is false and misleading. The explosion video being shared as an attack on Israeli Defence Minister Yoav Gallant is unrelated to the ongoing Middle East situation. The footage is actually from December 2025 and shows an incident in Brazil, where a fire in a supermarket led to an explosion during firefighting operations. There is no evidence to suggest any such attack took place in Israel. The video has been taken out of context and circulated with a fabricated narrative to mislead users and exploit geopolitical tensions.
Key points: Data collection, Protecting Children, and Awareness
Introduction
Technology has evolved drastically over the years, reshaping how people live and work. For even the smallest tasks, humans now rely on the computers they have built. Children today are less inclined to work and write on their own and are more drawn to television, video games and mobile games, and many schoolchildren use AI simply to complete their homework. Is that a good sign for the country's future? Studies suggest that tools like ChatGPT threaten a child's potential to be creative and to produce original content requiring a human writer's insight, and that they can erode students' artistic voice and unique writing style.
Do any of these browsers or search engines use your search history against you? And how do non-users end up losing their private information to such search engines?
Are there any safety measures that the government of a particular country is taking to protect its people's rights?
Some of us might wonder how these two worlds merge, and whether they are a boon or a curse.
So here's the top news flooding the internet worldwide:
“Italian agency imposes strict measures on OpenAI’s ChatGPT”
Italy has become the first Western European country to take serious measures against OpenAI's ChatGPT. The Italian data protection agency, Garante, has set mandates on ChatGPT, raising concerns about privacy violations and the inability to verify the age of users. Garante has also claimed that the AI chatbot violates the EU's General Data Protection Regulation (GDPR). In a press release, Garante demanded that OpenAI take corrective action.
To begin with, Garante has demanded that OpenAI increase ChatGPT's transparency and publish a comprehensive statement about its data-processing practices. OpenAI must clarify whether it obtains user consent to process users' data for training its AI model or relies on another legitimate legal basis, and it must maintain the privacy of users' data.
In addition, OpenAI should take measures to prevent minors from accessing the technology, including an age verification system to keep them away from explicit content. Garante has also suggested that OpenAI spread awareness among its users that their data is being processed to train its AI model. Garante has set a deadline of April 30 for OpenAI to comply; until then, the service is to remain banned in the country.
Child safety while using ChatGPT
The Italian agency demands an age limit for use and an age verification method to exclude users under 13, with parental consent required for users between 13 and 18. This is a matter of safety: children might be exposed to explicit content inappropriate for their age or explore illegitimate material, and the AI chatbot cannot reliably judge which content is suitable for an underage audience. Tools like ChatGPT also put ready-made answers in front of young students, which can endanger their development. ChatGPT can hinder young minds' potential to create original, creative content and is a threat to the human motivation to write. When students need time to think and analyse, such tools make them lethargic, and the practice they need fades away.
Collection of Users’ Data
According to some reports on the company's privacy policy, OpenAI's ChatGPT collects an assortment of additional data. At sign-up and login (for example, through a Gmail account), it collects your IP address and browser type, along with the data you provide as input. It also collects data on the user's interaction with the website, such as session time and cookies set through third parties, and such data may reportedly be sold to unspecified third parties.
A snapshot of the privacy policy shows that OpenAI added a few items after Garante's draft.
Conclusion
The AI chatbot ChatGPT is an advanced tool that makes work a little easier, but anyone using such tools must stay aware of the information they are being asked for. These AI bots are trained to understand people; their job is to lend a helping hand, not to do harm. Even so, some people unknowingly provide sensitive information, and young minds get exposed to explicit material, which is why such bots need age limits. Innovations like this will keep arriving, but it remains each individual's responsibility to decide what is allowed to access their connected devices. The Italian agency, for its part, has taken preventive measures to keep its citizens' data safe while weighing the adverse effects of such chatbots on young minds.

Introduction
AI has transformed the way we look at advanced technologies. As the use of AI evolves, it also raises concerns about AI-based deepfake scams, in which scammers use AI to create deepfake videos, images and audio to deceive people and commit crimes. Recently, a man in Kerala fell victim to such a scam: he received a WhatsApp video call in which the scammer impersonated the face of a known friend using AI-based deepfake technology. There is a need for awareness and vigilance to safeguard ourselves from such incidents.
Unveiling the Kerala deepfake video call scam
The man in Kerala received a WhatsApp video call from a person claiming to be his former colleague from Andhra Pradesh; in reality, he was a scammer. He asked the Kerala man for help of 40,000 rupees via Google Pay. To gain trust, the scammer even mentioned some friends he had in common with the victim, and he claimed to be at Dubai airport, urgently needing the money for his sister's medical emergency.
AI is capable of analysing and processing data such as facial images, videos and audio to create a realistic deepfake that closely resembles the real thing. In the Kerala deepfake video call scam, the scammer made a video call featuring a face and voice convincingly similar to those of the victim's colleague. Believing he was genuinely speaking with his colleague, the Kerala man transferred the money without hesitation. He later called his former colleague on the number saved in his contact list, and the colleague said he had made no such call. The victim then realised he had been cheated by a scammer who had used AI-based deepfake technology to impersonate his former colleague.
Recognising Deepfake Red Flags
Deepfake-based scams are on the rise, and they make it genuinely difficult to distinguish fabricated audio, video and images from the real thing. Deepfake technology can create entirely fictional photos and videos from scratch; even audio can be deepfaked to create “voice clones” of anyone.
However, there are some red flags which can indicate the authenticity of the content:
- Video quality: Deepfake videos often have poor or compromised video quality and unusual blur, which should raise questions about their genuineness.
- Looping videos: Deepfake videos often loop, freeze unusually, or repeat footage, indicating that the content may be fabricated.
- Verify separately: Whenever you receive a request such as a plea for financial help, verify the situation by contacting the person directly through a separate channel, such as a phone call to their primary contact number.
- Be vigilant: Scammers often create a sense of urgency, leaving the victim no time to think and pushing them into a quick decision. Be cautious when a sudden emergency demands financial support from you on an urgent basis.
- Report suspicious activity: If you encounter such activity on your social media accounts or through such calls, report it to the platform or the relevant authority.
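The “looping videos” red flag above can even be checked mechanically. As a minimal sketch (our illustration, not an established detection tool), suppose each frame of a clip has been reduced to a fingerprint, for example a perceptual hash. Footage that repeats with a fixed period then shows up as a shifted match against itself:

```python
# Hypothetical heuristic: flag a clip as possibly looped if its sequence of
# per-frame fingerprints repeats with some fixed period.

def looks_looped(fingerprints: list, min_period: int = 10) -> bool:
    """Return True if the fingerprint sequence repeats with a period of at
    least min_period frames (i.e. the clip matches a shifted copy of itself)."""
    n = len(fingerprints)
    for period in range(min_period, n // 2 + 1):
        # A periodic sequence equals itself shifted by one period.
        if fingerprints[:n - period] == fingerprints[period:]:
            return True
    return False
```

This only catches exact fingerprint repeats; real footage needs fuzzy matching (e.g. Hamming distance between perceptual hashes), and a positive result is a prompt for the manual verification steps above, not proof of fabrication.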
Conclusion
The advanced nature of AI deepfake technology has introduced new challenges in combating AI-based cybercrime. The Kerala man's loss of Rs 40,000 to an AI-based deepfake video call is an alarming reminder of the need to remain extra vigilant and cautious in the digital age: the caller appeared to be the victim's former colleague but was in fact a scammer tricking him with AI-based deepfake technology. By staying aware of such rising scams and following precautionary measures, we can protect ourselves from AI-based cybercrimes and from malicious scammers who exploit these technologies for financial gain. Stay cautious and safe in the ever-evolving digital landscape.

Introduction
India’s telecom regulator, the Telecom Regulatory Authority of India (TRAI), has directed telcos to block all unverified headers and message templates within 30 and 60 days, respectively, according to a press release. The regulator observed that telemarketers were ‘misusing’ the headers and message templates of registered parties and asked telcos to reverify all registered headers and message templates on the DLT (Distributed Ledger Technology) platform. All telecom service providers (TSPs) have to comply with these directions, issued under the Telecom Commercial Communications Customer Preference Regulations, 2018, within a month, TRAI said in its release. The directions were issued after TRAI held a meeting with telcos on February 17, 2023, to discuss quality of service (QoS) improvements, a review of QoS standards, the QoS of 5G services and unsolicited commercial communications, as per its press release.
Why it matters?
The move can ensure that all promotional messages are sent through registered telemarketers using only approved templates. It is no secret that the spam problem has been difficult to rein in, so the measure can restrict its proliferation and filter out telemarketers resorting to misuse.
Details about TRAI’s orders
The release said that telcos have to ensure that temporary headers are deactivated immediately after the duration for which they were created. Telcos must also ensure that the variable portions of a message template leave no room for unwanted content to be inserted. And to avoid confusing message recipients, telcos must ensure that no lookalike headers are registered in the names of different senders.
Measures to check unregistered telemarketers
The release ordered telcos to bar telemarketers not registered on its DLT platform from accessing message templates and scrubbing them to deliver spam messages to recipients on the telco’s network. The telcos have been directed not to allow promotional messages to be sent by unregistered telemarketers or telemarketers using 10-digit telephone numbers. It added that telcos have to take action against erring telemarketers and share details of these telemarketers with other telcos, which will then be responsible for stopping these entities from sending commercial communications through their networks.
How big is the problem of spam?
A survey conducted by LocalCircles said that two out of every three people (66 per cent) in India get three or more spam calls daily. It added that not one person among thousands of respondents checked the box of ‘no spam’.
The platform said it was a national survey that gathered over 56,000 responses from Indians across 342 districts. It also found that 92% of respondents said they continue receiving spam despite opting for DND. The DND (Do Not Disturb) list is a feature where a mobile subscriber can register their number to avoid receiving unsolicited commercial communication (UCC).
Addressing the problem of spam
The regulatory body recently released a consultation paper proposing that the real name identity of callers be shown to people receiving calls. The paper said it would use a database containing each subscriber's correct name to implement the caller name presentation (CNAP) service. The regulator wants to use details acquired by telecom service providers via customer acquisition forms (CAF).
TRAI formed a joint committee in 2022 to look at the issue of phishing and cyber fraud, including officials from the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI). The telecom watchdog had also laid out a plan to combat SMS and call spam using blockchain technology (DLT), envisaging that telecom companies and TRAI would build an encrypted, distributed database recording user consent to be included in SMS or call send-out lists.
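At its core, the DLT consent idea is a tamper-evident record: once a subscriber's consent is logged, no party can silently alter or reorder it. As a minimal sketch (our illustration; the actual DLT platform's internals are not public in this detail), a hash-chained ledger achieves this by making each entry's hash depend on the previous one:

```python
# Illustrative hash-chained consent ledger (an assumption about the concept,
# not TRAI's actual DLT implementation). Each entry commits to its predecessor,
# so tampering with any record breaks verification of the whole chain.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

def _entry_hash(subscriber: str, header: str, prev: str) -> str:
    body = json.dumps(
        {"subscriber": subscriber, "header": header, "prev": prev},
        sort_keys=True,
    )
    return hashlib.sha256(body.encode()).hexdigest()

def record_consent(ledger: list, subscriber: str, sender_header: str) -> list:
    """Append a consent record linking a subscriber to a sender header."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({
        "subscriber": subscriber,
        "header": sender_header,
        "prev": prev,
        "hash": _entry_hash(subscriber, sender_header, prev),
    })
    return ledger

def verify(ledger: list) -> bool:
    """Recompute every hash; any edit or reordering makes this return False."""
    prev = GENESIS
    for e in ledger:
        if e["prev"] != prev or e["hash"] != _entry_hash(
                e["subscriber"], e["header"], e["prev"]):
            return False
        prev = e["hash"]
    return True
```

A production DLT adds distribution (multiple telcos holding replicas) and encryption on top, but the hash chain is what makes recorded consent auditable.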