#FactCheck - False Claim of Italian PM Congratulating Hindus on Ram Temple; Video Actually Reveals Birthday Thanks
Executive Summary:
False information is spreading across social media as users share a mistranslated video claiming that Italian Prime Minister Giorgia Meloni congratulated Indian Hindus on the inauguration of the Ram Temple in Ayodhya, Uttar Pradesh. Our CyberPeace Research Team’s investigation reveals that these claims are baseless. The accurate translation shows Meloni thanking those who wished her a happy birthday.
Claims:
An X (formerly Twitter) user shared a 13-second video in which Italian Prime Minister Giorgia Meloni speaks in Italian, claiming she was congratulating India on the Ram Mandir construction. The caption reads,
“Italian PM Giorgia Meloni Message to Hindus for Ram Mandir #RamMandirPranPratishta. #Translation : Best wishes to the Hindus in India and around the world on the Pran Pratistha ceremony. By restoring your prestige after hundreds of years of struggle, you have set an example for the world. Lots of love.”

Fact Check:
The CyberPeace Research Team set out to translate the video. First, we extracted a transcript of the video using an AI transcription tool and ran it through Google Translate; the result was something else entirely.
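This kind of check can be reproduced with freely available tools. Below is a minimal sketch of such a transcribe-then-translate workflow, assuming the open-source Whisper model for transcription and the deep-translator package for translation; the file name video.mp4 is a placeholder, and this is illustrative rather than the exact tooling our team used.

```python
# pip install openai-whisper deep-translator
import whisper
from deep_translator import GoogleTranslator

# Transcribe the Italian audio track with a small Whisper model.
model = whisper.load_model("base")
result = model.transcribe("video.mp4", language="it")  # "video.mp4" is a placeholder
italian_text = result["text"]
print("Transcript (Italian):", italian_text)

# Translate the transcript into English via Google Translate.
english_text = GoogleTranslator(source="it", target="en").translate(italian_text)
print("Translation (English):", english_text)
```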

The translation reads, “Thank you all for the birthday wishes you sent me privately with posts on social media, a lot of encouragement which I will treasure, you are my strength, I love you.”
This confirms that the video was not a congratulatory message but a thank-you message to everyone who had sent the Prime Minister birthday wishes.
We then performed a reverse image search on frames of the video and found the original video on the Prime Minister’s official X handle, uploaded on 15 January 2024 with the caption “Grazie. Siete la mia forza!”, which translates to “Thank you. You are my strength!”
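Extracting frames for such a reverse image search is straightforward with OpenCV. The sketch below samples roughly one frame per second; the saved images can then be uploaded to a reverse image search service. The file name and sampling rate are assumptions for illustration.

```python
# pip install opencv-python
import cv2

cap = cv2.VideoCapture("video.mp4")        # placeholder file name
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 1  # fall back to 1 if FPS is unknown
frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % fps == 0:  # keep roughly one frame per second
        cv2.imwrite(f"frame_{saved:03d}.png", frame)
        saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} frames for reverse image search")
```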

Conclusion:
The 13-second video had a wide reach on X, and many users shared it with similar captions. A misunderstanding that begins with one post can spread far beyond it. The claim made in the caption is misleading and has no connection with the actual post by Italian Prime Minister Giorgia Meloni, who was speaking in Italian to thank her well-wishers. Hence, the post is fake and misleading.
- Claim: Italian Prime Minister Giorgia Meloni congratulated Hindus in the context of Ram Mandir
- Claimed on: X
- Fact Check: Fake
Related Blogs
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. Using the prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With millions of searches conducted every second, and Google handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithmic bias, refers to biased results that occur when human biases skew the original training data or the AI algorithm, distorting outputs and creating potentially harmful outcomes. Its sources include algorithmic bias, data bias, and interpretation bias, which emerge from user history, geographical data, and broader societal biases embedded in training data.
A common consequence is the exclusion of certain groups of people from opportunities. In healthcare, underrepresenting data from women or minority groups can skew predictive AI algorithms. Similarly, while AI helps streamline resume screening to identify ideal candidates, a biased dataset or other bias in the input data can produce biased outcomes; the toy example below illustrates the mechanism.
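The following sketch shows, under deliberately simplified assumptions, how underrepresentation in training data translates into unequal error rates. It is a toy illustration with synthetic data, not a model of any real system: group B follows a slightly different feature-label relationship and contributes only a handful of training examples, so the model fits group A well and misclassifies far more of group B.

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One informative feature; the label threshold differs by group.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift > 0).astype(int)
    return x, y

# Group A dominates the training data; group B is underrepresented.
xa, ya = make_group(1000, shift=0.0)
xb, yb = make_group(50, shift=0.8)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Fresh samples from each group reveal the accuracy gap.
xa_t, ya_t = make_group(1000, shift=0.0)
xb_t, yb_t = make_group(1000, shift=0.8)
print("Accuracy on group A:", model.score(xa_t, ya_t))
print("Accuracy on group B:", model.score(xb_t, yb_t))
```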
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
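A toy simulation makes the filter-bubble mechanism concrete. In the sketch below, which is hypothetical and far simpler than any production ranking system, results are ordered purely by the user's past engagement with each topic, so whatever the user has clicked before rises to the top regardless of accuracy.

```python
from collections import Counter

def rank(results, click_history):
    # Order results purely by past engagement with each item's topic.
    return sorted(results, key=lambda r: click_history[r["topic"]], reverse=True)

results = [
    {"title": "Vaccine myths debunked", "topic": "health"},
    {"title": "Miracle cure revealed", "topic": "alt-health"},
    {"title": "Local election results", "topic": "politics"},
]

# The user repeatedly clicks sensational alt-health content...
click_history = Counter()
for _ in range(5):
    click_history["alt-health"] += 1

# ...so the ranker now surfaces it first, reinforcing the bubble.
print([r["title"] for r in rank(results, click_history)])
```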
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up because existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU’s Artificial Intelligence Act establishes a regulatory framework that categorises AI systems based on risk and enforces strict standards of transparency, accountability, and fairness for high-risk AI applications, which may include search engines. India proposed the Digital India Act in 2023, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias#:~:text=In%20healthcare%2C%20underrepresenting%20data%20of,can%20skew%20predictive%20AI%20algorithms

Introduction
In this ever-evolving world of technology, cybercriminals continue to explore new and innovative methods to exploit and intimidate their victims. One shocking incident was recently reported from Bharatpur, Rajasthan, where cyber crooks staged a mock court session to frighten their target. This elaborate operation, meant to induce fear and force compliance, exemplifies the audacity and cunning of modern scammers. In this blog article, we delve deeper into this concerning occurrence to shed light on the strategies used and the ramifications for cybersecurity.
The Setup
The case, reported from Gopalgarh village in Bharatpur, Rajasthan, unfolded with a shocking twist: a father-son duo, Tahir Khan and his son Talim Khano, had been defrauding people for monetary gain by staging a mock court and recording the proceedings to intimidate their victims into paying hefty sums. In the most recent case, they extracted Rs 2.69 crore through sextortion. The duo would trace their targets on social media platforms, blackmail them, and extort large amounts of money.
An official complaint was filed by a 69-year-old victim who was singled out through his social media accounts, his friends, and his posts. Initially, the duo contacted the victim with a pre-recorded video featuring a nude woman, coaxing him into a compromising situation. Posing as officials from the Delhi Crime Branch and the CBI, they threatened the victim, claiming that a girl had approached them intending to file a complaint against him. Later, masquerading as YouTubers, they threatened to release the incriminating video online. Adding to the charade, they impersonated a local MLA and presented the victim with a forged stamp paper alleging molestation charges. Eventually, posing as Delhi Crime Branch officials again, they demanded money to settle the case after falsely stating that they had apprehended the girl. To further manipulate the victim, the accused staged a court proceeding, recorded it, and sent it to him, creating the illusion that the matter was concluded. This case of sextortion stands out as the only known instance in which the culprits went to such lengths, staging and recording a mock court to extort money. It was also discovered that the accused had fabricated a letter from the Delhi High Court, adding another layer of deception to their scheme.
The Investigation
The complaint was filed with the cyber cell, and the subsequent investigation found that this case stands as one of the most significant sextortion incidents in the country. The father-son pair skillfully assumed five different roles, meticulously executing their plan, which included creating a simulated court environment. Investigators stated, “We have also managed to recover Rs 25 lakh from the accused duo—some from their residence in Gopalgarh and the rest from the bank account where it was deposited.”
The Tricks Used by the Duo
The fake court scene was a meticulously built web of deception, designed to instil fear and helplessness in the victim. Let’s look at the tricks the duo used to fool people.
- Social Engineering Strategies: Cybercriminals are skilled at using social engineering strategies to acquire the trust of their victims. In this case, they may have employed phishing emails or phone calls to gather personal information about the victim. By posing as respectable persons or organisations, the crooks tricked the victim into disclosing vital information, giving them the leverage they needed to appear trustworthy.
- Making a False Narrative: To make the fictitious court scenario more credible, the cyber hackers concocted a captivating story based on the victim’s purported legal problems. They might have created plausible papers to give their plan authority, such as forged court summonses, legal notifications, or warrants. They attempted to create a sense of impending danger and an urgent necessity for the victim to comply with their demands by deploying persuasive language and legal jargon.
- Psychological Manipulation: The perpetrators of the fictitious court scenario were well aware of the power of psychological manipulation in coercing their victims. They hoped to emotionally overwhelm the victim by using fear, uncertainty, and the possible implications of legal action. The offenders probably used threats of incarceration, fines, or public exposure to increase the victim’s fear and hinder their capacity to think critically. The idea was to use desperation and anxiety to force the victim to comply.
- Use of Technology to Strengthen Deception: Technological advancements have given cyber thieves tremendous tools to strengthen their misleading methods. The simulated court scenario might have included speech modulation software or deep fake technology to impersonate the voices or appearances of legal experts, judges, or law enforcement personnel. This technology made the deception even more believable, blurring the border between fact and fiction for the victim.
The use of technology in cybercriminals’ misleading techniques has considerably increased their capacity to fool and influence victims. Cybercriminals may develop incredibly realistic and persuasive simulations of judicial processes using speech modulation software, deep fake technology, digital evidence alteration, and real-time communication tools. Individuals must be attentive, gain digital literacy skills, and practice critical thinking when confronting potentially misleading circumstances online as technology advances. Individuals can better protect themselves against the expanding risks posed by cyber thieves by comprehending these technological breakthroughs.
What to do?
Seeking Help and Reporting Incidents: If you or anyone you know falls victim to cybercrime or is deceived by cyber crooks, it is vital to seek help and act quickly by reporting the incident. When confronted with disturbing scenarios such as the mock court staged by these cybercrooks, prompt reporting serves various purposes, including raising awareness, assisting investigations, and preventing similar crimes from recurring. Victims should take the following steps:
- Contact your local law enforcement: Inform local law enforcement about the cybercrime incident. Provide them with the pertinent facts and evidence, since they have the experience and resources to investigate cybercrime and apprehend the offenders involved.
- Seek assistance from a cybersecurity specialist: Consult a cybersecurity specialist or a reputable cybersecurity firm to assess the extent of the breach, safeguard your digital assets, and obtain advice on minimising future risks. Their expertise and forensic analysis can assist in gathering evidence and mitigating the consequences of the incident.
- Preserve Evidence: Keep any evidence relating to the event, including emails, texts, and suspicious actions. Avoid erasing digital evidence, and consider capturing screenshots or creating copies of pertinent exchanges. Evidence preservation is critical for investigations and possible legal procedures.
Conclusion
The staged fake court scene shows the lengths to which cybercriminals will go to deceive and exploit their victims. These criminals sought to instil fear and helplessness in the victim through social engineering methods, the fabrication of a false narrative, the manipulation of personal information, psychological manipulation, and the use of technology. Individuals can better defend themselves against cyber crooks by remaining watchful and sceptical.
Misinformation spread has become a cause for concern for all stakeholders, be it the government, policymakers, business organisations or citizens. The current push for combating misinformation is rooted in the growing awareness that misinformation leads to sentiment exploitation and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands and administrators of all types. The spread of misinformation online poses a risk not only to the everyday content consumer and sharer, but also to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws. A prime example of such a law is the EU’s Digital Services Act, which regulates digital services that act as intermediaries connecting consumers with goods, services, and content. Platforms can also face lawsuits from individuals, organisations or governments for damages due to misinformation; defamation suits are standard practice when dealing with vectors of misinformation. In India, the Prohibition of Fake News on Social Media Bill, 2023, is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms employ a trust model where the user trusts it and its content. If a user loses trust in the platform because of misinformation, it can reduce engagement. This might even lead to negative coverage that affects the public opinion of the brand, its value and viability in the long run.
- Financial Consequences: Businesses that engage with the platform may end their engagement with platforms accused of misinformation, which can lead to a revenue drop. This can also have major consequences affecting the long-term financial health of the platform, such as a decline in stock prices.
- Operational Consequences: To counter the scrutiny from regulators, the platform might need to engage in stricter content moderation policies or other resource-intensive tasks, increasing operational costs for the platforms.
- Market Position Loss: If the reliability of a platform is under question, its users can migrate to other platforms, resulting in a loss of market share in favour of platforms that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: There needs to be a balance between freedom of expression and the prevention of misinformation. Platforms can face accusations of censorship if they adopt stricter content moderation and users feel their opinions are unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, as they host content that affects different spheres of users' lives, such as public health and democracy. Misinformation can cause real-world harm, such as health misinformation or incitement to violence, meaning platforms bear a social responsibility as well.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- Implement more robust content moderation systems that combine AI and human oversight to identify and remove misinformation effectively (a minimal sketch of such a triage pipeline follows this list).
- Enhance transparency in platform policies for content moderation and decision-making to build user trust and reduce the backlash associated with perceived censorship.
- Collaborate with fact-checkers through partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engage with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Invest in media literacy initiatives that help users critically evaluate the content available to them.
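The sketch below illustrates, in deliberately simplified form, how such AI-plus-human triage might be wired: an automated score drives clear-cut decisions, while uncertain cases escalate to human moderators. The scoring function here is a keyword stand-in for a real classifier, and the thresholds are arbitrary assumptions, not any platform's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    text: str

def score_misinfo(post: Post) -> float:
    # Placeholder: a production system would call a trained model here.
    suspicious = ("miracle cure", "guaranteed", "they don't want you to know")
    hits = sum(term in post.text.lower() for term in suspicious)
    return min(1.0, hits / 2)

def triage(post: Post) -> str:
    score = score_misinfo(post)
    if score >= 0.9:   # high confidence: act automatically
        return "remove"
    if score >= 0.4:   # uncertain: escalate to a human moderator
        return "human_review"
    return "allow"

for p in [Post(1, "Miracle cure guaranteed to work!"),
          Post(2, "Election results expected tonight")]:
    print(p.id, triage(p))
```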
Final Takeaways
The proliferation of misinformation on digital platforms presents significant challenges across legal, reputational, financial, and operational functions for all stakeholders. As a result, a critical need arises to balance the interlinked but often competing priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust, transparent content moderation systems, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds