#FactCheck - RBI's Alleged Guidelines on Ink Colour for Cheque Writing
Executive Summary:
A viral message is circulating claiming the Reserve Bank of India (RBI) has banned the use of black ink for writing cheques. This information is incorrect. The RBI has not issued any such directive, and cheques written in black ink remain valid and acceptable.

Claim:
The Reserve Bank of India (RBI) has issued new guidelines prohibiting the use of black ink for writing cheques. As per the claimed directive, cheques must now be written exclusively in blue or green ink.

Fact Check:
Upon thorough verification, it has been confirmed that the claim regarding the Reserve Bank of India (RBI) issuing a directive banning the use of black ink for writing cheques is entirely false. No such notification, guideline, or instruction has been released by the RBI in this regard. Cheques written in black ink remain valid, and the public is advised to disregard such unverified messages and rely only on official communications for accurate information.
As stated by the Press Information Bureau (PIB), this claim is false. The Reserve Bank of India has not prescribed specific ink colours to be used for writing cheques. Ink colour is mentioned only in point number 8 of the RBI's guidance, which discusses the care customers should take while writing cheques.


Conclusion:
The claim that the Reserve Bank of India has banned the use of black ink for writing cheques is completely false. No such directive, rule, or guideline has been issued by the RBI. Cheques written in black ink are valid and acceptable. The RBI has not prescribed any specific ink color for writing cheques, and the public is advised to disregard unverified messages. While general precautions for filling out cheques are mentioned in RBI advisories, there is no restriction on the color of the ink. Always refer to official sources for accurate information.
- Claim: The RBI has issued new ink guidelines for cheques, mandatory from a specified date.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
Recently, our team came across a video on social media that appears to show a saint lying in a fire during the Mahakumbh 2025. The video has been widely viewed and is accompanied by captions claiming that it shows a ritual from the ongoing Mahakumbh 2025. After thorough research, we found that these claims are false. The video is unrelated to Mahakumbh 2025 and comes from a different context and location. This is an example of old content being circulated out of its original context and passed off as current.

Claim:
A video has gone viral on social media, claiming to show a saint lying in fire during Mahakumbh 2025, suggesting that this act is part of the traditional rituals associated with the ongoing festival. This misleading claim falsely implies that the act is a standard part of the sacred ceremonies held during the Mahakumbh event.

Fact Check:
Upon receiving the post, we conducted a reverse image search of the key frames extracted from the video and traced it to an old article. Further research revealed that the original post was from 2009, when Ramababu Swamiji, aged 80, lay down on a burning fire for the benefit of society. The video is not recent, as it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals clearly shows that the video is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such activities are not part of the Mahakumbh rituals. Reputable sources were also taken into consideration to cross-verify this information, effectively debunking the claim and emphasising the importance of verifying facts before believing anything.
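For readers curious about the technique, the sketch below shows one minimal way to pull periodic key frames from a video with OpenCV so they can be run through a reverse image search. The file names, sampling interval, and helper function are illustrative assumptions, not the exact tooling used in this fact check.

```python
# A minimal sketch of key-frame extraction before a reverse image search.
# The sampling interval and output file names are illustrative assumptions.
import cv2  # OpenCV: pip install opencv-python


def extract_key_frames(video_path: str, every_n_seconds: int = 5) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the saved file paths."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 fps if metadata is missing
    step = int(fps * every_n_seconds)

    saved_paths = []
    frame_index = 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        if frame_index % step == 0:
            path = f"frame_{frame_index}.jpg"
            cv2.imwrite(path, frame)  # each saved image can be uploaded to a reverse image search engine
            saved_paths.append(path)
        frame_index += 1

    capture.release()
    return saved_paths


# Example: extract_key_frames("viral_clip.mp4") produces JPEGs that can be checked
# on services such as Google Images, TinEye, or Yandex to trace the footage's origin.
```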


For more clarity, the YouTube video attached below further clarifies the matter and serves as a reminder to verify such claims before accepting them as true.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
With the rapid development of technology, voice cloning scams have recently come to light. Scammers are moving forward with AI, and their methods and plans for deceiving and defrauding people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to conduct fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself from them.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or imagery produced with artificial intelligence (AI) that can pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists use the technology to depict someone saying or doing something they never actually said or did; deepfake voice impersonations of the President of the United States are among the best-known examples. Deep voice impersonation technology can be used maliciously, such as in deepfake voice fraud or the dissemination of false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or entities and mislead users into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people or organisations, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support fabricated claims or accusations, which is particularly risky in legal proceedings, as falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and manipulating victims. Organisations and the general public must be informed of this risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and stay away from them:
- Steer clear of telemarketing calls: One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: If someone phones you claiming to be a particular person, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? Anything that doesn’t seem right can be a sign of deepfake voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, it is crucial to verify the caller’s identity. When in doubt, ask for their name, job title, and employer, and then do some research to be sure they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal details like your Aadhaar number, bank account information, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities, such as your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has significant potential for both beneficial and harmful effects. It can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, but it can also be used maliciously, for deepfake voice fraud and impersonation that fabricates stories. As the technology develops and deepfake schemes become harder to detect and prevent, users must be aware of the hazard and take the necessary precautions to protect themselves. Ongoing research and the development of efficient techniques to identify and control the risks related to this technology are also necessary. AI must be deployed responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.

Introduction
Betting has long been associated with sporting activities and now has a growing presence in online gaming and esports globally. As the esports industry continues to expand, Statista projects that it will reach a market value of $5.9 billion by 2029. Associated betting markets have also seen significant growth: in 2024, this segment accounted for an estimated $2.5 billion globally. While such engagement avenues are popular among international audiences, they also raise concerns around regulation, integrity, and user protection. As esports builds its credibility and reach, especially among younger demographics, these aspects become increasingly important to address in policy and practice.
What Does Esports Betting Involve?
Much like traditional sports, esports engagement in some regions includes wagering on teams, players, or match outcomes, but it is inherently more complex. Accurately setting odds in online gaming and esports is complicated by frequently updated game titles, changing team rosters, and shifting game mechanics (the “meta”, or most effective strategies). Bets can be placed using real money, virtual items like skins (in-game cosmetic items), or, increasingly, cryptocurrency.
Esports and Wagering: Emerging Issues and Implications
- Legal Grey Areas: While countries like South Korea and some USA states have dedicated regulations for esports betting and licensed bookmaking, most do not. This creates legal grey areas for betting service providers to access unregulated markets, increasing the risk of fraud, money laundering, and exploitation of bettors in those regions.
- The Skill vs. Chance Dilemma: Most gambling laws across the world regulate betting based on the distinction between ‘games of skill’ and ‘games of chance’. Betting on the latter is typically illegal, since winning depends on chance, but the definitions of ‘skill’ and ‘chance’ vary by jurisdiction. Esports betting often blurs this line: outcomes may depend on player skill, yet in-game economies like skin betting and unpredictable gameplay introduce elements of chance, complicating regulation and making enforcement difficult.
- Underage Gambling and Addiction Risks: Players are often minors and are exposed to the gambling ecosystem due to gamified betting through reward systems like loot boxes. These often mimic the mechanics of betting, normalising gambling behaviours among young users before they fully understand the risks. This can lead to the development of addictive behaviours.
- Match-Fixing and Loss of Integrity: Esports are particularly susceptible to match-fixing because of weak regulation, financial pressures, and the anonymity of online betting. Instances like the Dota 2 Southeast Asia Scandals (2023) and Valorant match-fixing in North America (2021) can jeopardise audience trust and sponsorships. This affects the trustworthiness of minor tournaments, where talent is discovered.
- Cybersecurity and Data Risks: Esports betting apps collect sensitive user data, making them an attractive target for cybercrime. Bettors are susceptible to identity theft, financial fraud, and data breaches, especially on unlicensed platforms.
Way Forward
To strengthen trust, ensure user safety, and protect privacy within the esports ecosystem, responsible management of betting practices can be achieved through targeted interventions focused on:
- National-Level Regulations: Countries like India, which have a large online gaming and esports market, will need to create a regulatory authority along the lines of the UK’s Gambling Commission and update their gambling laws to protect consumers.
- Protection of Minors: Setting guardrails such as age verification, responsible advertising, anti-fraud mechanisms, self-exclusion tools, and spending caps can help to keep a check on gambling by minors.
- Harmonizing Global Standards: Since esports is inherently global, aligning core regulatory principles across jurisdictions (such as through multi-country agreements or voluntary industry codes of conduct) can help create consistency while avoiding overregulation.
- Co-Regulation: Governments, esports organisers, betting platforms, and player associations should work closely to design effective, well-informed policies. This can help uphold the interests of all stakeholders in the industry.
Conclusion
Betting in esports is inevitable, but the industry faces a double dilemma: overregulation on the one hand, and letting gambling go unchecked on the other. Both can be detrimental to its growth. This is why industry actors such as policymakers, platforms, and organisers need to work together to harmonise legal inconsistencies, protect vulnerable users, and invest in data security. Forming industry-wide ethics boards, promoting regional regulatory dialogue, and instituting transparency measures for betting operators can be steps in this direction, ensuring that esports evolves into a mature, trusted global industry.