#FactCheck: RBI's Alleged Guidelines on Ink Colour for Cheque Writing
Executive Summary:
A viral message is circulating claiming the Reserve Bank of India (RBI) has banned the use of black ink for writing cheques. This information is incorrect. The RBI has not issued any such directive, and cheques written in black ink remain valid and acceptable.

Claim:
The Reserve Bank of India (RBI) has issued new guidelines prohibiting the use of black ink for writing cheques. As per the claimed directive, cheques must now be written exclusively in blue or green ink.

Fact Check:
Upon thorough verification, it has been confirmed that the claim that the Reserve Bank of India (RBI) has issued a directive banning the use of black ink for writing cheques is entirely false. No such notification, guideline, or instruction has been released by the RBI. Cheques written in black ink remain valid, and the public is advised to disregard such unverified messages and rely only on official communications for accurate information.
As stated by the Press Information Bureau (PIB), this claim is false: the Reserve Bank of India has not prescribed specific ink colours for writing cheques. Ink colour is mentioned only in point number 8 of an RBI advisory, which discusses the care customers should take while writing cheques, and even there no colour is prohibited.


Conclusion:
The claim that the Reserve Bank of India has banned the use of black ink for writing cheques is completely false. No such directive, rule, or guideline has been issued by the RBI. Cheques written in black ink are valid and acceptable. The RBI has not prescribed any specific ink colour for writing cheques, and the public is advised to disregard unverified messages. While general precautions for filling out cheques are mentioned in RBI advisories, there is no restriction on the colour of the ink. Always refer to official sources for accurate information.
- Claim: The RBI has issued guidelines banning the use of black ink for writing cheques.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day, with snoring audible in the background, is going viral. The CyberPeace Research Team found that the viral video was digitally edited by adding the snoring sound to the background. The original video published on Arshad's Instagram account has no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing the people of Pakistan a happy Independence Day has snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly examined the video and then analyzed it with TrueMedia, an AI video detection tool, which found little evidence of manipulation in either the voice or the face.


We then checked Arshad Nadeem's social media accounts and found the video uploaded to his Instagram account on 14 August 2024. That video contains no snoring sound.

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is false. CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading

Introduction
In the age of digital advancement, where technology continually evolves, so do the methods of crime. The rise of cybercrime poses growing threats to individuals, businesses, and government agencies. To combat such crimes, law enforcement agencies are seeking innovative solutions to these challenges. One such solution comes from the Surat Police in Gujarat, who have embraced the power of Artificial Intelligence (AI) to bolster their efforts to reduce cybercrime.
Key Highlights
The Surat Police in Gujarat, India, have launched an AI-based WhatsApp chatbot called the "Surat Police Cyber Mitra Chatbot" to tackle growing cybercrime. The chatbot provides quick assistance with various cyber issues, from reporting cybercrimes to receiving safety tips. The initiative is the first of its kind in the country, showcasing the Surat Police's dedication to using advanced technology for public safety. The Surat Police Commissioner-in-Charge commended the use of AI in crime control as a positive step forward, while also stressing the need for continuous improvement in areas including technological capability, the acquisition of cybercrime-related data, and the training of police personnel.
The Surat Cyber Mitra Chatbot, available on WhatsApp number 9328523417, offers round-the-clock assistance to citizens, allowing them to access crucial information on cyber fraud and legal matters.
The Growing Cybercrime Threat
With the advancement of technology, cybercrime has become more complex due to the interconnectivity of digital devices and the internet. The criminals exploit vulnerabilities in software, networks, and human behavior to perpetrate a wide range of malicious activities to fulfill their illicit gains. Individuals and organizations face a wide range of cyber risks that can cause significant financial, reputational, and emotional harm.
Surat Police’s Strategic Initiative
The Surat Police Cyber Mitra Chatbot is an AI-powered tool for instant problem resolution. This innovative approach allows citizens to raise any issue or query from wherever they are and receive immediate, accurate responses. The chatbot is accessible 24/7 and serves as a reliable resource for legal information related to cyber fraud.
The use of AI in police initiatives has been a topic of discussion for some time, and the Surat City Police has taken this step to leverage technology for the betterment of society. The chatbot promises to boost public trust in law enforcement and improve the legal system by addressing citizen issues within seconds, ranging from financial disputes to cyber fraud incidents.
This accessibility extends to inquiries such as how to report financial crimes or cyber-fraud incidents and how to understand legal procedures. Ready access to accurate information will not only enhance citizens' trust in the police but also improve the efficiency of law enforcement operations, leading to more informed interactions between citizens and the police and fostering a stronger sense of community security and collaboration.
The utilisation of this chatbot will facilitate access to information and empower citizens to engage more actively with the legal system. As trust in the police grows and legal processes become more transparent and accessible, the overall integrity and effectiveness of the legal system are expected to improve significantly.
Conclusion
The Surat Police Cyber Mitra Chatbot is an AI-powered tool that provides round-the-clock assistance to citizens, enhancing public trust in law enforcement and streamlining access to legal information. This initiative bridges the gap between law enforcement and the community, fostering a stronger sense of security and collaboration, and driving improvements in the efficiency and integrity of the legal process.
References:
- https://www.ahmedabadmirror.com/surat-first-city-in-india-to-launch-ai-chatbot-to-tackle-cybercrime/81861788.html
- https://government.economictimes.indiatimes.com/news/secure-india/gujarat-surat-police-adopts-ai-to-check-cyber-crimes/107410981
- https://www.timesnownews.com/india/chatbot-and-advanced-analytics-surat-police-utilising-ai-technology-to-reduce-cybercrime-article-107397157
- https://www.grownxtdigital.in/technology/surat-police-ai-cyber-mitra-chatbot-gujarat/
Introduction
The fast-paced development of technology and the wider use of social media platforms have enabled the rapid dissemination of misinformation: it diffuses broadly, propagates quickly, and exerts wide and deep influence. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, reinforcing the misinformation cycle and making its spread harder to control within vast, interconnected networks. Algorithms judge content by one metric above all: user engagement. To maximise engagement, algorithms and search engines surface the items you are most likely to enjoy. This process was originally designed to cut through the clutter and deliver the best information, but because of the viral nature of information and user interactions, it can unknowingly spread misinformation widely.
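The engagement-driven ranking just described can be made concrete with a minimal sketch. The posts, interaction counts, and scoring weights below are illustrative assumptions, not any real platform's data or formula:

```python
# Minimal sketch of engagement-based feed ranking.
# All posts, counts, and weights are hypothetical illustrations.

posts = [
    {"id": "calm-report",    "likes": 120, "shares": 10,  "comments": 15},
    {"id": "outrage-rumour", "likes": 300, "shares": 450, "comments": 600},
    {"id": "fact-check",     "likes": 80,  "shares": 20,  "comments": 12},
]

def engagement_score(post):
    # Shares and comments are weighted above likes because they push
    # content to new audiences, the mechanism that lets emotionally
    # charged misinformation outrun corrections.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

# Rank the feed purely by engagement, with no regard for accuracy.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
# The high-interaction rumour ranks above the fact-check.
```

Because the scoring function sees only interaction counts, the rumour that provokes the most shares and comments tops the feed while the fact-check sinks, which is exactly the dynamic described above.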
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation because emotionally provocative content tends to earn the strongest reactions, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, which favours emotionally charged misinformation. They also prioritize content with viral potential, so false or misleading content can spread faster than corrections or factual content.
Additionally, popular content is amplified by platforms, which present it to more users and spread it faster. Fact-checking efforts struggle to keep pace: by the time erroneous claims are reported or corrected, they may already have gained widespread acceptance. Social media algorithms also find it difficult to distinguish real people from organized networks of troll farms or bots that propagate false information. The result is a vicious loop in which users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through their networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process leads to "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences and make platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
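The feedback loop described above can be sketched as a toy simulation. The topics, the preference model, and the reinforcement rule are all hypothetical assumptions, chosen only to show how repeated clicks can narrow a feed:

```python
import random

# Toy simulation of a personalisation feedback loop.
# Topics, weights, and the update rule are illustrative assumptions.
random.seed(42)

topics = ["politics", "sports", "science", "rumours"]
preference = {t: 1.0 for t in topics}  # the platform's model of the user

def recommend():
    # Recommend in proportion to inferred preference (engagement-optimised).
    return random.choices(topics, weights=[preference[t] for t in topics])[0]

# Suppose the user clicks every "rumours" item they are shown.
for _ in range(500):
    shown = recommend()
    if shown == "rumours":
        preference[shown] += 0.5  # each click reinforces the model

share = preference["rumours"] / sum(preference.values())
print(f"Share of feed devoted to rumours: {share:.0%}")
```

In this sketch every click nudges the model further toward a single topic, so the simulated feed drifts until "rumours" dominates, a simplified picture of how a filter bubble forms and why corrective content becomes ever less likely to surface.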
Moreover, social networks and their sheer size and complexity today exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it—such as by inspecting messages or URLs for false information—can be computationally challenging and inefficient. The extensive amount of content that is shared daily means that misinformation can be propagated far quicker than it can get fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create a challenging environment in which misinformation thrives, highlighting the importance of countering it through robust measures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps toward regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defenses can empower individuals to question the sources of information and report misleading content effectively. Looking ahead, platforms should evolve toward more transparent, user-driven systems where algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)