#FactCheck - An edited video of Bollywood actor Ranveer Singh criticizing the PM goes viral
Executive Summary:
A video alleged to show Bollywood actor Ranveer Singh criticizing Prime Minister Narendra Modi and his government is making the rounds on the internet. Close examination reveals, however, that the video has been tampered with to change the audio. The original videos posted by various media outlets actually show Ranveer Singh praising Varanasi, professing his love for Lord Shiva, and acknowledging PM Modi's role in enhancing the city's cultural charm and infrastructural development. Mismatched lip synchronization and the absence of any criticism of PM Modi in the original footage indicate that the video was manipulated in order to spread misinformation.
Claims:
A viral video shows Bollywood actor Ranveer Singh criticizing Prime Minister Narendra Modi.
Fact Check:
Upon receiving the video, we divided it into keyframes and reverse-searched one of the images. This led us to another video of Ranveer Singh in the same appearance, posted by an Instagram account named "The Indian Opinion News". In that video, Ranveer Singh talks about his experience of visiting the Kashi Vishwanath Temple with Bollywood actress Kriti Sanon. When we watched the full video, we found no indication of any criticism of PM Modi.
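The frame-matching step described above is often built on perceptual hashing: near-identical frames hash to nearly identical bit strings, so a frame from a suspect clip can be matched against frames from candidate originals. Below is a minimal, illustrative sketch of an average hash (aHash) in pure Python; the 4x4 "frames" and all values are invented for the example, and real fact-checking workflows rely on reverse image search engines and dedicated tooling rather than code like this.

```python
def average_hash(pixels):
    """Compute a simple average hash (aHash) of a grayscale image.

    `pixels` is a 2D list of grayscale values (0-255). Each bit of the
    hash is 1 if the pixel is brighter than the image's mean brightness.
    """
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicate frames."""
    return sum(a != b for a, b in zip(h1, h2))

# Two tiny invented "frames": the second is a slightly brightened copy,
# as might result from re-encoding or re-uploading a video.
frame_a = [[10, 200, 10, 200],
           [200, 10, 200, 10],
           [10, 200, 10, 200],
           [200, 10, 200, 10]]
frame_b = [[v + 5 for v in row] for row in frame_a]

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # 0: uniform brightening does not change the hash
```

Because the hash thresholds against the frame's own mean, global brightness shifts leave it unchanged, which is why perceptual hashes survive re-encoding better than exact byte comparison.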
Taking a cue from this, we performed a keyword search to find the full video of the interview. We found many videos uploaded by media outlets, but none of them contains the criticism of PM Modi claimed in the viral video.
In the interview, Ranveer Singh shares his feelings about Lord Shiva, his opinions on the city, and the efforts undertaken by Prime Minister Modi to keep the history and heritage of Varanasi alive, as well as the city's ongoing development projects. The discrepancy in the viral clip is clear on close inspection: the lips are not synchronized with the words we hear, whereas in the original video the lips are perfectly in sync with the audio. The lack of evidence for the claim and the discrepancies in the clip prove that it was edited to misrepresent the original interview of Bollywood actor Ranveer Singh. Hence, the claim is misleading and false.
Conclusion:
The video claiming to show Ranveer Singh criticizing PM Narendra Modi is not genuine. Investigation shows that it was edited by changing the audio. The original footage actually shows Singh speaking positively about Varanasi and Modi's work. The mismatched lip-syncing and the lack of supporting evidence highlight the danger of misinformation created by simple editing. Ultimately, the claim is false and misleading.
- Claim: A viral video featuring Ranveer Singh criticizing Prime Minister Narendra Modi and his government.
- Claimed on: X (formerly known as Twitter)
- Fact Check: Fake & Misleading
Related Blogs
The World Economic Forum reported that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface 'CoPilot' were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfil its positive potential because there is widespread, and understandable, cynicism about it. General public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, such as the UK's Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
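As a rough illustration of how a sandbox trial might "evaluate the effectiveness" of a detection algorithm, the sketch below computes precision and recall for a hypothetical binary misinformation flagger. The predictions and ground-truth labels are invented for the example; this is a sketch of standard evaluation metrics, not of any specific regulator's methodology.

```python
def precision_recall(predictions, labels):
    """Compute precision and recall for a binary misinformation flagger.

    `predictions` and `labels` are parallel lists of booleans:
    True means "flagged as misinformation" / "actually misinformation".
    """
    tp = sum(p and l for p, l in zip(predictions, labels))          # correct flags
    fp = sum(p and not l for p, l in zip(predictions, labels))      # false alarms
    fn = sum(not p and l for p, l in zip(predictions, labels))      # missed items
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative sandbox run: six posts scored by a hypothetical detector.
preds = [True, True, False, True, False, False]
truth = [True, False, False, True, True, False]
print(precision_recall(preds, truth))  # precision 2/3, recall 2/3
```

Tracking both numbers matters in this setting: optimising recall alone encourages over-flagging legitimate speech, while optimising precision alone lets misinformation through, and a sandbox gives regulators a controlled place to observe that trade-off.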
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism for testing solutions that can help regulate the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes. This would encourage innovation in developing anti-misinformation tools.
- Awareness campaigns can help educate the public about the risks of AI-driven misinformation, and explaining the role of regulatory sandboxes can help manage public expectations.
- Periodic reviews and updates of sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions
Introduction
Cybersecurity threats have been globally prevalent for quite some time now. All nations, organisations and individuals are at risk from new and emerging cybersecurity threats, which put finances, privacy, data, identities and sometimes human lives at stake. The latest Data Breach Report by IBM revealed that a staggering 83% of organisations experienced more than one data breach during 2022. As per the 2022 Data Breach Investigations Report by Verizon, the total number of global ransomware attacks surged by 13%, a rise as large as the previous five years combined. These statistics clearly show how the future is filled with potential threats as we advance further into the digital age.
Who is Okta?
Okta is a secure identity cloud that links all your apps, logins and devices into a unified digital fabric. Okta has existed since 2009, is based in San Francisco, USA, and has been one of the leading service providers in the States. The company found early success on the strength of the high-quality services and products it introduced to the market. Although Okta is not as well-known as the tech giants, it plays a vital role in large organisations' cybersecurity systems. More than 18,000 customers rely on the identity management company's products to give them a single login for the several platforms that a particular business uses. For instance, Zoom leverages Okta to provide "seamless" access to its Google Workspace, ServiceNow, VMware, and Workday systems with only one login, showing how fundamental Okta is to reducing human effort across various platforms. In the digital age, such organisations are instrumental in leading the pathway to innovation and entrepreneurship.
The Okta Breach
On Friday, 20 October, Okta reported a hack of its support system, causing chaos and havoc within the organisation. The fallout is visible in the market in the form of the massive losses Okta has incurred on the stock exchange.
Since the attack, the company's market value has dropped by more than $2 billion. The incident is the most recent in a long line of events connected to Okta or its products, including a wave of casino intrusions reported earlier this year that caused days-long disruptions to hotels in Las Vegas: casino giants Caesars and MGM were both affected by hacks. Both of those attacks, targeting MGM's and Caesars' Okta installations, used sophisticated social engineering that went through IT help desks.
What can be done to prevent this?
Cybersecurity attacks on organisations have become a very common occurrence since the pandemic and are rampant across the globe. The major tech companies have been successful in setting up SOPs, safeguards and precautionary measures to protect themselves and their digital assets and interests. However, medium, micro and small business owners are the most vulnerable to such high-intensity attacks. The governments of various nations have established Computer Emergency Response Teams to monitor and investigate massive-scale cyberattacks on both organisations and individuals. The issue of cybersecurity can be better addressed by inculcating the following aspects into our daily digital routines:
- Team Upskilling: Organisations need to be critical in creating upskilling avenues for employees pertaining to cybersecurity and threats. These campaigns should be run periodically, focusing on both the individual and organisational impact of any threat.
- Reporting Mechanism for Employees and Customers: Business owners and organisations need to deploy robust, sustainable and efficient reporting mechanisms for both employees and customers. Such a mechanism is fundamental to pinpointing potential grey areas and threats in the cybersecurity setup. A dedicated reporting mechanism is now mandated by many governments around the world, as it demonstrates transparency and supports fair access to legal remedies.
- Preventive, Precautionary and Recovery Policies: Organisations need to create and deploy respective preventive, precautionary and recovery policies in regard to different forms of cyber attacks and threats. This will be helpful in a better understanding of threats and faster response in cases of emergencies and attacks. These policies should be updated regularly, keeping in mind the emerging technologies. Efficient deployment of the policies can be done by conducting mock drills and threat assessment activities.
- Global Dialogue Forums: It is pertinent for organisations and the industry to create a community of cyber security enthusiasts from different and diverse backgrounds to address the growing issues of cyberspace; this can be done by conducting and creating global dialogue forums, which will act as the beacon of sharing best practices, advisories, threat assessment reports, potential threats and attacks thus establishing better inter-agency and inter-organisation communication and coordination.
- Data Anonymisation and Encryption: Organisations should have data management/processing policies in place for transparency and should always store data in an encrypted and anonymous manner, thus creating a blanket of safety in case of any data breach.
- Critical infrastructure: Industry leaders should push the limits of innovation by setting up state-of-the-art critical cyber infrastructure, creating employment and fostering a spirit of innovation and entrepreneurship among the youth, thus building a whole new generation of cyber-ready professionals and dedicated netizens. Critical infrastructure is essential to creating a safe, secure and resilient digital ecosystem.
- Cysec Audits & Sandboxing: All organisations should establish periodic routines of Cybersecurity audits, both by internal and external entities, to find any issue/grey area in the security systems. This will create a more robust and adaptive cybersecurity mechanism for the organisation and its employees. All tech developing and testing companies need to conduct proper sandboxing exercises for all or any new tech/software creation to identify its shortcomings and flaws.
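As a small illustration of the data anonymisation point above, the sketch below pseudonymises a direct identifier with a keyed hash, so that a leaked dataset no longer exposes the raw value. Everything here is hypothetical (the key, the record fields, the function name); it sketches one common technique, not Okta's or any specific organisation's practice.

```python
import hashlib
import hmac

# A secret key held by the organisation; in practice this would live in a
# secrets manager, never in source code (hypothetical value for illustration).
SECRET_KEY = b"rotate-me-regularly"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email) with a keyed hash.

    HMAC-SHA256 is deterministic, so the same user always maps to the same
    token (useful for analytics and deduplication), but the mapping cannot
    be reversed without the key -- limiting exposure if the dataset leaks.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "last_login": "2023-10-20"}
safe_record = {**record, "email": pseudonymise(record["email"])}
print(safe_record["email"][:12])  # a stable token, not the raw address
```

A plain unkeyed hash would be weaker here, since common identifiers like emails can be guessed and hashed by an attacker; the secret key is what prevents that dictionary-style reversal.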
Conclusion
In view of the rising cybersecurity attacks on organisations, especially small and medium companies, a lot has been done, and a lot more needs to be done, to establish safety and security for companies, employees and customers. The impact of the Okta breach clearly shows how cyber attacks can cause massive repercussions for any organisation in the form of monetary loss, loss of business, damage to reputation and more. We should treat such instances as examples and lessons, and prepare our organisations to combat similar threats, ultimately working to prevent such attacks and eradicate the influence of bad actors from our digital ecosystem altogether.
References:
- https://hbr.org/2023/05/the-devastating-business-impacts-of-a-cyber-breach#:~:text=In%202022%2C%20the%20global%20average,legal%20fees%2C%20and%20audit%20fees.
- https://www.okta.com/intro-to-okta/#:~:text=Okta%20is%20a%20secure%20identity,use%20to%20work%2C%20instantly%20available.
- https://www.cyberpeace.org/resources/blogs/mgm-resorts-shuts-down-it-systems-after-cyberattack
Introduction
Misinformation has the potential to impact people, communities and institutions alike, and the ramifications can be far-ranging. From influencing voter behaviours and consumer choices to shaping personal beliefs and community dynamics, the information we consume in our daily lives affects every aspect of our existence. And so, when this very information is flawed or incomplete, whether accidentally or deliberately so, it has the potential to confuse and mislead people.
‘Debunking’ is the process of exposing false information or countering inaccuracies and manipulation by presenting actual facts. The goal is to minimise the harmful effects of misinformation by informing and educating people. Debunking initiatives work to expose false information, cut down conspiracies, catalogue evidence of false information, clearly distinguish sources of misinformation from sources of accurate information, and assert the truth. Debunking treats capacity-building and public education as both a strategy and a goal.
Debunking is most effective when it comes from trusted sources, provides detailed explanations, and offers guidance and verifiable advice. Debunking is reactive in nature: it focuses on specific instances of misinformation and is closely tied to fact-checking. It aims to mitigate the impact of misinformation that has already spread, so the approach is to contain and correct post-occurrence. The most common method of debunking is collaboration between fact-checking groups and social media companies: when journalists or other fact-checkers identify false or misleading content, social media sites flag or label it as such, so that audiences are alerted. Debunking is an essential method for reducing the impact and incidence of misinformation by providing real facts and increasing the overall accuracy of content in the digital information ecosystem.
Role of Debunking the Misinformation
Debunking fights false or misleading information by correcting false claims, myths, and misinformation with evidence-based rebuttals, and by disseminating the debunked evidence to the public. By presenting evidence that contradicts misleading claims, debunking encourages individuals to develop fact-checking habits and proactively check for authenticated sources. It plays a vital role in boosting trust in credible sources by offering evidence-based corrections and enhancing the credibility of online information. By exposing falsehoods and endorsing qualities like information completeness and evidence-backed data and logic, debunking efforts help create a culture of well-informed, constructive public conversation and analytical exchange. Effectively dispelling myths and misinformation can help create communities and societies that are more educated, resilient, and goal-oriented.
Debunking as a Tailored Strategy to Counter Misinformation
Understanding the information environment and source trustworthiness is critical for developing effective debunking techniques. Successful debunking efforts use clear messages, appealing formats, and targeted distribution to reach a wide range of netizens. Effective debunking includes analysing successful efforts, using fact-checking, relying on reputable sources for corrections, and using scientific communication. Fact-checking plays a critical role in ensuring information accuracy and holding people accountable for making misleading claims. Collaborative efforts and transparent techniques can boost the credibility and efficacy of fact-checking activities and the legitimacy and effectiveness of debunking initiatives at a larger scale. Scientific communication is also critical for debunking myths about various topics by providing evidence-based information; clear and understandable framing of scientific knowledge is critical for engaging broad audiences and effectively refuting misinformation.
CyberPeace Policy Recommendations
- It is recommended that debunking initiatives must highlight core facts, emphasising what is true over what is wrong and establishing a clear contrast between the two. This is crucial as people are more likely to believe familiar information even if they learn later that it is incorrect. Debunking must provide a comprehensive explanation, filling the ‘information gap’ created by the myth. This can be done by explaining things as clearly as possible, as people may stop paying attention if they are faced with an overload of competing information. The use of visuals to illustrate core facts is an effective way to help people understand the issue and clearly tell the difference between information and misinformation.
- Individuals can play a role in debunking misinformation on social media by highlighting inconsistencies, recommending related articles with corrections or sharing trusted sources and debunking reports in their communities.
- Governments and regulatory agencies can improve information openness by demanding explicit source labelling and technical measures to be implemented on platforms. This can increase confidence in information sources and equip people to practice discernment when they consume content online. Governments should also support and encourage independent fact-checking organisations that are working to disprove misinformation. Digital literacy programmes may teach the public how to critically assess information online and spot any misinformation.
- Tech businesses may enhance algorithms for detecting and flagging misinformation, therefore reducing the propagation of misleading information. Offering options for people to report suspicious/doubtful information and misinformation can empower them and help them play an active role in identifying and rectifying inaccurate information online and foster a more responsible information environment on the platforms.
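One simple way a platform might surface previously debunked claims, as recommended above, is to fuzzily match new posts against a database of fact-checked claims. The sketch below is purely illustrative: the claim list is invented, the threshold is arbitrary, and a production system would use semantic similarity models rather than stdlib string matching.

```python
import difflib

# Hypothetical mini-database of claims already debunked by fact-checkers.
DEBUNKED_CLAIMS = [
    "actor criticises prime minister in viral video",
    "vaccine contains tracking microchips",
]

def flag_if_debunked(post: str, threshold: float = 0.6):
    """Return (closest debunked claim, score) if the post resembles one.

    Uses difflib's sequence-similarity ratio as a crude score; returns
    None when no debunked claim is similar enough to warrant a label.
    """
    post = post.lower()
    scored = [(difflib.SequenceMatcher(None, post, claim).ratio(), claim)
              for claim in DEBUNKED_CLAIMS]
    score, best = max(scored)
    return (best, round(score, 2)) if score >= threshold else None

print(flag_if_debunked("Actor criticises Prime Minister in a viral video!"))
print(flag_if_debunked("hello world"))  # None: nothing similar on file
```

Even this toy version shows the design tension in the recommendation: a low threshold flags more misinformation but risks mislabelling legitimate posts, which is why human review and user reporting channels remain part of the loop.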
Conclusion
Debunking is an effective strategy for countering widespread misinformation through a combination of fact-checking, scientific evidence, factual explanations, verified facts and corrections. It can play an important role in fostering a culture where people look for authenticity when consuming information and place a high value on trusted, verified sources. A collaborative strategy can increase the legitimacy and reach of debunking efforts, making them more effective at reaching larger audiences and easier to understand for a wide range of demographics. In a complex and ever-evolving digital ecosystem, it is important to build information resilience both at the macro level, for the ecosystem as a whole, and at the micro level, with the individual consumer. Only then can we ensure a culture of mindful, responsible content creation and consumption.
References