#FactCheck - Manipulated Image Alleging Disrespect Towards PM Circulates Online
Executive Summary:
A manipulated image showing someone making an offensive gesture towards Prime Minister Narendra Modi is circulating on social media. However, the original photo shows no such behaviour towards the Prime Minister. The CyberPeace Research Team analysed the image and found that the genuine photograph was published in a Hindustan Times article in May 2019, where no rude gesture is visible. A comparison of the viral and authentic images clearly shows the manipulation. The same photograph was also published by The Hitavada and ABP Live in 2019.

Claims:
A picture showing an individual making a derogatory gesture towards Prime Minister Narendra Modi is being widely shared across social media platforms.



Fact Check:
Upon receiving the claim, we immediately ran a reverse image search and found a Hindustan Times article carrying a similar photo, with no sign of any obscene gesture directed towards PM Modi.

ABP Live and The Hitavada also published the same image on their websites in May 2019.


Comparing the viral photo with the photo published on these official news websites, we found that the two match in almost every respect, except for the derogatory gesture that appears only in the viral image.
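For readers who want to try a similar comparison themselves, here is a minimal Python sketch using the Pillow and ImageHash libraries (the file names are hypothetical placeholders); it illustrates a perceptual-hash check, not the exact workflow our team used.

```python
# Illustrative sketch only, not the team's exact workflow: compare a viral
# image against the original using perceptual hashing.
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

original = Image.open("original_may2019.jpg")  # hypothetical file name
viral = Image.open("viral_version.jpg")        # hypothetical file name

# Perceptual hashes tolerate re-compression and resizing; a small non-zero
# distance suggests the same source image with a localized alteration.
distance = imagehash.phash(original) - imagehash.phash(viral)
print(f"Perceptual hash distance: {distance}")

if distance == 0:
    print("Images are perceptually identical.")
elif distance <= 10:
    print("Same source image with localized edits, i.e. possible manipulation.")
else:
    print("Images appear substantially different.")
```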

From this, we conclude that someone took the original image, published in May 2019, edited a disrespectful hand gesture into it, and circulated the edited version, which has recently gone viral across social media and has no connection with reality.
Conclusion:
In conclusion, the manipulated picture circulating online, showing someone making a rude gesture towards Prime Minister Narendra Modi, has been debunked by the CyberPeace Research Team. The viral image is simply an edited version of the original image published in 2019. This demonstrates the need for all social media users to verify information and facts before sharing, to prevent the spread of fake content. Hence, the viral image is fake and misleading.
- Claim: A picture shows someone making a rude gesture towards Prime Minister Narendra Modi
- Claimed on: X, Instagram
- Fact Check: Fake & Misleading

Tech News overview
Recently, TRAI issued recommendations that benefit the telecommunications industry in India. The suggestion, dating from July 2022, is to lower entry fees and bank guarantees; it was followed by consultation papers and counter-comments from stakeholders across various companies.
In a significant move, TRAI (Telecom Regulatory Authority of India) has proposed sweeping changes to entry fees and bank guarantees in the telecom sector. These recommendations are expected to usher in a new era of competition, investment, and innovation, reshaping India's telecommunications landscape.
Proposal Points by TRAI to telecom companies:
Before diving into TRAI's recommendations and their impact on crucial aspects of the telecom industry, it helps to understand the significance of entry fees, bank guarantees, and TRAI's own role.
- Entry fees: The upfront charges that telecom companies pay to the government for the right to offer services to the country's citizens. The amount is quite hefty and usually non-refundable.
- Bank guarantee: A form of financial security that assures the government that telecom companies will meet their financial obligations and comply with the regulatory and policy conditions specified in their license agreements.
- TRAI's role: The Telecom Regulatory Authority of India supervises the telecom industry in the country, ensuring that regulatory instruments such as entry fees and bank guarantees work as intended.
- Expected outcomes: TRAI focuses on reducing the entry fees for various types of licenses in the telecom sector. This encourages new telecom operators to enter the market, promotes fair pricing and investment, and enhances competition.
- Consolidating bank guarantees: TRAI has also proposed consolidating bank guarantees, so that telecom companies no longer need to maintain separate guarantees for different business licenses, creating an easier environment for doing business.
- No entry fee at the time of license renewal: TRAI recommends not charging any entry fee when telecom operators renew their licenses. This can reduce the financial burden on both existing players and new entrants, specifically UL (VNO) license holders.
Reshaping the telecom landscape:
TRAI's recommendations can potentially help reshape the telecom landscape in India in several ways:
- Healthier competition: By reducing entry fees, TRAI would make the market more affordable and profitable for new players in India.
- Market expansion: Lowering the entry fees might encourage new entrants, including regional and smaller players, to get involved in the telecom industry.
- This market expansion can potentially improve access to telecom services in underserved areas and regions and contribute to digital inclusion.
- Job creation: The growth of the telecom industry driven by new operators and increased investment can create jobs in both telecom and related technology-infrastructure industries.
- Consumer choice: As competition rises, consumers are likely to have more choice among telecom service providers and can select from a wider range of services, leading to better value for money.
- Quality of service: With increased competition and substantial investment, telecom operators have an incentive to enhance their quality of service.
Conclusion:
In conclusion, TRAI's proposal to lower entry fees and bank guarantee requirements marks a significant milestone for India's telecom industry. These changes hold the promise of fostering competition and investment, opening the door to new entrants, improving quality of service, and giving consumers a wider range of options. As these recommendations take effect, the telecom industry in India stands at the threshold of a transformation that could redefine the way we communicate and connect.

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force just 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI that has been in development for two years, enters into force across all 27 EU Member States on 1 August 2024, with a series of future deadlines; the majority of its provisions become enforceable on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, such as biometric categorization, untargeted scraping of facial images, emotion-recognition systems in the workplace and schools, and social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, meaning various deadlines apply as different legal provisions start to take effect.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
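To make that ceiling concrete, the following minimal Python sketch computes the "whichever is higher" cap described above; the turnover figures are hypothetical and this is an illustration, not legal guidance.

```python
# Illustrative sketch of the maximum fine cap described above;
# turnover figures are hypothetical, and this is not legal guidance.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

if __name__ == "__main__":
    for turnover in (100_000_000, 2_000_000_000):  # hypothetical turnovers
        print(f"Turnover EUR {turnover:,}: maximum fine EUR {max_fine_eur(turnover):,.0f}")
```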
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk applications such as social scoring systems and manipulative AI, while the regulation mostly addresses high-risk AI systems (a short sketch after this list illustrates the tiers).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI, such as chatbots and deepfakes.
- The AI Act allows the free use of minimal-risk AI. This covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, although this may change as generative AI advances.
- The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place them on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, as well as third-country providers whose high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-license GPAI models only need to comply with the Copyright Directive and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, whether open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the relevant obligations: the information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
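As a quick illustration of the tiered structure above, here is a minimal Python sketch that maps a few example use cases to the four risk tiers; the mapping is paraphrased from this overview and is illustrative only, not a legal classification.

```python
# Illustrative mapping of the AI Act's risk tiers as summarized above;
# the example use cases and their classification are illustrative only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (e.g., social scoring, manipulative AI)"
    HIGH = "allowed with obligations (data quality, documentation, oversight)"
    LIMITED = "lighter transparency duties (disclose chatbots, label deepfakes)"
    MINIMAL = "free use (e.g., spam filters, AI-enabled video games)"

EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI screening of job applicants": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```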
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (confidentiality and penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for General Purpose AI providers); Requirements for new GPAI models.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
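As a quick cross-check of the phased dates above, this minimal Python sketch (using the python-dateutil package and assuming entry into force on 1 August 2024) derives approximate application dates from the month offsets stated in the Act; the milestone labels are simplified.

```python
# Minimal sketch: derive approximate application dates from the phased
# offsets summarized above, assuming entry into force on 1 August 2024.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {  # months after entry into force (simplified labels)
    "Prohibitions (Chapters I & II)": 6,
    "GPAI rules, governance, penalties": 12,
    "Most remaining provisions": 24,
    "Article 6(1) high-risk obligations": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: ~{ENTRY_INTO_FORCE + relativedelta(months=months)}")
```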
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark piece of legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act will apply to AI systems and GPAI models, and it creates a tiered risk-categorization system with various regulations and stiff penalties for noncompliance, adopting a risk-based approach that categorizes potential risks into four tiers: unacceptable, high, limited, and low. Violations of banned systems carry the highest fine: €35 million, or 7 percent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for general-purpose AI (GPAI) models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards for AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, and it sets a global benchmark for regulating AI. Companies to which the Act applies will need to make sure their practices align with its requirements, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and ensuring compliance by the companies concerned both pose a conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Introduction
There is a rising desire for artificial intelligence (AI) laws that limit threats to public safety and protect human rights while allowing for a flexible and innovative environment. Most AI policies prioritize the use of AI for the public good. The most compelling reason to treat AI innovation as a valid goal of public policy is its promise to enhance people's lives by helping resolve some of the world's most difficult challenges and inefficiencies, and to emerge as a transformational technology, much like mobile computing. This blog explores the complex interplay between AI and internet governance from an Indian standpoint, examining the challenges, the opportunities, and the necessity for a well-balanced approach.
Understanding Internet Governance
Before delving into an examination of their connection, let's establish a comprehensive grasp of Internet Governance. This entails the regulations, guidelines, and criteria that influence the global operation and management of the Internet. With the internet being a shared resource, governance becomes crucial to ensure its accessibility, security, and equitable distribution of benefits.
The Indian Digital Revolution
India has witnessed an unprecedented digital revolution, with a massive surge in internet users and a burgeoning tech ecosystem. The government's Digital India initiative has played a crucial role in fostering a technology-driven environment, making technology accessible to even the remotest corners of the country. As AI applications become increasingly integrated into various sectors, the need for a comprehensive framework to govern these technologies becomes apparent.
AI and Internet Governance Nexus
The intersection of AI and Internet governance raises several critical questions. How should data, the lifeblood of AI, be governed? What role does privacy play in the era of AI-driven applications? How can India strike a balance between fostering innovation and safeguarding against potential risks associated with AI?
- AI's Role in Internet Governance:
Artificial Intelligence has emerged as a powerful force shaping the dynamics of the internet. From content moderation and cybersecurity to data analysis and personalized user experiences, AI plays a pivotal role in enhancing the efficiency and effectiveness of Internet governance mechanisms. Automated systems powered by AI algorithms are deployed to detect and respond to emerging threats, ensuring a safer online environment.
A comprehensive strategy for managing the interaction between AI and the internet is required to stimulate innovation while limiting hazards. Multistakeholder models, drawing on input from governments, industry, academia, and civil society, are gaining appeal as viable tools for developing comprehensive governance frameworks.
The usefulness of multistakeholder governance stems from its adaptability and flexibility, as it requires collaboration from every player with a possible stake in an issue. Though imperfect, the approach allows its flaws to be identified and remedied through iterative knowledge-building. As AI advances, this trait will become increasingly important in ensuring that all conceivable aspects are covered.
The Need for Adaptive Regulations
While AI's potential for good is essentially endless, so is its potential for harm, whether intentional or unintentional. The technology's highly disruptive nature calls for a strong, human-led governance framework and rules that ensure it is used in a positive and responsible manner. The rapid emergence of GenAI, in particular, underscores the critical need for strong frameworks. Concerns about the use of GenAI may strengthen efforts to address digital governance issues and hasten the formation of risk management measures.
Several AI governance frameworks have been published throughout the world in recent years, with the goal of offering high-level guidelines for safe and trustworthy AI development. The OECD's "Principles on Artificial Intelligence" (OECD, 2019), the EU's "Ethics Guidelines for Trustworthy AI" (EU, 2019), and UNESCO's "Recommendations on the Ethics of Artificial Intelligence" (UNESCO, 2021) are among the multinational organizations that have released their own principles. However, the advancement of GenAI has resulted in additional recommendations, such as the OECD's newly released "G7 Hiroshima Process on Generative Artificial Intelligence" (OECD, 2023).
Several guidance documents and voluntary frameworks have emerged at the national level in recent years, including the "AI Risk Management Framework" from the United States National Institute of Standards and Technology (NIST), a voluntary guidance published in January 2023, and the White House's "Blueprint for an AI Bill of Rights," a set of high-level principles published in October 2022 (The White House, 2022). These voluntary policies and frameworks are frequently used as guidelines by regulators and policymakers all around the world. More than 60 nations in the Americas, Africa, Asia, and Europe had issued national AI strategies as of 2023 (Stanford University).
Conclusion
Monitoring AI will be one of the most daunting tasks confronting the international community in the decades to come. As vital as the need to govern AI is the need to regulate it appropriately. Current AI policy debates too often fall into a false dichotomy of progress versus doom (or geopolitical and economic benefits versus risk mitigation), and instead of thinking creatively, solutions all too often resemble paradigms for yesterday's problems. It is imperative that we foster a relationship that prioritizes innovation, ethical considerations, and inclusivity. Striking the right balance will empower us to harness the full potential of AI within the boundaries of responsible and transparent Internet Governance, ensuring a digital future that is secure, equitable, and beneficial for all.
References
- The Key Policy Frameworks Governing AI in India - Access Partnership
- AI in e-governance: A potential opportunity for India (indiaai.gov.in)
- India and the Artificial Intelligence Revolution - Carnegie India - Carnegie Endowment for International Peace
- Rise of AI in the Indian Economy (indiaai.gov.in)
- The OECD Artificial Intelligence Policy Observatory - OECD.AI
- Artificial Intelligence | UNESCO
- Artificial intelligence | NIST