World Environment Day 2025: The Hidden Cost of Our Digital Lives
On June 5th, the world comes together to reflect on how the way we live impacts the environment. We discuss conserving water, cutting back on plastic, and planting trees, but how often do we think about the environmental impact of our digital lives?
In a world that grows more interconnected by the day, the internet is ubiquitous yet invisible. It powers our communications, our meetings, and our memories. But this digital convenience comes at a price: carbon emissions.
A Digital Carbon Footprint: What Is It?
Electricity is necessary for every video we stream, email we send, and file we store in the cloud. Yet almost 60% of the electricity produced today still comes from burning fossil fuels. The digital world consumes an enormous amount of energy, from the power-hungry data centres that house our information to the networks that transmit it. The greenhouse gas emissions produced by our use of digital tools and services are therefore referred to as our "digital carbon footprint."
To put it in perspective:
- Streaming an hour of HD video on your phone can produce roughly 150–200 grams of CO₂.
- A typical email releases about 4 grams of CO₂, and more if it contains attachments.
- The internet as a whole accounts for an estimated 1.5% to 4% of global greenhouse gas emissions, a share comparable to the airline industry's.
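Using the rough per-activity figures above (which vary widely by device, network, and grid mix), a back-of-the-envelope sketch of a weekly digital footprint might look like this. The function name and example numbers are illustrative, not from any standard:

```python
# Back-of-the-envelope digital carbon footprint estimator.
# The constants below use the rough figures quoted above; real-world
# values vary widely by device, network, and electricity grid mix.

CO2_PER_HOUR_HD_STREAM_G = 175  # midpoint of the 150-200 g/hour estimate
CO2_PER_EMAIL_G = 4             # plain email, no attachments

def estimate_weekly_footprint_g(stream_hours: float, emails: int) -> float:
    """Return an estimated weekly footprint in grams of CO2."""
    return stream_hours * CO2_PER_HOUR_HD_STREAM_G + emails * CO2_PER_EMAIL_G

# Example: 10 hours of HD streaming and 100 emails in a week
weekly = estimate_weekly_footprint_g(10, 100)
print(f"~{weekly / 1000:.2f} kg CO2 per week")  # ~2.15 kg
```

Even this toy estimate makes the point: ordinary online habits add up to kilograms of CO₂ every week.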
Why It Matters
Ironically, although digital life often feels "clean" and weightless, it is backed by enormous, power-hungry infrastructure. Our online activity is also growing rapidly as digital penetration increases, and with the advent of AI and big data, the demand for energy will only rise. The harms of air, water, and soil degradation and biodiversity loss are already upon us. World Environment Day is the right moment to reconsider how we use technology.
What Can You Do?
The good news is that even minor adjustments to our online conduct can have an impact.
🗑️ Clear out your digital clutter by getting rid of unnecessary emails, apps, and files.
📥 Unsubscribe from mailing lists that you no longer use.
📉 Stream videos at lower quality when HD is not required.
⚡ Use energy-efficient gadgets and unplug them when not in use.
🌐 Make the move to renewable energy-powered, environmentally friendly cloud providers.
🗳️ Support informed policy by engaging with your elected representatives and advocating for greener tech policies. Knowing your digital rights and responsibilities can help shape smarter policies and a healthier planet.
We at the CyberPeace Foundation think that cyberspace needs to be sustainable. An eco-friendly digital world is also a safer one, where all communities can thrive in harmony. We must promote digital responsibility, including its environmental component, as we work towards digital equity and resilience.
On this World Environment Day, let's go one step further and work towards a greener internet as well as a greener planet.

Introduction
A Reuters investigation has exposed the gap between Meta Platforms' public commitments and its internal measures to address online fraud and illicit advertising. Confidential documents reviewed by Reuters disclosed that Meta projected approximately 10% of its 2024 revenue, i.e., USD 16 billion, to come from ads related to scams and prohibited goods. The findings point to a disturbing paradox: on the one hand, Meta is a vocal advocate for digital safety and platform integrity; on the other, the company's internal records indicate wide latitude for fraudulent advertising that exploits users throughout the world.
The Scale of the Problem
Internal Meta projections show that its platforms, Facebook, Instagram, and WhatsApp, are displaying a staggering 15 billion scam ads per day combined. The advertisements include deceitful e-commerce promotions, fake investment schemes, counterfeit medical products, and unlicensed gambling platforms.
Meta has developed sophisticated detection tools, yet the company does not ban an advertiser unless its systems are at least 95% certain that the advertiser is a fraudster. By setting the removal threshold that high, the company limits how much revenue it forgoes. Rather than turning fraud-adjacent advertisers away, it charges them higher ad rates, a strategy referred to internally as "penalty bids."
Internal Acknowledgements & Business Dependence
Internal documents that date between 2021 and 2025 reveal that the financial, safety, and lobbying divisions of Meta were cognizant of the enormity of revenues generated from scams. One of the 2025 strategic papers even describes this revenue source as "violating revenue," which implies that it includes ads that are against Meta's policies regarding scams, gambling, sexual services, and misleading healthcare products.
The company's top executives weighed the costs and benefits of stricter enforcement. According to a 2024 internal projection, Meta's half-yearly earnings from high-risk scam ads were estimated at USD 3.5 billion, whereas regulatory fines for such violations would not exceed USD 1 billion, making lax enforcement a tolerable trade-off from a commercial viewpoint. The company does intend to scale down scam ad revenue gradually, from 10.1% in 2024 to 7.3% by 2025 and 6% by 2026; however, the documents also reveal a planned slowdown in enforcement to avoid "abrupt reductions" that could affect business forecasts.
Algorithmic Amplification of Scams
One of the most alarming findings is that Meta's own advertising algorithms amplify scam content. It has been reported that users who click on fraudulent ads are more likely to see similar ads afterwards, because the platform's personalisation engine interprets the click as "interest."
This scenario creates a self-reinforcing feedback loop where the user engagement with scam content dictates the amount of such content being displayed. Thus, a digital environment is created which encourages deceptive engagement and consequently, user trust is eroded and systemic risk is amplified.
An internal presentation in May 2025 was said to put a number on how deeply the platform's ad ecosystem was intertwined with the global fraud economy, estimating that one-third of the scams that succeeded in the U.S. were due to advertising on Meta's platforms.
Regulatory & Legal Implications
The disclosures arrived just as the US and UK governments began scrutinising the company's activities more closely than ever before.
- The U.S. Securities and Exchange Commission (SEC) is said to be looking into whether Meta has had any part in the promotion of fraudulent financial ads.
- The UK’s Financial Conduct Authority (FCA) found that Meta’s platforms were the main source of scams related to online payments, and that the money lost through them in 2023 exceeded losses on all other social platforms combined.
Meta’s spokesperson, Andy Stone, at first denied the accusations, stating that the figures mentioned in the leak were “rough and overly-inclusive”; nevertheless, he conceded that the company’s consistent efforts toward enforcement had negatively impacted revenue and would continue to do so.
Operational Challenges & Policy Gaps
The internal documents also reveal the weaknesses in Meta's day-to-day operations when it comes to the implementation of its own policies.
- Amid the mass layoffs of 2023, the entire team that dealt with advertiser-brand impersonation was reportedly dissolved.
- Scam ads were categorised as a "low severity" issue, treated more as a "bad user experience" than as a critical security risk.
- At the end of 2023, users were submitting around 100,000 legitimate scam reports per week, of which Meta dismissed or rejected 96%.
Human Impact: When Fraud Becomes Personal
The financial and ethical issues have tangible human consequences. The Reuters investigation documented multiple cases of individuals defrauded through hijacked Meta accounts.
One striking example involves a Canadian Air Force recruiter, whose hacked Facebook account was used to promote fake cryptocurrency schemes. Despite over a hundred user reports, Meta failed to act for weeks, during which several victims, including military colleagues, lost tens of thousands of dollars.
The case underscores not just platform negligence, but also the difficulty of law enforcement collaboration. Canadian authorities confirmed that funds traced to Nigerian accounts could not be recovered due to jurisdictional barriers, a recurring issue in transnational cyber fraud.
Ethical and Cybersecurity Implications
The investigation raises fundamental questions from a cyber policy perspective:
- Platform Accountability: By prioritising revenue over user safety, Meta's practices run counter to the principles of responsible digital governance.
- Transparency in Ad Ecosystems: The opacity of digital advertising systems makes it easy for dishonest actors to exploit automated processes with little supervision.
- Algorithmic Responsibility: When algorithms amplify the visibility and targeting of misleading content, the platform becomes directly implicated in the fraud.
- Regulatory Harmonisation: Fragmented and disconnected enforcement frameworks across jurisdictions hamper efforts to tackle cross-border cybercrime.
- Public Trust: Users’ trust in the digital world depends largely on the safety they perceive and the accountability companies demonstrate.
Conclusion
Meta’s records reveal an unpleasant mix of profit-seeking, laxity, and policy failure around scam-related ads. The platform’s readiness to accept, and even profit from, fraudulent players while acknowledging the damage they cause calls for an immediate global rethinking of advertising ethics, regulatory enforcement, and algorithmic transparency.
With the expansion of its AI-driven operations and advertising networks, protecting Meta’s users must evolve from a public relations goal into a core business necessity, backed by verifiable accountability measures, independent audits, and regulatory oversight. Billions of users depend on Meta’s platforms, and their right to digital safety must be respected and enforced rather than left optional.
References
- https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/?utm_source=chatgpt.com
- https://www.indiatoday.in/technology/news/story/leaked-docs-claim-meta-made-16-billion-from-scam-ads-even-after-deleting-134-million-of-them-2815183-2025-11-07

Introduction
“an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not host, store or publish any unlawful information, which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force”
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can invoke “actual knowledge” to initiate a takedown, and they require senior-level scrutiny of those directives. In doing so, they preserve genuine security requirements while steering India’s content governance system towards more transparent due process.
When Regulation Learns Restraint
To understand the jurisprudence of revision, one must first understand that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media sites, hosting companies, and other digital intermediaries are still required to act within 36 hours of receiving “actual knowledge” that a piece of content is illegal (e.g., poses a threat to public order, sovereignty, decency, or morality). However, “actual knowledge” now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
The authorised officer in matters involving the police “must not be below the rank of Deputy Inspector General of Police (DIG)”. This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
There are two further structural guardrails. First, the Rules establish a monthly review of all takedown notifications by a Secretary-level officer of the relevant government, testing necessity, proportionality, and compliance with India’s safe-harbour provision under Section 79(3) of the IT Act. Second, so that platforms act precisely rather than expansively, takedown requests must be accompanied by a legal justification, a description of the illegal act, and precise URLs or identifiers. The cumulative result is that every removal carries a proportionality check and a paper trail.
Due Process as the Law’s Conscience
Indian jurisprudence has been debating what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) connected an intermediary’s removal obligation to notifications from official channels or court orders rather than vague notice. But over time, that line became hazy through enforcement practice and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even as more recent decisions questioned intermediaries’ “reasonable efforts”, the 2025 amendment pays institutional homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal, reviewable communications from senior state actors or judges.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, which are criteria that Indian courts are increasingly requesting for speech restrictions. Clearer triggers, better logs, and less vague “please remove” communications that previously left compliance teams in legal limbo are the results for intermediaries.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court’s ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. In September, the HC rejected X’s (formerly Twitter’s) petition contesting the legitimacy of the portal. The company had claimed that, by giving nodal officers the authority to issue takedown orders without court review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers’ acts were in accordance with Section 79(3)(b) and that the orders were “not dropping from the air but emanating from statutes.” By conforming to the Sahyog verdict, the amendment turns compliance into conscience, reiterating that due process is the moral grammar of governance rather than just a formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination; it is a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India’s path to content governance has been combative and iterative. The stays, split judgments, and strike-downs produced by strategic litigation centred on the IT Rules, safe harbour, government fact-checking, and blocking orders have sharpened the next rule-making cycle. The lessons learnt are reflected in the 2025 amendment: review triumphs over opacity; specificity triumphs over vagueness; and due process triumphs over discretion. That is how a digital republic balances freedom and force.
Sources
- https://pressnews.in/law-and-justice/government-notifies-amendments-to-it-rules-2025-strengthening-intermediary-obligations/
- https://www.meity.gov.in/static/uploads/2025/10/90dedea70a3fdfe6d58efb55b95b4109.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719
- https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/
- https://www.medianama.com/2025/10/223-content-takedown-rules-online-platforms-36-hr-deadline-officer-rank/#:~:text=It%20specifies%20that%20government%20officers,Deputy%20Inspector%20General%20of%20Police%E2%80%9D.

What are Decentralised Autonomous Organizations (DAOs)?
A Decentralised Autonomous Organisation, or DAO, is a unique take on democracy on the blockchain. It is a set of rules encoded into a self-executing contract (also known as a smart contract) that operates autonomously on a blockchain system. A DAO imitates a traditional company, although, in a more literal sense, it is a contractually created entity. In theory, DAOs have no centralised decision-making authority; they are communally run systems in which all decisions, whether for internal governance or for the development of the blockchain system, are voted upon by community members. DAOs are primarily characterised by a decentralised mode of operation, with no single entity, group, or individual running the system. They are self-sustaining entities with their own currency, economy, and even governance, and they do not depend on any one group of individuals to operate. Blockchain systems, and DAOs especially, are built for autonomy, designed to evade external coercion or manipulation by sovereign powers. DAOs follow a mutually created and agreed set of rules, framed by the community, that dictates all actions, activities, and participation in the system’s governance. There may also be provisions that regulate the decision-making power of the community.
Ethereum’s DAO white paper described a DAO as “the first implementation of a [DAO Entity] code to automate organisational governance and decision making” that “can be used by individuals working together collaboratively outside of a traditional corporate form. It can also be used by a registered corporate entity to automate formal governance rules contained in corporate bylaws or imposed by law.” The white paper proposes an entity that would use smart contracts to solve governance issues inherent in traditional corporations. DAOs attempt to redesign corporate governance with blockchain such that contractual terms are “formalised, automated and enforced using software.”
Cybersecurity threats under DAOs
While DAOs offer increased transparency and efficiency, they are not immune to cybersecurity threats. Cybersecurity risks in DAOs, particularly in governance, stem from vulnerabilities in the underlying blockchain technology and in the DAO's smart contracts. Smart contract exploits, code vulnerabilities, and weaknesses in the underlying blockchain protocol can be exploited by malicious actors, leading to unauthorised access, fund manipulation, or disruption of the governance process. Additionally, DAOs may face phishing attacks, where individuals are tricked into revealing sensitive information such as private keys, compromising the integrity of the governance structure. As DAOs continue to evolve, addressing and mitigating cybersecurity threats is crucial to ensuring the trust and reliability of decentralised governance mechanisms.
Centralisation/Concentration of Power
DAOs today actively leverage on-chain governance, where governance votes and transactions are taken directly on the blockchain. But such governance is often plutocratic rather than democratic: only those who hold the requisite number of tokens may vote, and each token staked counts as an additional vote for the same individual, so the wealthy hold outsized influence. This concentration of power in the hands of “whales” disadvantages newer entrants who may have deep domain expertise but lack the funds to cast a meaningful vote. Voting in the blockchain sphere presently lacks the “one person, one vote” principle that is critical in democratic societies.
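The plutocratic tilt of token-weighted voting is easy to see in a small sketch. The tally function and the voter names below are hypothetical, but the "one token, one vote" rule it implements is the pattern many on-chain governance systems use:

```python
# Token-weighted ("one token, one vote") tallying, as used by many
# on-chain governance systems. Names and numbers are illustrative.

def tally(votes: dict) -> dict:
    """votes maps voter -> (tokens staked, choice); returns totals per choice."""
    totals: dict = {}
    for tokens, choice in votes.values():
        totals[choice] = totals.get(choice, 0) + tokens
    return totals

# One "whale" outvotes 500 smaller holders combined.
votes = {
    "whale": (1_000_000, "yes"),
    **{f"member_{i}": (100, "no") for i in range(500)},  # 50,000 tokens total
}
print(tally(votes))  # {'yes': 1000000, 'no': 50000}
```

Under a head-count rule the proposal would fail 500 to 1; under token weighting it passes 20 to 1, which is the concentration-of-power problem in miniature.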
Smart contract vulnerabilities and external threats
Smart contracts, self-executing pieces of code on a blockchain, are integral to decentralised applications and platforms. Despite their potential, smart contracts are susceptible to various vulnerabilities, such as coding errors where mistakes in the code can lead to funds being locked or released erroneously. Some of these vulnerabilities are outlined below:
Smart contracts are particularly prone to reentrancy attacks, in which untrusted external code is allowed to execute inside a smart contract. This occurs when a smart contract invokes an external contract, and the external contract subsequently re-invokes the initial contract before the first invocation has finished updating its state. A reentrancy attack exploits this sequence: it enables an attacker to repeatedly invoke a function within the contract, potentially creating an endless loop and gaining unauthorised access to funds.
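The control flow of a reentrancy attack can be sketched in a toy Python model. Real exploits target EVM smart contracts written in languages like Solidity; this hypothetical vault only illustrates the ordering bug, paying out via an external call before updating the caller's balance:

```python
# Toy model of a reentrancy bug (illustrative only; real attacks target
# EVM smart contracts). The vulnerable "contract" invokes external code
# BEFORE zeroing the caller's balance, so a malicious callback can
# re-enter withdraw() and drain far more than it deposited.

class VulnerableVault:
    def __init__(self, funds):
        self.funds = funds   # total assets held by the vault
        self.balances = {}   # per-user credited balances

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.funds += amount

    def withdraw(self, user, receive_callback):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.funds >= amount:
            self.funds -= amount
            receive_callback(self)       # external call first (the bug)
            self.balances[user] = 0      # state update last: too late

# The attacker's callback re-enters withdraw() until the vault runs dry.
def attacker_callback(vault):
    bal = vault.balances.get("attacker", 0)
    if bal > 0 and vault.funds >= bal:
        vault.withdraw("attacker", attacker_callback)

vault = VulnerableVault(funds=100)
vault.deposit("attacker", 10)
vault.withdraw("attacker", attacker_callback)
print(vault.funds)  # 0: a 10-token deposit drained all 110 tokens
```

The standard defence is to update state before making any external call (the checks-effects-interactions pattern): zeroing `self.balances[user]` before invoking the callback makes every re-entrant withdrawal see a zero balance and stop.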
Additionally, smart contracts are also prone to oracle problems. Oracles refer to third-party services or mechanisms that provide smart contracts with real-world data. Since smart contracts on blockchain networks operate in a decentralised, isolated environment, they do not have direct access to external information, such as market prices, weather conditions, or sports scores. Oracles bridge this gap by acting as intermediaries, fetching and delivering off-chain data to smart contracts, enabling them to execute based on real-world conditions. The oracle problem within blockchain pertains to the difficulty of securely incorporating external data into smart contracts. The reliability of external data poses a potential vulnerability, as oracles may be manipulated or provide inaccurate information. This challenge jeopardises the credibility of blockchain applications that rely on precise and timely external data.
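One common mitigation for the oracle problem is to aggregate several independent feeds instead of trusting a single source. The sketch below (hypothetical price values) shows how taking the median limits the influence of any one corrupted oracle:

```python
# Sketch of the oracle problem: a contract that trusts one price feed
# can be manipulated, while the median of several independent feeds
# bounds the damage a single corrupted oracle can do. Values are made up.
from statistics import median

honest_feeds = [100.2, 99.8, 100.1]
corrupted_feed = [5.0]  # one manipulated oracle reports a fake crash

single_source_price = corrupted_feed[0]               # naive: trust one feed
robust_price = median(honest_feeds + corrupted_feed)  # median of all 4 feeds

print(single_source_price)  # 5.0  -> contract would mis-liquidate
print(robust_price)         # ~99.95 -> close to the true market price
```

As long as a majority of the feeds stay honest, the median cannot be dragged far from the true value, which is why production oracle networks aggregate many independent reporters.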
Sybil Attack: A Sybil attack involves a single node managing multiple active fake identities, known as Sybil identities, concurrently within a peer-to-peer network. The objective of such an attack is to weaken the authority or influence within a trustworthy system by acquiring the majority of control in the network. The fake identities are utilised to establish and exert this influence. A successful Sybil attack allows threat actors to perform unauthorised actions in the system.
Distributed Denial of Service Attacks: A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the regular functioning of a network, service, or website by overwhelming it with a flood of traffic. In a typical DDoS attack, multiple compromised computers or devices, often part of a botnet (a network of infected machines controlled by a single entity), are used to generate a massive volume of requests or data traffic. The targeted system becomes unable to respond to legitimate user requests due to the excessive traffic, leading to a denial of service.
Conclusion
Decentralised Autonomous Organisations (DAOs) represent a pioneering approach to governance on the blockchain, relying on smart contracts and community-driven decision-making. Despite their potential for increased transparency and efficiency, DAOs are not immune to cybersecurity threats. Vulnerabilities in smart contracts, such as reentrancy attacks and oracle problems, pose significant risks, and the concentration of voting power among wealthy token holders raises concerns about democratic principles. As DAOs continue to evolve, addressing these challenges is essential to ensuring the resilience and trustworthiness of decentralised governance mechanisms. Efforts to enhance security measures, promote inclusivity, and refine governance models will be crucial in establishing DAOs as robust and reliable entities in the broader landscape of blockchain technology.
References:
https://www.imperva.com/learn/application-security/sybil-attack/
https://www.linkedin.com/posts/satish-kulkarni-bb96193_what-are-cybersecurity-risk-to-dao-and-how-activity-7048286955645677568-B3pV/
https://www.geeksforgeeks.org/what-is-ddosdistributed-denial-of-service/
Report of Investigation Pursuant to Section 21(a) of the Securities Exchange Act of 1934: The DAO, Securities and Exchange Commission, Release No. 81207, July 25, 2017: https://www.sec.gov/litigation/investreport/34-81207.pdf
https://www.legalserviceindia.com/legal/article-10921-blockchain-based-decentralized-autonomous-organizations-daos-.html