WOO (WOO) sustainability report
| Item | Value |
| --- | --- |
| Name | BlockNodes SAS |
| Relevant legal entity identifier | 969500PZJWT3TD1SUI59 |
| Name of the crypto-asset | WOO |
| Beginning of the period to which the disclosure relates | 2025-04-29 |
| End of the period to which the disclosure relates | 2026-04-29 |
| Energy consumption | 402.62968 kWh/a |
Consensus Mechanism
WOO is present on the following networks: Arbitrum, Avalanche, Base, Binance Smart Chain, Ethereum, Fantom, Linea, NEAR Protocol, Polygon, Solana, zkSync.
Arbitrum, an innovative Layer 2 scaling solution built on top of Ethereum, utilizes an Optimistic Rollup consensus mechanism to significantly enhance transaction scalability and reduce operational costs. This optimistic approach operates on the fundamental assumption that all transactions processed off-chain are valid by default. Consequently, transactions only undergo a rigorous verification process if their validity is explicitly challenged during a specific time window.
The core architecture of the Arbitrum network integrates several key components essential for its functionality. The Sequencer plays a pivotal role by efficiently ordering user transactions and aggregating them into batches, which are then processed off-chain. This mechanism is critical for achieving high transaction throughput and maintaining network efficiency. A Bridge facilitates secure and seamless transfers of assets between the Arbitrum Layer 2 environment and the underlying Ethereum Layer 1 mainnet, ensuring interoperability and leveraging Ethereum's robust security. Safeguarding the network from malicious activities are Fraud Proofs, an interactive verification system designed to detect and invalidate fraudulent transactions.
The transaction verification process unfolds as follows: users first submit their transactions to the Arbitrum Sequencer. The Sequencer orders these transactions, bundles them into batches, and subsequently submits these batches along with a cryptographic "state commitment" to the Ethereum mainnet. A crucial "challenge period" then commences, during which any network validator can initiate a fraud proof if they suspect an invalid state transition. Should a challenge be raised, an iterative dispute resolution protocol is activated to pinpoint the exact fraudulent step. If fraud is confirmed, the system rolls back the incorrect state, and the dishonest party is subjected to penalties. The final, validated state is then executed on the Ethereum blockchain, preserving the rollup's integrity. This combination of off-chain computation, batching, and on-chain fraud detection, as seen in networks built on the Arbitrum Nitro stack like Kinto, enables high transaction volumes at considerably lower fees.
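The challenge-period logic described above can be sketched as a toy model. All names here are illustrative, and the 7-day window is an assumed example value, not Arbitrum's actual implementation:

```python
from dataclasses import dataclass

CHALLENGE_PERIOD = 7 * 24 * 3600  # illustrative 7-day window, in seconds

@dataclass
class Batch:
    state_commitment: str
    posted_at: int          # L1 timestamp when the batch was posted
    challenged: bool = False

def can_finalize(batch: Batch, now: int) -> bool:
    """A batch is final only once its challenge window has elapsed unchallenged."""
    return (not batch.challenged) and now >= batch.posted_at + CHALLENGE_PERIOD

batch = Batch("0xabc", posted_at=0)
assert not can_finalize(batch, now=3600)          # still inside the window
assert can_finalize(batch, now=CHALLENGE_PERIOD)  # window elapsed, unchallenged
```

The key property the sketch captures is that finality is withheld, not granted, by default: a batch matures into finality only by surviving the window without a successful fraud proof.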
The Avalanche blockchain network implements a Proof-of-Stake (PoS) mechanism known as Avalanche Consensus, distinguishing itself from many other PoS protocols by incorporating a novel, subsampling-based approach rather than classical quorum-based Byzantine Fault Tolerant (BFT) voting. This consensus family is built from three layered protocols: Snowflake, Snowball, and Avalanche, all working in concert to achieve high throughput, rapid finality, and robust security. At the base, the Snowflake protocol provides a binary decision procedure: each validator repeatedly samples a small, fixed-size group of other validators and, when a sufficiently large majority of a sample supports one of two conflicting transactions, adopts that choice as its preference. Building upon Snowflake, the Snowball protocol adds confidence counters: validators increment the counter for their current preference each time a sampled majority confirms it, and a transaction is deemed accepted once its confidence counter surpasses a predefined threshold, at which point the decision becomes final and irreversible. The overarching Avalanche protocol organizes transactions using a Directed Acyclic Graph (DAG) structure. This DAG architecture is crucial for facilitating parallel transaction processing, which significantly enhances the network's overall throughput and efficiency. Transactions are added to the DAG based on their intrinsic dependencies, ensuring a consistent and logical order across the network. Ultimately, validators reach consensus on both the structure and content of this DAG through the iterative application of the Snowflake and Snowball protocols.
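The repeated-subsampling idea can be illustrated with a toy single-node loop. This is a simplification, not the production protocol: real Avalanche consensus uses stake-weighted sampling and per-value counters, and the parameters `k`, `alpha`, and `beta` below are assumed example values:

```python
import random

def snowball_decide(peer_prefs, k=5, alpha=4, beta=10, seed=7):
    """Toy Snowball loop: repeatedly poll k random peers; if at least
    alpha of them agree on one value, bump the confidence counter
    (switching preference if needed); accept once confidence >= beta."""
    rng = random.Random(seed)
    preference, confidence = peer_prefs[0], 0
    while confidence < beta:
        sample = rng.sample(peer_prefs, k)
        majority = max(set(sample), key=sample.count)
        if sample.count(majority) >= alpha:
            if majority == preference:
                confidence += 1
            else:
                preference, confidence = majority, 1
        else:
            confidence = 0  # inconclusive poll: start counting again
    return preference

# 80 of 100 simulated peers prefer transaction "A"; polling converges on it
assert snowball_decide(["A"] * 80 + ["B"] * 20) == "A"
```

Even this toy version shows why the mechanism scales: each node only ever talks to `k` peers per round, regardless of total network size.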
The Avalanche X-Chain, a component of the broader Avalanche network, also utilizes this Avalanche consensus protocol, emphasizing repeated subsampling of validators to achieve agreement on transactions. Furthermore, networks like Flare integrate the Avalanche Consensus with a Federated Byzantine Agreement (FBA) model to further bolster scalability, security, and decentralization, leveraging a gossip protocol for rapid node communication and transaction confirmation.
Base operates as a Layer-2 (L2) scaling solution built on the Ethereum blockchain, having been developed by Coinbase using Optimism's OP Stack. Critically, Base L2 transactions do not possess an independent consensus mechanism. Instead, their validation is directly linked to and secured by the underlying Ethereum Layer-1 (L1) network. This is achieved through a specialized component known as a sequencer. The sequencer's role is to aggregate multiple L2 transactions into bundles, which are then regularly published to the Ethereum mainnet as a single L1 transaction.
Consequently, all transactions processed on the Base network are indirectly secured by Ethereum's robust Proof-of-Stake (PoS) consensus mechanism once they are recorded on L1. Ethereum's PoS system, established with "The Merge" in 2022, moves away from energy-intensive mining by requiring validators to stake at least 32 ETH. In this system, a validator is randomly selected every 12 seconds to propose a new block, while other validators on the network are responsible for verifying its integrity. The network employs a sophisticated slot and epoch system, with transaction finality typically occurring after two epochs, which translates to approximately 12.8 minutes, utilizing the Casper-FFG protocol. The Beacon Chain is central to coordinating validators, and the LMD-GHOST fork-choice rule ensures the chain adheres to the path with the most accumulated validator votes. Validators are incentivized with rewards for their participation in proposing and verifying blocks, but face stringent penalties, known as slashing, for any malicious actions or prolonged inactivity. This design choice by Ethereum aims to significantly enhance energy efficiency, security, and scalability, with ongoing and future upgrades, such as Proto-Danksharding, further targeting improvements in transaction processing efficiency, thereby benefiting Base as its foundational security layer. Base specifically leverages Optimistic Rollups as part of the OP Stack, meaning transactions are presumed valid unless challenged within a specified period via fault proofs.
The Binance Smart Chain (BSC) network utilizes a hybrid consensus mechanism known as Proof of Staked Authority (PoSA). This innovative approach integrates key elements from both Delegated Proof of Stake (DPoS) and Proof of Authority (PoA) to achieve a balance of high transaction speeds, cost-efficiency, and network security, while striving to maintain a reasonable level of decentralization. The core participants in the PoSA mechanism include Validators, referred to as "Cabinet Members," Delegators, and Candidates.
Validators play a critical role, being responsible for creating new blocks, verifying transactions, and upholding the overall security of the network. To qualify as a validator, an entity must stake a substantial quantity of BNB, which serves as collateral to ensure honest conduct. These validators are selected through a dynamic process that considers both the amount of BNB they have staked and the votes they receive from token holders. At any given time, there are 21 active validators, whose rotation aims to enhance decentralization and security. Delegators are token holders who opt not to operate a validator node themselves but can contribute to network security by delegating their BNB tokens to chosen validators. This delegation bolsters a validator's total stake, increasing their likelihood of being selected for block production. In return, delegators receive a share of the rewards earned by their chosen validators, fostering broader participation in network governance and security. Candidates represent potential validators who have met the minimum BNB staking requirements and are awaiting election into the active validator set through community voting. Their presence ensures a continuous pool of ready-to-serve nodes, contributing to the network's resilience and decentralization.
During the consensus process, validators are chosen based on their accumulated BNB stake and delegator votes. The higher these metrics, the greater the chance of selection for validating transactions and producing new blocks. Once selected, these validators take turns in a PoA-like fashion to produce blocks rapidly and efficiently, validating transactions, adding them to blocks, and broadcasting them across the network. BSC boasts fast block times, typically around 3 seconds, leading to quick transaction finality. This rapid finality is a direct benefit of the efficient PoSA mechanism, which allows validators to reach consensus swiftly. To further ensure network integrity, validators face economic incentives such as slashing, where a portion of their staked BNB can be forfeited if they engage in malicious activities. This mechanism aligns validators' interests with the network's well-being, complementing the rewards they receive for their honest participation.
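The selection rule described above (total backing = self-stake plus delegations, top 21 seated) can be sketched as follows; the candidate records and stake figures are hypothetical:

```python
def select_active_validators(candidates, max_validators=21):
    """Toy PoSA election: rank candidates by total backing
    (self-stake plus delegated BNB) and seat the top N."""
    ranked = sorted(candidates,
                    key=lambda c: c["self_stake"] + c["delegated"],
                    reverse=True)
    return [c["name"] for c in ranked[:max_validators]]

candidates = [
    {"name": "val-a", "self_stake": 2_000,  "delegated": 50_000},
    {"name": "val-b", "self_stake": 10_000, "delegated": 1_000},
    {"name": "val-c", "self_stake": 2_000,  "delegated": 500},
]
# with only two seats, the two most-backed candidates are elected
assert select_active_validators(candidates, max_validators=2) == ["val-a", "val-b"]
```

Note how delegations dominate the ranking here: val-a wins its seat on delegated stake despite a small self-stake, which is exactly the dynamic that motivates validators to court delegators.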
The Ethereum blockchain network, following "The Merge" in 2022, operates on a Proof-of-Stake (PoS) consensus mechanism, a significant departure from its previous Proof of Work system. This transition replaced energy-intensive mining with validator staking, aiming to enhance energy efficiency, security, and scalability. In this model, participants willing to secure the network act as validators by staking a minimum of 32 units of the network's native asset (Ether). The network organizes its operations around a precise slot and epoch system. Every 12 seconds, a validator is randomly selected to propose a new block. Following this proposal, other validators on the network verify the integrity and validity of the block. Finalization of transactions, meaning they become irreversible, occurs after approximately two epochs, which translates to about 12.8 minutes, utilizing the Casper-FFG (Friendly Finality Gadget) protocol. The Beacon Chain plays a central role in coordinating the activities of these validators, while the LMD-GHOST (Latest Message Driven-Greedy Heaviest Observed SubTree) fork-choice rule is employed to ensure all network participants agree on the canonical chain, following the branch with the heaviest accumulated validator votes. Validators are economically incentivized for their honest participation in proposing and verifying blocks, but they also face severe penalties, known as slashing, for malicious actions or prolonged inactivity. This PoS framework is designed not only to reduce the network's environmental footprint but also to lay the groundwork for future upgrades, such as Proto-Danksharding, which are intended to further improve transaction efficiency and overall network throughput. The core components like validator selection, block production, and transaction finality are intrinsically tied to the amount of Ether staked, ensuring that participants have a vested interest in the network's security and stability.
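The slot/epoch arithmetic behind the ~12.8-minute finality figure cited above works out as follows:

```python
SECONDS_PER_SLOT = 12    # one block proposal opportunity per slot
SLOTS_PER_EPOCH = 32
FINALITY_EPOCHS = 2      # Casper-FFG: a block is first justified, then finalized

finality_seconds = FINALITY_EPOCHS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
assert finality_seconds == 768
assert finality_seconds / 60 == 12.8  # the ~12.8 minutes cited above
```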
Fantom's operational foundation is built upon the Lachesis Protocol, an Asynchronous Byzantine Fault Tolerant (aBFT) consensus mechanism specifically engineered to deliver rapid, secure, and highly scalable transaction processing. This innovative protocol diverges from conventional linear blockchain structures by employing a Directed Acyclic Graph (DAG) architecture, which facilitates the parallel processing of multiple transactions across various nodes. This parallel execution significantly boosts network throughput, making Fantom exceptionally well-suited for decentralized applications (dApps) that demand swift and efficient transaction handling. A cornerstone of the Lachesis Protocol is its asynchronous and leaderless design. This means that individual nodes can achieve consensus independently, without needing to defer to a central leader. Such a design inherently enhances the network's decentralization and overall speed, minimizing potential bottlenecks. Transactions on Fantom are organized into "event blocks," which undergo validation asynchronously by a multitude of validators. Once a sufficient number of validators confirm an event block, it is integrated into the network's immutable history. A critical feature distinguishing Fantom is its instant finality, meaning that once transactions are confirmed, they are irreversible and cannot be altered. This property is particularly valuable for applications where immediate and unchangeable transaction settlement is paramount, offering a high degree of confidence and reliability to users and developers alike.
Linea's consensus mechanism is anchored in Zero-Knowledge Rollups (zk-Rollups), a sophisticated Layer 2 scaling solution designed to enhance the scalability, security, and efficiency of transaction processing while maintaining full compatibility with the Ethereum ecosystem. At its core, Linea leverages zk-Rollups to aggregate numerous off-chain transactions into extensive batches. Instead of submitting each transaction individually to the Ethereum mainnet, a single, concise zero-knowledge proof representing the validity of the entire batch is posted. This innovative approach drastically reduces on-chain congestion and significantly improves the network's throughput and scalability. A pivotal component of Linea is its Type 2 zkEVM, which ensures complete compatibility with the Ethereum Virtual Machine (EVM). This compatibility allows for a seamless integration of existing Ethereum-based smart contracts and decentralized applications (dApps) onto the Linea network without requiring significant modifications. The network further utilizes a mechanism known as proof aggregation. This process involves finalizing multiple batches of transactions into a singular zero-knowledge proof. This aggregated proof is then submitted to the Ethereum mainnet, guaranteeing the secure and efficient finalization of Layer 2 activities directly on Ethereum's robust base layer. By employing these advanced cryptographic proofs, Linea ensures that transactions are not only processed rapidly off-chain but also inherit the strong security guarantees of the Ethereum mainnet, as the validity of all off-chain computations is cryptographically verified on Layer 1. This architecture provides a robust, efficient, and secure environment for dApp development and transaction execution, making it an economical solution for a wide range of use cases.
The NEAR Protocol blockchain network operates on a distinctive consensus mechanism that synergistically combines the principles of Proof of Stake (PoS) with a proprietary innovation known as Doomslug, further enhanced by dynamic sharding through Nightshade. This multi-faceted approach is engineered to deliver high efficiency, rapid transaction finality, and robust security across the network. At its foundation, the system relies on Proof of Stake, where participants, termed validators, secure the network by staking their native NEAR tokens. The greater the stake, coupled with community trust, the higher their probability of being chosen to propose and validate blocks. Doomslug significantly accelerates transaction finality. Unlike single-stage block confirmations, Doomslug introduces a two-stage process. Initially, validators propose new blocks. Finality is achieved swiftly when two-thirds of the participating validators formally approve the proposed block, making confirmed transactions irreversible and preventing potential forks. This rapid finality is crucial for applications demanding near-instant confirmations. Complementing this, NEAR integrates Nightshade, a dynamic sharding technique. Nightshade segments the network into multiple shards, allowing for the parallel processing of transactions. Each shard handles a distinct subset of transactions concurrently, and their respective processing outcomes are then consolidated into a single "snapshot" block. This dynamic sharding is vital for scalability, enabling the network to efficiently manage increasing transaction volumes and user demand without compromising performance. The consensus process also emphasizes decentralization and fairness through epoch rotation. Validators are regularly reshuffled across distinct intervals called epochs.
This rotation mechanism ensures a balanced distribution of block proposal opportunities and validation responsibilities among eligible validators, mitigating centralization risks and promoting sustained network resilience. By integrating PoS for economic security, Doomslug for fast finality, and Nightshade for scalable throughput, the NEAR Protocol establishes a high-performance and secure blockchain environment.
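The two-thirds approval rule described above reduces to a simple threshold check. The sketch below assumes a stake-weighted count, which is a simplification of the actual protocol:

```python
def doomslug_final(approving_stake: int, total_stake: int) -> bool:
    """Toy finality check: a block becomes final once validators holding
    strictly more than two-thirds of the participating stake approve it."""
    return 3 * approving_stake > 2 * total_stake

assert doomslug_final(67, 100)       # 67% > 2/3: final
assert not doomslug_final(66, 100)   # 66% is not enough
```

Using integer cross-multiplication instead of floating-point division avoids rounding errors right at the two-thirds boundary.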
The Polygon blockchain network, originally known as Matic Network, operates as a Layer 2 scaling solution for Ethereum, leveraging a sophisticated hybrid consensus mechanism to enhance scalability, ensure security, and maintain decentralization. The foundational elements of its consensus protocol are built upon a combination of Proof of Stake (PoS) and Plasma Chains. Within the PoS framework, validators are selected based on the number of MATIC tokens they have staked, with a larger stake increasing their probability of being chosen to validate transactions and produce new blocks. This system also allows MATIC token holders who prefer not to run their own validator nodes to delegate their tokens to trusted validators, thereby earning a share of the rewards and actively contributing to the network's overall security and decentralization.
Supplementing PoS, Polygon utilizes Plasma Chains, which serve as a framework for establishing child chains that run in parallel with the main Ethereum chain. These child chains facilitate off-chain transaction processing, significantly improving transaction throughput and reducing congestion on the Ethereum mainnet by committing only the final, aggregated state back to Ethereum. To uphold the integrity and security of these off-chain transactions, Plasma Chains incorporate a robust fraud-proof mechanism, enabling the challenging and potential reversion of any detected fraudulent activity.
The consensus process on Polygon begins with validators confirming the validity of transactions and subsequently integrating them into blocks. Validators then propose new blocks, with their staked tokens influencing their voting power, and engage in a collective voting process to reach consensus. A new block is officially added to the blockchain upon receiving a majority of votes. A critical security measure is the periodic checkpointing system, where snapshots of the Polygon sidechain's state are regularly submitted to the Ethereum main chain, thereby leveraging Ethereum's inherent security for the finality of Polygon's transactions. The Plasma framework further enables off-chain validation of transactions on child chains, with their final states eventually submitted to the Ethereum main chain, and fraud proofs ready to challenge any suspicious transactions within a specified period, collectively reinforcing Polygon's operational integrity and security.
The Solana blockchain architecture operates through a hybrid consensus model that integrates Proof of History (PoH) with Proof of Stake (PoS). This combination is designed to optimize transaction throughput and reduce network latency while maintaining a high degree of security. Proof of History functions as a decentralized clock, using a Verifiable Delay Function (VDF) to create a permanent, timestamped record of events. This cryptographic sequence allows the network to agree on the chronological order of transactions without requiring nodes to communicate extensively, thereby solving traditional synchronization bottlenecks found in other distributed ledgers. Parallel to PoH, the Proof of Stake component manages the selection of validators and the finalization of the ledger state. Validators are chosen to act as leaders for specific blocks based on the total quantity of the native network assets they have staked. Users who do not run their own hardware can participate in network security by delegating their assets to existing validators, sharing in the rewards generated by successful block production. The consensus process begins when transactions are broadcast and collected for validation. A designated leader then generates a PoH sequence to order these transactions within a block. Subsequently, other validators in the network verify the integrity of the PoH hashes and the validity of the transactions. Once a sufficient number of signatures are collected, the block is finalized and appended to the blockchain. This dual approach ensures that the network remains resilient against attacks; validators must provide collateral through staking, and any malicious activity, such as producing invalid blocks or double-signing, can result in the loss of staked assets through a process known as slashing. This economic deterrent ensures that participants remain aligned with the network's health and operational standards.
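The ordering property of Proof of History can be illustrated with a minimal hash chain. This is a toy sketch: Solana's actual VDF performs a long run of sequential SHA-256 iterations between events, whereas here one hash stands in for each tick:

```python
import hashlib

def poh_sequence(seed: bytes, events: list) -> bytes:
    """Toy Proof-of-History chain: every hash depends on the previous one,
    and each event is mixed in at its position, so the final hash commits
    to both the events and their exact order."""
    h = hashlib.sha256(seed).digest()
    for event in events:
        h = hashlib.sha256(h + event).digest()
    return h

a = poh_sequence(b"genesis", [b"tx1", b"tx2"])
b = poh_sequence(b"genesis", [b"tx2", b"tx1"])
assert a != b  # reordering the events changes the recorded history
```

Because each hash input includes the previous output, the chain can only be produced sequentially, yet any observer can re-verify it, which is what lets validators agree on transaction order without extensive communication.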
zkSync utilizes a sophisticated Layer 2 scaling architecture built on zero-knowledge rollup (ZK-Rollup) technology. Unlike traditional Layer 1 networks that require every node to execute every transaction, this network aggregates numerous transactions into discrete batches off-chain. The core of its consensus and security mechanism lies in the generation of validity proofs, specifically employing zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge). These cryptographic proofs provide a mathematical guarantee that all transactions within a batch are legitimate and adhere to the protocol's rules. Once a validity proof is generated, it is submitted to the Ethereum mainnet. This approach allows the network to inherit the robust security of Ethereum's base layer while significantly increasing throughput. A critical component in this process is the sequencer, which is responsible for the ordering and bundling of user transactions. Unlike optimistic rollups that rely on a challenge period and fraud proofs, zkSync provides immediate finality once the validity proof is verified on the Layer 1 chain. This architectural choice eliminates the withdrawal delays often associated with other scaling solutions. Furthermore, the network ensures data availability by publishing transaction data on-chain, which allows any participant to reconstruct the state of the network independently. This transparency maintains the decentralized nature of the system while offloading the heavy computational burden from the primary blockchain, resulting in a highly efficient and secure environment for decentralized applications.
Incentive Mechanisms and Applicable Fees
WOO is present on the following networks: Arbitrum, Avalanche, Base, Binance Smart Chain, Ethereum, Fantom, Linea, NEAR Protocol, Polygon, Solana, zkSync.
Arbitrum One, serving as a Layer 2 scaling solution for Ethereum, incorporates a sophisticated array of incentive mechanisms to guarantee the ongoing security and integrity of its network. Central to this framework are the Validators and Sequencers. Sequencers are entrusted with the vital task of ordering user transactions and compiling them into batches for efficient off-chain processing, playing a critical role in optimizing network throughput and speed. Validators, conversely, actively monitor the Sequencers' activities, meticulously verifying state transitions and ensuring that only valid transactions are included in the batches. Both Sequencers and Validators are motivated through economic rewards, primarily derived from collected transaction fees and potentially other protocol-specific incentives, contingent on their honest and efficient performance.
Arbitrum’s security model is heavily reliant on its Fraud Proofs system. Transactions processed off-chain are initially given an "assumption of validity," which enables swift transaction finality and higher throughput. However, a predefined "challenge period" is established, during which any network participant can submit a fraud proof to contest the validity of a transaction. This acts as a powerful deterrent against malicious behavior. If a challenge is successfully brought forward, an interactive verification process is initiated to precisely identify and confirm any fraudulent activity. In instances where fraud is proven, the invalid transaction is reversed, and the dishonest actor faces economic penalties, which may include the slashing of staked tokens or other forms of financial disincentive. This balanced system of rewards for honest participation and strict penalties for malicious actions aligns participants' interests with the overall health and security of the Arbitrum network.
The Applicable Fees on the Arbitrum One blockchain are structured to be cost-effective. Users pay Layer 2 Fees for transactions executed on the Arbitrum network, which are typically significantly lower than those on the Ethereum mainnet due to reduced computational load. A specific "Arbitrum Transaction Fee" is applied to each transaction processed by the sequencer, covering the costs of processing and batch inclusion. Additionally, L1 Data Fees are incurred when batches of Layer 2 state updates are periodically posted as calldata to the Ethereum mainnet. This fee covers the requisite gas costs on Ethereum. A key economic benefit is "cost sharing," where the fixed expenses of submitting these state updates to Ethereum are distributed across multiple transactions within a batch, substantially lowering the per-transaction cost for users. For example, protocols leveraging the Arbitrum stack, such as Kinto, utilize ETH for transaction fee payments.
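The cost-sharing benefit described above is simple amortization arithmetic. The fee figures below are hypothetical, chosen only to show the effect:

```python
def per_tx_cost(l2_fee_wei: int, l1_batch_fee_wei: int, txs_in_batch: int) -> int:
    """Toy amortization: each transaction pays its own L2 execution fee plus
    an equal share of the single L1 calldata posting fee for the batch."""
    return l2_fee_wei + l1_batch_fee_wei // txs_in_batch

# an L1 posting fee of 0.002 ETH shared by 1,000 transactions adds only
# 0.000002 ETH per transaction on top of the L2 execution fee
assert per_tx_cost(l2_fee_wei=100_000_000_000_000,
                   l1_batch_fee_wei=2_000_000_000_000_000,
                   txs_in_batch=1_000) == 102_000_000_000_000
```

The larger the batch, the smaller each transaction's share of the fixed L1 cost, which is why rollup fees fall as usage grows.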
The Avalanche blockchain network employs a comprehensive system of incentive mechanisms and fees designed to ensure its security, integrity, and efficiency, primarily through its Avalanche Consensus mechanism. Validators, who are critical to the network's operation, are required to stake a certain amount of AVAX tokens. The quantity of staked tokens directly influences their likelihood of being chosen to propose or validate new blocks. In return for their active participation, validators receive rewards, which are calculated proportionally to the amount of AVAX they have staked, as well as their consistent uptime and overall performance in validating transactions. To further decentralize participation, validators can also accept delegations from other token holders. These delegators subsequently share in the earned rewards, thus incentivizing smaller token holders to contribute indirectly to the network's security. The economic incentives for validators extend beyond staking rewards to include block rewards, which are distributed from the inflationary issuance of new AVAX tokens for proposing and validating blocks. Additionally, validators earn a portion of the transaction fees paid by users across the network, covering simple transactions, complex smart contract interactions, and the creation of new assets. Crucially, Avalanche's penalty system differs from some other Proof-of-Stake systems by not employing 'slashing,' which involves the confiscation of staked tokens for misbehavior. Instead, the network relies on the economic disincentive of lost future rewards. Validators who fail to maintain consistent uptime or engage in malicious activities will simply miss out on potential earnings, providing a strong incentive for honest and reliable behavior. The network also imposes clear uptime requirements, where poor performance directly impacts a validator's ability to earn rewards. 
Fees on the Avalanche blockchain are structured to be dynamic, adjusting based on current network demand and the computational complexity of transactions. This ensures that fees remain equitable and reflect the actual network usage. A significant portion of these transaction fees is 'burned,' meaning they are permanently removed from circulation. This deflationary mechanism helps to offset the inflationary effects of block rewards and aims to enhance the long-term value of AVAX tokens. Fees for deploying and interacting with smart contracts are determined by the required computational resources, promoting efficient resource utilization. Similarly, fees are imposed for creating new assets on the network, a measure designed to deter spam and ensure that network resources are utilized by serious projects. On the Avalanche X-Chain, validator incentives are realized indirectly through the network's overall AVAX issuance, while its transaction fees are fixed and burned to combat spam and progressively reduce the total supply of AVAX.
The Base blockchain, as an Ethereum Layer-2 solution utilizing Optimistic Rollups from the OP Stack, implements incentive mechanisms primarily focused on optimizing transaction costs and ensuring secure asset transfers, leveraging the economic security of its underlying Ethereum L1. A core incentive to use Base is its efficiency in reducing transaction expenses. This is achieved by a sequencer that bundles numerous L2 transactions together, submitting them as a single, consolidated L1 transaction to Ethereum. This process significantly lowers the average transaction cost for individual L2 operations, as the collective L2 transactions share the cost of the single L1 transaction fee, thereby making Base a more economically attractive option compared to direct L1 usage.
For the secure movement of crypto-assets between Base and Ethereum, a specialized smart contract on the Ethereum network is employed. Since Base, as an L2, does not manage its own consensus for fund withdrawals, an additional mechanism is in place to guarantee that only legitimate funds can be moved off the L2. When a user initiates a withdrawal request on Ethereum's L1, a predetermined challenge period begins. During this window, any other network participant has the opportunity to submit a "fault proof" if they detect a fraudulent withdrawal attempt, triggering a dispute resolution process. This entire system is strategically designed with economic incentives to encourage honest behavior and deter malicious activities, although specific details of these economic incentives for fault proof submission are not explicitly outlined beyond the general principle.
Furthermore, Base inherits and benefits from the robust incentive structure of Ethereum’s Proof-of-Stake (PoS) system, which indirectly secures Base transactions. Ethereum validators, by staking a minimum of 32 ETH, are rewarded for proposing and attesting to valid blocks, as well as for participating in sync committees. These rewards are distributed through newly issued ETH and a portion of transaction fees. Under the EIP-1559 fee model, transaction fees comprise a base fee, which is algorithmically burned to manage supply, and an optional priority fee (or 'tip') paid directly to validators. To maintain network integrity, validators face economic penalties, known as slashing, if they engage in malicious conduct or fail to perform their duties. This comprehensive incentive framework ensures strong security alignment for Base by reinforcing reliable validator behavior on its underlying L1.
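The EIP-1559 split between the burned base fee and the validator tip reduces to per-gas multiplication. The gas and fee values below are illustrative (a standard 21,000-gas transfer with assumed example fee levels):

```python
def eip1559_split(gas_used: int, base_fee: int, priority_fee: int):
    """Toy EIP-1559 accounting (per-gas fees): the base fee is burned,
    while the priority fee (tip) goes to the including validator."""
    burned = gas_used * base_fee
    to_validator = gas_used * priority_fee
    return burned, to_validator

# a simple 21,000-gas transfer at a 20 gwei base fee with a 2 gwei tip
burned, tip = eip1559_split(gas_used=21_000, base_fee=20, priority_fee=2)
assert burned == 420_000  # gwei removed from circulation
assert tip == 42_000      # gwei paid to the block proposer
```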
The Binance Smart Chain (BSC) network employs a robust system of incentive mechanisms and applicable fees, primarily built around its Proof of Staked Authority (PoSA) consensus, designed to secure the network, encourage participation, and maintain operational efficiency. This system ensures that validators, delegators, and other participants are economically motivated to act in the network's best interest.
Validators on BSC, often referred to as "Cabinet Members," are critical to the network's operation. They are incentivized through staking rewards, which include a combination of transaction fees and newly generated block rewards. To become a validator, a significant amount of BNB must be staked. Their selection for block production is determined by the total BNB staked, encompassing both their own stake and delegated tokens, as well as the votes received from delegators. This competitive selection process motivates validators to attract delegators and maintain high performance. Delegators, in turn, are crucial for supporting network decentralization and security. By delegating their BNB to validators, they increase the validators' total stake, enhancing their chances of selection. In exchange, delegators receive a share of the rewards earned by their chosen validators, fostering active community involvement. The system also includes a pool of Candidates, nodes that have staked BNB and are ready to become active validators, ensuring a robust and resilient network of potential participants. Economic security is further reinforced through slashing mechanisms, where validators found engaging in malicious behavior or failing to perform their duties face penalties, including the forfeiture of a portion of their staked BNB. The opportunity cost of locking up BNB also provides a strong economic incentive for all participants to act honestly.
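As a rough illustration of how total stake (own bond plus delegations) translates into block-production weight, consider the sketch below. The validator names and stake figures are hypothetical, and real PoSA selection deterministically admits the top candidates by bonded stake rather than sampling probabilistically; this sketch only conveys the proportional-influence idea.

```python
import random

# Hypothetical candidates: each value is the validator's own bond
# plus all BNB delegated to it.
stakes = {"val_a": 12_000, "val_b": 8_000, "val_c": 4_000}

def pick_producer(stakes: dict, rng: random.Random) -> str:
    # Choose a producer with probability proportional to total stake.
    total = sum(stakes.values())
    point = rng.uniform(0, total)
    cumulative = 0.0
    for name, stake in stakes.items():
        cumulative += stake
        if point <= cumulative:
            return name
    return name  # guard against a floating-point edge at the upper bound

rng = random.Random(42)
counts = {name: 0 for name in stakes}
for _ in range(10_000):
    counts[pick_producer(stakes, rng)] += 1

# With half of the total stake, val_a should win roughly half the slots.
assert counts["val_a"] > counts["val_b"] > counts["val_c"]
```

This is why attracting delegators matters economically: every delegated token raises a validator's weight, and therefore its expected share of block rewards and fees.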
BSC is known for its low transaction fees, which are paid in BNB. These fees are vital for network maintenance and compensate validators for processing transactions. The fee structure is dynamic, adjusting based on network congestion and transaction complexity, though it is designed to remain significantly lower than on some other major blockchain networks, such as the Ethereum mainnet. In addition to transaction fees, validators receive block rewards, further incentivizing their role in maintaining and processing network activity. BSC also supports cross-chain compatibility, enabling asset transfers between Binance Chain and Binance Smart Chain, which incur minimal fees to facilitate a seamless user experience. Furthermore, interacting with and deploying smart contracts on BSC involves fees based on the computational resources required. These smart contract fees are also paid in BNB and are structured to be cost-effective, encouraging developers to build and innovate on the BSC platform.
The Ethereum network's Proof-of-Stake (PoS) system is underpinned by a robust framework of incentive mechanisms and applicable fees, meticulously designed to secure transactions and encourage active, honest participation from validators. Validators, who are essential for the network's operation, commit at least 32 units of the network's native asset (Ether) to secure their role. Their primary incentives include rewards for successfully proposing new blocks, attesting to the validity of other blocks, and participating in sync committees, all of which contribute to the network's integrity and consensus. These rewards are distributed in newly issued Ether, alongside a portion of the transaction fees generated on the network. A key feature of Ethereum's fee structure is the implementation of EIP-1559, which divides transaction fees into two main components. The first is a base fee, which is automatically burned, effectively reducing the overall supply of Ether over time and potentially introducing a deflationary aspect, especially during periods of high network activity. The second is an optional priority fee, also known as a "tip," which users can choose to pay directly to validators to incentivize faster inclusion of their transactions into a block. This dual-fee structure aims to make transaction costs more predictable for users. To enforce honest behavior and prevent malicious activities, the network employs a strict system of economic penalties, including slashing. Validators who engage in dishonest acts or demonstrate extended periods of inactivity risk losing a portion of their staked Ether, providing a powerful deterrent against misconduct and ensuring the long-term security and reliability of the network. This comprehensive system aligns the economic interests of validators with the overall health and security of the Ethereum blockchain.
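The EIP-1559 split between the burned base fee and the validator tip is simple arithmetic, sketched below. The 21,000-gas figure is the standard cost of a plain ETH transfer; the base-fee and tip values are illustrative market conditions, not protocol constants.

```python
def eip1559_fee(gas_used: int, base_fee_gwei: float, priority_fee_gwei: float):
    # The base-fee portion is burned; only the priority fee ("tip")
    # reaches the block proposer, as described in the text above.
    burned = gas_used * base_fee_gwei
    tip = gas_used * priority_fee_gwei
    return burned, tip

# A plain ETH transfer consumes 21,000 gas; assume a 20 gwei base fee
# and a 2 gwei tip.
burned, tip = eip1559_fee(21_000, 20, 2)
assert burned == 420_000  # gwei removed from circulating supply
assert tip == 42_000      # gwei paid to the validator
```

Note how the burn dominates the tip under these conditions, which is what gives the mechanism its potentially deflationary character during periods of high activity.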
Fantom's economic framework incorporates a robust incentive model crafted to bolster network security and foster broad participation among its users and validators. A primary mechanism involves staking rewards for validators, who are crucial to the consensus process. These validators earn rewards in FTM tokens, directly proportional to the amount they have staked, thereby creating a strong incentive to actively secure and maintain the network. To ensure a balanced reward structure and support long-term network security, Fantom employs a dynamic staking reward rate, which adjusts based on the total FTM staked across the network. Consequently, if the total staked amount increases, individual rewards may see a proportional decrease. Beyond active validators, the network also facilitates participation for token holders who do not wish to operate their own validator nodes through delegated staking. These users can delegate their FTM tokens to existing validators, and in return, they receive a share of the staking rewards. This delegation option is vital for encouraging wider community involvement in the network's security without requiring significant technical overhead. Regarding applicable fees, transactions on the Fantom network are subject to fees paid in FTM tokens. Thanks to the network's high throughput capabilities, largely attributed to its DAG structure, these transaction fees are kept remarkably low. This efficient fee model, combined with the network's inherent scalability, renders Fantom an exceptionally cost-effective platform for users and an attractive environment for developing and deploying high-volume decentralized applications.
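The dilution effect of Fantom's dynamic reward rate can be expressed as a fixed annual pool spread across all staked tokens, as in the sketch below. The 70M FTM pool size and the staking totals are hypothetical numbers chosen only to show the proportionality.

```python
def validator_reward(own_stake: float, annual_reward_pool: float,
                     total_staked: float) -> float:
    # A fixed annual pool spread across all staked FTM: the effective
    # per-token rate falls as network-wide stake rises.
    return own_stake * annual_reward_pool / total_staked

# Hypothetical figures: a 70M FTM annual reward pool.
r1 = validator_reward(1_000_000, 70_000_000, 1_000_000_000)
r2 = validator_reward(1_000_000, 70_000_000, 2_000_000_000)
assert r1 == 70_000.0  # 7% effective rate at 1B FTM staked
assert r2 == 35_000.0  # halves when total network stake doubles
```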
Linea's incentive model is meticulously crafted to harmonize the performance of validators with the network's security requirements, all while catering to user demands for cost-effective and efficient transaction processing. The primary incentive for network participants, particularly validators, stems from transaction fees. Validators play a crucial role in the Linea ecosystem by processing off-chain transactions and subsequently generating and submitting aggregated zero-knowledge proofs to the Ethereum mainnet. For these essential services, validators are rewarded with a portion of the transaction fees, creating a direct financial motivation for them to maintain network integrity, ensure prompt transaction finalization, and contribute to the overall security posture of the Layer 2 solution. This system ensures that those who uphold the network's operational standards are consistently compensated. Regarding applicable fees, users engaging with the Linea network are required to pay transaction fees, typically denominated in the network's native token. These fees serve a dual purpose: they cover the computational costs associated with executing transactions on the Layer 2 network and contribute to the expenses incurred when submitting the cryptographic proofs to the Ethereum mainnet for finalization. A significant advantage of Linea's architecture, powered by zk-Rollups, is its inherent cost efficiency. By batching multiple individual transactions into a single zero-knowledge proof before interacting with Ethereum, Linea dramatically reduces the per-transaction cost compared to direct transactions on the Ethereum mainnet. This innovative batching mechanism amortizes the fixed cost of Layer 1 interactions across many Layer 2 transactions, positioning Linea as an economically viable solution for developers and users seeking to deploy and interact with scalable dApps while benefiting from reduced gas expenses. 
The fee structure is designed to be predictable and lower than those on the mainnet, encouraging broader adoption and usage of the Linea network.
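The amortization described above reduces to a one-line formula: each transaction bears the full cost of its own L2 execution but only a per-batch share of the fixed L1 proof-submission cost. The dollar figures below are purely illustrative assumptions.

```python
def per_tx_cost(l1_proof_cost: float, l2_exec_cost: float,
                batch_size: int) -> float:
    # The fixed cost of posting one proof to L1 is shared by every
    # transaction in the batch; each tx adds only its own L2 cost.
    return l1_proof_cost / batch_size + l2_exec_cost

# Hypothetical costs in USD: a $50 proof submission, $0.01 of L2 execution.
assert abs(per_tx_cost(50.0, 0.01, 1) - 50.01) < 1e-9
assert abs(per_tx_cost(50.0, 0.01, 1_000) - 0.06) < 1e-9
```

The asymptote as batch size grows is the pure L2 execution cost, which is why zk-rollup fees fall with usage rather than rising with it.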
The NEAR Protocol blockchain network employs a comprehensive suite of economic mechanisms designed to ensure network security, incentivize active participation from its community, and manage resource allocation efficiently. A core incentive is the staking reward system, where validators and delegators are compensated for their role in securing transactions. Validators, selected based on their staked NEAR tokens and community trust, receive a share of newly minted tokens, constituting about 90% of the approximate 5% annual inflation. They earn these rewards for proposing and validating blocks. Similarly, token holders who choose not to operate a full validator node can delegate their NEAR tokens to active validators, thereby contributing to network security and earning rewards proportional to their delegated stake. This delegation model fosters broader participation and strengthens the network's overall decentralization. To uphold network integrity, NEAR Protocol implements a robust slashing mechanism. Validators engaging in malicious activities, such as incorrect validation or dishonest behavior, face economic penalties, including the deduction of a portion of their staked tokens. This serves as a powerful deterrent against harmful actions, ensuring validators operate in the network's best interest. Additionally, the network promotes fairness and prevents undue concentration of power through regular epoch rotations. During these predefined intervals, validators are periodically reshuffled, and new block proposers are selected, maintaining a healthy balance between network performance and decentralization. Regarding applicable fees, the NEAR blockchain charges users for transaction processing and data storage, paid in NEAR tokens. A unique aspect of its fee structure is the burning mechanism for transaction fees, which reduces the total circulating supply of NEAR tokens over time, potentially introducing a deflationary effect.
While a portion of these fees is burned, the remaining part is distributed to validators as additional compensation, providing a continuous incentive for network maintenance. Furthermore, the protocol imposes storage fees based on the amount of blockchain space consumed by user accounts, smart contracts, and associated data. Users are required to hold NEAR tokens as a deposit commensurate with their storage usage, which encourages efficient resource management and helps prevent network spam. This dual system of incentives and fees creates a self-sustaining economic model for the NEAR Protocol.
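The two NEAR mechanisms described above, inflation-funded validator rewards and storage-proportional deposits, can be sketched as below. The 90%/5% split comes from the text; the per-byte storage price is an illustrative assumption, not the protocol constant.

```python
def annual_validator_issuance(total_supply: float,
                              inflation_rate: float = 0.05,
                              validator_share: float = 0.90) -> float:
    # Roughly 90% of ~5% annual inflation goes to validators, per the text.
    return total_supply * inflation_rate * validator_share

def storage_deposit(bytes_used: int, near_per_byte: float = 1e-5) -> float:
    # NEAR locked in proportion to on-chain storage consumed; the
    # per-byte price here is an assumed figure for illustration only.
    return bytes_used * near_per_byte

# On a hypothetical 1B-token supply, validators receive ~45M NEAR per year.
assert abs(annual_validator_issuance(1_000_000_000) - 45_000_000) < 1e-3
# Storing 100 kB would require locking ~1 NEAR at this assumed price.
assert abs(storage_deposit(100_000) - 1.0) < 1e-9
```

Because the deposit is refundable when data is deleted, it acts as a bond against state bloat rather than a consumed fee.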
The Polygon network employs a robust set of incentive mechanisms and a distinct fee structure, combining its Proof of Stake (PoS) consensus with the Plasma framework to ensure network security, encourage active participation, and maintain transaction integrity. Validators play a crucial role, securing the network by staking MATIC tokens. Their selection for validating transactions and producing new blocks is directly influenced by the quantity of tokens they have staked. In exchange for their services, validators receive rewards in the form of newly minted MATIC tokens and a portion of the transaction fees. They are responsible for proposing and voting on new blocks, with incentives structured to promote honest and efficient operation, while also deterring misconduct through potential penalties. A key security feature involves validators periodically submitting checkpoints of the Polygon sidechain to the Ethereum main chain, which leverages Ethereum's established robustness to guarantee the finality of Polygon's transactions.
Delegators, who are token holders opting not to operate their own validator nodes, can delegate their MATIC tokens to trusted validators. This delegation allows them to earn a share of the rewards distributed to their chosen validators, fostering broader community participation in securing the network and enhancing its decentralization. The economic security of Polygon is further reinforced by a slashing mechanism, which penalizes validators for malicious actions, such as double-signing transactions or extended periods of inactivity. Slashing entails the forfeiture of a portion of their staked tokens, serving as a powerful deterrent against dishonest behavior. Additionally, validators are required to bond a substantial amount of MATIC, ensuring they have a significant financial interest in upholding the network's integrity.
Regarding the fee structure, one of Polygon's significant advantages is its remarkably low transaction fees compared to the Ethereum main chain. These fees, paid in MATIC tokens, are designed to be affordable, thereby encouraging high transaction throughput and widespread user adoption. While fees on Polygon can exhibit dynamic variations based on network congestion and transaction complexity, they consistently remain considerably lower than those on Ethereum, making Polygon an attractive option for users and developers. Deploying and interacting with smart contracts on Polygon also incurs fees, which are determined by the computational resources required. These smart contract fees are also paid in MATIC and are substantially lower than on Ethereum, offering a cost-effective environment for developing and maintaining decentralized applications (dApps). Furthermore, the Plasma framework facilitates off-chain processing for state transfers and withdrawals, with associated fees also paid in MATIC, collectively contributing to a reduced overall cost of utilizing the network.
Incentives within the Solana blockchain network are structured to ensure high performance and decentralized security. The primary participants are validators and delegators, both of whom receive financial compensation for their roles in maintaining the ledger. Validators are rewarded for successfully producing and verifying blocks. These rewards are distributed in the network's native asset and are determined by the validator's overall stake and historical performance. Furthermore, validators receive a portion of the transaction fees associated with the data processed in their blocks, which encourages them to maximize efficiency and maintain uptime. Token holders who prefer not to operate complex infrastructure can delegate their stake to professional validators. This delegation model facilitates a more inclusive security environment, as delegators earn a percentage of the rewards proportional to their contribution, thereby decentralizing the control of the network. Security is further enforced through economic penalties. The network employs a slashing mechanism where a portion of a validator's staked assets is confiscated if they engage in dishonest behavior or fail to meet network requirements, such as remaining offline for extended periods. This introduces an opportunity cost for all participants, ensuring they remain committed to honest operations. Regarding the cost of using the network, the fee structure is designed to be highly competitive and predictable. Users pay transaction fees to compensate for the computational power and bandwidth consumed by nodes. These fees are notably low, facilitating high-volume usage. In addition to transaction costs, the network implements rent fees for data storage. This unique mechanism charges for the persistence of data on the blockchain, discouraging the inefficient use of state storage and prompting developers to prune unnecessary data. 
Finally, smart contract execution fees are calculated based on the specific resource intensity of the code, ensuring that participants pay a fair rate for the network resources they utilize.
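Solana's rent mechanism is usually satisfied in practice by holding a rent-exempt deposit proportional to account size. The sketch below uses the commonly cited default parameters; treat them as illustrative assumptions rather than authoritative protocol constants.

```python
# Commonly cited Solana rent defaults (illustrative assumptions):
LAMPORTS_PER_BYTE_YEAR = 3_480
ACCOUNT_STORAGE_OVERHEAD = 128   # bytes of per-account metadata
EXEMPTION_THRESHOLD_YEARS = 2

def rent_exempt_minimum(data_len: int) -> int:
    # Lamports an account must hold to be exempt from ongoing rent:
    # two years' worth of rent on its data plus the fixed overhead.
    return ((data_len + ACCOUNT_STORAGE_OVERHEAD)
            * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_THRESHOLD_YEARS)

# A 165-byte account (a common token-account size):
assert rent_exempt_minimum(165) == 2_039_280  # lamports, ~0.00204 SOL
```

Tying the deposit to byte count is what gives developers the incentive, noted above, to prune unnecessary state.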
The Zksync network employs a multifaceted incentive and fee structure designed to balance operational efficiency with network security. The primary participants, including validators and sequencers, are compensated through transaction fees paid by users. Sequencers play a vital role in the ecosystem by ordering and bundling transactions into batches; they receive a portion of the transaction fees to cover the costs of maintaining high-performance processing and fast confirmation times. Validators, who are responsible for the computationally intensive task of generating validity proofs, are likewise rewarded for ensuring that these batches are processed accurately and efficiently. Unlike some Layer 2 solutions that might use a native utility token for all operations, Zksync utilizes Ether (ETH) as the primary currency for paying transaction fees. This integrates the network more closely with the Ethereum ecosystem and simplifies the user experience. The fee model itself is dynamic, calculating costs based on the complexity of the specific transaction—such as smart contract interactions versus simple transfers—as well as the current gas prices on the Ethereum mainnet for submitting the aggregated proofs. By batching transactions, the network significantly reduces the individual gas burden on users, making it far more cost-effective than direct Layer 1 interactions. Additionally, the protocol includes provisions for ecosystem growth rewards, allocating resources to incentivize developers and projects that contribute to the proliferation of decentralized finance (DeFi) and non-fungible token (NFT) marketplaces. This holistic approach ensures that all roles, from infrastructure providers to end-users and developers, have clear economic reasons to participate in and support the network's long-term sustainability.
Energy consumption sources and methodologies
WOO is present on the following networks: Arbitrum, Avalanche, Base, Binance Smart Chain, Ethereum, Fantom, Linea, Near Protocol, Polygon, Solana, Zksync.
The methodology employed for calculating the energy consumption attributed to the Arbitrum network adopts a "bottom-up" approach, systematically assessing individual operational components to arrive at an aggregate consumption figure. Within this framework, network nodes are identified as the central and most significant contributors to the network's overall energy footprint. The foundational assumptions underpinning these calculations are derived from empirical findings, which are compiled through the extensive use of publicly available information sites, proprietary in-house crawlers developed by the assessors, and various open-source data collection tools.
A crucial step in estimating energy consumption involves accurately determining the specific hardware devices utilized within the network. This determination is made by evaluating the technical requirements necessary for operating the client software pertinent to the Arbitrum network. Once these hardware profiles are established, their corresponding energy consumption rates are precisely measured under controlled conditions in certified test laboratories, ensuring a high degree of accuracy and reliability for the baseline data. To ensure a comprehensive and accurate scope, particularly when accounting for diverse implementations of crypto-assets across different networks, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is employed whenever such an identifier is available. This tool assists in clearly delineating all relevant instances of an asset, with these mappings consistently updated based on data provided by the Digital Token Identifier Foundation.
Furthermore, the methodology relies on specific assumptions regarding the type of hardware deployed and the estimated number of active participants within the network. These assumptions are subjected to continuous validation using best-effort empirical data. A general guiding principle in these estimations is the presumption that network participants act in a largely economically rational manner. In accordance with a precautionary principle, conservative estimates are applied whenever there is uncertainty, typically resulting in higher assessments of potential adverse environmental impacts. When quantifying the energy consumption for a particular crypto-asset operating on Arbitrum, a proportionate fraction of the overall network's energy consumption is allocated to that asset, based on its observed activity within the Arbitrum ecosystem. The source documents do not provide any direct external links related to this methodology.
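The per-asset allocation step described above, which recurs for each network in this section, amounts to a simple proportional split. The sketch below uses transaction count as a stand-in activity metric and hypothetical figures; the report does not specify the exact activity measure.

```python
def token_energy_share(network_energy_kwh: float,
                       token_activity: float,
                       network_activity: float) -> float:
    # Allocate a slice of the network's total energy to one asset,
    # proportional to the asset's observed share of on-chain activity.
    return network_energy_kwh * token_activity / network_activity

# Hypothetical figures: a network consuming 100,000 kWh/year on which
# the asset accounts for 0.5% of transactions.
assert abs(token_energy_share(100_000, 5_000, 1_000_000) - 500.0) < 1e-9
```

For a multi-chain asset such as WOO, the disclosed total would then be the sum of such shares across every network listed.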
The methodology for assessing the Avalanche network's energy consumption is founded on a 'bottom-up' approach, where individual nodes are identified as the primary contributors to the network's overall energy footprint. This comprehensive calculation aggregates energy usage across various interconnected components of the network. The assumptions underpinning these calculations are derived from extensive empirical findings, utilizing a combination of publicly available information sites, sophisticated open-source crawlers, and proprietary in-house developed crawlers. A key aspect of this methodology involves estimating the hardware deployed within the network. This estimation is primarily driven by the technical specifications and operational requirements for running the client software, which dictates the type and performance of necessary hardware devices. The energy consumption profiles of these identified hardware devices are meticulously measured in certified test laboratories to ensure accuracy. To ensure a broad and precise scope, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is leveraged, whenever available, to pinpoint all relevant implementations of the crypto-asset under consideration. These mappings are regularly updated based on current data provided by the Digital Token Identifier Foundation. The data regarding specific hardware usage and the total number of network participants is based on empirically verified assumptions, consistently updated with best-effort validation. A foundational assumption in this model is that network participants generally behave in an economically rational manner. Furthermore, adhering to a precautionary principle, any uncertainties or doubts during the estimation process lead to conservative assumptions, specifically by making higher estimates for potential adverse environmental impacts. 
When determining the energy consumption attributable to a specific token within the Avalanche ecosystem, the energy consumption of the entire Avalanche network (including subnets like Avalanche X-Chain) is calculated first. Subsequently, a fraction of this total network energy is allocated to the token, proportional to its activity and footprint within the network. This detailed, multi-layered approach aims to provide a robust and conservative estimate of the energy consumption associated with the Avalanche blockchain.
The energy consumption calculation for the Base blockchain network is meticulously performed using a "bottom-up" approach, where individual nodes are identified as the primary contributors to the network's overall energy footprint. This methodology is based on empirical data collected from a variety of sources, including publicly available information sites, dedicated open-source crawlers, and proprietary in-house crawling tools. The fundamental aspect of estimating hardware usage within the network involves determining the minimum requirements necessary to operate the client software. The energy consumption profiles of the specific hardware devices identified are obtained from measurements conducted in certified test laboratories, ensuring a high degree of accuracy in these foundational figures.
In the process of calculating network energy consumption, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is utilized when available, serving to identify and encompass all relevant implementations of a crypto-asset within the scope of analysis. These mappings are regularly updated, drawing on data provided by the Digital Token Identifier Foundation. However, the source documents do not provide specific URLs for the public information sites, open-source crawlers, or the Digital Token Identifier Foundation, preventing direct external linking within this summary.
The methodology also incorporates assumptions regarding the hardware deployed and the number of participants operating within the network. These assumptions are rigorously verified with "best effort" against empirical data to ensure their realism and accuracy. A key underlying principle is the assumption that network participants generally act in a "largely economically rational" manner. Furthermore, to adhere to a precautionary principle, conservative estimates are applied in situations of uncertainty, leading to higher projected impacts to mitigate underestimation risks. For a specific token on Base, a fraction of the network’s total energy consumption is attributed, based on the token's activity within the network.
The methodology for calculating the energy consumption of the Binance Smart Chain (BSC) network, which then serves as a basis for attributing a fraction of energy to tokens operating on it, primarily utilizes a "bottom-up" approach. This method focuses on the individual components of the network to aggregate a comprehensive energy profile. The central factor in this calculation is identified as the network nodes themselves.
Assumptions regarding the hardware used within the BSC network are derived from extensive empirical findings. These findings are gathered through a combination of public information sites, sophisticated open-source crawlers, and proprietary in-house developed crawlers. The primary determinants for estimating the specific hardware deployed are the technical requirements necessary to operate the client software of the network. To ensure accuracy, the energy consumption of these identified hardware devices is rigorously measured in certified test laboratories. This precise measurement allows for a detailed understanding of the power demands of the operational infrastructure.
For the comprehensive identification of all implementations of an asset within scope, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is employed, where available. The mappings associated with the FFG DTI are regularly updated based on data provided by the Digital Token Identifier Foundation. The information regarding both the hardware in use and the total number of participants active within the network is based on assumptions that undergo best-effort verification using empirical data. Generally, participants are presumed to be largely economically rational in their decision-making. As a precautionary principle, in situations of uncertainty, assumptions tend to err on the conservative side, meaning higher estimates are made for potential adverse impacts. When determining the energy consumption for a specific token that operates on BSC, the initial step involves calculating the energy consumption of the entire Binance Smart Chain network. Following this, a fraction of the total network energy consumption is attributed to the particular crypto-asset, a fraction determined by the asset's specific activity within the network.
The methodology for calculating the Ethereum network's energy consumption primarily employs a "bottom-up" approach, which focuses on the energy demands of individual nodes that are central to the network's operation. These nodes are considered the fundamental factor driving the network's overall energy use. The assumptions underpinning these calculations are derived from empirical data gathered through a variety of sources, including public information sites, open-source crawlers, and proprietary in-house crawlers developed for this purpose. A critical step in this methodology involves determining the hardware used within the network, primarily by assessing the computational and other requirements necessary to run the client software. The energy consumption characteristics of these identified hardware devices are then rigorously measured in certified test laboratories to ensure accuracy. When quantifying the energy consumption for the network, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is utilized, when available, to identify all implementations of the asset in scope, with mappings regularly updated based on data from the Digital Token Identifier Foundation. The information regarding the specific hardware deployed and the total number of participants in the network relies on assumptions that are diligently verified using empirical data whenever possible. Generally, participants are presumed to act in an economically rational manner. Furthermore, adhering to a precautionary principle, if there is any doubt in estimations, conservative assumptions are made, meaning higher estimates are used for potential adverse impacts to ensure a comprehensive and cautious assessment of energy consumption.
The methodology employed for calculating the Fantom network's energy consumption utilizes a "bottom-up" approach, which identifies individual nodes as the primary contributors to the network's overall energy footprint. This calculation is grounded in a combination of empirical findings derived from various public information sources, proprietary in-house crawlers, and publicly available open-source crawlers. The core factor in estimating the hardware deployed across the network is the technical specifications and operational demands required to run the client software. To ensure accuracy, the energy consumption associated with these specific hardware devices is meticulously measured in certified test laboratories. When determining the full scope of crypto-asset implementations for energy calculation purposes, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is utilized whenever available. This allows for comprehensive mapping of the asset in question, with these mappings being updated regularly based on data provided by the Digital Token Identifier Foundation. The underlying data regarding the types of hardware in use and the total number of participants in the network relies on carefully constructed assumptions. These assumptions are rigorously verified through a best-effort approach, cross-referencing against available empirical data. A general principle guiding these estimations is the assumption that participants are largely economically rational actors. Furthermore, in instances of uncertainty, a precautionary principle is applied, leading to conservative estimates that lean towards higher assessments of potential adverse environmental impacts. The detailed approach aims to provide a comprehensive, albeit assumption-based, quantification of energy usage. No specific external document links are available within the provided context for this section.
The methodology for determining the energy consumption associated with the Linea network follows a multi-component aggregation approach. Initially, the energy consumption for the entire Linea network is calculated as a foundational step. Since Linea is a Layer 2 solution operating on top of Ethereum and other underlying blockchain infrastructures, its energy footprint is intertwined with these foundational layers. However, the direct measurement for a specific Layer 2 network like Linea involves attributing a fraction of the overall network energy consumption to its operations. This attribution is typically based on the level of activity observed for crypto-assets and transactions within the Linea network compared to the overall activity on the underlying L1. To ensure a comprehensive scope for calculating energy consumption, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is utilized, when available, to identify and include all relevant implementations of a crypto-asset across the various networks it resides on. The mappings provided by the Digital Token Identifier Foundation are regularly updated to maintain accuracy. The estimation process for hardware usage and the number of network participants relies on empirical data, which is verified with a best-effort approach. A core assumption in these calculations is that participants are largely economically rational. Furthermore, a precautionary principle is applied, meaning that in cases of uncertainty, higher estimates for adverse environmental impacts are consistently chosen to ensure conservative reporting. This systematic approach aims to provide a robust estimate of the network's energy usage.
The methodology for assessing the energy consumption of the NEAR Protocol network relies on a "bottom-up" approach, meticulously aggregating data across its various operational components. This method primarily considers the network's nodes as the central contributors to overall energy usage. The fundamental assumptions underpinning these calculations are derived from empirical findings obtained through a combination of public information sources, proprietary in-house crawlers, and publicly available open-source crawlers. These tools are instrumental in gathering essential data about the network's operational footprint. A critical determinant in estimating the hardware deployed within the network is the specific computational requirements necessary to run the client software. Based on these identified requirements, the energy consumption of the corresponding hardware devices is rigorously measured in certified test laboratories. This ensures accuracy and consistency in energy attribution. Furthermore, when calculating energy consumption, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is utilized, where available, to precisely identify and encompass all relevant implementations of any crypto-asset within the network's scope. These mappings are regularly updated, leveraging data from the Digital Token Identifier Foundation, to maintain current and accurate representations. The information pertaining to the hardware used and the total number of participants in the network is built upon assumptions, which are diligently verified using the best available empirical data. A general underlying premise is that network participants are largely economically rational actors. In adherence to a precautionary principle, conservative estimations are favored when uncertainties arise, leading to higher projected adverse impacts to ensure a robust and responsible assessment of energy consumption.
This comprehensive methodology allows for a detailed and conservative estimation of the NEAR Protocol network's energy footprint.
The methodology for assessing the Polygon network's energy consumption is primarily based on a comprehensive "bottom-up" approach, which identifies the various nodes as the fundamental contributors to the network's overall energy footprint. This detailed calculation relies on empirical data collected from diverse sources, including publicly available information platforms, open-source crawlers, and proprietary in-house developed crawlers. The key factors for estimating the hardware utilized across the network are determined by the specific requirements for operating the client software. To ensure the accuracy of these estimations, the energy consumption of the identified hardware devices is precisely measured in certified test laboratories.
An integral part of this energy accounting involves the use of the Functionally Fungible Group Digital Token Identifier (FFG DTI). This identifier is employed to accurately determine and encompass all implementations of the crypto-asset relevant to the scope of analysis. The mappings derived from the FFG DTI are regularly updated, drawing upon data from the Digital Token Identifier Foundation to maintain their currency and reliability. Information concerning the specific hardware deployed and the total number of participants within the network is based on assumptions that undergo rigorous, best-effort verification using available empirical data. It is generally assumed that participants in the network behave in a largely economically rational manner. Adhering to a precautionary principle, in situations where uncertainties exist, estimates for potential adverse impacts are conservatively adjusted upwards, ensuring a robust and cautious assessment.
Crucially, due to Polygon's architectural design as a Layer 2 scaling solution for Ethereum, its energy consumption calculation incorporates a shared security model. Consequently, a proportional share of the Ethereum network's energy consumption is also attributed to Polygon, acknowledging Ethereum's foundational role in providing security to the Layer 2 solution. This specific proportion of Ethereum's energy usage is quantitatively determined based on the gas consumption on the Ethereum network. While the documents mention reliance on "public information sites" and the "Digital Token Identifier Foundation" for data, they do not provide specific URLs for these external resources.
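The gas-based attribution described above can be sketched as the sum of the network's own node consumption and a gas-weighted share of Ethereum's consumption. The function and all numbers below are hypothetical illustrations of the stated principle, not figures from this report.

```python
# Sketch: total energy for an L2 that inherits security from Ethereum,
# combining its own node energy with a gas-proportional share of
# Ethereum's energy. All inputs here are hypothetical.

def l2_total_energy_kwh(own_nodes_kwh, eth_energy_kwh,
                        l2_gas_used, eth_total_gas):
    """Own node consumption plus the Ethereum share attributed by
    the L2's fraction of total gas consumed on Ethereum."""
    attributed = eth_energy_kwh * (l2_gas_used / eth_total_gas)
    return own_nodes_kwh + attributed

# Hypothetical: 500 kWh/a of own nodes, Ethereum at 10,000 kWh/a,
# with the L2 responsible for 5% of Ethereum gas consumption.
print(l2_total_energy_kwh(500.0, 10_000.0, 5, 100))
```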
To calculate the energy consumption of the Solana blockchain network, a "bottom-up" methodology is utilized, placing the network nodes at the center of the analysis. This approach relies on identifying the number of active participants and the specific hardware requirements necessary to run the network's client software. Data collection involves a variety of sources, including open-source web crawlers, internal monitoring tools developed by the legal entities, and public information websites. By analyzing these data points, researchers can estimate the hardware profiles of the various nodes operating globally. To ensure accuracy, the energy consumption of typical hardware devices is measured within certified laboratory environments, providing a baseline for the power usage of each node. Furthermore, the methodology incorporates data from the Digital Token Identifier Foundation to map all implementations of the assets within the network's scope. When specific hardware data is not directly observable, assumptions are made based on the principle of economic rationality, assuming participants optimize their setups for cost-efficiency while meeting software specifications. In instances of uncertainty, a precautionary principle is applied, favoring conservative estimates that likely overstate the environmental impact rather than underestimating it. This ensures that the reported energy footprint represents a credible upper bound of actual consumption. The total network consumption is determined by aggregating the energy needs of all identified nodes, accounting for both the computational requirements of processing transactions and the energy consumed by hardware in an idle or supportive state. This rigorous framework allows for a comprehensive assessment of the network’s total power requirements over a defined reporting period, providing a transparent view of the operational costs associated with maintaining the distributed ledger's infrastructure.
To determine the energy consumption of the Zksync network, a comprehensive methodology is applied that aggregates data from various infrastructure components. The process begins by assessing the total energy requirements of the blockchain environment, considering all participants involved in transaction processing and proof generation. A key element of this calculation involves the use of the Functionally Fungible Group Digital Token Identifier (FFG DTI) system. This identifier allows for the precise mapping of all implementations and activities associated with the network, ensuring that energy data is tracked accurately across different protocols and platforms. The data used for these assessments is frequently updated based on records from the Digital Token Identifier Foundation. In instances where direct measurements are unavailable, the methodology relies on a set of standardized assumptions regarding hardware efficiency and the number of active participants. These assumptions are grounded in the principle of economic rationality, positing that participants will optimize their operations for cost-effectiveness. However, to ensure environmental integrity, a "precautionary principle" is adopted. This means that when there is uncertainty in the data or the empirical evidence, the model leans toward more conservative estimates, which generally result in higher projected figures for energy consumption. This rigorous approach aims to capture the full scope of the network's ecological footprint, from the off-chain computation performed by sequencers and provers to the finality achieved through the Ethereum mainnet. By verifying these assumptions with the best available empirical data, the methodology provides a robust framework for understanding how Layer 2 scaling solutions interact with global energy resources.
Key energy sources and methodologies
WOO is present on the following networks: Avalanche, Binance Smart Chain, Ethereum, Fantom, Linea, Near Protocol, Polygon, Solana, Zksync.
The methodology for determining the key energy sources and the proportion of renewable energy utilized by the Avalanche blockchain network relies on a multi-pronged approach that integrates geographical data with energy mix statistics. To ascertain the percentage of renewable energy consumption, the initial step involves accurately identifying the geographical locations of the network's nodes. This crucial data is gathered through a combination of public information sites, advanced open-source crawlers, and proprietary in-house crawlers developed specifically for this purpose. In instances where comprehensive geographical distribution information for the nodes is not readily available, the methodology pivots to utilizing 'reference networks.' These reference networks are carefully selected for their comparability to Avalanche in terms of their incentivization structures and underlying consensus mechanisms, ensuring that the estimated renewable energy mix remains relevant and reflective of similar blockchain operations. Once the geographical data for the nodes (either directly identified or inferred from reference networks) is compiled, this geo-information is meticulously merged with comprehensive public data sets on electricity generation. A primary source for this integration is the data provided by Our World in Data, which offers detailed insights into the global energy landscape. The energy intensity of the network is then calculated as the marginal energy cost incurred for processing one additional transaction. This granular measurement provides a precise understanding of the energy overhead per unit of network activity. The specific datasets and sources referenced for this methodology include: Ember (2025) and the Energy Institute - Statistical Review of World Energy (2024), both of which undergo significant processing by Our World in Data. 
The dataset titled “Share of electricity generated by renewables – Ember and Energy Institute” is a key input, comprising original data from Ember’s “Yearly Electricity Data Europe” and “Yearly Electricity Data,” alongside the Energy Institute’s “Statistical Review of World Energy.” This information is publicly accessible at Share of electricity generated by renewables – Ember and Energy Institute.
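The merge of node geolocation data with the regional renewable-share dataset can be sketched as a node-weighted average. The region codes, node counts, and renewable fractions below are placeholders standing in for the crawler output and the Our World in Data figures.

```python
# Sketch: node-weighted renewable share, merging hypothetical node
# geolocations with regional renewable-electricity fractions (as would
# be drawn from the Ember / Energy Institute dataset referenced above).

nodes_by_region   = {"DE": 30, "US": 50, "SG": 20}       # node counts
renewable_share   = {"DE": 0.52, "US": 0.23, "SG": 0.04}  # fraction renewable

def network_renewable_share(nodes, shares):
    """Renewable fraction of network consumption, weighting each
    region's grid mix by the number of nodes located there."""
    total_nodes = sum(nodes.values())
    weighted = sum(count * shares[region] for region, count in nodes.items())
    return weighted / total_nodes

print(round(network_renewable_share(nodes_by_region, renewable_share), 3))
```

Where node locations cannot be observed directly, the same calculation would be run over the geographic distribution of a comparable reference network, as described above.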
To ascertain the proportion of renewable energy utilized by the Binance Smart Chain (BSC) network, a detailed methodology focuses on identifying the geographical distribution of its operational nodes. This process begins with leveraging a variety of data sources, including public information websites, general open-source crawlers, and specialized in-house developed crawlers. These tools collectively help pinpoint the physical locations where the network's nodes are hosted. The precise geographic distribution of these nodes is a crucial piece of information for accurately assessing renewable energy integration.
In instances where comprehensive information regarding the geographic distribution of nodes is unavailable or insufficient, the methodology incorporates a fallback mechanism. This involves using reference networks that exhibit comparable characteristics in terms of their incentivization structures and underlying consensus mechanisms. By analyzing the renewable energy usage patterns of these similar networks, an informed estimate can be made for BSC. Once geographical data for the nodes (either direct or inferred from reference networks) is established, this geo-information is meticulously merged with publicly accessible data from Our World in Data. This external dataset provides crucial insights into the share of electricity generated by renewables globally, drawing from sources like Ember (2025) and the Energy Institute’s Statistical Review of World Energy (2024). The integration of this data allows for a granular understanding of the renewable energy mix at the node locations.
Furthermore, the energy intensity of the network is calculated as the marginal energy cost with respect to one additional transaction. This metric quantifies the energy expenditure incurred for each incremental transaction processed on the network, providing a measure of its operational efficiency from an energy perspective. The consistent use of reputable public data sources and a robust methodology ensures transparency and accuracy in reporting the renewable energy profile of the Binance Smart Chain network.
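The per-transaction energy intensity metric can be sketched as below. Note one labeled simplification: the marginal cost is approximated here by the average energy per transaction, which is a common shortcut rather than the report's exact estimator; the inputs are hypothetical.

```python
# Sketch: energy intensity per transaction. The marginal cost of one
# additional transaction is approximated by the annual average -- a
# simplifying assumption, not necessarily the exact estimator used.

def energy_intensity_kwh_per_tx(total_energy_kwh, annual_tx_count):
    """Approximate kWh attributable to one additional transaction."""
    return total_energy_kwh / annual_tx_count

# Hypothetical: 8,760 kWh/a spread over one million transactions.
print(energy_intensity_kwh_per_tx(8760.0, 1_000_000))
```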
To ascertain the proportion of renewable energy utilized by the Ethereum network, a specific set of methodologies is applied. The initial step involves pinpointing the geographical locations of the network's nodes. This crucial geo-information is gathered through various means, including publicly available information sites, as well as both open-source and internally developed crawlers designed to scan the network. In instances where comprehensive geographical data for nodes is not directly accessible, the analysis resorts to leveraging "reference networks." These are comparable networks chosen for their similar incentivization structures and consensus mechanisms, providing a proxy for node distribution. Once the geo-information is established, it is then integrated and cross-referenced with public data obtained from "Our World in Data." This comprehensive dataset offers insights into the energy mixes and renewable energy penetration across different regions globally. The final calculation of energy intensity is defined as the marginal energy cost incurred for processing one additional transaction on the network. This approach allows for an estimation of the energy footprint associated with scaling the network's transactional volume. For detailed information and the underlying data sources on the share of electricity generated by renewables, relevant information can be found through sources such as Ember (2025) and the Energy Institute - Statistical Review of World Energy (2024), with further processing by Our World in Data, accessible via Share of electricity generated by renewables – Ember and Energy Institute.
Regarding the key energy sources and methodologies for the Fantom network, the provided documentation primarily details the methodology for calculating energy consumption rather than specifying the direct energy sources (e.g., electricity grid mix, renewable percentages) powering the network's operations. The approach to quantifying energy usage is described as a "bottom-up" methodology, where individual operational nodes are identified as the central elements contributing to the network's energy demand. This calculation process is informed by empirical data gathered from public information sites, as well as both open-source and internally developed crawlers. Hardware specifications necessary for running the client software serve as the main criteria for estimating the equipment used across the network. The energy consumption of these hardware components is quantified through measurements conducted in certified test laboratories. For a comprehensive assessment, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is used to identify all relevant implementations of the asset, with regular updates to these mappings from the Digital Token Identifier Foundation. Assumptions regarding hardware deployment and participant numbers are carefully vetted with empirical data, operating under the premise of economic rationality and a conservative estimation approach in cases of doubt. Therefore, while the methodology for measuring consumption is thoroughly outlined, explicit details on the specific types of energy sources remain undetailed within the given context. No specific external document links are available within the provided context for this section.
To ascertain the proportion of renewable energy utilized by the Linea network, a detailed methodology focuses on pinpointing the geographical distribution of its operational nodes. This process involves the meticulous determination of node locations through a combination of publicly available information sites, proprietary in-house crawlers, and open-source crawling tools. In instances where specific geographic data for Linea's nodes is not readily available, the methodology resorts to leveraging data from comparable reference networks. These reference networks are carefully selected based on similarities in their incentivization structures and consensus mechanisms, providing a proxy for estimating the node distribution. Once the geo-information for the nodes is established, it is then integrated with comprehensive public data sets provided by Our World in Data. These datasets offer insights into the share of electricity generated by renewables in different regions globally. The renewable energy proportion for the network is derived from this combined data. Additionally, the energy intensity of the Linea network is quantified as the marginal energy cost incurred for processing one additional transaction. This approach helps to understand the energy footprint on a per-transaction basis. The primary data sources for determining the share of electricity generated by renewables are compiled by Ember and the Energy Institute, specifically their "Yearly Electricity Data Europe," "Yearly Electricity Data," and "Statistical Review of World Energy." This methodology allows for a comprehensive assessment of the network's reliance on renewable energy. Share of electricity generated by renewables - Ember and Energy Institute.
To accurately determine the proportion of renewable energy utilized by the NEAR Protocol network, a systematic methodology is employed focusing on the geographic distribution of its operational nodes. The initial step involves identifying the precise locations of these nodes using a combination of public information sites, advanced open-source crawlers, and internally developed specialized crawlers. This comprehensive data collection ensures a broad and accurate understanding of the physical presence of the network's infrastructure. In instances where precise geographic information for certain nodes might be unavailable, the methodology incorporates a fallback mechanism. It leverages data from reference networks that are deemed comparable to NEAR Protocol in terms of their incentivization structures and underlying consensus mechanisms. This comparative analysis helps to fill data gaps and provide a reasonable proxy for renewable energy usage in such cases. Once the geographical data for the nodes is established, this geo-information is meticulously integrated with publicly available datasets from reputable sources, notably "Our World in Data." This integration allows for the correlation of node locations with regional energy grid compositions and the prevalence of renewable energy sources in those areas. The final aspect of this methodology involves calculating the energy intensity of the network. This is defined as the marginal energy cost incurred with respect to processing one additional transaction on the NEAR Protocol blockchain. This metric provides a granular view of the energy efficiency per unit of network activity. For detailed information regarding the underlying energy data, the following sources are utilized: Share of electricity generated by renewables – Ember and Energy Institute. This rigorous approach ensures a transparent and verifiable assessment of renewable energy integration within the network's operations.
The available documentation details the methodologies for calculating the Polygon network's energy consumption, but it does not explicitly identify the key energy sources (e.g., renewable vs. non-renewable electricity, specific grid mixes) that power its underlying infrastructure. Instead, the focus is on the methodology of consumption assessment. The energy calculation employs a "bottom-up" approach, which considers individual nodes as the primary units of energy consumption within the network. This methodology draws on empirical findings from various data points, including public information sites, open-source crawlers, and proprietary in-house developed crawlers, to estimate the hardware utilized across the network.
The primary determinants for estimating the hardware's energy usage are the computational requirements for running the client software. The energy consumption of these specific hardware devices is meticulously measured and verified in certified test laboratories to ensure precise data collection. To accurately scope all relevant implementations of the crypto-asset for energy calculation, the Functionally Fungible Group Digital Token Identifier (FFG DTI) is utilized, with its mappings regularly updated through data from the Digital Token Identifier Foundation. Assumptions regarding the hardware in operation and the total count of network participants are diligently verified against empirical data, operating under the premise that participants are largely economically rational. In line with a precautionary principle, any uncertainties default to conservative estimates, leaning towards higher figures for potential adverse impacts.
Significantly, as Polygon functions as a Layer 2 scaling solution for Ethereum, its energy consumption calculation also integrates a portion of the Ethereum network's energy usage. This inclusion acknowledges Ethereum's fundamental role in providing security to Polygon. The specific proportion attributed is determined by the gas consumption on the Ethereum network, ensuring a comprehensive view of Polygon's energy demand, considering its reliance on the main Layer 1 chain. While these methodologies provide a clear framework for quantifying energy use, specific details regarding the actual sources of this energy are not elaborated upon in the provided documents, nor are any direct links to external documents specifying these sources or methodologies furnished.
The determination of energy sources for the Solana blockchain network involves a sophisticated geolocation mapping of the global node infrastructure. By utilizing internal and open-source crawlers, the physical locations of validator nodes are identified. Once the geographic distribution is established, this information is cross-referenced with regional energy data to calculate the percentage of renewable energy utilized by the network. For regions where specific node data is unavailable, researchers utilize reference networks that share similar consensus mechanisms and incentive structures as proxies to estimate the geographic spread of the infrastructure. The primary data source for these regional energy profiles is the Share of electricity generated by renewables dataset provided by Our World in Data, which incorporates research from Ember and the Energy Institute. This dataset provides yearly electricity data that allows for a granular assessment of how much of the network's power is derived from wind, solar, hydro, and other renewable sources. In addition to the total percentage of green energy, the methodology focuses on energy intensity, which is defined as the marginal energy cost required to process a single additional transaction on the network. This figure helps quantify the efficiency of the blockchain's resource usage relative to its utility. By integrating global energy statistics with real-time node distribution data, the network can report a more accurate picture of its sustainability, currently indicating that a significant portion of its operational energy comes from renewable sources, reflecting the broader global transition toward cleaner power grids.
The identification of key energy sources for the Zksync network relies on determining the geographic distribution of its infrastructure. This is achieved through a combination of public information portals, open-source web crawlers, and proprietary software designed to locate the nodes and servers supporting the network. When precise geographic data for specific nodes is missing, the methodology utilizes reference networks that share similar consensus mechanisms and incentive structures to estimate location patterns. This geographical data is then integrated with statistical information from Share of electricity generated by renewables - Ember and Energy Institute to calculate the proportion of renewable energy being utilized by the network's participants. This dataset provides a global view of electricity generation trends, allowing for a more accurate assessment of whether the power consumed comes from sustainable or traditional sources. The energy intensity of the network is further refined by calculating the marginal energy cost associated with each additional transaction. This approach moves beyond simple averages, providing insight into the incremental environmental impact of network activity. By merging internal node telemetry with external datasets like those from the Energy Institute, the analysis can distinguish between regions with high renewable penetration and those still reliant on fossil fuels. This level of detail is essential for a transparent view of the network's sustainability profile, ensuring that the environmental benefits of Layer 2 scaling are documented alongside the specific energy mix of the underlying infrastructure.
Key GHG sources and methodologies
WOO is present on the following networks: Avalanche, Binance Smart Chain, Ethereum, Linea, Near Protocol, Polygon, Solana, Zksync.
The methodology employed to determine the Greenhouse Gas (GHG) emissions associated with the Avalanche blockchain network involves a detailed process of locating network infrastructure and integrating this geographical data with carbon intensity statistics. The initial step is to precisely identify the locations of the network's nodes, a task accomplished through the diligent use of public information sites, sophisticated open-source crawlers, and specialized in-house crawlers. This geographical mapping is fundamental to understanding the specific energy grids from which the nodes draw their power. In situations where direct geographical information on node distribution is insufficient, the methodology relies on 'reference networks.' These are selected based on their structural similarities to Avalanche, particularly concerning their incentivization mechanisms and consensus protocols, ensuring that the estimates are as representative as possible. The collected geo-information, whether direct or inferred, is then carefully integrated with public data regarding the carbon intensity of electricity generation. A significant source for this critical data is Our World in Data, which provides comprehensive global information on electricity generation’s carbon footprint. The GHG intensity of the network is quantified as the marginal emission generated per additional transaction processed. This metric allows for a precise evaluation of the environmental impact as network activity scales. The foundational data and citations for this methodology include: Ember (2025) and the Energy Institute - Statistical Review of World Energy (2024), which have been extensively processed by Our World in Data. 
The specific dataset used is titled “Carbon intensity of electricity generation – Ember and Energy Institute,” drawing original data from Ember’s “Yearly Electricity Data Europe” and “Yearly Electricity Data,” as well as the Energy Institute’s “Statistical Review of World Energy.” This crucial resource for carbon intensity data is available under a CC BY 4.0 license at Carbon intensity of electricity generation – Ember and Energy Institute.
The methodology for determining the Greenhouse Gas (GHG) Emissions associated with the Binance Smart Chain (BSC) network, much like the energy consumption assessment, places a strong emphasis on geographically situating its operational nodes. The initial step involves identifying the physical locations of these nodes, which is achieved through a combination of public information sites, open-source crawlers, and specialized in-house developed crawlers. Accurately mapping these locations is fundamental, as regional electricity mixes and their associated carbon footprints vary significantly.
In situations where detailed geographical information for all nodes is not readily available, the methodology incorporates a pragmatic approach. This involves utilizing reference networks that share similar characteristics, specifically in their incentivization structures and consensus mechanisms. By studying these comparable networks, reasonable inferences can be made about the likely geographic distribution and, consequently, the emissions profile of BSC's nodes. Once the geographic data is gathered or estimated, it is then meticulously integrated with publicly available information from Our World in Data. This authoritative dataset provides critical data on the carbon intensity of electricity generation across various regions, compiling information from sources such as Ember (2025) and the Energy Institute’s Statistical Review of World Energy (2024).
This integration allows for the calculation of GHG emissions based on the electricity consumption at specific node locations and the carbon intensity of those regional grids. The intensity of GHG emissions for the network is specifically calculated as the marginal emission with respect to one additional transaction. This metric quantifies the increase in GHG emissions for each incremental transaction processed on the network, offering a direct measure of its environmental impact per unit of activity. The entire process adheres to a principle of transparency, utilizing established external data sources and a consistent approach to ensure the reported GHG emissions are as accurate and comprehensive as possible, always acknowledging that the data from Our World in Data is licensed under CC BY 4.0.
The methodology for determining the Greenhouse Gas (GHG) emissions of the Ethereum network closely mirrors the approach used for energy consumption, focusing on identifying emission sources and their quantification. The initial and fundamental step involves precisely identifying the geographical locations of the network's operational nodes. This data collection is facilitated through a combination of publicly available information, as well as specialized open-source and proprietary crawlers designed to actively discover and map node distributions across the globe. Should there be an absence of specific geographic information for the nodes, the analysis intelligently defaults to utilizing "reference networks." These are carefully selected networks that exhibit comparable characteristics in terms of their incentivization structures and consensus mechanisms, providing a basis for estimating the geographic spread when direct data is unavailable. This collected geo-information is then meticulously integrated with publicly accessible data from "Our World in Data." This integration allows for the application of regional carbon intensity factors to the estimated energy consumption, thereby enabling the calculation of associated GHG emissions. The overall GHG intensity is quantified as the marginal emission generated per additional transaction processed on the network, offering a metric for the environmental impact of increased network activity. For detailed information and original data regarding the carbon intensity of electricity generation, sources include Ember (2025) and the Energy Institute - Statistical Review of World Energy (2024), processed by Our World in Data, available at Carbon intensity of electricity generation – Ember and Energy Institute. This resource is licensed under CC BY 4.0.
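The application of regional carbon-intensity factors to node-level energy use can be sketched as follows. The regional energy figures and gCO2e/kWh intensities are illustrative stand-ins for the crawler data and the Ember / Energy Institute dataset cited above.

```python
# Sketch: network GHG estimate from per-region energy use and the
# regional carbon intensity of electricity (gCO2e/kWh). All figures
# are illustrative placeholders.

energy_by_region_kwh = {"DE": 12_000.0, "US": 20_000.0}
carbon_intensity     = {"DE": 380.0,    "US": 370.0}   # gCO2e per kWh

def network_emissions_tco2e(energy_kwh, intensity_g_per_kwh):
    """Sum region-level emissions and convert grams to tonnes CO2e."""
    grams = sum(kwh * intensity_g_per_kwh[region]
                for region, kwh in energy_kwh.items())
    return grams / 1e6  # g -> tonnes

print(network_emissions_tco2e(energy_by_region_kwh, carbon_intensity))
```

The marginal GHG intensity per transaction would then follow by dividing this total by transaction volume, mirroring the energy-intensity metric described above.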
The methodology for determining the Greenhouse Gas (GHG) emissions associated with the Linea network closely mirrors the approach used for energy consumption, emphasizing a data-driven estimation process. The initial step involves precisely identifying the geographical locations of the network's operational nodes. This is achieved through a combination of public information platforms, in-house developed crawlers, and readily available open-source crawling technologies. In scenarios where direct information on the geographic spread of Linea's nodes is insufficient, data from reference networks with similar incentivization frameworks and consensus mechanisms is employed as an approximation. This geo-spatial information, once gathered, is then systematically integrated with public datasets from Our World in Data, which provide detailed insights into the carbon intensity of electricity generation across various regions. This integration allows for the calculation of the network's total GHG emissions based on the energy mix of the regions where its nodes are located. Furthermore, the GHG intensity is calculated as the marginal emission produced for each additional transaction processed on the network, offering a per-transaction perspective on its environmental impact. The principal data sources for the carbon intensity of electricity generation are provided by Ember and the Energy Institute, derived from their "Yearly Electricity Data Europe," "Yearly Electricity Data," and "Statistical Review of World Energy." This rigorous methodology aims to provide a transparent and conservative estimation of the Linea network's climate footprint. Source: Carbon intensity of electricity generation – Ember and Energy Institute.
The assessment of Greenhouse Gas (GHG) Emissions for the NEAR Protocol network follows a structured methodology that prioritizes the precise geographical identification of its operational nodes. This process begins by actively determining the locations of all network nodes, utilizing a combination of publicly accessible information sites, sophisticated open-source crawling tools, and specialized crawlers developed in-house. This multi-pronged data acquisition strategy aims to gather comprehensive location data for the network's infrastructure. Should specific geographic distribution data for certain nodes prove unobtainable, the methodology incorporates the use of reference networks. These are carefully selected based on their similarity to the NEAR Protocol in terms of their incentive structures and consensus mechanisms, allowing for an informed estimation of GHG emissions in the absence of direct data. This comparative approach ensures that even with limited direct information, a credible assessment can still be made. The collected geo-information is subsequently integrated with extensive public datasets, prominently featuring data from "Our World in Data." This integration enables the cross-referencing of node locations with regional carbon intensity data of electricity generation, providing a basis for calculating the associated GHG emissions. A crucial metric derived from this methodology is the GHG intensity, which quantifies the marginal emission attributable to processing one additional transaction on the NEAR Protocol network. This metric offers insights into the environmental footprint per unit of network activity. For detailed data on carbon intensity, the following resource is referenced: Carbon intensity of electricity generation – Ember and Energy Institute. This rigorous and transparent methodology underpins the network's efforts to measure and report its environmental impact.
The available documentation describes in detail the methodology for calculating the energy consumption of the Polygon blockchain network. That methodology rests on a "bottom-up" approach covering node energy demand, hardware requirements, and a proportional allocation of Ethereum's energy consumption reflecting Polygon's Layer 2 architecture, and it is robust for quantifying electrical energy usage. When it comes to key Greenhouse Gas (GHG) sources and their associated methodologies, however, the available information is notably insufficient: the documents contain no specific data or discussion of the direct or indirect GHG emissions generated by the Polygon network's operations.
Crucially, there is no mention of the types of emissions (e.g., Scope 1 for direct emissions, Scope 2 for indirect emissions from purchased electricity, or Scope 3 for other indirect emissions within the value chain), nor any dedicated methodologies for calculating, monitoring, or reporting these GHG emissions. The absence of information on the energy mix that powers the network's validators and underlying infrastructure – whether it is predominantly from renewable sources, fossil fuels, or a specific national grid mix – makes it impossible to determine the carbon intensity of the energy consumed. Without such details, a comprehensive assessment of GHG sources cannot be made.
While the methodology for energy consumption includes a "precautionary principle" to make higher estimates for "adverse impacts," these impacts are not explicitly defined or quantified in terms of GHG emissions. There is no information provided on specific conversion factors used to translate energy consumption into carbon dioxide equivalents or other greenhouse gases. The documents do not offer any external links or references to dedicated environmental impact assessments or GHG reporting standards followed by the Polygon network. Consequently, based solely on the provided information, it is not possible to identify the key GHG sources or the specific methodologies employed for their quantification within the Polygon ecosystem.
Quantifying the greenhouse gas (GHG) emissions of the Solana blockchain network requires a methodology focused on carbon intensity and the geographic footprint of its decentralized nodes. Similar to the energy source analysis, the process begins by locating active nodes using a combination of public data and specialized web crawling technology. This geographic information is critical because the carbon footprint of electricity varies significantly between different jurisdictions depending on their local power generation mix. For nodes that cannot be precisely located, the analysis uses data from comparable blockchain networks to ensure the estimation remains as complete as possible. The carbon intensity of the electricity used by these nodes is derived from the Carbon intensity of electricity generation dataset, accessible via Our World in Data. This dataset, which is licensed under CC BY 4.0, provides essential metrics on the amount of CO2 equivalent emitted per kilowatt-hour of electricity produced in various countries. By merging node locations with these carbon intensity values, the network can calculate its Scope 2 emissions, which represent the indirect emissions from the generation of purchased electricity. The methodology also focuses on GHG intensity, measuring the marginal emissions generated by one additional transaction on the blockchain. This allows for a performance-based assessment of the network's environmental impact. The results are typically reported in tonnes of CO2 equivalent (tCO2e), providing a standardized metric that allows for comparison with other industries and financial systems. This data-driven approach ensures that the network’s environmental disclosures are rooted in empirical global energy statistics and verifiable infrastructure data.
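The per-transaction metric mentioned above can be illustrated with a simple calculation. The sketch below approximates the marginal emission of one additional transaction as the average emission per transaction; the annual emissions and transaction counts are invented illustration figures, not Solana network data, and a true marginal figure would come from a fitted consumption model rather than this linear stand-in.

```python
# Hypothetical illustration of the "marginal emission per additional
# transaction" metric; the figures below are assumed, not measured.

def marginal_intensity_gco2e(total_emissions_g, tx_count, extra_tx=1):
    """Approximate the marginal emission of processing `extra_tx` more
    transactions as the average emission per transaction, scaled.
    This linear approximation is the simplest stand-in for a fitted
    marginal-consumption model."""
    return total_emissions_g / tx_count * extra_tx

annual_emissions_g = 5_000_000.0   # assumed gCO2e per year
annual_transactions = 20_000_000   # assumed yearly transaction count

per_tx = marginal_intensity_gco2e(annual_emissions_g, annual_transactions)
print(f"Marginal intensity: {per_tx:.4f} gCO2e per transaction")
```

Reporting the result in gCO2e (or aggregating to tonnes of CO2 equivalent, tCO2e) yields the standardized, comparable metric the disclosure refers to.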
The methodology for evaluating Greenhouse Gas (GHG) emissions for Zksync mirrors the geographic assessment used for energy sources, focusing on the carbon intensity of the power grids where the network's nodes are situated. By identifying the locations of validators and sequencers through specialized crawlers and public data, the analysis assigns specific emission factors based on regional electricity profiles. These profiles are derived from the Carbon intensity of electricity generation - Ember and Energy Institute dataset, which offers comprehensive information on the grams of CO2 equivalent produced per kilowatt-hour across different nations. The methodology categorizes emissions into different scopes, typically focusing on Scope 2 emissions related to purchased electricity for running the hardware. To provide a granular view of the network's impact, the GHG intensity is expressed as the marginal emission generated by a single additional transaction on the blockchain. This allows users and developers to understand the carbon footprint of their specific interactions with the protocol. In cases where node data is sparse, the model employs reference network comparisons to ensure that the global footprint is not underestimated. The integration of this geo-information with the data provided by Ember and the Energy Institute ensures that the final figures reflect the most current and peer-reviewed information available in the field of energy statistics. This evidence-based approach to carbon accounting allows the network to maintain a high standard of transparency and align with international sustainability reporting standards.