
EigenDA: AVS Cryptoeconomic Risk Analysis

Abstract

This article by Tokensight provides a technical overview of EigenDA, a data availability AVS on EigenLayer. In this piece, we explore its consensus architecture, some objective and intersubjective corruption scenarios that may befall it, and its innovative anti-congestionary design for effective data availability.

EigenDA Breakdown

EigenDA, developed by the team behind EigenLayer, is a data availability solution that enables Ethereum rollups to store and scale transaction data securely. It aims to achieve higher throughput and scalability at lower transaction costs than current alternatives.

EigenDA sets a new standard for secure data availability by deriving its cryptoeconomic security from restaked ETH via EigenLayer in an objective-fault context, and from EIGEN staking in an intersubjective-fault context. It employs a dual staking quorum consisting of the same restaked ETH via EigenLayer and the native rollup token, ROLLUP.

Understanding EigenDA Operator Network

The role of the EigenDA Operator Network is to serve as validators within EigenDA’s Byzantine Fault Tolerant (BFT) consensus mechanism. To deter malicious behavior, EigenDA leverages the security of Ethereum through EigenLayer, relying on its Proof-of-Stake (PoS) system. Validators must stake capital in the form of bEIGEN tokens.

If objective misbehavior is detected, an operator’s bEIGEN stake will be slashed. In cases of intersubjective misbehavior, an operator’s bEIGEN1 token will be forked into bEIGEN2, resulting in the loss of access to the former and the inability to redeem the latter.

EigenDA Consensus Architecture

BFT is the consensus mechanism enabling a network of validators to work together and update the blockchain's state. Each validator's voting power is proportional to their bonded stake, thus tying network security to the stake's value rather than solely the number of validators. This approach incentivizes validators to commit higher stakes to ensure robust security.

Here’s the high-level overview of how BFT works within EigenDA:

  1. Proposer Selection: Each round, a proposer is chosen based on a deterministic round-robin algorithm considering each validator's voting power.

  2. Proposal: The proposer broadcasts a block proposal containing a batch of transactions to all validators.

  3. Pre-vote: Validators broadcast a pre-vote for the proposed block if valid, or nil if no valid proposal is received within a set time.

  4. Pre-commit: If a validator receives >2/3 pre-votes for the same block, it broadcasts a pre-commit for that block. If >2/3 pre-votes are nil, it pre-commits nil.

  5. Commit: If a validator receives >2/3 pre-commits for the same block, it commits the block to its local blockchain, finalizing the block. If >2/3 pre-commits are nil, it moves to the next round.

  6. Round Progression: If a proposer fails to get sufficient pre-votes or pre-commits, the protocol progresses to the next round with a new proposer until a block is committed.
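
To make the >2/3 thresholds in steps 3 through 5 concrete, here is a minimal, stake-weighted tallying sketch in Python. It is purely illustrative: the `Vote` structure and `tally` function are our own simplifications, not EigenDA's actual consensus code.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Vote:
    validator: str
    stake: int                  # voting power, proportional to bonded stake
    block_hash: Optional[str]   # None represents a nil vote

def tally(votes: list[Vote], total_stake: int) -> Optional[str]:
    """Return the block hash that gathered >2/3 of total stake (None if the
    nil votes did), or raise if no quorum was reached this round."""
    weight = defaultdict(int)
    for vote in votes:
        weight[vote.block_hash] += vote.stake
    for block_hash, w in weight.items():
        if 3 * w > 2 * total_stake:   # strict >2/3 check, no floats
            return block_hash
    raise LookupError("no 2/3 quorum; progress to the next round")

# Four equally staked validators; three pre-vote for the same block.
votes = [
    Vote("v1", 25, "0xabc"),
    Vote("v2", 25, "0xabc"),
    Vote("v3", 25, "0xabc"),
    Vote("v4", 25, None),       # nil pre-vote
]
print(tally(votes, total_stake=100))  # -> 0xabc (75 of 100 > 2/3)
```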

And an overview of how EigenDA’s architecture works:

  1. Data Blob Request: A Rollup sequencer initiates the process by creating a block of transactions and requesting the dispersion of the associated data blob.

  2. Erasure Coding and Dispersal: The Disperser erasure-codes the data blob into shards, generates cryptographic proofs (KZG commitments and multi-reveal proofs), and disseminates these shards along with the proofs to the EigenDA operator nodes.

  3. Verification and Storage: The Node Operators verify the shards they receive against the KZG commitment using the multi-reveal proofs, store the verified data, and then acknowledge successful storage by sending a signature back for aggregation (this flow is sketched in code after the diagram below).

  4. Signature Aggregation: Once the data is verified and stored across the network, the collective signatures from the operators are aggregated, cementing the consensus on data availability and integrity.

  5. Availability Guarantee: This cycle ensures not only the availability of data but also its resilience against manipulation or loss, supporting the seamless operation of rollups on the network.

Image from https://docs.eigenlayer.xyz/eigenda/overview
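
Below is a minimal sketch of the disperse/verify/acknowledge flow pictured above, under two loud simplifications: SHA-256 hashes stand in for KZG commitments and multi-reveal proofs, and the Reed-Solomon erasure coding step is omitted entirely. All function names are hypothetical.

```python
import hashlib

def commit(shard: bytes) -> str:
    # Stand-in for a KZG commitment; the real system uses polynomial
    # commitments, whose opening proofs are constant-size and aggregatable.
    return hashlib.sha256(shard).hexdigest()

def disperse(blob: bytes, n_shards: int) -> list[tuple[bytes, str]]:
    """Split a blob into shards, each paired with its commitment. EigenDA
    additionally erasure-codes the blob so that a sufficiently large
    subset of shards can reconstruct it; that step is skipped here."""
    size = -(-len(blob) // n_shards)  # ceiling division
    shards = [blob[i * size:(i + 1) * size] for i in range(n_shards)]
    return [(shard, commit(shard)) for shard in shards]

def verify_and_ack(shard: bytes, commitment: str, operator: str) -> str:
    """An operator checks its shard against the commitment, stores it, and
    returns a signature (here, a tagged hash) for aggregation."""
    assert commit(shard) == commitment, "shard does not match commitment"
    return hashlib.sha256(f"{operator}:{commitment}".encode()).hexdigest()

packets = disperse(b"rollup batch #42 ...", n_shards=4)
acks = [verify_and_ack(s, c, f"op{i}") for i, (s, c) in enumerate(packets)]
print(f"{len(acks)}/4 operators attested to availability")
```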

Objective vs Intersubjective-Attributable Faults for EigenDA

In the recently published EIGEN: The Universal Intersubjective Work Token whitepaper, EigenLayer introduced three distinct ways in which faults can be attributed to a malicious party:

Objectively-Attributable Faults are faults that can be proven both mathematically and cryptographically, independent of subjective opinions. Examples include deterministic faults such as execution validity, where anyone can re-run a piece of code on a distributed VM and check whether a node's output is correct under predefined rules.

  • Example for DA:

Double-Signed Attestations: If a node signs two conflicting attestations, whether maliciously or not, it can be proven on-chain that the node has committed an attributable fault.
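
A sketch of why double-signing is objectively attributable: two conflicting attestations from the same operator over the same blob are self-contained, on-chain-verifiable evidence. The `Attestation` fields here are illustrative, and we assume the signatures themselves have already been verified.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    operator: str
    blob_id: int      # identifies the batch/blob being attested
    data_root: str    # root of the data the operator claims is available
    signature: str    # signature over (blob_id, data_root), assumed valid

def is_double_sign(a: Attestation, b: Attestation) -> bool:
    """Two valid signatures from one operator over the same blob_id but
    conflicting data roots are submittable on-chain as slashing evidence."""
    return (a.operator == b.operator
            and a.blob_id == b.blob_id
            and a.data_root != b.data_root)

a1 = Attestation("op7", 42, "0xaaa", "sig1")
a2 = Attestation("op7", 42, "0xbbb", "sig2")
print(is_double_sign(a1, a2))  # True -> objectively attributable fault
```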

Intersubjectively-Attributable Faults require a broad-based consensus agreement among active observers of the system. For example, verifying the accuracy of a price reported by an oracle depends on collective agreement, as it may not be immediately verifiable. Intersubjective staking involves achieving consensus among honest participants to confirm events that are not easily observable or occur off-chain.

  • Example for DA:

Data Withholding: Data could be withheld temporarily but then released, which might lead to disagreements on whether an attack occurred. To mitigate data-withholding issues, the system can specify a maximum allowable period for withholding data, such as up to 1 day, to clearly define what constitutes an attack.
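
A minimal sketch of how such a bound turns an ambiguous delay into a well-defined fault. The one-day window comes from the example above; the function and field names are assumptions.

```python
from datetime import datetime, timedelta
from typing import Optional

MAX_WITHHOLDING = timedelta(days=1)   # the agreed maximum withholding period

def is_withholding_attack(requested_at: datetime,
                          served_at: Optional[datetime],
                          now: datetime) -> bool:
    """Data released within the window is not an attack, even if delayed;
    data still unserved past the window is, by definition, one."""
    if served_at is not None:
        return served_at - requested_at > MAX_WITHHOLDING
    return now - requested_at > MAX_WITHHOLDING

t0 = datetime(2024, 6, 1, 12, 0)
print(is_withholding_attack(t0, t0 + timedelta(hours=20), now=t0 + timedelta(days=2)))  # False
print(is_withholding_attack(t0, None, now=t0 + timedelta(days=2)))                      # True
```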

Non-Attributable Faults occur when only the victim is aware of the fault, preventing third parties from conclusively determining whether a fault has occurred or if there is malicious intent within the system or by an individual. For example, in a secret-sharing system where the secret is revealed only after a predetermined period, collusion among nodes may lead to premature disclosure of the secret, which could be undetectable without external knowledge.

  • Example for DA:

Validator Collusion: When a group of validators colludes to discreetly approve incorrect data validations, the fault becomes non-attributable, as it is challenging for external observers to identify the responsible validators. This kind of collusion obscures fault origins, requiring sophisticated collusion-resistant solutions and enhanced decentralization.


In the context of intersubjective faults, assessing the profit from corrupting a DA service like EigenDA involves complex social dynamics. Instead of straightforward slashing, malicious operators will see their stake forked, a process governed by the social consensus within the EIGEN ecosystem. This highlights the critical role of consensus-based approaches in managing and resolving disputes over intersubjective faults.

Delving deeper into the relationship between data availability and intersubjective faults, it becomes clear that Liveness violations pose a more likely risk to EigenDA than Safety violations. Therefore, social consensus mechanisms and forking conditions must prioritize Liveness considerations when developing risk mitigation strategies and modeling potential attack vectors for this type of AVS.

Corruption Scenarios for EigenDA Network

Cost of Corruption is defined as the cost the system imposes on an attacker, or group of colluding attackers, to successfully compromise EigenDA Network's security.

Liveness Tolerance Violation (>1/3 Stake Attack)

This scenario occurs when validators holding more than one-third of the network's stake interfere with the smooth operation of the system. Such interference can result in delayed or absent data availability attestations, impacting the network's ability to operate efficiently. This type of manipulation is classified as an Intersubjectively-Attributable Fault, since its impact occurs off-chain yet is broadly observable, and the malicious activity can include:

  • Data Relaying Censorship: Deliberately blocking or ignoring certain transactions, thus manipulating the visibility and processing of data. This selective interference can distort the network's perception of data availability.

  • Data Relaying Stalling: Intentionally slowing down the data verification and attestation processes. This action can increase latency, affecting the timeliness and reliability of data availability, and may lead users to question the system’s effectiveness.

Cost of Acquiring ≥1/3 Stake

Dual Staking Scenario (Restaked ETH and ROLLUP Staking): With $1.5B of restaked Ether in the EigenDA AVS contract on EigenLayer and an additional $1.5B of ROLLUP staked, an attacker or group of attackers would need to acquire at least $500M worth of restaked ETH and $500M of staked ROLLUP, totaling $1B in required capital to corrupt the network. The dual staking mechanism effectively doubles the cost of corruption compared to the solo restaked ETH or solo native token staking scenarios.

Safety Tolerance Violation (>2/3 Stake Attack)

This condition arises when validators control more than two-thirds of the network's stake, enabling them to manipulate the data validation process. Such a level of control can lead to the certification of false data as correct, which is a direct threat to the integrity of the network. This type of manipulation is classified as an Objectively-Attributable Fault—due to its deterministic validity and on-chain observable impact—where the malicious activity can include:

  • Data Attestation Corruption: The process of certifying incorrect or compromised data as valid and available, which can deceive the network and its users into relying on false information.

  • Data Attestation Double-Signing: Engaging in the signing of conflicting data attestations, thereby creating ambiguity and mistrust regarding the true availability and integrity of data.

Cost of Acquiring ≥2/3 Stake

Dual Staking Scenario (Restaked ETH and ROLLUP Staking): With $1.5B of restaked Ether in the EigenDA AVS contract on EigenLayer and an additional $1.5B of ROLLUP staked, an attacker or group of attackers would need to acquire at least $1B worth of restaked ETH and $1B of staked ROLLUP, totaling $2B in required capital to corrupt the network. This effectively doubles the cost of corruption compared to the solo restaked ETH or solo native token staking scenarios.
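
The arithmetic behind this scenario and the Liveness one above fits in a few lines. This is a first-order sketch only: it multiplies each quorum's stake by the threshold and ignores real-world acquisition frictions such as price slippage.

```python
def cost_of_corruption(restaked_eth_tvl: float, rollup_tvl: float,
                       threshold: float) -> float:
    """Under dual staking, an attacker must cross `threshold` of the stake
    in BOTH quorums, so the per-quorum acquisition costs add up."""
    return threshold * restaked_eth_tvl + threshold * rollup_tvl

ETH_TVL, ROLLUP_TVL = 1.5e9, 1.5e9   # $1.5B restaked ETH + $1.5B ROLLUP

print(f"${cost_of_corruption(ETH_TVL, ROLLUP_TVL, 1/3):,.0f}")  # $1,000,000,000 (liveness)
print(f"${cost_of_corruption(ETH_TVL, ROLLUP_TVL, 2/3):,.0f}")  # $2,000,000,000 (safety)
```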

Factors to Consider To Increase Cost of Corruption

  • Erasure Encoding Rate: Usually set between 10% and 50%, depending on the desired level of data redundancy and the storage capacity available across the node network. A higher rate increases redundancy, enhancing data protection and making it more challenging and costly for an attacker to compromise the data integrity, thus reducing their potential profit.

  • Proof-of-Custody: If an operator attests to and stores blobs without computing this proof, deliberate tampering with the system may be taking place.

  • Dual Quorum: System that requires two separate groups to agree on data availability, reducing the impact of any single group potentially engaging in double-signing.
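
A minimal sketch of the Dual Quorum rule, assuming, per the dual staking model above, that both the restaked-ETH quorum and the ROLLUP quorum must independently cross the signing threshold. Names and numbers are illustrative.

```python
def dual_quorum_confirms(eth_signed: float, eth_total: float,
                         rollup_signed: float, rollup_total: float,
                         threshold: float = 2 / 3) -> bool:
    """A blob is confirmed only when BOTH quorums independently cross the
    signing threshold, so corrupting a single quorum is never sufficient."""
    return (eth_signed / eth_total >= threshold
            and rollup_signed / rollup_total >= threshold)

# Even a fully corrupted ROLLUP quorum cannot confirm bad data on its own:
print(dual_quorum_confirms(0.5e9, 1.5e9, 1.5e9, 1.5e9))  # False
print(dual_quorum_confirms(1.2e9, 1.5e9, 1.2e9, 1.5e9))  # True
```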

Profit from Corrupting Stake

The potential Profit from an attack can be calculated as:

Profit = Value of Data* − Cost of Acquiring the Necessary Stake − Cost of Executing the Attack


*Although the value of data is somewhat subjective as a variable, the manipulation of financial outcomes based on corrupted data could potentially be exploited for financial leverage, in an indirectly monetary or even non-monetary way.


For example, if an attacker incurs a $2B cost to corrupt $500M worth of data, the direct financial outcome appears unprofitable. However, indirect benefits like strategic dominance or long-term market manipulation may provide a broader context that justifies such expenses in a DA scenario.
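
The same worked example in code; note that only the direct, monetary side of the equation is captured, so the indirect benefits discussed above fall outside it.

```python
def profit_from_corruption(value_of_data: float,
                           cost_of_stake: float,
                           cost_of_execution: float) -> float:
    # Direct profit only; strategic dominance or long-term market
    # manipulation are not captured by this formula.
    return value_of_data - cost_of_stake - cost_of_execution

# $500M of data vs. a $2B stake acquisition (execution cost ignored):
print(f"${profit_from_corruption(5e8, 2e9, 0.0):,.0f}")  # $-1,500,000,000
```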

Factors to Consider When Estimating and Reducing Profit from Corruption

  • Value and Sensitivity of Data: The value and sensitivity of the data being manipulated determine the potential profit.

  • Associated Costs: This includes both the acquisition cost of the necessary stake and operational expenses related to the attack.

  • Extraction and Recovery Mechanisms: The feasibility of deriving benefits and evading subsequent recovery efforts is crucial.

  • Legal and Reputational Risks: Potential legal consequences and reputational damages can significantly deter these attacks.

By considering both intersubjectively- and objectively-attributable faults, stakeholders can better understand the varied nature of potential attacks on EigenDA and develop more effective defense mechanisms.

EigenDA Anti-Congestionary Design Solutions

Possibly the biggest future constraint on EigenDA’s performance, as a cutting-edge DA solution for rollups, is network congestion at scale. The expected high demand for and usage of the blockspace this AVS provides to rollups may impact latency, transaction throughput, and congestion pricing.

  1. Fig. A – Rollups’ Early-Stage Demand for Blockspace: Initially, the demand for blockspace from rollups on EigenDA may be somewhat unpredictable, due to the novelty of the technology and the lack of a track record of reliable performance. As competition for blockspace amongst rollups is minimal at this early stage, demand will remain relatively homoskedastic as the blockspace capacity limit is approached.

  2. Fig. B – Rollups’ Mature-Stage Demand for Blockspace: In this scenario, initial demand is predictable, since there is now consistent adoption by rollups and a strong track record of service performance. However, as the blockspace capacity limit is approached, competition among rollups for transaction-data inclusion may pick up, to the point of rollups being willing to pay congestion fees, leading to more unpredictable demand variations (heteroskedasticity).

The EigenDA team is working on ways to dampen this heteroskedasticity of blockspace demand through innovative anti-congestionary pricing mechanisms as EigenDA matures.
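
As a toy illustration of the two regimes in Figs. A and B, the sketch below simulates a demand path whose volatility either stays flat (early stage) or grows with utilization (mature stage). The dynamics and coefficients are entirely our own assumptions.

```python
import random

def demand_path(steps: int, capacity: float, mature: bool) -> list[float]:
    """Early stage: constant noise around a drifting level (homoskedastic).
    Mature stage: noise scales up as utilization nears capacity
    (heteroskedastic), mimicking congestion-driven bidding wars."""
    level, path = 0.2 * capacity, []
    for _ in range(steps):
        utilization = level / capacity
        scale = (1 + 4 * utilization) if mature else 1.0
        shock = random.gauss(0.01 * capacity, 0.05 * capacity * scale)
        level = min(capacity, max(0.0, level + shock))
        path.append(level)
    return path

early = demand_path(200, capacity=10.0, mature=False)
mature = demand_path(200, capacity=10.0, mature=True)
```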


A few interesting components are in the works to mitigate this issue:

  • Horizontal Scalability of Nodes: Thanks to shared security, the cost of economic security on a per-node basis falls as more nodes enter the system, so performance increases while costs decrease;

  • Bandwidth Reservation for Rollups: This component allows rollups to reserve bandwidth (e.g., 1MB/s) to safeguard themselves against surge pricing and fluctuating gas fees that may result from increased or volatile blockspace demand (a toy pricing sketch follows this list);

  • Flexible Tokenomics: Rollups could benefit from the flexible tokenomics that come with EigenDA’s dual staking token model to pay for this bandwidth reserve, through customized token emissions or other related tokenomics methods.
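
A hypothetical sketch of how a bandwidth reservation could insulate a rollup from surge pricing. The pricing model, rates, and function name are our own illustration, not EigenDA’s published mechanism.

```python
def bandwidth_bill(reserved_mbps: float, used_mbps: float,
                   reserved_rate: float, spot_rate: float) -> float:
    """The rollup pays a flat rate on its reservation and the (possibly
    surging) spot rate only on usage beyond the reservation."""
    overflow = max(0.0, used_mbps - reserved_mbps)
    return reserved_mbps * reserved_rate + overflow * spot_rate

# With a 1 MB/s reservation, a congestion spike in the spot price only
# hits the 0.5 MB/s of unreserved overflow:
print(bandwidth_bill(1.0, 1.5, reserved_rate=10.0, spot_rate=50.0))  # 35.0
```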

Conclusion

As we wrap up this in-depth piece on EigenDA, several pivotal questions remain about its long-term viability and operational efficiency:

  1. Performance Metrics: Can EigenDA consistently achieve its ambitious 10MB/s data throughput and maintain its horizontal scaling model? How will its performance align with recent advancements, such as EIP-4844 (Proto-Danksharding)?

  2. Competitive Edge: Will EigenDA effectively outperform L1 solutions in terms of data publishing rates for various rollups?

  3. Faults' Nature: How effectively will intersubjectively-attributable and non-attributable faults, especially in high-stakes scenarios involving critical data validation, be managed? What kinds of scenarios should we monitor that may trigger these novel kinds of faults?

  4. Operator Dynamics: What will be the criteria for selecting operators for EigenDA? Is there a risk of centralization, and how diverse or concentrated will the operator set be?

  5. Stake Distribution: How will the stake be distributed among operators, and what implications will this have for network security and operator reputation?

  6. Attack Vectors: Are there more effective or damaging strategies for attacks beyond data attestation corruption? What other vulnerabilities could potentially be exploited?

  7. Code Complexity: As an AVS, how complex will EigenDA's underlying code be, and what might this complexity mean for system robustness and bug susceptibility?

These inquiries are crucial for the community to consider as EigenDA becomes fully operational in the coming months. Tokensight will continue to monitor EigenDA’s development and provide updates through further technical research. Follow us on X!
