
Lagrange Labs: AVS Cryptoeconomic Risk Analysis


  1. Abstract

  2. Breakdown

  3. Consensus Architecture

    3.1 ZK Coprocessor

    3.2 State Committees

  4. Lagrange Architectural Workflow

    4.1 ZK Coprocessor + Verifiable Database

    4.2 State Committees

  5. Objective vs Intersubjective Attributable Faults for Lagrange

    5.1 Objectively Attributable Faults

    5.2 Intersubjectively Attributable Faults

    5.3 Non-Attributable Faults

  6. Corruption Scenarios for Lagrange Network

    6.1 Corruption Analysis with Pooled Security

    6.1.1 Safety vs. Liveness

    6.1.2 Factors to Consider When Estimating Cost of Corruption & Profit from Corruption

    6.2 Corruption Analysis in an Intersubjective Staking World with Attributable Security

    6.2.1 Cryptoeconomic Security

    6.2.2 Strong Cryptoeconomic Security

  7. Lagrange Scenario Analysis: Impact of Data Size and Horizontal Scaling of Nodes on Proof and Query Processing Times

  8. Conclusion

1. Abstract

The present article by Tokensight has been written to provide a technical overview of Lagrange Labs' Hyper-Parallel ZK Coprocessor and State Committees, as AVSs on EigenLayer. In this piece, we’ll explore their consensus architectures, some potential objective and intersubjective corruption scenarios, and their innovative approaches to scaling onchain querying through off-chain ZK computations and trustless relaying of cross-chain proofs to optimistic rollups.

2. Lagrange Labs Breakdown

Lagrange Labs is comprised of two protocol services:

Lagrange's ZK Coprocessor (and Verifiable Database) aims to create a provable database from a subset of blockchain data, enabling efficient queries. Acting like a "coprocessor," it allows smart contracts to run intensive off-chain computations, which are then verified on-chain. The Coprocessor first preprocesses and indexes contract storage, transforming it into a SNARK-optimized data structure using a decentralized network of provers. When data is requested, it runs provable queries in parallel, similar to the MapReduce framework, allowing for efficient, large-scale SQL queries at a low cost to users. This setup can process storage on any EVM-based chain and provide cross-chain queries without bridges, enabling computations like averaging prices across different L2s and returning results on Ethereum. The system's natural parallelizability allows it to horizontally scale across numerous operators through both parallel computation and parallel proof generation.

Lagrange's State Committees (LSC), built on top of the ZK Coprocessor, provide a secure light-client solution for trustless cross-chain state proofs for optimistic rollups; essential for applications like cross-chain bridging and messaging. Instead of replacing consensus proofs, they offer a solution for chains where finality or consensus can't be proved using zero-knowledge methods. Each LSC is a group of restaked nodes that attest to the "fast-finality" of blocks for optimistic rollups, with collateral restaked via EigenLayer to ensure security, allowing committees to generate state proofs for any chain, regardless of its consensus mechanism.

Data as of 6/14/2024:

3. Lagrange Labs Consensus Architecture

Lagrange Nodes, serving as provers, must restake ETH or rETH via EigenLayer into Lagrange’s Ethereum contracts, providing at least 32 ETH of collateral. Proof-of-Stake is, therefore, the base consensus mechanism governing this protocol. In addition, with EIGEN coming to the fore, attesters must also stake capital in the form of bEIGEN tokens.

Consequently, Lagrange derives its cryptoeconomic security from restaked ETH via EigenLayer in an objective-fault context, and from EIGEN staking in an intersubjective-fault context. As of now, it's only safe to suggest that Lagrange is pursuing a "Pure Wallet" business model (the most secure one for newborn protocols), where no native AVS token is involved and user fees are paid in a purely neutral denomination (like ETH). This assumption may yet prove faulty.

Let's now take a look at additional and "more local" kinds of consensus that have been integrated into Lagrange.

3.1 ZK Coprocessor

Lagrange’s ZK Coprocessor is the first proof type deployed on Lagrange’s Prover Network.

The ZK Coprocessor operates under this consensus mechanism that leverages ZK proofs (zkSNARKs) for recursive proving and verifiable database management. It uses a distributed and parallel processing framework akin to MapReduce (detailed in section 4.1), ensuring scalability and efficiency in handling large datasets. In greater detail:

  1. Prover Network Consensus: The Prover Network is composed of two key components: Gateways and Provers. Gateways manage queues of tasks and distribute them to Provers based on their resources and stake levels. Provers generate cryptographic proofs, receiving rewards for timely and valid proof generation while facing penalties for failing to meet such obligations. This system prevents freeloading by ensuring that rewards are proportional to the computation contributed. Lagrange operates the initial Gateway, but other operators can establish their own Gateways, setting rules for proofs, hardware, task distribution, and payment.

  2. Recursive Proving and Verifiable Database: The ZK Coprocessor manages the preprocessing and querying tasks, whereas the actual zkSNARKs are generated by the Prover Network. Each node is recursively proved, ensuring the validity of the entire Merkle structure through a final root proof. This involves preprocessing or indexing the contract’s storage at each block, inserting the data into a Verifiable Database, and running efficient provable queries over this database.

  3. Distributed and Parallel Processing: The ZK Coprocessor distributes computation across multiple nodes for efficient handling of large-scale databases. This distributed approach allows the ZK Coprocessor to preprocess, index, query, and validate data in parallel, optimizing for scalability and performance.

In addition to the above validations and computed cryptographic proofs, the ZK Coprocessor naturally integrates with the underlying blockchain’s consensus protocol (PoS) to ensure the integrity and consistency of the processed data (aided through Proof-of-Consistency provided by Reckle trees). By using cryptographic proofs and a recursive proving system, the ZK Coprocessor can provide trustless verification of data and computations, enabling secure and efficient cross-chain data processing.
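The recursive proving flow described above can be sketched in miniature. This is a hypothetical stand-in, not Lagrange's actual circuit: real provers emit zkSNARKs whose verification happens inside the parent proof, whereas here a simple hash chain merely mimics how a single root proof transitively commits to every node below it.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Hash helper standing in for a real recursive zkSNARK circuit."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

def prove_node(data: bytes, child_proofs: list) -> bytes:
    # In the real system a recursive SNARK verifies each child proof
    # inside the circuit; here a hash over them stands in for that step.
    return h(data, *child_proofs)

def prove_tree(node: dict) -> bytes:
    """Recursively 'prove' a tree bottom-up, returning the root proof."""
    child_proofs = [prove_tree(c) for c in node.get("children", [])]
    return prove_node(node["data"], child_proofs)

# Toy storage tree: checking the single root proof transitively
# attests to every leaf below it.
tree = {
    "data": b"root",
    "children": [
        {"data": b"slot_0x01"},
        {"data": b"slot_0x02", "children": [{"data": b"slot_0x03"}]},
    ],
}
root_proof = prove_tree(tree)
```

Any change to a leaf changes the root proof, which is the property the final root proof relies on to vouch for the entire Merkle structure.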

ZK Coprocessor Architecture

3.2 State Committees

Lagrange’s State Committees are the second proof type deployed on Lagrange’s Prover Network.

In LSC, for a state proof of each single block to be valid, the following recursive relationships and properties must hold:

  1. At least 2/3 of the total committee voting power for a given block must have signed the block header from the arbitrary chain being attested to. The public keys of these nodes are stored in the current Committee's Merkle tree, and also in the next Committee's tree for the subsequent block.

  2. Alongside the previous step, validators must ensure, through the recorded Merkle tree, that the Committee for the present block is the same as the Committee that validated the previous block.

  3. A contract requires an inductive ZK proof of the validity of the previous block with respect to the genesis block and an aggregated BLS signature of the current Committee of the present block.

This recursive relationship allows any block to be proven as valid via inductive proofs, starting with the base case of an initial prover set.
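The three validity conditions can be sketched as a simple predicate. All names below (`Attestation`, `block_is_valid`) are hypothetical; a real LSC verifies an aggregated BLS signature and an inductive ZK proof rather than these plain fields:

```python
from dataclasses import dataclass

@dataclass
class Attestation:
    block_header: str
    committee_root: str       # Merkle root of the current committee's pubkeys
    next_committee_root: str  # committed root for the next block's committee
    signed_power: int         # voting power behind the aggregate signature

def block_is_valid(att: Attestation,
                   total_power: int,
                   prev_next_committee_root: str,
                   prev_block_proof_ok: bool) -> bool:
    # 1. At least 2/3 of total committee voting power signed the header.
    supermajority = 3 * att.signed_power >= 2 * total_power
    # 2. This block's committee matches what the previous block committed to.
    continuity = att.committee_root == prev_next_committee_root
    # 3. The inductive proof chain back to genesis must hold.
    return supermajority and continuity and prev_block_proof_ok

att = Attestation("0xheader", "0xcommittee_n", "0xcommittee_n1", signed_power=70)
assert block_is_valid(att, total_power=100,
                      prev_next_committee_root="0xcommittee_n",
                      prev_block_proof_ok=True)
```

The `prev_block_proof_ok` flag is what makes the argument inductive: each block's proof embeds the validity of its predecessor, bottoming out at the genesis prover set.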

State Committees Architecture

4. Lagrange Architectural Workflow

The goal of Lagrange's ZK Coprocessor and Verifiable Database is to create a provable database containing a subset of the original blockchain data, which can be efficiently queried over. Both workflows leverage ZK proofs and decentralized validation to maintain data integrity and trust.

Now, let's overview how each architecture works:

4.1 ZK Coprocessor + Verifiable Database

Lagrange's Coprocessor network can be thought of as an indexer that looks at a contract's storage data and processes it into a verifiable, replicated database. In essence, it re-creates the target blockchain's database (storage, state and blocks) but in a format amenable to run efficient and distributed queries.

ZK Coprocessor

Paraphrasing Lagrange docs:

  1. "Preprocessing or indexing the contract’s storage at each block and provably “inserting” the data into a Verifiable Database, which supports efficient provable queries. This part is the most computationally intensive part of the process due to most blockchain's data structures not being “proof friendly”.

  2. Run provable queries in parallel over this new database when smart contract asks it. This computation is done in the spirit of MapReduce, as found in large scale database processing tools."

The ZK Coprocessor can process smart contracts' storage on any EVM-based chain and answer queries for these contracts on another chain, without the need for bridges.

Verifiable Database

The database architecture that contains a subset of the blockchain information is structured as follows:

  • Storage Database: A provably equivalent data structure containing the subset of leaves that the user wants to index from the contract’s storage trie. One such "replica" of the original database is maintained for each user's contract. The key difference between the original storage trie and the storage database is the usage of cryptographic proofs and a different design of the tree, which makes the new database much more friendly to "ZK queries".

  • State Database: A data structure containing the subset of leaves for the referenced contracts in the state trie. Each leaf/entry of the state database is linked with the corresponding storage database. Each leaf maintains some information about a smart contract that the ZK Coprocessor is indexing.

  • Block Database: A data structure linking the above state database to a given block of the chain, which also contains all state databases for blocks previously processed by the ZK Coprocessor.

Lagrange's Prover Network is modelled after the MapReduce framework, distributing computation across multiple nodes to handle large-scale databases efficiently. Instead of using a single powerful server, it processes data in parallel by dividing it into chunks. Each node performs a "map" operation to convert data into key-value pairs, followed by a "reduce" operation to aggregate results into a single output. This approach is scalable and allows for efficient handling of large datasets.

Building on this framework, Lagrange's zkMapReduce (zkMR) incorporates ZK proofs to ensure the correctness of these distributed computations. Each node generates proofs for their part of the computation, which are then recursively combined into a single proof validating the entire process. As such, zkMR enables the handling of complex computations and analysis on large datasets efficiently, making it ideal for trustless big data applications.
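A toy version of the map/reduce split can make the data flow concrete. The query below (averaging a token price across chunks held by different nodes) is hypothetical, and the per-node SNARKs of zkMR are omitted; only the distributed computation pattern is modelled:

```python
from functools import reduce

def map_chunk(chunk):
    # "map": turn raw storage entries into (key, value) pairs
    sums = [("price_sum", price) for price in chunk]
    counts = [("count", 1) for _ in chunk]
    return sums, counts

def reduce_pairs(pairs):
    # "reduce": aggregate all values sharing a key
    return sum(value for _, value in pairs)

# Chunks of price data, e.g. one chunk per L2 being indexed.
prices_per_l2 = [[3010.0, 3020.0], [2995.0], [3005.0, 3000.0, 2990.0]]

# Each node maps and locally reduces its own chunk in parallel...
partials = []
for chunk in prices_per_l2:
    sums, counts = map_chunk(chunk)
    partials.append((reduce_pairs(sums), reduce_pairs(counts)))

# ...and the partial results are folded into a single output,
# mirroring the final recursive proof-aggregation step in zkMR.
total, n = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), partials)
avg_price = total / n  # e.g. an average price across several L2s
```

In zkMR, each `map_chunk`/`reduce_pairs` step would additionally emit a proof, and the final fold would recursively combine those proofs into one proof validating the whole computation.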

Verifiable Database Architecture
"The ZK Coprocessor produces an equivalent version (in blue) of the original storage trie (in red), supporting efficient queries. A proof is generated that this new database contains the same data as the original red blockchain data structure."

4.2 State Committees

  1. Each node must run a validator or watcher for the relevant chain or rollup and execute a BLS signature on every finalized block; multiple nodes by the same operator can maintain a secure RPC connection to a single validator.

  2. During active proving periods, nodes also execute validations upon the Merkle tree roots in different states: of the current and the next Committees in the Merkle sequence.

  3. Nodes must independently and promptly verify the correctness of the block header and Merkle roots before signing; malicious nodes face slashing penalties, based on fraud proofs, in case of non-compliance.

State Committees Architecture

5. Objective vs Intersubjective Attributable Faults for Lagrange

In the recently published EIGEN: The Universal Intersubjective Work Token whitepaper, EigenLayer introduced three distinct ways in which faults can be attributed to a malicious party:

5.1 Objectively Attributable Faults

Faults that can be proven both mathematically and cryptographically, independent of subjective opinions. Examples include deterministic faults like execution validity, where anyone can verify if a node runs a piece of code on a distributed VM and checks if the output is correct based on predefined rules.

Fault examples for Hyper-Parallel ZK Coprocessing and ZK State-Proof Generation:

  • Double-Signing, Signing of Wrongful Block Header, MEV Extraction (through Gateway, the in-house, for-now centralized sequencer): If validators sign two conflicting block headers for the same sequence, wrongfully sign headers they should not, or engage in transaction-ordering manipulation for their own benefit or to undermine the protocol's integrity, they can be detected and proven onchain. This ensures validators are held directly accountable through slashing for malicious or non-malicious manipulation.

Note: There is theoretically no safety risk within Lagrange post-proof generation, since proofs cannot be wrongfully generated in the first place in a pure ZK system like the one Lagrange is built to be. The fault examples above are taken in a pre-proof-generation context, to illustrate the potential faults that may occur in that scenario.

5.2 Intersubjectively Attributable Faults

Faults that require broad-based consensus among active observers of the system. For instance, verifying the accuracy of a price reported by an oracle depends on collective agreement, as it may not be immediately verifiable. Intersubjective staking involves achieving consensus among honest participants to confirm events that are not easily observable or occur off-chain.

In the context of intersubjective faults, assessing the harm from corruption in a protocol like Lagrange involves broad-based social consensus. Instead of straightforward slashing, malicious operators will see their stake forked, a process governed by the social consensus within the EIGEN ecosystem. This highlights the critical role of consensus-based approaches in managing and resolving disputes over intersubjective faults.

A couple of interesting use cases where intersubjective faults for disputes toward Lagrange can be implemented are:

  • On the database level: Writing fraud proofs or validity proofs for various execution environments can be complex in general. As a result, intersubjective slashing can serve as an intermediate step before onchain slashing contracts for the AVS can be rigorously built.

  • On the storage level: Proof-of-replication or proof-of-custody measures can be created for storage-centric or storage-heavy AVSs. If the nodes do not custody distinct units of data, they can be slashed. To alleviate code complexity, the slashing protocol can be intersubjective rather than fully onchain.

Fault examples for Hyper-Parallel ZK Coprocessing and Verifiable Database:

  • Network Latency, Sybil Attacks, Merkle Tree Synchronization Issues, Frail zkSNARK Setup: If validators cause delays in communication and consensus, network throughput/finality may be affected; if they perform sybil attacks, creating multiple fake identities to manipulate network processes, consensus may be disrupted; inconsistent Merkle tree states across nodes can lead to discrepancies in data validation and proof generation, compromising the system's accuracy; and the integrity of the zkSNARK setup, with regard to its randomness and parameter generation, can be agreed upon by consensus.

    By setting a maximum allowable period for withholding (e.g., a few seconds), implementing anti-sybil mechanisms, and running continuous data and state validation checks, Lagrange can define clear criteria for what constitutes a fault, enabling consensus on these subjective matters.

Fault examples for ZK State-Proof Generation:

  • Network Latency, Sybil Attacks, Censorship of Cryptographic Proofs or Cross-Chain Messaging: If a validator causes delays in communication and consensus, network throughput/finality may be affected; performs sybil attacks, creating multiple fake identities to manipulate network processes and disrupt consensus; or censors cryptographic proofs or cross-chain messages.

    By setting a maximum allowable period for withholding (e.g., a few seconds), implementing anti-sybil mechanisms, and running continuous validation checks through services like watchtowers, Lagrange can define clear criteria for what constitutes an attack, enabling consensus on these subjective issues.

Concrete incentives and mechanisms that facilitate light nodes joining and monitoring the network for these kinds of intersubjective faults would also be strongly advised. Light nodes will play an important role in observing, inputting, and aiding consensus-reaching on these intersubjective matters, and will therefore ultimately be useful in fostering intersubjective cohesion and mitigating intersubjective fracture.

As per EigenLayer's recent whitepaper:

"To make the faults intersubjectively attributable, the AVS may need to focus on developing robust monitoring infrastructure including light clients. This can lower the cost-of-monitoring, ensuring that there will be a wide net of community members from EIGEN’s social consensus who will be operating the AVS’s light node software for monitoring the EigenLayer operators that have opted into the AVS."

"Intersubjective cohesion. An important requirement for social consensus to be able to resolve intersubjective faults for potentially verifiable digital tasks is that all honest members of social consensus should be in cohesion about what the correct fork of bEIGEN after an intersubjective challenge is triggered."

"Intersubjective fracture. If an AVS doesn’t carefully design its light node architecture for users to utilize when resolving intersubjective faults, it presents the risk of a fracture in the social consensus."

5.3 Non-Attributable Faults

Faults that occur when only the victim is aware of the fault, preventing third parties from conclusively determining whether a fault has occurred or if there is malicious intent within the system or by an individual. For example, in a secret-sharing system where the secret is revealed only after a predetermined period, collusion among nodes may lead to premature disclosure of the secret, which could be undetectable without external knowledge.

Fault example for Hyper-Parallel ZK Coprocessing and ZK State-Proof Generation:

  • Validator Collusion: When a group of validators colludes to discreetly approve incorrect transactions or blocks, it becomes challenging for external observers to pinpoint the responsible validators. This makes the fault non-attributable, necessitating advanced collusion-resistant measures and increased decentralization to mitigate such risks. Again, light-node infrastructure will be important to potentially mitigate this risk.

Figure 1: Potential Kinds of Faults Toward Lagrange

6. Corruption Scenarios for Lagrange Network

Cost of Corruption is defined as the cost enforced by the system on an attacker or group of colluding attackers to successfully carry out and compromise Lagrange Network's security. Profit from Corruption comprises the net value the same attacker or group of attackers is able to extract after performing the attack.

In a standard pooled security context for a single AVS the below holds true:

Image from the EIGEN: The Universal Intersubjective Work Token whitepaper
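The pooled-security condition reduces to a single inequality between Cost of Corruption and Profit from Corruption, sketched below with purely illustrative numbers (the stake and extraction figures are assumptions, not Lagrange data):

```python
def cryptoeconomically_secure(cost_of_corruption: float,
                              profit_from_corruption: float) -> bool:
    """Pooled-security condition: attacking must cost more than it yields."""
    return cost_of_corruption > profit_from_corruption

# Illustrative numbers only: suppose 1/3 of a $10B restaked pool is
# slashable for the attack, versus a hypothetical extractable value.
slashable_stake = 10e9 / 3

assert cryptoeconomically_secure(slashable_stake, 1e9)       # attack unprofitable
assert not cryptoeconomically_secure(slashable_stake, 5e9)   # attack profitable
```

The rest of this section examines what feeds each side of that inequality for Lagrange specifically.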

6.1 Corruption Analysis with Pooled Security

6.1.1 Safety vs Liveness

Looking deeper into the relationship between ZK Coprocessing architecture and the different kinds of possible faults, it becomes clear that Safety presents a greater risk likelihood than Liveness for Lagrange. Therefore, objective and quantitative slashing penalties should be carefully considered and prioritized when developing risk mitigation strategies and when modelling these attack vectors.

Ensuring the correctness and integrity of state proofs and validation processes is crucial to maintaining trust and reliability in this type of AVS. While liveness issues such as delays can be problematic, they do not pose the same level of risk as safety violations in this case, which can lead to incorrect data processing, proof generation, and other significant security concerns for ZK Coprocessing and proof-generating State Committees. Therefore, focusing on mechanisms to guarantee safety should be of utmost priority for Lagrange.

Safety Threshold Violation (>2/3 Stake Attack)

This condition arises when a set of malicious validators control more than 2/3 of the network's stake, enabling them to manipulate the proof and query generation processes. Such a level of control can lead to the validation of faulty proofs or incorrect data replications as correct, which is a direct threat to the integrity of the network.

This type of manipulation is classified as an Objectively-Attributable Fault (due to its deterministic validity and onchain-observable impact), where the malicious activity can include Double-Signing, as covered above.

Liveness Threshold Violation (>1/3 Stake Attack)

This scenario occurs when validators holding more than 1/3 of the network's stake interfere with the smooth and timely operation of the system. Such interference can result in delayed or non-production of proofs, impacting the network's ability to operate efficiently, potentially to increase the opportunity costs associated with using another competing service.

In Lagrange’s Prover Network, operators commit to generating proofs within a given time period and collateralize the commitment with capital. Failure to generate a proof on time results in a penalty in the form of slashing or non-payment, which incentivizes operators to perform as promised, resulting in high liveness guarantees.

This type of manipulation is classified as an Intersubjective-Attributable Fault—due to its off-chain and concurrently observable impact—where the malicious activity can include Proof or Cross-Chain Messaging Censorship or Data Corruption, as covered above.

The other type of fault that also falls in the Liveness Violation category is the Non-Attributable Fault, represented in this case by Validator Collusion, as also covered above. The main way to mitigate such a fault is appropriate node decentralization (through DVT and robust light-node infrastructure), which makes the network more collusion-resistant.
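The two stake thresholds discussed in this section follow standard BFT arithmetic, which can be captured in a small helper (a sketch, not Lagrange code):

```python
def violated_thresholds(malicious_power: int, total_power: int) -> list:
    """Flag which BFT-style thresholds a malicious coalition crosses."""
    out = []
    if 3 * malicious_power > total_power:        # > 1/3 of stake
        out.append("liveness")                   # can stall or censor proofs
    if 3 * malicious_power > 2 * total_power:    # > 2/3 of stake
        out.append("safety")                     # can validate faulty proofs
    return out

assert violated_thresholds(30, 100) == []
assert violated_thresholds(40, 100) == ["liveness"]
assert violated_thresholds(70, 100) == ["liveness", "safety"]
```

Note the asymmetry: a coalition crossing 1/3 can only degrade liveness, while crossing 2/3 additionally breaks safety, which is why safety violations dominate the risk analysis above.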

6.1.2 Factors to Consider When Estimating Cost of Corruption & Profit from Corruption

Factors to Consider To Increase Cost of Corruption

  • Fully Homomorphic Encryption (FHE): Enables computations on encrypted data without needing to decrypt it, ensuring the results remain encrypted and only accessible with the appropriate key. In this context, FHE can offer significant benefits, such as allowing operators to order and process transactions and generate proofs without ever seeing the actual data, ensuring complete privacy. It would also ensure that state proofs are generated securely without revealing the underlying data.

  • Trusted Execution Environments (TEEs): Secure portions of hardware that generate and securely store validator keys and databases of previously signed data. By design, they enhance security without compromising scalability.

  • Distributed Validator Technology (DVT): Incentivizes client diversity through the distribution of the validation process across multiple operators, reducing the risk of a single chokepoint in case of failure/corruption. Constitutes a deterrent for a malicious attacker to proceed or makes it significantly more resource-intensive to forge.

  • Legal Consequences: Public entity validators not only incur considerable financial costs but also jeopardize their social standing and may encounter legal repercussions if they partake in malicious actions.

Factors to Consider When Estimating and Reducing Profit from Corruption

  • Integration of Oracle/Bridge Solution: To restrict the potential PfC extracted from Lagrange, a bridge can be set up to restrict the value flow within the slashing period, or an oracle can have bounds on the total value transacted within a given period.

  • Withdrawal Lock-Up Period: Lock-up period applied to Provers for security against corruption attacks.

  • Associated Costs: This includes both the acquisition cost of the necessary stake and operational expenses related to the attack.

  • Legal and Reputational Risks: Potential legal consequences and reputational damages can significantly deter these attacks.

By considering both intersubjectively and objectively attributable faults, stakeholders can better understand the varied nature of potential attacks toward Lagrange and develop more effective defense mechanisms.

6.2 Corruption Analysis in an Intersubjective Staking World with Attributable Security

6.2.1 Cryptoeconomic Security

Cryptoeconomic Security: "For any attacker, the maximal profit extractable from attacking the safety (profit-from-corruption) is smaller than the minimum cost enforced by the system on the attacker (cost-of-corruption).", as per EigenLayer whitepaper.

However, there's a fundamental problem: the profit-from-corruption is non-measurable (or almost impossible to measure). The adversary may have perverse incentives outside the system’s scope of measurement. Moreover, this notion does not guarantee that users actually get compensated for the value they lost in case an attack does in fact happen. We must therefore define a stronger notion of cryptoeconomic safety.

6.2.2 Strong Cryptoeconomic Security

Strong Cryptoeconomic Security: "Any user should be compensated a pre-specified amount in the event that the safety guarantee to the user is violated.", as per EigenLayer whitepaper.

This security threshold is accomplished if the Redistributable Stake obtained by the AVS is greater than the Harm from Corruption. The equation is fully measurable onchain:

  • Redistributable Stake is the amount of stake uniquely attributable to the affected AVS for the fault; it allows AVSs to self-specify how much security they want;

  • Harm from Corruption can be estimated by attempting to simulate scenarios where funds can be extracted, like censorship/corruption of data within the fraud/validity proof period for Lagrange, with confidence intervals, time-series scenario analysis, and Value-at-Risk concepts in mind, once AVS payments and attributable security are fully functional (very much central to Tokensight's ethos).
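The strong-security condition above reduces to a directly checkable inequality. The function name and dollar figures below are illustrative assumptions, with Harm from Corruption standing in for whatever the simulation work described above would estimate:

```python
def strongly_secure(redistributable_stake: float,
                    harm_from_corruption: float) -> bool:
    """Strong condition: slashed-and-redistributed stake can make users whole."""
    return redistributable_stake >= harm_from_corruption

# harm_from_corruption would come from simulation (e.g. value extractable
# within the fraud/validity-proof window); these figures are illustrative.
assert strongly_secure(redistributable_stake=200e6, harm_from_corruption=150e6)
assert not strongly_secure(redistributable_stake=100e6, harm_from_corruption=150e6)
```

Unlike the pooled CoC > PfC condition, both sides of this inequality are measurable onchain once attributable security and AVS payments are live.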

If an objectively-verifiable misbehavior is detected, the operator’s bEIGEN stake will be slashed. In cases of agreement-based intersubjective misbehavior, an operator’s bEIGEN1 tokens will be burned and the compliant, remaining bEIGEN1 tokens forked into bEIGEN2, resulting in the loss of access to the former and the inability to redeem the latter.

Intersubjective Staking: Two Token Model

7. Lagrange Scenario Analysis: Impact of Data Size and Horizontal Scaling of Nodes on Proof and Query Processing Times

The visualization below provides a hypothetical scenario analysis on the effects of the horizontal scaling of nodes and the heavier SQL data requests on the processing times of cryptographic proofs and queries in Lagrange's hyper-parallel ZK coprocessor.

Note: The values and their measures are intended to be illustrative only, offering an approximation of the phenomena at hand.

Figure 2: Lagrange Scenario Analysis Visualization

As covered above, horizontal scaling of nodes (or super-linear security as per Lagrange's docs) is achieved through EigenLayer's pool of restaked operators and Lagrange's prover network through zkMapReduce, native to Lagrange.

The coloured lines show Query Response Times for growing sets of nodes. These naturally decrease as the number of nodes increases, although each line slopes slightly upward as the data size being processed grows. This reduction in query response times stems from both factors that compose effective horizontal scaling of nodes, particularly highlighting the benefits of zkMapReduce's parallel processing in improving query response times.

The white dashed line, representing Proof Generation Time, follows a slight, logarithmic downward trajectory as node counts and data ingestion sizes increase. Starting at smaller data sizes and decreasing logarithmically aligns with the expected efficiency gains in proof generation from increased node participation and hyper-data parallelization, as the workload is distributed more effectively.

Proof generation and query response times clearly improve with horizontal node scaling, even though data size expands along with it. The varying rates of increase in response times for different node counts illustrate the advantages of restaking shared security and hyper-parallel data processing. The visualization helps underscore the system's scalability and efficiency, showcasing the positive tradeoffs that come from Lagrange's ZK Coprocessor and Verifiable Database.
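A minimal model can reproduce the qualitative shape of these curves. The functional forms and constants below are assumptions for illustration, not Lagrange's measured performance:

```python
import math

# Assumed shapes: proof time falls roughly logarithmically with node
# count; query time is a parallel share of the data plus fixed overhead.

def proof_time(nodes: int, base: float = 120.0) -> float:
    """Illustrative proof-generation time (seconds) vs. node count."""
    return base / (1 + math.log2(nodes))

def query_time(nodes: int, data_gb: float) -> float:
    """Illustrative query response time: parallel share + per-GB overhead."""
    return (data_gb / nodes) + 0.05 * data_gb

# More nodes cut proof time even as the dataset grows...
assert proof_time(64) < proof_time(8) < proof_time(1)
# ...and query time at 64 nodes on 100 GB beats 8 nodes on 50 GB.
assert query_time(64, 100) < query_time(8, 50)
```

Such a sketch is only a starting point; the multi-agent simulations mentioned below would replace these assumed curves with parameters fitted to real prover data.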

A swath of multi-agent simulations based on actual data and more precise parameters will be central to understanding the risks surfaced through various stress scenario analyses for ZK Coprocessors. Tokensight plans to do much more on this topic going forward.

8. Conclusion

As we wrap up this in-depth piece on Lagrange Labs' services, some important considerations remain about their long-term viability and operational efficiency:

  1. Technological & Competitive Edge: Will Lagrange explore and securely implement TEEs, DVTs, light node infrastructure or other valuable solutions to improve its product and front-run alternative solutions that will be built as AVSs?

  2. Faults' Nature: How effectively will intersubjectively-attributable and non-attributable faults, especially in high-stakes scenarios involving high traffic, be managed? What kinds of scenarios should we monitor that may trigger these novel kinds of faults?

  3. Operator Dynamics: What will be the criteria for selecting operators for Lagrange? Is there a risk of centralization, and how diverse or concentrated will the operator set be?

  4. Attack Vectors: Are there more effective or damaging strategies for attacks beyond double-signing and MEV extraction? What other vulnerabilities could potentially be exploited?

  5. Code Complexity: As an AVS, how complex will Lagrange's underlying code be, and what might this complexity mean for system robustness and bug susceptibility? How well defined will the parameters of intersubjective slashing be (helpful in mitigating the code complexity that comes from excessive slashing rules for objective faults)?

These inquiries are central for the community to consider as Lagrange becomes fully operational in the coming months. Tokensight will continue to monitor Lagrange's development and provide updates through further technical research.

Follow us on X at @tokensightxyz!

