Death, Taxes, and EVM Parallelization

Envisioning a post-parallel EVM world.

We would like to acknowledge the valuable contributions of Keone Hon of Monad, Steven Landers of Sei, 0xTaker of Aori, Sergey Gorbunov of Axelar, Felix Madutsa of Orb Labs, Alex Lee of Wombat Exchange, and Yilong Li of MegaETH, whose insights, shared through our in-depth discussions, were instrumental in shaping this research article.


Introduction

In computer systems today, making things faster and more efficient often means completing tasks in parallel, not sequentially. This phenomenon, fittingly known as parallelization, was catalyzed by the advent of modern computers' multi-core processor architectures. Tasks that were traditionally executed in a step-by-step manner are now approached through the lens of simultaneity, maximizing the processors' capabilities. Similarly, in blockchain networks, this principle of executing multiple operations at once is applied at the transaction level, although instead of leveraging multiple processor cores, it draws on the collective verification power of numerous validators across the network. Some early examples of implementation include:

  • In 2015, Nano (XNO) implemented a block-lattice structure in which each account has its own blockchain, allowing for parallel processing and eliminating the need for network-wide transaction confirmations.

  • In 2018, the paper describing Block-STM (Software Transactional Memory), a parallel execution engine for blockchain networks, was published; Polkadot approached parallelization through a multi-chain architecture; and EOS launched its multi-threaded processing engine.

  • In 2020, Avalanche introduced parallel processing for its consensus (though not for the EVM-based C-Chain, which remains serialized), and Solana incorporated a similar innovation with Sealevel.

For the EVM, since its inception, transactions and smart contract executions have been processed sequentially. This single-threaded execution design limits the system's overall throughput and scalability, a constraint that is particularly noticeable during periods of high network demand. As network validators face increased workloads, the network inevitably slows and users face higher costs, competitively bidding to prioritize their transactions in a congested network environment.

The Ethereum community has long explored parallel processing as a solution, starting with Vitalik's 2017 EIP. Initially, the intent was to achieve parallelization via traditional shard chains, or sharding. However, the rapid development and adoption of L2 rollups, which are simpler and offer more immediate scalability benefits, shifted Ethereum's focus away from sharding to what is now known as danksharding. With danksharding, shards primarily serve as layers for data availability rather than for executing transactions in parallel. Yet, with full danksharding still to be realized, attention has turned to several increasingly prominent alternative parallelized L1 networks with EVM compatibility - notably Monad, Neon EVM, and Sei.

Given the conventional evolution of software systems engineering and the scalability success of other networks, parallel execution for the EVM is an inevitability. While we anticipate this transition with strong conviction, the future beyond this point remains uncertain yet highly promising. The implications for the world's largest smart contract developer ecosystem, currently boasting over $80 billion in TVL, are significant. What happens when gas prices plummet to mere fractions of a cent due to optimized state access? How broad does the design space become for application layer developers? Here’s our perspective on what a post-parallel EVM world might look like.

Parallelization is a means, not the end.

Scaling blockchains is a multi-dimensional problem and parallel execution paves the way for more key infrastructure development, such as blockchain state storage.

The primary challenge for projects working on a parallel EVM isn’t just enabling computations to run simultaneously; it’s ensuring optimization of state access and modification in the parallelized environment. The heart of the matter comes down to two primary issues:

  1. Ethereum clients and Ethereum itself use different data structures for storage (B-tree/LSM-tree vs. Merkle Patricia Trie), leading to suboptimal performance when embedding one data structure into another.

  2. With parallel execution, the ability to perform asynchronous input/output (async I/O) for transaction reads and updates is vital; processes risk getting stuck waiting on each other, wasting any speed gains.

Additional computational tasks, such as a multitude of extra SHA-3 hashes or calculations, are minor compared to the cost of retrieving or setting a storage value. In order to reduce transaction processing times and gas prices, infrastructure around the database itself will have to improve. This goes beyond simply adopting conventional database architecture as a raw key-value store alternative (i.e., a SQL DB). Implementing the EVM state using a relational model adds unnecessary complexity and overhead, leading to higher costs for SLOAD and SSTORE operations compared to using a basic key-value store. The EVM state doesn't need features like ordering, range scans, or transactional semantics, as it only performs point reads and writes, with writes occurring separately at the end of each block. Instead, requirements for these improvements should focus on principal considerations such as scalability, low-latency reads and writes, efficient concurrency control, state pruning and archival, and seamless integration with the EVM. Monad, for example, is building a custom state database from scratch, known as MonadDB. It will leverage the latest kernel support for asynchronous operations while also implementing a Patricia trie data structure natively, both on-disk and in-memory.
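To make this concrete, below is a minimal Rust sketch of a point-read, batched-write state interface. The trait, type aliases, and in-memory store are hypothetical illustrations of the access pattern described above, not MonadDB's or any client's actual design.

```rust
use std::collections::HashMap;

/// Hypothetical point-read / batched-write interface for EVM state.
type Address = [u8; 20];
type Slot = [u8; 32];
type Word = [u8; 32];

trait StateBackend {
    /// SLOAD: point read of a single storage slot (no range scans needed).
    fn sload(&self, addr: Address, slot: Slot) -> Word;
    /// Writes are buffered during execution and flushed once per block.
    fn commit_block(&mut self, dirty: &HashMap<(Address, Slot), Word>);
}

/// Toy in-memory backend. A production system (e.g., something like MonadDB)
/// would instead persist a Patricia trie on disk and issue asynchronous reads
/// so that parallel execution lanes are not blocked waiting on I/O.
struct InMemoryState {
    slots: HashMap<(Address, Slot), Word>,
}

impl StateBackend for InMemoryState {
    fn sload(&self, addr: Address, slot: Slot) -> Word {
        self.slots.get(&(addr, slot)).copied().unwrap_or([0u8; 32])
    }

    fn commit_block(&mut self, dirty: &HashMap<(Address, Slot), Word>) {
        // Single batched write at the end of the block.
        for (k, v) in dirty {
            self.slots.insert(*k, *v);
        }
    }
}
```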

We expect to see further re-works of underlying key-value databases and significant improvements to tertiary infrastructure powering the bulk of a blockchain’s storage capability.

Making Programmable Central Limit Order Books (pCLOBs) great again.

CLOBs will be the dominant design approach for trading as DeFi transitions to a state of higher fidelity.


Since their 2017 debut, automated market makers (AMMs) have become a cornerstone of DeFi, offering simplicity and a unique ability to bootstrap liquidity. By leveraging liquidity pools and pricing algorithms, AMMs revolutionized DeFi, emerging as the best alternative to traditional trading systems, such as order books. Despite being a fundamental building block in traditional finance, central limit order books (CLOBs) struggled with blockchain scalability limitations when introduced to Ethereum. They require a significant number of transactions, as each order submission, execution, cancellation, or modification necessitates a new on-chain transaction. The costs associated with this requirement, given the immaturity of Ethereum’s scalability efforts, rendered CLOBs unsuitable in the early days of DeFi and led to the downfall of early iterations like EtherDelta. However, even with AMMs’ widespread popularity, they faced their own inherent limitations. As DeFi matured and attracted more sophisticated traders and institutions over the years, these limitations became increasingly apparent.

Recognizing the superiority of CLOBs, efforts to incorporate CLOB-based exchanges into DeFi began to increase on alternative, more scalable blockchain networks. Protocols such as Kujira, Serum (RIP), Demex, dYdX, Dexalot, and more recently Aori and Hyperliquid, aim to provide a better on-chain trading experience relative to their AMM counterparts. However, with the exception of projects targeting a specific niche, such as dYdX and Hyperliquid for perpetuals, CLOBs on these alternative networks face their own set of challenges, in addition to scalability:

  • Fragmentation of liquidity: the network effects achieved by highly composable and seamlessly integrated DeFi protocols on Ethereum have made it difficult for CLOBs on other chains to attract sufficient liquidity and trading volume, hindering their adoption and usability.

  • Memecoins: bootstrapping liquidity in on-chain CLOBs requires placing limit orders, a chicken-and-egg problem that is even harder for new and lesser-known assets like memecoins.

CLOBs with blobs

Dencun Mainnet Announcement

But what about L2s? The existing Ethereum L2 stack boasts significant improvements in transaction throughput and gas costs compared to mainnet, especially following the recent Dencun hard fork. Fees are markedly reduced by replacing gas-intensive calldata with lightweight binary large objects (blobs). According to growthepie, as of 4/1, fees on Arbitrum and OP sit at $0.028 and $0.064, respectively, with Mantle cheapest at $0.015. This is a substantial difference from pre-Dencun fees, since calldata previously accounted for 70%-90% of the costs. Unfortunately, this isn't cheap enough: post/cancel fees at ~$0.01 are still prohibitively expensive for order book trading. For example, institutional traders and market makers often have high order-to-trade ratios, placing a large number of orders relative to the number of trades actually executed. Paying fees for order submission, and subsequently for modifying or cancelling those orders across multiple books, can have a significant impact on the profitability and strategic decisions of institutional players even at today's L2 fee pricing. Imagine the following example:

Firm A: 10,000 order submissions, 1,000 trades, and 9,000 cancellations or modifications per hour is a relatively standard benchmark. If the firm operates on 100 books across a full day, the total activity can easily result in fees totaling over $150K, even if each transaction costs <$0.01. (Assuming those hourly rates apply per book, that is roughly 19,000 non-trade transactions × 100 books × 24 hours, or about 45 million fee-bearing transactions per day.)

The pCLOB

With the advent of the parallel EVM, we anticipate a surge in DeFi activity, led by the viability of on-chain CLOBs. But not just any CLOBs - programmable central limit order books (pCLOBs). Because DeFi is innately composable, a single transaction can interact with a practically unbounded number of protocols (constrained only by the gas limit), creating a plethora of transaction permutations. Leveraging this property, a pCLOB can embed custom logic in the order submission process. This logic can be invoked either before or after an order is submitted. For example, a pCLOB smart contract can incorporate custom logic to (see the sketch after this list):

  • validate order parameters (e.g., price and quantity) against predefined rules or market conditions

  • perform real-time risk checks (e.g., ensure sufficient margin or collateral for leveraged trades)

  • apply dynamic fee calculations dependent on any parameter (e.g., order types, trading volume, market volatility, etc.)

  • execute conditional orders, based on specified trigger conditions

… and still be a step function cheaper than existing trading designs.
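To illustrate how such hooks might hang together, here is a rough Rust sketch of pre- and post-submission logic. Every name, threshold, and formula below is hypothetical; it is a sketch of the pattern, not the interface of any live pCLOB.

```rust
/// Hypothetical pCLOB hook interface; names, checks, and the fee formula are
/// illustrative only and do not correspond to any live protocol's contracts.
struct Order {
    price: u128,                 // quote units per base unit
    quantity: u128,              // base units
    margin_posted: u128,         // collateral backing the order
    trigger_price: Option<u128>, // set for conditional orders
}

enum Reject {
    BadParams,
    InsufficientMargin,
    NotTriggered,
}

trait OrderHooks {
    /// Pre-submission: validate parameters, run risk checks, gate conditionals.
    fn before_submit(&self, o: &Order, mark_price: u128) -> Result<(), Reject>;
    /// Post-submission: e.g., compute a dynamic fee from current volatility.
    fn after_submit(&self, o: &Order, volatility_bps: u128) -> u128;
}

struct BasicHooks {
    max_deviation_bps: u128, // reject orders priced too far from the mark
    min_margin_bps: u128,    // minimum collateral as basis points of notional
}

impl OrderHooks for BasicHooks {
    fn before_submit(&self, o: &Order, mark_price: u128) -> Result<(), Reject> {
        // Validate order parameters against market conditions.
        if o.quantity == 0 || o.price == 0 {
            return Err(Reject::BadParams);
        }
        if o.price.abs_diff(mark_price) * 10_000 / mark_price > self.max_deviation_bps {
            return Err(Reject::BadParams);
        }
        // Real-time risk check: is there enough margin for the notional?
        let notional = o.price * o.quantity;
        if o.margin_posted * 10_000 < notional * self.min_margin_bps {
            return Err(Reject::InsufficientMargin);
        }
        // Conditional order: only admit it once the trigger price is reached.
        if let Some(trigger) = o.trigger_price {
            if mark_price < trigger {
                return Err(Reject::NotTriggered);
            }
        }
        Ok(())
    }

    fn after_submit(&self, o: &Order, volatility_bps: u128) -> u128 {
        // Dynamic fee: a base rate scaled up with realized volatility.
        let notional = o.price * o.quantity;
        notional * (2 + volatility_bps / 100) / 10_000
    }
}
```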

The concept of Just-In-Time (JIT) liquidity illustrates this well. Liquidity need not sit idle on any single exchange; it can generate yield elsewhere until the very moment an order is matched and liquidity is pulled from the underlying platform. Who wouldn't want to farm every last bit of yield on MakerDAO before sourcing that liquidity for a trade? Mangrove Exchange's innovative "offer-is-code" approach hints at the potential: when an offer from the order book is matched, the code embedded within it executes with the sole mission of sourcing the liquidity requested by the taker of the order. That said, challenges remain with L2 scalability and cost.
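To make the offer-is-code pattern more concrete, here is a stylized Rust sketch assuming a hypothetical yield venue and offer interface; it mirrors the spirit of the approach but is not Mangrove's actual API.

```rust
/// Stylized "offer-is-code" sketch: the maker's funds earn yield elsewhere
/// until the moment of the match. The traits and names here are hypothetical
/// and are not Mangrove's (or any protocol's) actual interface.
trait YieldVenue {
    /// Withdraws up to `amount` from the venue; returns what was obtained.
    fn withdraw(&mut self, amount: u128) -> u128;
}

trait LiveOffer {
    /// Called by the order book when this offer is matched for `amount`;
    /// must deliver the tokens owed to the taker.
    fn on_match(&mut self, amount: u128) -> Result<u128, &'static str>;
}

struct JitOffer<V: YieldVenue> {
    venue: V, // e.g., a lending market where the maker parks idle funds
}

impl<V: YieldVenue> LiveOffer for JitOffer<V> {
    fn on_match(&mut self, amount: u128) -> Result<u128, &'static str> {
        // Just-in-time: pull liquidity from the yield venue only now.
        let obtained = self.venue.withdraw(amount);
        if obtained < amount {
            return Err("failed to source the requested liquidity");
        }
        Ok(obtained)
    }
}
```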

The parallel EVM also radically enhances pCLOBs' matching engines. A pCLOB can now implement a parallel matching engine that leverages multiple "lanes" to process incoming orders and perform matching computations simultaneously. Each lane can handle a subset of the order book without sacrificing price-time priority, executing only when a match is found. The reduction of latency between order submission, execution, and modification allows for optimally efficient order book updates.
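A toy sketch of the lane idea is below, sharding by market so independent books can match on separate threads. A real engine that partitions a single book would need additional care to preserve price-time priority across lanes; everything here is illustrative.

```rust
use std::collections::BTreeMap;
use std::thread;

/// One lane per instrument; lanes never touch each other's state, so they can
/// run concurrently without contention.
struct Order { id: u64, price: u64, qty: u64, is_bid: bool }

/// Match incoming bids for one instrument against its resting asks.
fn match_lane(mut asks: BTreeMap<u64, u64>, incoming: Vec<Order>) -> Vec<(u64, u64)> {
    let mut fills = Vec::new();
    for o in incoming.into_iter().filter(|o| o.is_bid) {
        let mut remaining = o.qty;
        while remaining > 0 {
            // Best (lowest) ask first: price priority within the lane.
            let Some((&px, &qty)) = asks.iter().next() else { break };
            if px > o.price { break; }
            let filled = remaining.min(qty);
            fills.push((o.id, filled));
            remaining -= filled;
            if filled == qty { asks.remove(&px); } else { asks.insert(px, qty - filled); }
        }
    }
    fills
}

fn main() {
    let lanes: Vec<(BTreeMap<u64, u64>, Vec<Order>)> = vec![
        (BTreeMap::from([(100, 5), (101, 7)]),
         vec![Order { id: 1, price: 101, qty: 6, is_bid: true }]),
        (BTreeMap::from([(200, 3)]),
         vec![Order { id: 2, price: 199, qty: 3, is_bid: true }]),
    ];
    // Spawn one thread per lane and collect the fills.
    let handles: Vec<_> = lanes.into_iter()
        .map(|(asks, orders)| thread::spawn(move || match_lane(asks, orders)))
        .collect();
    for h in handles {
        println!("{:?}", h.join().unwrap());
    }
}
```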

AMMs will most likely continue to be widely used for long-tail assets due to their ability to persistently market-make in an illiquid circumstance; however, for "blue chip" assets, pCLOBs will prevail.

Keone Hon, Co-founder & CEO at Monad

In one of our discussions, Keone, the co-founder and CEO of Monad, shared his belief that we can expect multiple pCLOBs to gain traction across different high-throughput ecosystems. Keone emphasized that these pCLOBs will have a significant impact on the greater DeFi ecosystem as a consequence of cheaper fees.

Even with just a handful of these improvements, we anticipate pCLOBs will meaningfully enhance capital efficiency and unlock new categories within DeFi.

We get it, we need more applications, but first…

Existing and new applications need to be architected in a way that can fully take advantage of the underlying parallelization.

With the exception of pCLOBs, current decentralized applications are not parallel - their interactions with the blockchain are, by nature, sequential. However, history has shown that technologies and applications naturally evolve to take advantage of new advancements, even if they weren't originally designed with those advancements in mind.

When the first iPhone came out, the apps that were designed for it looked a lot like terrible computer apps. Same kind of thing here. It’s like we’re adding multi-core to blockchains which will lead to better applications.

Steven Landers, Blockchain Architect at Sei

The evolution of ecommerce from displaying a magazine catalog on the Internet to robust two-sided marketplaces is a quintessential example. As the parallel EVM becomes a reality, we will witness a similar transition with decentralized applications. This underscores a pivotal limitation: applications designed without parallelism in mind will not inherently benefit from the efficiency gains of the parallel EVM. Simply having parallelism at the underlying infrastructure layer without redesigning the application layer is not enough. They must be architecturally aligned.

State Contention

Without any changes to the applications themselves, we still expect a mild performance increase of 2-4x, but why stop there when it can be so much higher? This shift introduces a critical challenge: applications need to be fundamentally re-architected to embrace the nuances of parallel processing.

If you want to take advantage of the throughput, you need to limit contention between transactions.

Steven Landers, Blockchain Architect at Sei

More specifically, when multiple transactions from a decentralized application attempt to simultaneously modify the same state, conflicts will arise. Resolving these conflicts requires serializing the conflicting transactions, which offsets the benefits of parallelization.

There are many approaches to conflict resolution, which we will not address at this time, but the number of potential conflicts encountered during execution is heavily dependent on the application developer. Across the landscape of decentralized applications, even the most popular protocols, such as Uniswap, were designed and implemented without this constraint in mind. 0xTaker, co-founder of Aori, a maker-oriented high-frequency off-chain order book, discussed with us in depth the major state contentions that will occur in a parallel world. For an AMM, with its peer-to-pool model, many actors may target a single pool at once. Anywhere from a few to 100+ transactions may fight over the same state, so AMM designers will have to think carefully about how liquidity is distributed and managed in state to maximize pooling benefits.
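To see why contention matters, consider a simplified Rust sketch of optimistic conflict detection in the spirit of Block-STM. The key names and data shapes are invented for illustration; production engines track versioned read/write sets and re-execute far more incrementally.

```rust
use std::collections::{HashMap, HashSet};

/// Simplified conflict check: run transactions speculatively, record their
/// read/write sets, then flag any transaction whose reads were written by an
/// earlier-ordered transaction for re-execution.
type Key = &'static str; // e.g., "pool:ETH-USDC:reserves"

struct SpeculativeResult {
    reads: HashSet<Key>,
    writes: HashSet<Key>,
}

/// Returns the indices of transactions that must be re-executed.
fn find_conflicts(results: &[SpeculativeResult]) -> Vec<usize> {
    let mut last_writer: HashMap<Key, usize> = HashMap::new();
    let mut to_retry = Vec::new();
    for (i, r) in results.iter().enumerate() {
        // Conflict if we read a key an earlier transaction wrote.
        if r.reads.iter().any(|k| last_writer.get(k).map_or(false, |&w| w < i)) {
            to_retry.push(i);
        }
        for &k in &r.writes {
            last_writer.insert(k, i);
        }
    }
    to_retry
}

fn main() {
    // Two swaps against the same AMM pool: the second must be retried because
    // its view of the reserves is stale after the first swap.
    let same_pool = "pool:ETH-USDC:reserves";
    let txs = vec![
        SpeculativeResult { reads: HashSet::from([same_pool]), writes: HashSet::from([same_pool]) },
        SpeculativeResult { reads: HashSet::from([same_pool]), writes: HashSet::from([same_pool]) },
        // A transfer touching unrelated state parallelizes cleanly.
        SpeculativeResult { reads: HashSet::from(["acct:alice"]), writes: HashSet::from(["acct:bob"]) },
    ];
    println!("re-execute: {:?}", find_conflicts(&txs)); // re-execute: [1]
}
```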

Steven, a core developer at Sei, a parallel EVM L1 network, echoed the importance of thinking about contention in multi-threaded development, noting that Sei is actively researching what it means to be parallel and how to make sure resource utilization is adequately captured.

Performance Predictability

Yilong, the co-founder and CEO of MegaETH, also emphasized to us the importance of performance predictability for decentralized applications. Performance predictability refers to the ability of a decentralized application to consistently execute transactions within a certain timeframe, regardless of network congestion or other factors. One way to achieve this is through app-specific chains; however, while app-specific chains provide predictable performance, they sacrifice composability.

Parallelization offers a means of experimentation with local fee markets as a way to minimize state contention.

0xTaker, Co-founder at Aori

Alternatively, advanced parallelism and a multi-dimensional fee mechanism could enable a single blockchain to provide more deterministic performance for each application while maintaining overall composability.

Solana, for example, has a fee market system that is localized: if multiple users access the same state, they are charged a bit more (surge pricing), rather than everyone bidding against each other in a global fee market. This approach would particularly benefit loosely connected protocols that require both performance predictability and composability. To illustrate the concept, consider a highway system with multiple lanes and dynamic tolling. During peak hours, the highway can allocate dedicated express lanes for vehicles willing to pay a higher toll. These express lanes ensure a predictable and faster travel time for those who prioritize speed and are willing to pay a premium. Meanwhile, the regular lanes remain accessible to all vehicles, maintaining the overall connectivity of the highway system.
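As a toy illustration of a localized fee rule, the sketch below scales a transaction's fee with how contested the touched state is. The formula and constants are purely illustrative and are not Solana's (or any chain's) actual pricing.

```rust
use std::collections::HashMap;

/// Toy localized ("surge-priced") fee market: the fee scales with recent
/// demand for the specific state being touched, not with global congestion.
struct LocalFeeMarket {
    base_fee: u64,                      // flat network base fee
    hot_accesses: HashMap<String, u64>, // accesses per state key this block
}

impl LocalFeeMarket {
    /// Fee quote for a transaction touching `keys`.
    fn quote(&self, keys: &[&str]) -> u64 {
        let surge: u64 = keys
            .iter()
            .map(|k| self.hot_accesses.get(*k).copied().unwrap_or(0))
            .max()
            .unwrap_or(0);
        // Each additional access to the hottest key raises the fee by 10%.
        self.base_fee + self.base_fee * surge / 10
    }
}

fn main() {
    let market = LocalFeeMarket {
        base_fee: 100,
        hot_accesses: HashMap::from([("pool:ETH-USDC".to_string(), 40)]),
    };
    // Touching the contested pool costs more...
    println!("{}", market.quote(&["pool:ETH-USDC"])); // 500
    // ...while unrelated state still pays the base fee.
    println!("{}", market.quote(&["acct:alice"]));    // 100
}
```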

Imagine the Possibilities

While the need to re-architect protocols to align with the underlying parallelization may seem challenging, the design space for what's possible in DeFi and other verticals expands significantly. We can expect to see a new generation of applications that are more complex, performant, and efficient, focused on use cases that were previously impractical due to performance limitations.

Rewind to 1995 and the only internet plan was one where you had to pay $0.10 for every MB of data that you downloaded - you would be cautious with what websites you go to. Imagine going from that to unlimited and notice how people behaved and what became possible.

Keone Hon, Co-founder & CEO at Monad

There is a possibility we revert to a scenario similar to the early days of centralized exchanges - a user acquisition war in which DeFi applications, particularly decentralized exchanges, are armed with referral programs (e.g., points, airdrops) and superior user experience as ammunition. We see a world where on-chain games with any reasonable amount of interactivity might actually be a thing. Hybrid orderbook-AMMs already exist, but instead of having the CLOB sequencer run off-chain as an independent node and be decentralized via governance, it can be moved on-chain, resulting in improved decentralization, lower latency, and enhanced composability. Fully on-chain social interactions are now viable too. Frankly, anything where there are a ton of people or agents doing something at the same time is now on the table.

Beyond people, smart agents will most likely dominate on-chain transaction flows even more than they do presently. AI as a player in the game has existed for some time now in the form of arbitrage bots and autonomous transaction execution; however, agent participation will grow exponentially beyond existing figures. Our thesis is that any and every form of on-chain engagement will be augmented in some capacity by artificial intelligence. The latency requirements for transacting agents will be more demanding than what we envision today.

At the end of the day, the technology advancements are just a base enabler. Ultimately, the winners will be determined by the ability to onboard users and bootstrap volume / liquidity better than their peers. The difference is, now developers have much more to work with.

Crypto UX sucks… now, it’ll suck less.

User experience unification (UXU) is not only feasible, but needed - the industry will assuredly gravitate towards unlocking this.


Today’s blockchain user experience is fragmented and cumbersome - users juggle multiple blockchains, wallets, and protocols, waiting for transactions to complete while bearing the risk of a security breach or hack. The ideal future is one where users can seamlessly and securely interact with their assets without worrying about the underlying blockchain infrastructure. This transition from the current fragmented UX to a unified, streamlined experience is something we call user experience unification (UXU).

At its core, improving blockchain performance, particularly by reducing latency and fees, can significantly contribute to solving the UX problem. Historically, advancements in performance often have a positively correlated impact on various aspects of our digital user experiences. For instance, faster internet speeds not only enabled seamless online interactions but also fueled demand for richer, more immersive digital content. The advent of broadband and fiber-optic technologies facilitated low-latency streaming of high-definition videos and real-time online gaming, raising user expectations of digital platforms. This escalating appetite for depth and quality catalyzes continuous innovation from companies in their development of the next big, sexy thing - from advanced interactive web content to sophisticated cloud-based services to virtual/augmented reality experiences. Increased internet speeds have not only improved the online experiences themselves but have also consequently expanded the scope of user demand.

Similarly, advancements in blockchain performance will not only enhance the user experience directly through reduced latency, but also indirectly by enabling the rise of protocols that unify and advance the overall user experience. Performance is a key ingredient to their existence. The fact that these networks, particularly parallel EVMs, are more performant and have lower gas fees means that on- and off-ramps will be a lot more frictionless for end users, thus attracting more developers. In our conversations with Sergey, co-founder of the interoperability network Axelar, he envisions a world that is not only truly interoperable but, more so, symbiotic.

If you have complicated logic on a high throughput chain (i.e., parallel EVM) and the chain itself, given its high performance, can “absorb” the complexity and throughput requirements of that logic, then you can use interoperability solutions to export that function to other chains in an efficient way.

Sergey Gorbunov, Co-founder at Axelar

As scalability issues are resolved and interoperability across different ecosystems increases, we will witness the emergence of protocols that bring web3 user experience to parity with web2. Some examples include v2 of intent-based protocols, advanced RPC infrastructure, chain abstraction enablement, and open compute infrastructure augmented by artificial intelligence.

Orchestration of states by our node becomes accelerated with higher throughput networks since solvers can solve our intents incredibly fast.

Felix Madutsa, Co-founder at Orb Labs

The Honorable Mentions

The oracle market will become frothy as performance requirements increase.

The parallel EVM means there will be increased performance demands on oracles, a grossly underdeveloped vertical for the last few years. Rising demand from the application layer will invigorate a complacent market whose subpar performance and security have held back DeFi composability. For example, market depth and trading volume are two powerful indicators for many DeFi primitives such as money markets. We expect the large incumbents such as Chainlink and Pyth to adapt reasonably fast as new players challenge their market share in this new age. After speaking to a senior member at Chainlink, our thinking is aligned: "[The] consensus [here at Chainlink] is that if parallel EVM becomes dominant, we may want to rework our contracts to capture value from it (e.g., reduce inter-contract dependencies such that transactions/calls aren't needlessly dependent and therefore MEV'd) but because parallel EVM is meant to improve transparency and throughput for applications already running on EVM, it shouldn't affect network stability.”

This indicates to us that Chainlink understands the impact parallel execution will have on their products and, as previously highlighted, that they will have to rework their contracts in order to take advantage of parallelization.

It’s not just an L1 party; parallel EVM L2s want in on the fun.

From a technical perspective, creating a high-performance parallel EVM L2 solution is easier than developing an L1. This is because the setup of the sequencer in an L2 is less complex than the consensus-based mechanisms used in conventional L1 systems, such as Tendermint and its variants. This simplicity derives from the fact that the sequencer in a parallel EVM L2 setup only has to maintain the order of transactions, as opposed to consensus-based L1 systems where numerous nodes must agree on the sequence.

More specifically, we anticipate that optimistic parallel EVM L2s will dominate over their zero-knowledge counterparts in the near term. Eventually, we do expect a transition from OP-based rollups to zk-rollups via a general zk framework such as RISC0, rather than the conventional methods used in other zk-rollups. It will just be a matter of when.

Rust superiority… for now?

Programming language selection will play a significant role in the evolution of these systems. We largely favor Reth, the Rust-based Ethereum execution client, over any other alternative. This preference is not arbitrary: Rust offers a number of advantages over other languages, including memory safety without garbage collection, zero-cost abstractions, and a rich type system, among others.

As we see it, the competition between Rust and C++ is shaping up to be a significant contest in the new generation of blockchain development languages. This competition, though often overlooked, should not be dismissed. The choice of language is critical as it impacts the efficiency, security, and versatility of the systems developers build.

Developers are the ones who bring these systems to life, and their preferences and expertise are vital to the direction of the industry. We firmly believe that Rust will eventually come out on top. However, porting one implementation to another is far from a straightforward task. It requires significant resources, time, and expertise, which further underscores the importance of choosing the right language from the outset.

It would be remiss of us not to mention Move in the context of parallel execution. While Rust and C++ are often the focus of discussions, Move has several features that make it equally well-suited:

  • Resources and unique ownership: Move introduces the concept of "resources," which are types that can only be created, moved, or destroyed, but not copied. This ensures that resources are always uniquely owned, preventing common issues like race conditions and data races that can arise in parallel execution.

  • Formal verification and static typing: Move is a statically-typed language with a strong emphasis on safety. It includes features like type inference, ownership tracking, and overflow checking, which help prevent common programming errors and vulnerabilities. These safety features are particularly important in the context of parallel execution, where bugs can be harder to detect and reproduce. The language's semantics and type system are based on linear logic, similar to Rust and Haskell, which makes it easier to reason about the correctness of Move programs; formal verification can then help ensure that concurrent operations are safe and correct.

  • Modularity: Move promotes a modular design approach, where smart contracts are composed of smaller, reusable modules. This modular structure can make it easier to reason about the behavior of individual components and can facilitate parallel execution by allowing different modules to be executed concurrently.
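For readers more at home in Rust, the resource idea maps closely onto Rust's ownership rules. The toy example below is our own Rust analogy, not Move code: a value that is neither Copy nor Clone can only be consumed once.

```rust
/// Rust analogy for Move's "resource" idea: a non-copyable value always has
/// exactly one owner, and the compiler rejects any attempt to use it twice.
struct Coin {
    value: u64,
}

fn spend(coin: Coin) -> u64 {
    // Taking `Coin` by value consumes it; the caller no longer owns it.
    coin.value
}

fn main() {
    let c = Coin { value: 10 };
    let spent = spend(c);
    println!("spent {spent}");
    // let double = spend(c); // compile error: use of moved value `c`
}
```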

Future Considerations: EVM needs therapy for how insecure it is.

While we’ve painted an incredibly optimistic picture of our on-chain universe post-parallel EVM, none of it matters if EVM and smart contract security deficiencies are not addressed.

Setting aside network economic and consensus security, hackers exploited smart contract vulnerabilities in DeFi protocols on Ethereum to extract over $1.3bn in 2023 alone. As a result, users prefer walled-garden CEXs or hybrid “decentralized” protocols with centralized validator sets, sacrificing decentralization for perceived security (and performance) in favor of an improved on-chain experience.

How much does the average user care about decentralization?

The lack of inherent security features in the EVM's design is the root cause of these breaches.

Drawing a parallel with the aerospace industry, where rigorous safety standards have made air travel remarkably secure, we see a stark contrast in blockchain's approach to security. Just as people value their lives above all else, the security of their financial assets is of utmost importance. Key practices like exhaustive testing, redundancy, fault tolerance, and strict development standards underpin aviation's safety record. These critical features are currently missing in the EVM and, in most cases, other VMs as well.

One potential solution is a dual-VM setup in which a separate VM, such as CosmWasm, monitors the real-time execution of EVM smart contracts, much like how antivirus software functions within an operating system. This structure enables advanced examinations, such as call stack inspection, specifically aimed at reducing hacking incidents. However, this approach would require major upgrades to existing blockchain systems. We expect newer, better-positioned solutions such as Arbitrum Stylus and Artela to successfully implement this architecture from the outset.
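As a rough sketch of what such real-time monitoring could look like, consider a hook that inspects the call stack before each nested call. The trait, names, and threshold below are hypothetical and are not the actual API of Stylus, Artela, CosmWasm, or any client.

```rust
/// Hypothetical "monitor VM" hook observing EVM execution in real time.
struct CallFrame {
    caller: [u8; 20],
    callee: [u8; 20],
    value: u128,
}

trait ExecutionMonitor {
    /// Invoked before each nested call; returning false aborts the transaction.
    fn on_call(&mut self, stack: &[CallFrame], next: &CallFrame) -> bool;
}

struct ReentrancyGuard {
    max_repeated_callee: usize, // illustrative threshold
}

impl ExecutionMonitor for ReentrancyGuard {
    fn on_call(&mut self, stack: &[CallFrame], next: &CallFrame) -> bool {
        // Count frames already on the stack that target the same contract;
        // abort if the call stack shows suspicious reentrancy depth.
        let repeats = stack.iter().filter(|f| f.callee == next.callee).count();
        repeats < self.max_repeated_callee
    }
}
```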

Existing security primitives in the market tend to be reactionary, responding to imminent or attempted threats via mempool inspection or smart contract audits and reviews. Although these mechanisms help, they fail to address the underlying vulnerabilities in VM designs. A more resourceful and proactive approach must be taken to revamp and increase the security of blockchain networks and their application layers.

We advocate for a foundational overhaul in blockchain VM architecture to embed real-time protection and other critical security features, potentially through a dual-VM setup, aligning with practices proven successful in industries with a long and battle-tested history of doing so (e.g., aerospace). As we look forward, we're keen to support infrastructure enhancements that emphasize preemptive methods, ensuring that advancements in security match the industry's progress in performance (i.e., the parallel EVM).

Conclusion

The advent of parallel EVM marks a significant turning point in the evolution of blockchain technology. By enabling the simultaneous execution of transactions and optimizing state access, parallel EVM unlocks a new era of possibilities for decentralized applications. From the resurgence of programmable CLOBs to the emergence of more complex and performant applications, parallel EVM sets the stage for a more unified and user-friendly blockchain ecosystem. As the industry embraces this paradigm shift, we can expect to see a wave of innovation that pushes the boundaries of what is possible with decentralized tech. Ultimately, the success of this transition will depend on the ability of developers, infrastructure providers, and the broader community to adapt and align with the principles of parallel execution, ushering in a future where the tech seamlessly integrates into our daily lives.

The advent of the parallel EVM holds the potential to reshape the landscape of decentralized applications and user experiences. By addressing the scalability and performance limitations that have long hindered the growth of key verticals such as DeFi, the parallel EVM opens the door to a future where complex, high-throughput applications can thrive without compromising on the other legs of the trilemma.

Realizing this vision will require more than just advancements in infrastructure. Developers must fundamentally rethink the architecture of their applications to align with the principles of parallel processing, minimizing state contention and maximizing performance predictability. And even then, despite the bright future ahead, it is crucial that security is prioritized alongside scalability.


Reforge is a prescient, early-stage venture capital firm, grounded in research and committed to backing premier founders in blockchain and adjacent frontier technologies.

+++

This post is for general information purposes only. It does not constitute investment advice or a recommendation or solicitation to buy or sell any investment and should not be used in the evaluation of the merits of making any investment decision. It should not be relied upon for accounting, legal or tax advice or investment recommendations. You should consult your own advisors as to legal, business, tax, and other related matters concerning any investment. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Reforge. While taken from sources believed to be reliable, Reforge has not independently verified such information. Reforge makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. This post reflects the current opinions of the authors and is not made on behalf of Reforge or its Clients and does not necessarily reflect the opinions of Reforge, its General Partners, its affiliates, advisors or individuals associated with Reforge. The opinions reflected herein are subject to change without being updated.
