Coprocessing what?

A look into coprocessing

A few days ago, a startup by the name of Ritualnet hit the scene with a monster raise – 25 million dollars at a low 9-figure valuation. We’ve already seen a few startups focused on coprocessing, but I think it’s worth taking a hard look at the coprocessing usecase – to see if there’s any meat on the bone here.

Why Coprocess?

The key issue with the EVM is twofold – it’s constrained by the goal of making the VM as decentralized (and thus lightweight) as possible, and by the overhead of running things through the EVM itself.

On this latter point, consider an operation as simple as adding two numbers.

In the EVM this takes 4 operations – pushing two numbers onto the stack, adding them, and storing the result – contrast this with, say, the WASM VM, where the addition itself is a single instruction. On top of that, far more engineering effort has gone into optimizing and hardening, say, cryptography libraries written in C than into their Solidity equivalents.
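
To make that concrete, here’s roughly what that addition looks like at the opcode level – a minimal sketch in Solidity inline assembly; the exact opcode sequence depends on the compiler and optimizer settings.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract AddOverhead {
    function addTwo() external pure returns (uint256 result) {
        assembly {
            // Compiles down to roughly: PUSH1 0x03, PUSH1 0x02, ADD,
            // plus a store of the result - several stack-machine steps
            // for what a register machine handles in one instruction.
            result := add(2, 3)
        }
    }
}
```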

Add this up over an entire program execution and it becomes meaningful overhead – not to mention the gas constraints of running in a decentralized setting.

As a fun fact, the cost of WASM computation is so much lower that Arbitrum Stylus prices WASM operations in ‘ink’, a sub-unit of gas (one unit of gas equals 10,000 ink), because a whole unit of gas is too coarse to represent the true cost of these individual operations.

This is an obvious constraint for apps trying to run in this compute environment – so this is where coprocessors step in.

Coprocessors:

Coprocessors allow external compute to be brought into the EVM.

Arguably we already have coprocessors, just under a different name – keepers.

Keepers handle routine upkeep operations for dapps:

  • Some of these keepers are for upkeep operations of public functions (e.g. poke contract)

  • Some are for more trusted usecases (e.g. peg maintenance)

It’s this latter usecase that coprocessors focus on – providing guarantees around this otherwise-trusted execution.

zk coprocessors address this need by making the compute provable – the result arrives with a proof that can be verified onchain before the output is acted on.
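
To make that pattern concrete, here’s a minimal sketch of what a zk coprocessor integration tends to look like – an async job/callback flow where the result only lands onchain after a proof has been verified. The interface and names below are illustrative stand-ins, not any specific vendor’s API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative shape of a zk coprocessor integration - not a real
// vendor API. The verifier gate is the whole point: the callback
// only fires once the proof of the offchain computation checks out.
interface IZkCoprocessor {
    function submitJob(bytes calldata program, bytes calldata input)
        external
        returns (uint256 jobId);
}

abstract contract CoprocessorConsumer {
    IZkCoprocessor public immutable coprocessor;

    constructor(IZkCoprocessor _coprocessor) {
        coprocessor = _coprocessor;
    }

    // The coprocessor contract calls this only after verifying the proof.
    function onJobResult(uint256 jobId, bytes calldata result) external {
        require(msg.sender == address(coprocessor), "only coprocessor");
        _handleResult(jobId, result);
    }

    function _handleResult(uint256 jobId, bytes calldata result) internal virtual;
}
```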

Areas of interest

One area where asynchronous computation shines most is governance.

The baseline here is not a synchronous operation but a multiday governance process. While zk can’t replicate a full protocol upgrade, routine parameter changes – LTVs, interest rate adjustments – are exactly the kind of thing that can be automated, and that’s where this gets particularly interesting. A concrete shape of this is sketched below.
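
Building on the consumer sketch above, a proven parameter update might land like this – the LtvUpdater name, the decoding, and the 9,000 bps cap are all hypothetical, purely for illustration.

```solidity
// Continues the CoprocessorConsumer sketch above (hypothetical).
contract LtvUpdater is CoprocessorConsumer {
    uint256 public ltvBps; // loan-to-value in basis points

    constructor(IZkCoprocessor _coprocessor) CoprocessorConsumer(_coprocessor) {}

    function _handleResult(uint256, bytes calldata result) internal override {
        // The new LTV was computed (and proven) offchain; we only apply it.
        uint256 newLtv = abi.decode(result, (uint256));
        require(newLtv <= 9_000, "LTV too high"); // onchain sanity bound
        ltvBps = newLtv;
    }
}
```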

Another big usecase is enabling trustless snapshots. Today, tokens that want onchain governance need it built right in from the get-go – that is, the ability to determine balances at a specific point in time so that votes can’t be manipulated via flashloans or any other mechanism. A coprocessor that can prove historical balances lifts that requirement for tokens that never baked it in.
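
For contrast, here’s roughly what “built right in” looks like today, using OpenZeppelin’s IVotes checkpointing – a minimal sketch; proposal creation and the snapshot bookkeeping are elided.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {IVotes} from "@openzeppelin/contracts/governance/utils/IVotes.sol";

// Minimal sketch of flashloan-resistant voting: weight comes from a
// checkpoint at a past block, not the live balance, so a balance
// inflated within the voting transaction counts for nothing.
contract SnapshotVoting {
    IVotes public immutable token;
    mapping(uint256 => uint256) public snapshotBlock; // proposalId => block
    mapping(uint256 => uint256) public votesFor;
    mapping(uint256 => mapping(address => bool)) public hasVoted;

    constructor(IVotes _token) {
        token = _token;
    }

    function vote(uint256 proposalId) external {
        require(!hasVoted[proposalId][msg.sender], "already voted");
        hasVoted[proposalId][msg.sender] = true;
        // getPastVotes only answers for blocks already mined, so the
        // number can't be moved by a flashloan in this transaction.
        uint256 weight = token.getPastVotes(msg.sender, snapshotBlock[proposalId]);
        votesFor[proposalId] += weight;
    }
}
```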

As we move in directions with more experimentation, enabled by the likes of Morpho-Blue or Uniswap v4, we’ll definitely see more room for this to grow.

Issues

Overhead

One of the key issues with zk (which is by far the most popular approach) is that the proving overhead is expensive. There’s great work being done to make it easier for devs by abstracting away the complexity of circuits, but one of the remaining pain points is state access.

Unlike a keeper network such as Gelato, zk requires proofs around all operations, including data retrieval from the chain itself. The current setup for RISC Zero and some others is that all the data used in the computation gets passed directly to the coprocessor during the onchain job call.

Axiom, one of the leaders in zk coprocessing, stands out in that it supports state reads inside the proofs themselves, which also brings the ability to read historical slots – e.g. the ETH/USDC price from the Uniswap v3 TWAP 3 days ago.
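
In rough shape, a proof-backed historical read might be requested like this – note that these interfaces are hypothetical stand-ins for illustration, not Axiom’s actual API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical interfaces for a proof-backed historical state read.
// The coprocessor proves the value of a storage slot at a past block
// and delivers it via callback once the proof verifies onchain.
interface IHistoricalStateOracle {
    function requestSlot(
        address target,      // contract whose storage we want (e.g. a v3 pool)
        bytes32 slot,        // storage slot holding the observation
        uint256 blockNumber, // historical block, e.g. ~3 days ago
        address callback     // contract to receive the proven value
    ) external returns (uint256 requestId);
}

interface IHistoricalStateCallback {
    // Called only after the zk proof of the read has been verified.
    function fulfillSlot(uint256 requestId, bytes32 value) external;
}
```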

Hyperoracle adopts a similar solution, but through a subgraph-based design: dapps pull from a zk subgraph instead, saving compute at runtime since the data comes preprocessed.

Asynchronicity

The biggest issue, in the end, is asynchronicity.

Dapps (especially defi) thrive on synchronous execution – it’s what allows atomic arbs and flashloans to exist. The need to add asynchronous transactions not only adds to the cost of running dapps (and raises additional questions around monetization) – it also raises the question of how to design dapps that are built with asynchronicity in mind.

Perp-based protocols (e.g. GMX, Kwenta, etc.) are probably the biggest and best example of asynchronous defi dapps – but theirs is asynchronicity out of necessity, to blunt the frontrunning of oracle updates. In other words, if they could do without asynchronicity, they would. Their two-step flow is sketched below.
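
Here’s a minimal sketch of that two-step shape – illustrative only, not GMX’s or Kwenta’s actual contracts; keeper authorization and oracle price verification are elided.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Users commit to an order in one tx; a keeper executes it in a later
// tx at the then-current oracle price, so users can't pick their price.
contract AsyncOrders {
    struct Order {
        address trader;
        uint256 sizeUsd;
        uint256 submittedAt;
    }

    mapping(uint256 => Order) public orders;
    uint256 public nextId;

    event OrderSubmitted(uint256 id);

    function submitOrder(uint256 sizeUsd) external returns (uint256 id) {
        id = nextId++;
        orders[id] = Order(msg.sender, sizeUsd, block.timestamp);
        emit OrderSubmitted(id);
    }

    // Called by a keeper in a later block with a fresh oracle price.
    // (Keeper auth and price verification elided in this sketch.)
    function executeOrder(uint256 id, uint256 oraclePrice) external {
        Order memory o = orders[id];
        require(o.trader != address(0), "no order");
        require(block.timestamp > o.submittedAt, "same block");
        delete orders[id];
        // ...open the position at oraclePrice (elided)...
    }
}
```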

Conclusion:

The hope is that tapping into this niche of coprocessing opens up a wide design space for novel protocols that harness more computation – to be more expressive, or to increase safety by adding key checks.

I’m sure some will – but fundamental issues around data access and synchronicity limit the usecases, and the real test will be whether buidlers actually use these coprocessors or just fall back to centralized solutions.
