Enhancing AI with Restaking

TL;DR

  • Nektar’s restaking integrates Ethereum staking security with verifiable computation for AI-enabled dapps

  • Off-chain verifiable compute devices ensure correctness and privacy for complex tasks

  • Coprocessors support AI use cases such as verifiable training and inference, ensuring data integrity and correct execution of AI models

  • Coprocessors leverage Ethereum’s historical state for liquidity management, DeFi, and security

  • Coprocessors vary by security model, offering different security, efficiency, and cost tradeoffs

  • Cryptoeconomic coprocessors offer instant settlement via economic bonds and decentralized insurance

  • Restaking aims to enhance AI protocols and incentivize trust in decentralized AI applications through innovative economic and verification mechanisms

Verifiable computation is essential to scaling blockchains in the AI era. Nektar unlocks a new generation of AI-enabled smart dapps by bringing Ethereum staking security to cryptoeconomic coprocessors via restaking.

Blockchain Limitations

Ethereum is known for its computational and storage constraints and the fees it charges for every operation. Yet, thanks to its secure store of state, it retains its status as the permissionless, decentralized global computer. It offers non-functional benefits like open access, self-sovereignty, censorship resistance, and native composability to blockchain applications. However, these applications must keep their computational demands low to remain feasible for on-chain execution.

Various solutions have been proposed to overcome these limitations. Notably, the Ethereum ecosystem has shifted towards moving compute off-chain to third parties rather than solely improving blockchain performance. A prime example is rollups, which have successfully implemented cheaper and faster transactions by batching them together while maintaining the security properties of the base layer through proofs.

The Role of Coprocessors

But what if you don’t need to execute more transactions but instead require more computing power to run a computation- or data-heavy task as part of a single transaction? Enter off-chain verifiable compute devices that do not maintain any state themselves but can provide results with a proof of computation. The proof prevents false claims of work done and guarantees:

  • The work is done correctly.

  • The data stays private.

Off-chain verifiable compute
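
To make that interaction pattern concrete, here is a minimal sketch with a toy job and hypothetical names (run_job, verify_receipt): the coprocessor keeps no state, executes the heavy work off-chain, and returns the result together with a receipt binding it to the exact program and inputs. In this sketch the verifier simply re-executes the job; real coprocessors replace that step with succinct proofs, attestations, or bonded claims, as discussed below.

```python
import hashlib
import json

def commit(obj) -> str:
    """Hash a JSON-serializable object into a hex commitment."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def heavy_task(inputs: list) -> int:
    """Stand-in for a computation too expensive to run on-chain."""
    return sum(x * x for x in inputs)

def run_job(program_id: str, inputs: list) -> dict:
    """Coprocessor side: compute off-chain, return the result plus a receipt."""
    output = heavy_task(inputs)
    return {
        "program": program_id,
        "inputs_commitment": commit(inputs),
        "output": output,
        # A real coprocessor returns a succinct proof or a bonded claim here;
        # this is just a commitment over (program, inputs, output).
        "proof": commit([program_id, commit(inputs), output]),
    }

def verify_receipt(receipt: dict, inputs: list) -> bool:
    """Verifier side: check the receipt matches the claimed program and data.
    Naive re-execution stands in for cheap on-chain proof verification."""
    if receipt["inputs_commitment"] != commit(inputs):
        return False  # the wrong data was used
    expected = heavy_task(inputs)
    return (receipt["output"] == expected
            and receipt["proof"] == commit([receipt["program"], commit(inputs), expected]))

receipt = run_job("sum-of-squares-v1", [3, 4, 5])
assert verify_receipt(receipt, [3, 4, 5])
```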

Coprocessors are customized computational environments tailored to specific, challenging, and tedious tasks, designed to maximize work efficiency. Verifiable computation allows them to perform off-chain tasks without compromising the trustless nature of blockchains and ensures:

  • Validity of the input and output

  • Accuracy of the methods used

Coprocessors enable more complex application logic involving data-driven analysis and intensive calculations that can only be carried out off-chain. They can be applied to DePINs, Robotics, and more.

AI/ML Applications

Machine learning operations are too expensive and limited on-chain, making coprocessors particularly useful for AI use cases:

  1. Verifiable Training: Proving that the correct dataset and learning algorithm were used in the creation of a model. For example, ensuring that no data poisoning occurred in a dataset of non-copyrighted works (see the sketch after this list).

  2. Verifiable Inference: Inference involves running live data through a trained AI model to make a prediction or solve a task. Verifiable inference processes eliminate the risk of manipulation, safeguarding users from potential harm from faulty AI computation.

  3. Verifiable Execution: AI agent projects can benefit from proper verification by relying on ML inference from off-chain verifiable environments. Agent actions must be fully traceable and verifiable to confirm that the AI performs all steps correctly.
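
As a minimal sketch of the verifiable-training idea, assuming a deterministic toy learning algorithm and made-up names, the trainer below publishes commitments to the dataset, the algorithm, and the resulting model; an auditor who re-derives the model from the committed dataset catches any poisoned data or swapped algorithm because a commitment stops matching. Production systems would replace re-training with a proof of training.

```python
import hashlib
import json

def commit(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def train(dataset: list) -> float:
    """Deterministic toy learning algorithm: least-squares slope through the origin."""
    num = sum(x * y for x, y in dataset)
    den = sum(x * x for x, _ in dataset)
    return num / den

dataset = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

# The trainer publishes commitments to the data, the algorithm, and the model.
claim = {
    "dataset_commitment": commit(dataset),
    "algorithm_id": "least-squares-v1",
    "model_commitment": commit(train(dataset)),
}

def audit(claim: dict, dataset: list) -> bool:
    """Re-derive the model from the claimed dataset; poisoning breaks a commitment."""
    return (claim["dataset_commitment"] == commit(dataset)
            and claim["model_commitment"] == commit(train(dataset)))

assert audit(claim, dataset)
assert not audit(claim, dataset + [(4.0, -100.0)])  # poisoned data is detected
```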

As a result, the following proofs can generally be provided (illustrated by the toy example after the list):

  • Correct Application: A specific model and set of parameters (e.g. weights of a neural network) were used to compute the output from given inputs.

  • Parameter Integrity: The parameters used are the ones claimed (i.e. they haven’t been tampered with or altered).

  • Correct Execution: Each step in the computation (e.g., for each layer in a model) was executed correctly.
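
The toy example below, a sketch assuming a two-layer model in plain Python with invented names, spells these out: parameter integrity is a hash check against the committed weights, correct application means the committed model and the user's input were actually used, and correct execution is checked by replaying every layer of the prover's claimed trace. A real deployment would replace the replay with a succinct proof.

```python
import hashlib
import json

def commit(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def layer(x: list, w: list) -> list:
    """One dense layer with a ReLU, in plain Python."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w]

# Toy two-layer model; its weights commitment is assumed to be published on-chain.
weights = [
    [[0.5, -1.0], [1.0, 2.0]],  # layer 1
    [[1.0, 1.0]],               # layer 2
]
weights_commitment = commit(weights)

def prove_inference(x: list) -> dict:
    """Prover: run the model and record every intermediate activation as a trace."""
    trace, h = [], x
    for w in weights:
        h = layer(h, w)
        trace.append(h)
    return {"model": "toy-mlp-v1", "weights_commitment": weights_commitment,
            "input": x, "trace": trace, "output": h}

def verify_inference(claim: dict, claimed_weights: list) -> bool:
    # Parameter integrity: the weights are exactly the ones committed to.
    if commit(claimed_weights) != claim["weights_commitment"]:
        return False
    # Correct application and execution: replay each layer against the trace.
    h = claim["input"]
    for w, step in zip(claimed_weights, claim["trace"]):
        h = layer(h, w)
        if h != step:
            return False
    return h == claim["output"]

assert verify_inference(prove_inference([1.0, 2.0]), weights)
```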

Challenges include data quality, concept drift, and proof systems’ performance limitations. However, by combining ML with off-chain verifiable computation, coprocessors can empower a wide range of new AI opportunities.

Blockchain Data Use Cases

Coprocessors enable off-chain computations to tap into Ethereum’s complete historical state without requiring additional trust assumptions from the application itself — a capability that isn’t feasible with smart contracts today. Often AI-powered, such blockchain applications can provide advanced features to assist informed decision-making.

For example, active liquidity managers can leverage complex strategies based on historical trade data, price correlations, volatility, momentum, and more while enjoying the advantages of privacy and trustlessness.

They can also be used for intelligent DeFi applications to check creditworthiness and evaluate profiles of lenders and borrowers, given their on-chain history.

Another useful AI application area is security, such as on-chain monitoring systems that can detect suspicious activity and power risk management for smart contracts, wallets, or portfolio managers.
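
As a sketch of the liquidity-management case, with invented data and function names, the snippet below computes a realized-volatility figure over a window of historical pool prices and binds the result to the exact block range and observations it used. Actually fetching that data from Ethereum's history and proving the commitment against chain state is the coprocessor's job and is out of scope here.

```python
import hashlib
import json
import math

def commit(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# Hypothetical pool prices sampled from historical blocks: (block_number, price).
observations = [(19_000_000, 3010.0), (19_000_300, 3055.0),
                (19_000_600, 2990.0), (19_000_900, 3021.0)]

def realized_volatility(obs: list) -> float:
    """Standard deviation of log returns over the observation window."""
    returns = [math.log(b / a) for (_, a), (_, b) in zip(obs, obs[1:])]
    mean = sum(returns) / len(returns)
    return math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

# The coprocessor's answer commits to exactly which historical data it read,
# so a consumer contract can hold it accountable for the data source.
result = {
    "block_range": (observations[0][0], observations[-1][0]),
    "data_commitment": commit(observations),
    "volatility": realized_volatility(observations),
}
print(result)
```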

The Types of Coprocessors

Coprocessors differ in the security model and level of assurance needed for different types of computations. Based on their security assumptions, they can be categorized into trustless (ZK), trust-minimized (MPC/TEE), optimistic, and cryptoeconomic.

Sensitive calculations, like matching orders, require maximum security and minimum trust assumptions, making zk-coprocessors a good choice with their strong guarantees. However, zk-coprocessors have downsides in efficiency and flexibility.

Multi-party computation (MPC) enables collaborative computing on sensitive data, while trusted execution environments (TEE) provide secure hardware-based enclaves. They may have acceptable tradeoffs for less sensitive computations, like analytics or risk modeling. While providing weaker assurances, they enable a wider array of computations more efficiently.

Optimistic coprocessors offer cost-effective solutions, but they suffer from significant latency: honest parties must challenge incorrect results with fraud proofs within the challenge window, so security guarantees only arrive once that window has passed.

Finally, cryptoeconomic coprocessors are optimistic coprocessors with a large enough economic bond on execution and an on-chain insurance system that allows others to secure compensation for erroneous computation. This economic bond and insurance can be purchased through decentralized trust marketplaces like Nektar. The advantage is instant settlement, but the downside is the cost of acquiring insurance.

Cryptoeconomic coprocessors
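
A minimal sketch of the settlement logic, with invented bond and insurance figures: an operator posts a bonded claim; a purely optimistic consumer must wait out the challenge window, while an insured consumer can act immediately because a later successful fraud proof slashes the bond and funds compensation.

```python
from dataclasses import dataclass

@dataclass
class BondedClaim:
    result: int
    bond: float            # operator stake that can be slashed
    insured_value: float   # compensation available to the consumer
    challenged: bool = False

def correct_result() -> int:
    """Stand-in for re-executing the disputed computation (the fraud proof)."""
    return 42

def challenge(claim: BondedClaim) -> None:
    """Anyone may submit a fraud proof during the challenge window."""
    if claim.result != correct_result():
        claim.challenged = True

def settle_optimistic(claim: BondedClaim, window_elapsed: bool) -> bool:
    """Plain optimistic mode: only safe once the window closes unchallenged."""
    return window_elapsed and not claim.challenged

def settle_cryptoeconomic(claim: BondedClaim, value_at_risk: float) -> bool:
    """Cryptoeconomic mode: act now if insurance covers the downside;
    a later successful challenge slashes the bond and pays the insurance out."""
    return claim.insured_value >= value_at_risk

good = BondedClaim(result=42, bond=100.0, insured_value=50.0)
bad = BondedClaim(result=41, bond=100.0, insured_value=50.0)
challenge(bad)

assert settle_cryptoeconomic(good, value_at_risk=40.0)  # instant settlement
assert not settle_optimistic(bad, window_elapsed=True)  # caught by a fraud proof
```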

Different types of coprocessors exhibit distinct cost, latency, and security characteristics. The level of security needed depends on the applications’ risk tolerance. Combining different types of coprocessors can help achieve the desired security vs. efficiency tradeoff and lead to an optimized user experience.

Securing Coprocessors with Restaking

Cryptoeconomic coprocessors are the more cost-effective option when the purchased insurance covers the value at risk. Sometimes, unverified but efficient coprocessors are a reasonable engineering compromise for certain non-critical computations.
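
One rough way to read that tradeoff, using illustrative numbers and a simplistic linear premium model rather than anything Nektar-specific: compare the cost of a trustless proof with an insurance premium sized to the value at risk, and fall back to an unverified run only when the value at risk is negligible.

```python
def choose_coprocessor(value_at_risk: float, proving_cost: float,
                       insurance_rate: float, negligible_risk: float = 1.0) -> str:
    """Pick the cheapest execution mode that still covers the value at risk.
    insurance_rate is the assumed premium per unit of covered value."""
    if value_at_risk <= negligible_risk:
        return "unverified"            # non-critical computation
    premium = insurance_rate * value_at_risk
    if premium < proving_cost:
        return "cryptoeconomic"        # bonded + insured, instant settlement
    return "trustless (zk)"            # cheaper to pay for a full proof

for var in (0.5, 500.0, 50_000.0):
    print(var, choose_coprocessor(var, proving_cost=20.0, insurance_rate=0.02))
```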

When the value at stake is high enough to be a critical security requirement, the task can source economic trust from repurposed staked ETH on the Nektar network. This requires developing an Actively Validated Service (AVS) to power the infrastructure.

For AI use cases, smart contracts can access off-chain ML models cost-efficiently and natively when GPU-capable restaking operators serve decentralized inference. The model output also includes a proof that guarantees the computational integrity of model operations, so end users can trustlessly verify that the node running the model did not tamper with the output returned for their query.
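
What that could look like from the consumer's side is sketched below, with a hypothetical operator registry, made-up field names, and an HMAC over a shared secret standing in for a proper operator signature (none of this reflects Nektar's actual AVS interfaces): the user checks that the response binds the committed model, their query, and the output together, that the attestation verifies, and that the operator has enough restaked collateral at stake.

```python
import hashlib
import hmac
import json

def h(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# Hypothetical registry of restaking operators: effective restaked ETH and keys.
OPERATOR_STAKE = {"operator-1": 320.0, "operator-2": 8.0}
OPERATOR_KEYS = {"operator-1": b"op1-secret", "operator-2": b"op2-secret"}

def serve_inference(operator: str, model_hash: str, query: str) -> dict:
    """Operator side: run the model on GPU (elided) and attest to the result."""
    output = f"answer-to:{query}"  # placeholder for real model inference
    payload = {"model": model_hash, "query": h(query), "output": h(output)}
    tag = hmac.new(OPERATOR_KEYS[operator], h(payload).encode(), "sha256").hexdigest()
    return {"operator": operator, "output": output, "payload": payload, "tag": tag}

def accept(resp: dict, model_hash: str, query: str, min_stake: float) -> bool:
    """User side: check the binding, the attestation, and the operator's stake."""
    bound = {"model": model_hash, "query": h(query), "output": h(resp["output"])}
    if resp["payload"] != bound:
        return False
    expected = hmac.new(OPERATOR_KEYS[resp["operator"]],
                        h(bound).encode(), "sha256").hexdigest()
    return (hmac.compare_digest(expected, resp["tag"])
            and OPERATOR_STAKE.get(resp["operator"], 0.0) >= min_stake)

resp = serve_inference("operator-1", model_hash="sha256:abc123", query="classify this")
assert accept(resp, "sha256:abc123", "classify this", min_stake=32.0)
```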

Other tasks like fine-tuning, quantization, distillation, and training can also be supported. By harnessing Ethereum’s native network and capital, Nektar enables a strong level of decentralization and security for such VM-enshrined operations over AI networks.

There are other efforts to enhance AI with restaking, namely:

  • AI protocol settlement on a PoS network secured with restaking.

  • Incentive systems for challengers running proofs of AI inference.

  • Decentralized trust that ensures dataset and session privacy.

Conclusion

The advances in off-chain computing mark a move towards a proof-based future where computation is centralized but verification remains trustless and highly decentralized, evolving Ethereum into a secure and scalable platform for a broader range of AI applications.
