
Dynamic Fee Markets

In many networks, fee markets are a resolution mechanism intended to deter spam and/or to prioritize (or permit) the inclusion and ordering of transactions. This is largely due to the limitations of a blockchain design that incorporates transactions into the blocks of the network: blocks must be broadcast in whole, which becomes a forcing function to limit the size of the block (or at least factor it in).

Quilibrium's scalability design does not suffer from these conditions – spam isn't free anyway, transaction ordering is contingent on a verifiable (and deterministic) random permutation, and the overall rate of transactions that can be processed by the network at its current size (under optimal conditions) is well beyond the combined throughput of all other chains.

By consequence, transaction fees were essentially considered an augmentation that could be made to a function call. Through the various forms of testing the network has been undergoing to ensure we are fully battle-hardened for 2.0's release, it became clear that one factor was being overlooked. Addressing it properly requires pushing the release window out further, in a way that is ultimately a net benefit for provers without risking certain pitfalls of artificially constrained supply/demand problems (i.e. scalping). For the sake of making this post easier to digest, I have split it out into three perspectives: the technical, the economic, and an ELI5.

Technical Perspective

While the data for QUIL tokens is relatively small – not significant in terms of the issuance curve, nor in the proof portion of executing actions on them – there existed the possibility of small amounts of stored data being used in applications to generate excessively large proofs of execution. These proofs grow long (MPC-in-the-head is one of the larger types of proofs), especially when used in non-token use cases, and they weren't originally factored into the emissions curve's space accounting at all in the initial design. But it became obvious this could be a griefing vector in the least-worst case, so these proofs became part of the overall network state growth calculation.

The idea behind the change was simple: a malicious actor wishing to reduce emissions could easily create large amounts of proof data in an application by pre-emptively declaring range over the application's proof range such that the provers took no execution fees, and by consequence could reduce the reward issuance.

To do that, they'd have to provide constant proving over this data, because initiating an application otherwise requires provers to take it on, and an application whose explicit goal is to store large amounts of data would be most attractive to provers seeking a greater spread of emissions. To recoup the cost imbalance, the attacker would need to plan for the cost of the provers for the data combined with the reduction of emissions, which balances out economically such that the winning move is to not spam.

That is, except for one catch – the proofs. While in general execution proofs scale relative to the data, it is theoretically possible to create a proof that is far larger in comparison, escaping the data prover cost consideration while directly affecting the cost of data. When modeling the attack, it became clear that it could be exploited (albeit not cheaply) by someone who wanted to essentially halt emissions entirely, which ran contrary to the intended cryptoeconomic model as designed. Execution caps are not desirable, as there are certainly many long-running operations that can and should live on the protocol, with some caveats. This brings us to the need for a protocol-level enforcement of a data fee market inclusive of execution proofs.

Let's revisit the Proof of Meaningful Work emissions formula (formatted more cleanly):

Recall:

  • n - generation of the network, measured as the truncated integer exponent of the initial difficulty (the VDF iterations required on average to reach ten seconds of real world clock time), ergo, currently, 1.

  • d - total network storage in GB

  • s - prover ring set membership, zero indexed

Additionally, recall that this is the optimal emission formula (you are proving each core shard individually). In the multiproofs scenario, there is a scaling factor that reduces the reward earned to further incentivize maximal individual distribution and replication of data.

For a fee market to be viable, it must be efficiently priced based on a few factors (a sketch of these inputs follows the list):

  1. The data to be settled, in bytes (deterministic by public schema)

  2. The execution to be performed (in circuit size, if online)

  3. The resulting proof size, in bytes
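As a rough illustration in Go, here is one way to capture those three inputs; the names are illustrative assumptions, not the protocol's actual types:

// feeInputs captures the three factors a viable fee market must price.
type feeInputs struct {
	settledDataBytes uint64 // (1) data to be settled, deterministic from the public schema
	circuitSize      uint64 // (2) execution to be performed, in circuit size (online mode)
	proofBytes       uint64 // (3) size of the resulting execution proof, in bytes
}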

This raises an interesting issue. Whereas execution was previously considered a prover-dependent consideration (and therefore up to them), data has deterministic bounds. Importantly, the key insight is that the resulting output proof is concretely a definitively priceable item in terms of bytes. Consider that execution is not infinite – an infinite loop cannot be expressed as a pure garbled circuit, as it would be infinitely long. To support unbounded computation, we have the notion of continuations (something important for our MetaVM, though that application is not shipping with 2.0). So rather than impose an execution cap, we can impose a maximal circuit boundary before continuations must be used. If the garbler reaches this limit, execution state must be stored, and the effective transaction must be continued with greater payment. Given the possibility of lock(s) being held by an effective transaction in progress, this imposition creates a problem for shared resources.
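A minimal sketch of the continuation boundary, assuming a hypothetical maxCircuitSize limit and a simplified view of the garbler:

// maxCircuitSize is a hypothetical cap on garbled circuit size before
// execution state must be stored and the call continued with greater payment.
const maxCircuitSize = 1 << 24

// continuationsNeeded reports how many follow-up transactions a call of the
// given circuit size would require beyond the initial one.
func continuationsNeeded(circuitSize uint64) uint64 {
	if circuitSize == 0 {
		return 0
	}
	return (circuitSize - 1) / maxCircuitSize
}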

To rectify the shared-resource problem, a continuous function call is given a limited TTL of the subsequent shard frame until the conclusion of the call. If this TTL is passed, the transaction fails; to avoid the potential for griefing vectors with locks, the locks are freed and the fee is forfeit to the prover. In a network where provers can freely pick and choose which transactions are handled via a public mempool, this would now be an opportunity for malicious provers to intentionally disregard the continuation execution. But the network's transactions are forcibly included and unlinkable to the sender, so preventing a continuation would first require an attack on both the onion routing and the mixnet, and would be detectable by other provers; identifying such behavior would then eject the malicious prover from the ring and allow the continuation of the function call.
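Sketched out (with illustrative names; the real frame and lock structures are more involved), the TTL rule looks roughly like:

// continuationState tracks an in-progress call whose execution spans frames.
type continuationState struct {
	expiresAtFrame uint64   // TTL: the subsequent shard frame number
	heldLocks      []string // shared resources locked by the in-progress call
	feePaid        uint64   // fee already escrowed by the payer
}

// enforceTTL fails an expired continuation: locks are released so other
// transactions can proceed, and the escrowed fee is forfeited to the prover.
func enforceTTL(c *continuationState, currentFrame uint64, releaseLock func(lock string), payProver func(amount uint64)) bool {
	if currentFrame <= c.expiresAtFrame {
		return false // still within the TTL, nothing to do
	}
	for _, l := range c.heldLocks {
		releaseLock(l)
	}
	payProver(c.feePaid)
	return true
}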

We can also, in future releases, introduce additional offline proofs or enhancements to our MPC-in-the-head proof system to reduce proof size, should research present such an opportunity. This would have a net benefit to network state growth (indeed, a reduction in the latter case). We even considered some off-the-shelf options, but while their proof sizes were great, their proving speeds were unusable.

Taken together, this gives us three important pieces of information:

  1. (Known upfront) Whether a call will lead to continuations

  2. (Known upfront) What the total data settled will be in size

  3. (Not necessarily known upfront) What the size of the proof will be

Therefore, to support transaction fees correctly, the fee market must of course adjust in relation to the impact on overall network state, but the fee must also be paid after the stateful outcomes have occurred. For Ethereum, this is achieved by the use of "gas", with a global fee market (now two, with blobs versus regular transactions). For Quilibrium, since we don't need a "maximum gas per block", we can instead depend on the price of bytes in terms of how they impact the network.

Because QUIL token transactions are explicitly bounded in known size upfront and do not depend on variably sized loops, we can also depend on static byte lengths here in terms of how they affect the cost basis. This adjustment, however, means that in the online mode, fees paid for transactions must necessarily round-trip after the cost has been determined; in the offline mode, the fee can simply accompany the transaction. But how do we prevent the online mode from becoming a griefing vector itself?
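A rough sketch of the two cases, with all names being illustrative assumptions rather than protocol APIs:

// tokenTransferFee: QUIL token transactions have a statically known byte
// length, so their cost can be computed upfront from that length alone.
func tokenTransferFee(staticBytes, feePerByte uint64) uint64 {
	return staticBytes * feePerByte
}

// settleOnlineCall: for online execution, the proof size (and therefore the
// cost) is only known after execution, so payment round-trips afterwards.
func settleOnlineCall(execute func() (proofBytes uint64), feePerByte uint64, charge func(fee uint64) error) error {
	proofBytes := execute()
	return charge(proofBytes * feePerByte) // cost determined only after the fact
}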

Structurally, the fee wrapper for a transaction thus becomes (roughly protobuf-ish):

DecryptableRequest {
	bytes domain;
	bytes application;
	bytes encoded_inputs;
	AccountRef payer;
	bytes proof;
	KeyRing key_ring;
	DeliveryMethod delivery_method;
}

Whereas before the encoding only included the first three and last two fields, the wrapper now incorporates payer and proof explicitly, where payer is the address of the payer, and proof is a constructed proof that verifies the address can sufficiently pay for the transaction. If the execution proof size is unknown and ends up too large for the request, the transaction aborts and the payment is forfeit, in the same way as passed TTLs for explicitly continuous functions.
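To make the abort rule concrete, here is a minimal sketch, under the assumption that the payment proof establishes a maximum fee the payer has covered for the request:

// verifyAndSettle aborts the transaction and forfeits the payment if the
// execution proof turns out larger than the request's payment covers,
// mirroring the TTL-expiry rule for explicitly continuous functions.
func verifyAndSettle(coveredFee, actualFee uint64, forfeit func(amount uint64), charge func(amount uint64)) bool {
	if actualFee > coveredFee {
		forfeit(coveredFee) // abort: payment is forfeit to the prover
		return false
	}
	charge(actualFee)
	return true
}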

The final wrinkle in incorporating fees is that of processing proof mints for new QUIL. This will be the one exceptional circumstance encoded into the protocol where the operation (being extrinsic anyway) is free.

To support secondary fee multipliers for a fee market so provers can collectively price in the effective rate at a given time, the contents of a core shard frame are extended to include a baseline_multiplier value, which in turn goes into effect as the lowest of the multipliers provided by the provers in the last eight proving windows for a given ring.
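A minimal sketch of that selection, with simplified bookkeeping of the proving windows (the frame encoding itself is not shown):

// effectiveMultiplier returns the baseline_multiplier that goes into effect:
// the lowest of the multipliers provided by the ring's provers over the
// last eight proving windows.
func effectiveMultiplier(lastEightWindows [][]uint64) uint64 {
	lowest := ^uint64(0)
	for _, window := range lastEightWindows {
		for _, m := range window {
			if m < lowest {
				lowest = m
			}
		}
	}
	if lowest == ^uint64(0) {
		return 1 // no commitments observed; illustrative fallback of no multiplier
	}
	return lowest
}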

With this out of the way, the fee market behaviors must be described.

Economic Perspective

To reduce the complexity of additional interval-driven consensus work, we will recalibrate baseline fees at the same time as the difficulty recalibration interval. The fee must take into consideration the impact of new state growth only. Reduction of state, e.g. joining tokens together or deleting objects, should implicitly be free, as it positively impacts the emissions rate. To set a starting point, the baseline fee will correspond directly to its effect on emissions, truncated at zero, for the duration of the recalibration interval: baseline_fee = max((current_emission_rate - affected_emissions_rate)^2 * units_per_quil / current_world_state_size, total_bytes). In other words, a baseline fee can never be zero, and when emission rates reach a plateau, the fee at a minimum must be 0.000000000125 QUIL per byte of the transaction.

As a trivial example: if the global emission rate were 1024 QUIL per interval for all provers, the impact on the emissions rate would reduce it to 724 QUIL for all provers, and the interval between recalibrations is an hour, then any transaction of that size would incur a baseline fee of 90,000 QUIL during this time. Meanwhile, a regular token transaction in the same circumstance would only incur cost via splitting the coin, at approximately 0.000025 QUIL.
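Spelled out as a sketch (variable names follow the formula above; the exact unit handling is an assumption here, with the fee expressed in base units):

// baselineFee follows the formula above:
// max((current_emission_rate - affected_emissions_rate)^2 * units_per_quil / current_world_state_size, total_bytes).
func baselineFee(currentEmissionRate, affectedEmissionsRate, unitsPerQuil, currentWorldStateSize, totalBytes uint64) uint64 {
	var delta uint64
	if currentEmissionRate > affectedEmissionsRate {
		delta = currentEmissionRate - affectedEmissionsRate // truncated at zero: state reductions carry no quadratic term
	}
	fee := delta * delta * unitsPerQuil / currentWorldStateSize
	if fee < totalBytes {
		fee = totalBytes // the max(..., total_bytes) term: never below one unit per byte
	}
	return fee
}

The max against total_bytes is what guarantees the floor described above: even when emissions plateau and the quadratic term vanishes, every byte of a transaction still costs at least one unit.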

By the time the network would make gigabyte transactions reasonably priced in baseline fees, the secondary dynamics of fee markets come into play: provers interactively declare a fee multiplier. To prevent squatting/scalping-oriented attacks, provers commit to a multiplier of their fee in their prior frame, and the lowest multiplier in the full ring cycle becomes the maximum multiplier they may use. Since fee market data is public, and anyone can join a prover ring, node operators can find hot spots in the market and take on proving in that ring to reduce the multiplier, allowing the market to efficiently price data. As prover rings take time to enter and exit without misbehavior penalty, this smoothing process may be slow (on the order of a minute and a half), but it is still a faster correction than networks with congestion-pricing dynamics.

ELI5

Let's say you and your friends decide to start an arcade. Some people show up with friends and don't play the games, and some of them even just sit on the cabinets, preventing other people from playing. That means they aren't spending any money, making things worse for others and taking up space!

So, you set an entry fee, and sometimes people have to wait to enter. The arcade becomes very successful and keeps getting bigger, with more exciting games. You're even starting to build some of the games yourselves! But it costs more money for the arcade to operate (insurance, advertising, and over time, upkeep), and the arcade game makers renting games to the arcade demand a cut of the quarters being put into them.

The entry price helps at first, but it becomes less important as the arcade can hold more games and people, and the games themselves bring in enough money to pay for the arcade itself. Eventually, your arcade is so large, you don't need the entry fee anymore!

But those arcade cabinets are getting expensive for customers. To make sure the cabinets are fairly priced, you set a rule with the cabinet makers: they must set the cost per play competitively with the other cabinets, and if all the cabinet makers start overpricing, well, there go the customers who were feeding them quarters.
