What to Expect When You're Expecting 2.0

Launch readiness notes and FAQ

As we wrap up the final launch checks for 2.0 mainnet, folks have been spending considerable time figuring out optimal strategies for running node clients and clusters (which has been incredibly cool to see, personally speaking), and it has led to some surprising outcomes. While the average core count for a logical node remains in the double digits, there are some bold explorers out there who have attempted to go to the further reaches of clustering, finding interesting pitfalls with respect to the second portion of the proving process. What this article intends to do is help folks better understand what the launch process is going to be like, what the circumstances of being a prover on mainnet are, how to think about seniority as a factor when winding down or spinning up new hardware, and a few assorted questions that are frequently asked (and frequently answered).

Launch Process

As previously mentioned, the 2.0 rollout begins with a 24-hour "stasis lock" while the binaries are released to signers. This stasis lock is essentially a period in which signatories sign the release. Given that some will run the client from source, we want to ensure fairness for those who wait for the full release process to complete, and the 24-hour period is sufficient to allow signing, node upgrades, and the initial mesh to build. During this time there will not be much log activity, aside from waiting for the release signature, which will serve as a unique value that seeds the network with a fair genesis value. After the final signature is provided, the network will exit the stasis lock and begin prover enrollment. Prover enrollment is based on seniority, an aspect previously elaborated on in One Ring to Prove them All.
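
The idea behind using the release signature as a seed is that no party can predict or bias the value before the signatories have all signed. As a minimal sketch (the function name and the use of SHA-256 here are illustrative assumptions, not the node's actual derivation), the unpredictable signature bytes can be hashed into a fixed-size seed:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// deriveGenesisSeed is a hypothetical illustration: the final release
// signature is unknowable until the signatories produce it, so hashing
// it yields a fair, unbiasable genesis value for the network.
func deriveGenesisSeed(finalSignature []byte) string {
	digest := sha256.Sum256(finalSignature)
	return hex.EncodeToString(digest[:])
}

func main() {
	// Placeholder bytes standing in for the real release signature.
	sig := []byte("placeholder-release-signature-bytes")
	fmt.Println(deriveGenesisSeed(sig))
}
```

Any commonly agreed-upon hash would work for this purpose; the important property is that the input cannot be chosen by any single party.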

Seniority is by far the greatest tangible measure of a prover's commitment to running a node, as it is a value that directly corresponds to time spent participating on the network honestly, maintaining updates as appropriate, and demonstrating the capabilities of the hardware under the concretely measurable evaluation over the 1.4.19/.20/.21 series. Also as previously mentioned, some folks have rotated keys during this time, and so prover enrollment will enable the combination of multiple keys to yield a "combined" seniority value (the quotation marks are important: the combination is not simply a sum, overlapping regions of time do not yield additional benefit, and these keys are only usable in any combination once).
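
The "not simply a sum" point can be made concrete. A minimal sketch of the described behavior (this is an assumption about the semantics, not the actual enrollment code) is to treat each key's participation as a time interval and measure the union of all intervals, so overlapping periods count only once:

```go
package main

import (
	"fmt"
	"sort"
)

// interval is a half-open [start, end) participation period,
// e.g. in unix seconds.
type interval struct{ start, end int64 }

// combinedSeniority measures the union of participation periods
// across multiple keys: overlapping time is counted only once,
// so the result is never more than the sum of the parts.
func combinedSeniority(periods []interval) int64 {
	if len(periods) == 0 {
		return 0
	}
	sort.Slice(periods, func(i, j int) bool {
		return periods[i].start < periods[j].start
	})
	var total int64
	cur := periods[0]
	for _, p := range periods[1:] {
		if p.start <= cur.end {
			// Overlapping or adjacent: extend the current run.
			if p.end > cur.end {
				cur.end = p.end
			}
		} else {
			// Disjoint: bank the current run and start a new one.
			total += cur.end - cur.start
			cur = p
		}
	}
	total += cur.end - cur.start
	return total
}

func main() {
	// Two keys that each ran for 100 units, overlapping for 50:
	// the combined value is 150, not 200.
	fmt.Println(combinedSeniority([]interval{{0, 100}, {50, 150}}))
}
```

This is why rotating keys and running them concurrently confers no extra benefit: concurrent time collapses into a single covered region.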

Monitoring the successful conclusion of the stasis lock and entry of prover enrollment will allow us to move on to the next step: the applications. The bridge application will counter-intuitively need to be deployed first, as the token application will have the carried-over state from previous network iterations, including the presently paused bridged QUIL addresses. After the bridge application is deployed, the token application and corresponding carried over state will be deployed. With these in place, the bridge will be unpaused by the release signatories, and we are officially fully launched for 2.0 mainnet.

For the full week after this launch, provers running during 1.4.19/.20/.21 will have an opportunity to mint out their earnings, but as mentioned before, after that week has passed, the mint support for these eras of proofs will be closed forever.

A note about optimal proving

Since this launch has taken longer than initially planned, it has left room for people to get very creative in maximizing the rewards their nodes can earn, and for the folks with much greater familiarity with the underlying cryptography in use, it has prompted questions about why certain aspects were not optimal: one example being that the KZG proofs using DFT were not parallelized. Mainnet does not run things in quite the same order as the previous release: there are multiple layers of proofs at hand, in parallel for mainnet, relative to the different tiers of the shard tree. But crucially, these layers, especially at the core/data shards, are driven fully down to the data workers, and so the high-degree KZG proofs presently in use at the master worker level in 1.4 (at least, for the high core count cluster nodes) will instead be done by the data workers, each individually relative to the range of core shards it is covering.

This leads to a clever question I've received in a few different forms: if seniority dictates priority of prover slot enrollment, and I have [insert arrangement of keys of relative seniority details], should I wait until as close to the end of the full week as possible before migrating? This is a hard question to answer, in part because it depends on many unknowns: how many people will upgrade their fleet within the stasis lock period, how many of the older keys will be used (or are lost forever), how sparse the shard tree coverage will initially be, and ultimately, based on that information, what place seniority would yield in terms of rewards. The simplest observation is that reward issuance will continue to decrease over time per the protocol. The complication is that even with that simple observation, there are combinations of configurations (or mere circumstances, such as few high seniority provers with high core counts showing up on day one) that may end up earning more rewards per day than they did in 1.4.x, and certainly combinations that will earn less. There is no ideal formula or spreadsheet that will successfully game this out in advance: it will be a combination of gambler's ruin (holding on for the extra seven days of the prior release significantly risks prover slot priority) and prisoner's dilemma (if most take the extra seven days to migrate, that prover slot risk is reduced, with the greatest benefit going to the first movers at the end of those days). In the long term, the guaranteed mostly-optimal strategy is to upgrade immediately.

Some FAQs

  • "What do the log values mean? Someone said ts is the reward value, that doesn't seem right."

    • ts is the timestamp value of the log, in unix epoch time. I do not regret to inform people that they did not somehow find a protocol bug that instantly minted them billions of QUIL. Similarly, increment does not mean QUIL earned per step. You can always query your exact earnings over 1.4.19/.20/.21 via the --balance command flag or the GetTokenInfo RPC. Please note that this is the Internet, and not everyone is being truthful about what they have earned, whether intentionally or unintentionally; screenshots are evidence of pixels, not of math. The network will allow people to prove what they have actually earned.

  • "Some people are using faster hardware to get to a higher increment, then moving to larger clusters once the hardest increments have passed (700k). Isn't that cheating?"

    • No. Increments are scaled for rewards relative to the number of cores that produced the proofs, so 700k increments under 3 cores does not produce a reward at the scale of 700k increments under 1024 cores, as an extreme example. Each increment individually has the degree of parallelism required to produce the proof baked into it.

  • "What will the division of node labor look like post-2.0?"

    • It's going to vary! Initially, we will likely only see sufficient node coverage for global shards, and the tree depth will remain strictly "logical" shards in nature. Given the coverage required through a greater number of workers before the network will permit logical shards to branch out into smaller ranges, it is likely we will not start to see workers operating strictly at the core shard level until 2025, given current network growth trends. But every time I have tried to predict how quickly people will join the network, I have been wrong by an order of magnitude, so this is not gospel, and it could happen sooner.

  • "How do I combine keys for seniority?"

    • 2.0's release will have specific steps for how to load multiple key pairs together for this purpose. The short synopsis is that you will be able to load multiple .config bundles together, and the node will inform you of the seniority score the combination would have. If you haven't changed keys the entire time you've been running nodes, you will have to do literally nothing besides upgrade – this step is only for node operators who have explicitly rotated keys. Combining keys is not a reusable step. Once it has been performed on the network for prover enrollment, it is locked in.

  • "Can't the VDF and KZG proof be sped up for even faster rewards?"

    • It theoretically (and most certainly) can, with either GPU implementations or at least certain processor-specific instructions. We made use of neither during this time to ensure as many people could participate as possible, but the theoretical possibility does not preclude the notion that others might have done so or are actively trying to do so. Since this is an AGPL-licensed protocol, we ask that they kindly contribute such improvements back to the protocol, as the license requires. (Note this does not apply to applications deployed to the network, only to the code of the network protocol itself.)

  • "Are there any additional partnerships being announced with teams building on Q?"

    • More on this very soon. 🙂
