Cover photo

Understanding Ethereum Network Upgrades: Lecture 4

The Other EIPs and What Was Left Out

This lecture covers the various other EIPs involved in this network upgrade, as well as the EIPs which did not end up being included.

Index

  1. Lecture

  2. Office Hours

  3. Supplemental Resources


Lecture

▼ Lecture Transcript

The following transcript was auto-generated and may contain syntactical errors.

00:00:02.000 --> 00:00:09.000

All right, everyone, welcome back. We're in week 4 of Understanding Ethereum Network Upgrades.

00:00:09.000 --> 00:00:29.000

And it's almost a speed run. This is gonna be a speed run today, because our topic is the other EIPs and what was left out, and that is a huge amount of information.

00:00:29.000 --> 00:00:43.000

So we're gonna hop right into it. And so just as a reminder, we are talking about actually 2 upgrades together, Cancun and Deneb.

00:00:43.000 --> 00:00:46.000

So.

00:00:46.000 --> 00:00:52.000

Cancun, we've already talked to you about EIP-4844, proto-danksharding.

00:00:52.000 --> 00:01:02.000

That was the topic of Lecture 3. But that's not the only thing that is in the execution layer upgrade.

00:01:02.000 --> 00:01:11.000

So Matt, do you wanna kind of talk about how the process went after EIP-4844 was, like, decided upon? What else

00:01:11.000 --> 00:01:18.000

was considered for inclusion generally?

00:01:18.000 --> 00:01:30.000

Yeah. In general, like I said in the previous calls and previous lectures, the forks are typically driven by one main EIP with a bunch of other drivers.

00:01:30.000 --> 00:01:43.000

In this case, the debate was between 4844 and something called EOF as the main drivers of the fork, with a bunch of different grab-bag EIPs that I'll discuss, some of which were included and some of which were cut.

00:01:43.000 --> 00:01:49.000

I think in all, Cancun and Deneb include about 6 or 7 EIPs that we're going to talk about today.

00:01:49.000 --> 00:02:04.000

We already discussed 4844 at length, of course, but you know, typically with these forks there is not really room to test, develop, and ship 2 major changes at the same time.

00:02:04.000 --> 00:02:11.000

So I think this will get interesting going forward as we potentially decouple execution layer and consensus layer

00:02:11.000 --> 00:02:23.000

upgrades, so maybe we'll ship EOF in its own fork that exists only on the execution layer, where it updates the EVM and the consensus layer doesn't really need to worry about it, or vice versa, right?

00:02:23.000 --> 00:02:37.000

The consensus layer needs changes; how do we worry about that? So that's just context here. The main EIP is 4844; we'll discuss what was left out and how we got to those conclusions.

00:02:37.000 --> 00:02:53.000

But yeah, there was really not much room in this hard fork other than for 4844. So we really only included a handful of small EIPs and some minor spec changes, which we'll get into, purely because we wanted to focus the bulk of our testing effort on getting

00:02:53.000 --> 00:03:00.000

4844 out the door as soon as possible without creating more dependencies for ourselves.

00:03:00.000 --> 00:03:01.000

Yeah, it's the same. Oh, go ahead, Tom.

00:03:01.000 --> 00:03:09.000

Oh, well, just to point out, you know, when we talk about this hard fork, we're really actually talking about 2 simultaneous hard forks.

00:03:09.000 --> 00:03:16.000

That's the way to think about it. I think we've said that a couple of times, but it's a good reminder.

00:03:16.000 --> 00:03:26.000

So EIPs don't actually have to apply to both the consensus layer and the execution layer, but in the case of EIP-4844,

00:03:26.000 --> 00:03:35.000

it does apply to both. And so you won't see the same EIPs on both

00:03:35.000 --> 00:03:38.000

upgrades, but you might see this also represented as the Dencun upgrade, and that would list everything in there.

00:03:38.000 --> 00:03:44.000

So I'll turn it back over to you, Matt.

00:03:44.000 --> 00:03:53.000

Yeah, and I think this is the reason I included this slide as is, because we had initially only scoped EIP-4844 for the consensus layer.

00:03:53.000 --> 00:04:05.000

But we now have maybe 4 or 5 on the consensus layer. So I think that's funny that we started initially with just one and then we ended up with some scope creep as we always do.

00:04:05.000 --> 00:04:14.000

But we don't need to, like, sit on this slide for too long. I think we can just dive right in, since we're gonna have a lot of material.

00:04:14.000 --> 00:04:34.000

Okay. As I mentioned previously, the SELFDESTRUCT operation in the EVM has a kind of unbounded, non-linear gas cost for destructing contracts, essentially, and clearing their storage from the Ethereum state.

00:04:34.000 --> 00:04:40.000

So we signaled deprecation of self-destruct in the Shanghai hard fork

00:04:40.000 --> 00:04:45.000

by saying, hey, you know, if developers are using this, reach out to us, but we're planning on deprecating self-destruct fully as a pattern in Cancun, and we are now in Cancun.

00:04:45.000 --> 00:04:55.000

So going forward there will be kind of new approaches to self-destruct, primarily that it can only be completed within the same transaction.

00:04:55.000 --> 00:05:06.000

If you create and destroy a contract within the same transaction, that's a seemingly fine pattern.

00:05:06.000 --> 00:05:16.000

However, we're kind of removing the old self-destruct pattern that allows for some interesting gas

00:05:16.000 --> 00:05:23.000

golfing, kind of a little bit of gas cheating, and it also causes the nodes that run on the network to do an unbounded amount of work

00:05:23.000 --> 00:05:31.000

for one cost in gas. So that's sort of a DoS vector we are closing in these, you know, releases going forward.

00:05:31.000 --> 00:05:42.000

So we do want to enable some of these patterns with self-destruct and create because a lot of times smart contracts will create and destroy subordinate contracts within like one frame of execution.

00:05:42.000 --> 00:05:50.000

So if, for example, you're doing something on Uniswap, maybe, in order to handle some complexity underneath the hood,

00:05:50.000 --> 00:06:02.000

they might create and then delete contracts essentially within those transaction executions. The same goes for MEV searchers.

00:06:02.000 --> 00:06:08.000

A lot of the times MEV bots and searchers will use self-destruct to create and destroy contracts.

00:06:08.000 --> 00:06:21.000

within kind of the same block, essentially, in order to do a variety of MEV things and to kind of hide gas and sneak things into the state without actually committing them to the state.

00:06:21.000 --> 00:06:22.000

So it's not necessarily a design flaw, but it's something that we recognize as needing an update.

00:06:22.000 --> 00:06:34.000

So there's EIP-6190. That's a new format for Verkle that's compatible, or excuse me, a new format for self-destruct

00:06:34.000 --> 00:06:39.000

that's compatible with Verkle. Right now we've settled on an EIP that's really just to say,

00:06:39.000 --> 00:06:50.000

if you create and destruct within the same transaction, that's valid, but it's not necessarily going to be a pattern that we support long term.

00:06:50.000 --> 00:06:56.000

So self-destruct is basically going away, or rather being replaced by another opcode.

00:06:56.000 --> 00:07:01.000

Verkle tries make this a little more interesting. We will need a Verkle-compatible self-destruct.

00:07:01.000 --> 00:07:13.000

So 6190 did not actually make it in as such. What is included in the fork is a means to deactivate self-destruct, but since we don't have Verkle tries yet, we might tweak this again.

00:07:13.000 --> 00:07:24.000

But our goal really with these EIPs is to get folks to stop using self-destruct.
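
To make the same-transaction rule described above concrete, here is a minimal Python sketch of a toy state model. It is an illustration only, not client code, and the account layout is invented for the example.

```python
# Toy model of the same-transaction self-destruct rule (illustrative, not client code).
class ToyState:
    def __init__(self):
        self.accounts = {}            # address -> {"balance", "code", "storage"}
        self.created_this_tx = set()  # contracts created in the current transaction

    def create(self, addr, code):
        self.accounts[addr] = {"balance": 0, "code": code, "storage": {}}
        self.created_this_tx.add(addr)

    def selfdestruct(self, addr, beneficiary):
        acct = self.accounts[addr]
        target = self.accounts.setdefault(
            beneficiary, {"balance": 0, "code": b"", "storage": {}})
        target["balance"] += acct["balance"]   # the balance always moves
        acct["balance"] = 0
        if addr in self.created_this_tx:
            del self.accounts[addr]            # full removal only in the creation transaction
        # Otherwise code and storage stay put, so nodes never do unbounded clearing work.

    def end_transaction(self):
        self.created_this_tx.clear()
```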

00:07:24.000 --> 00:07:29.000

Yeah, there's a little quote from, you know, Vitalik down there about proof size.

00:07:29.000 --> 00:07:34.000

Blah, blah, blah. I don't know, you know, that this is

00:07:34.000 --> 00:07:36.000

not as important. We can go on to the next slide.

00:07:36.000 --> 00:07:44.000

I'll just say that, having been, you know, an instructor in the Consensys Academy boot camp,

00:07:44.000 --> 00:07:55.000

we've seen people use it. The original intention of self-destruct was that if you don't need a smart contract anymore, you can remove it and not have to keep that in state.

00:07:55.000 --> 00:08:03.000

So a lot of times we talk, I think to general audiences, about how, like, oh, you know, immutability, once something's written to the blockchain, it can't be undone.

00:08:03.000 --> 00:08:14.000

That is absolutely untrue. I don't know why we got into saying that because that's like fundamentally false.

00:08:14.000 --> 00:08:23.000

But, you know, I think some of the original thinking around self-destruct was that it would allow for that state bloat to be removed.

00:08:23.000 --> 00:08:34.000

And there was sort of a naive thinking about how it'd be used, and then

00:08:34.000 --> 00:08:49.000

creative smart contract developers found a lot of interesting ways to use it, including, as Matt was saying, creating what I would call instant upgradeable patterns to, like, sort of change how things worked.

00:08:49.000 --> 00:09:07.000

So there has been a lot of feedback from the dapp developer community around how this is handled, and my opinion is that that's why the scope of what was done has actually been reduced, because there was a lot of pushback on implementation.

00:09:07.000 --> 00:09:24.000

Alright, speaking of smart contracts, speaking of the dapp developer community, transient storage is actually another great example of something that came out of the dapp developer community, specifically Uniswap.

00:09:24.000 --> 00:09:40.000

Awesome. Yeah. So transient storage was another one that is basically an attempt at fixing gas costing for stuff that was enabled by Solidity's and other languages' interpretations.

00:09:40.000 --> 00:09:55.000

And you know, I think this is a funny common theme in Ethereum, that patterns will be created that are roundabout, essentially. Instead of using, like, a storage variable that's transient between frames of execution,

00:09:55.000 --> 00:10:04.000

you could use basically SLOAD and SSTORE to put data into the state very briefly and then retrieve it within the next kind of frame of execution,

00:10:04.000 --> 00:10:16.000

instead of just having a piece of information that is available to, essentially, an entire contract execution. So previously it would say, hey, like, I have this information that I'm gonna keep in the state for,

00:10:16.000 --> 00:10:25.000

you know, in between, maybe I have one smart contract that I need to interact with and another contract that interacts with another, because it's all within one transaction.

00:10:25.000 --> 00:10:39.000

Transient storage allows us to put in kind of an interim memory that will live within those execution frames but is not stored into the state, which means we can lower the cost, as far as gas is concerned, to developers.

00:10:39.000 --> 00:10:48.000

So lower gas costs to developers and users, but also we don't need to bloat the Ethereum state with stuff that only really needs to stay around essentially as RAM instead of ROM.

00:10:48.000 --> 00:10:55.000

So that's kind of the way that I think about transient storage, which is a TSTORE operation in the opcodes, you know, TSTORE, TLOAD.

00:10:55.000 --> 00:11:07.000

The S is more like SLOAD, and SSTORE is storing to the state, so those are kind of the ROM operations, and TSTORE is more like kind of a RAM operation in between execution frames.
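
As a rough illustration of the RAM-versus-ROM distinction, here is a small Python sketch of a transaction-scoped key/value store next to a persistent one. It only models the lifetime difference, not gas costs or the real opcode encodings.

```python
class ToyContractStorage:
    def __init__(self):
        self.persistent = {}  # SSTORE/SLOAD: merkleized into state, survives the transaction
        self.transient = {}   # TSTORE/TLOAD: visible across call frames, gone at tx end

    def sstore(self, key, value): self.persistent[key] = value
    def sload(self, key):         return self.persistent.get(key, 0)
    def tstore(self, key, value): self.transient[key] = value
    def tload(self, key):         return self.transient.get(key, 0)

    def end_transaction(self):
        # Nothing in `transient` ever reaches the state trie, so there is no storage to clean up.
        self.transient.clear()

# Typical use: a reentrancy lock or a value shared between call frames within one transaction.
storage = ToyContractStorage()
storage.tstore("reentrancy_lock", 1)
assert storage.tload("reentrancy_lock") == 1
storage.end_transaction()
assert storage.tload("reentrancy_lock") == 0
```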

00:11:07.000 --> 00:11:22.000

I think this is a great example of an EIP that was included due to the championing of the team behind it because, if you see in this little blurb from GitHub that I included, we had people at Uniswap implement it in

00:11:22.000 --> 00:11:27.000

all of the clients. So we had PRs to, you know, the Besu project from Uniswap and others.

00:11:27.000 --> 00:11:30.000

So it's really easy to be like, yeah, I support this EIP because it's already built and all we need to do is test it.

00:11:30.000 --> 00:11:36.000

And the fundamentals are sound, right? So if we have those kinds of approaches, it's really easy to get things included.

00:11:36.000 --> 00:11:53.000

And that's why I said in the previous lectures: please prototype stuff. People really love it, it makes it easy for us to champion things on core devs calls, and it makes it easy for things to be included in hard forks.

00:11:53.000 --> 00:12:00.000

So I think that Uniswap wanted this because they have one of the largest contracts in the world.

00:12:00.000 --> 00:12:09.000

So having any gas cost savings for them is tremendously huge, especially when they're potentially doing things like self-destruct and create.

00:12:09.000 --> 00:12:19.000

where they could be just using, you know, kind of RAM-based EVM storage, and that's, again, sort of a misnomer, but I think it's an easy way to think about it.

00:12:19.000 --> 00:12:29.000

As part of your weekly readings and videos, we've included EIP-

00:12:29.000 --> 00:12:40.000

1153, and we have a video, a short video, of one of the proposers of this, Moody Salem, one of the authors,

00:12:40.000 --> 00:12:45.000

explaining it, I believe at a community conference in 2021 or 2022. So definitely check that out.

00:12:45.000 --> 00:13:00.000

That should now be posted in the course. All right, and now we're gonna switch over to EIP-4788.

00:13:00.000 --> 00:13:08.000

Yeah, this is another Cancun-related EIP, the beacon block root one. I'll try to break this down into a few pieces here.

00:13:08.000 --> 00:13:17.000

So after the merge occurred, ironically enough, there are now 2 blocks for every slot.

00:13:17.000 --> 00:13:27.000

There is the execution block, or the execution layer block, that you know and love from pre-merge times, which stores all of the changes to state, all of the execution that's done on the EVM,

00:13:27.000 --> 00:13:36.000

all that other good stuff. It's wrapped by the beacon block, which includes all the attestations that are needed in proof of stake to basically say, hey, yes, this block is valid.

00:13:36.000 --> 00:13:49.000

It also includes kind of information about the state of the beacon chain, and more information about kind of previous blocks, next blocks.

00:13:49.000 --> 00:13:58.000

It's information that's needed for proof of stake. So we have consensus blocks, or beacon blocks as they're commonly referred to, and execution

00:13:58.000 --> 00:14:06.000

blocks now, which are essentially both one block, but in reality they're stored in kind of 2 separate mechanisms.

00:14:06.000 --> 00:14:13.000

This is all well and good. I think the simplification is that in one slot we have one block, of course, that encapsulates all this data.

00:14:13.000 --> 00:14:22.000

They live in different clients, though; the beacon block lives in the CL, and the execution payload block kind of lives, with a little more information, in the execution layer.

00:14:22.000 --> 00:14:30.000

But what this EIP is trying to do is expose that beacon block root to the EVM.

00:14:30.000 --> 00:14:37.000

So the EVM is completely, blissfully unaware of the state of proof of stake, so the beacon chain.

00:14:37.000 --> 00:14:44.000

It really only knows certain information about the deposit contract, so the proof of stake deposit contract.

00:14:44.000 --> 00:14:53.000

This exposes a new opcode that allows the EVM to validate and make certain trust assumptions about arbitrary beacon chain state.

00:14:53.000 --> 00:15:13.000

This is really useful for folks like staking pools, restaking constructs, bridging, MEV; there are a lot of kind of protocols that exist today, like Lido for example, that want to be able to make assumptions in their smart contracts about the state of the beacon chain.

00:15:13.000 --> 00:15:22.000

So 4788 allows us to do that when it's shipped. It will give kind of the block root and some more information about the beacon block

00:15:22.000 --> 00:15:31.000

within the EVM, which again allows trust assumptions to be made and also kind of verification to be done of the beacon state within the EVM.
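
The mechanism can be pictured as a small ring buffer of parent beacon block roots keyed by timestamp, along the lines of the hedged Python sketch below. The buffer length here is illustrative rather than the spec constant, and the function names are invented for the example.

```python
# Toy ring buffer of parent beacon block roots, in the spirit of EIP-4788.
BUFFER_LENGTH = 8191          # illustrative; check the EIP for the real constant

_timestamps = {}              # index -> timestamp that wrote this slot
_roots = {}                   # index -> parent beacon block root

def store_parent_root(timestamp: int, parent_beacon_root: bytes) -> None:
    """Called once per block to record the parent beacon block root."""
    idx = timestamp % BUFFER_LENGTH
    _timestamps[idx] = timestamp
    _roots[idx] = parent_beacon_root

def get_parent_root(timestamp: int) -> bytes:
    """What a staking pool or bridge contract would query from inside the EVM."""
    idx = timestamp % BUFFER_LENGTH
    if _timestamps.get(idx) != timestamp:
        raise KeyError("root for this timestamp has rotated out of the buffer")
    return _roots[idx]
```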

00:15:31.000 --> 00:15:44.000

Which is very, very valuable for, like I said, primarily bridging, and staking pools, liquid staking tokens, and other things, because those are the folks that are using trusted oracles right now.

00:15:44.000 --> 00:15:49.000

Which is all well and good, but we want to remove trusted oracles as much as possible from the system.

00:15:49.000 --> 00:15:55.000

So if Lido can get assumptions about the beacon chain without having to use an oracle,

00:15:55.000 --> 00:16:04.000

that's a win because it removes one trust assumption from the construct of their staking pool. There are some

00:16:04.000 --> 00:16:08.000

design choices at the bottom that are, frankly, I don't think that important for the context of this discussion.

00:16:08.000 --> 00:16:18.000

There was thought about using the block hash, basically updating this opcode to repurpose BLOCKHASH in a way that could allow it to be exposed from the beacon block as well.

00:16:18.000 --> 00:16:26.000

And we don't want to use the state root, as it basically requires us to calculate a ton more stuff

00:16:26.000 --> 00:16:39.000

around what's going on, instead of using the block root. State roots require you to calculate a ton of different nodes of state, as well as parent, children, and sibling nodes in the, you know, kind of tree structure that we have in Ethereum.

00:16:39.000 --> 00:16:41.000

So we wanted to avoid that, so we can do a single computation about the block, and then the EVM knows what's going on.

00:16:41.000 --> 00:16:55.000

It can make trust assumptions. It can also check the trust assumptions of previous blocks. And this just opens a wide variety of use cases.

00:16:55.000 --> 00:17:10.000

Keeping things out of the tries, the state trie and things like that, allows, like Matt was saying, for you to not have to traverse the trie for any sort of data, and that's really important to remember. That's why we like to keep things,

00:17:10.000 --> 00:17:17.000

where possible, outside of the Merkle Patricia trie unless you have to put it in there, and

00:17:17.000 --> 00:17:25.000

the state trie is one of them. Alright, 5656.

00:17:25.000 --> 00:17:26.000

Yeah, this is another one that we shipped in, or will be shipping in, Cancun.

00:17:26.000 --> 00:17:37.000

It's essentially very straightforward. It provides an efficient EVM instruction for copying memory areas.

00:17:37.000 --> 00:17:44.000

So right now in the EVM, if you want to copy data, you kind of have to go through some weird hoops.

00:17:44.000 --> 00:17:50.000

This is, again, another opcode change that's intended to lower gas consumption overall.

00:17:50.000 --> 00:17:55.000

The funny part again is that languages are actually providing a lot of this utility under the hood.

00:17:55.000 --> 00:18:06.000

So Solidity provides ways for you to copy memory areas, same with Vyper, but underneath the hood they're not kind of using this gas-efficient

00:18:06.000 --> 00:18:14.000

copy instruction today. They're basically using a combination of, you know, loading and storing, like we mentioned with

00:18:14.000 --> 00:18:27.000

TSTORE. But in this case, you want to be able to do things like static analysis; you want to be able to kind of understand what's happening with function calls and reference the same data within memory.

00:18:27.000 --> 00:18:36.000

So it's again a simple gas costing change. It allows us to enhance the languages to use less gas and use fewer workarounds.

00:18:36.000 --> 00:18:52.000

A lot of the fun abstractions that Solidity and Vyper do under the hood are actually because we don't have opcodes like this. But yeah, this specific opcode was added, in reality, because of just a gas oversight.

00:18:52.000 --> 00:19:00.000

There's a lot of oversight. We analyzed mainnet blocks to show that approximately 10 and a half percent of memory copies would be performed better with an MCOPY instruction.

00:19:00.000 --> 00:19:23.000

That's a lot of just wasted gas and computation. And like I mentioned, it's also really valuable for static analysis, because we understand, in advance of contract execution and contract deployment, what kind of memory calls we're going to be doing, which means that we can do things like JUMPDEST analysis, which I will not be covering at all in this

00:19:23.000 --> 00:19:37.000

slide. But you can understand where memory is going to live, so we can avoid things like out-of-bounds memory access or indexing into other areas of state, because we know in advance, in static analysis, where things are going to live.
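
Here is a rough Python sketch of the difference between the old word-by-word workaround and a single copy instruction; it models the data movement only, not the actual gas schedule, and ignores overlapping-region edge cases.

```python
def copy_with_mload_mstore(memory: bytearray, dst: int, src: int, length: int) -> None:
    # Pre-MCOPY workaround: shuttle the data 32-byte word by 32-byte word.
    for off in range(0, length, 32):
        n = min(32, length - off)
        word = bytes(memory[src + off:src + off + n])  # MLOAD (last word may be partial)
        memory[dst + off:dst + off + n] = word         # MSTORE

def mcopy(memory: bytearray, dst: int, src: int, length: int) -> None:
    # One instruction, one length-based gas charge, and the copy is explicit
    # to static analysis instead of being hidden inside a loop.
    memory[dst:dst + length] = bytes(memory[src:src + length])
```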

00:19:37.000 --> 00:19:59.000

That's great. The other special case here is precompiles. So if we change the way that certain memory copy operations work, we want to make sure that the MCOPY instruction works with those precompiles in a way that's sufficient, and we also want to make sure that it's compatible with things like EVM384, which is basically just more

00:19:59.000 --> 00:20:22.000

cryptographic instructions being used in the EVM. In this case it's 384-bit instructions, which currently have just a giant overhead, and also it's basically adding more functionality to certain things like the CALL opcode. But in this case, we want to make sure that we have predictable results across hard forks, which we did not have before with memory copying, because folks

00:20:22.000 --> 00:20:29.000

would use functions and loads and stores to move things around in a way that was not so valuable.

00:20:29.000 --> 00:20:37.000

And as an aside, we've mentioned this concept of gas golfing, so just a quick explanation on that.

00:20:37.000 --> 00:20:48.000

That is the idea of, like, what is the most efficient way to use gas in order to store variables

00:20:48.000 --> 00:20:58.000

or complete operations, right? So the analogy to golf is that, you know, the best result you can get in golf is a hole in one.

00:20:58.000 --> 00:21:08.000

What is the closest to a hole in one that you can get? A lot of the work around the past

00:21:08.000 --> 00:21:14.000

few EIPs that we just mentioned, we mentioned 3 of them, they weren't in order,

00:21:14.000 --> 00:21:23.000

kind of came out of this gas optimization world where people were discovering ways to optimize gas

00:21:23.000 --> 00:21:35.000

and discovering certain features of the EVM where, if you did certain things, you would get gas savings even though that wasn't intended.

00:21:35.000 --> 00:21:50.000

One thing that we always joke about, you know, as a way to sort of just check people's understanding in the boot camp, is: what is cheaper, to store a 0 or nothing?

00:21:50.000 --> 00:22:00.000

And at this point in time, or until very recently, it was always cheaper to store a 0 than to store blank

00:22:00.000 --> 00:22:10.000

in memory. So those are the kind of things that, you know, people realize. Alright, so now we're onto EIP-7514.

00:22:10.000 --> 00:22:23.000

Yeah, that's a funny callback to PUSH0, which, if you were following the Shanghai upgrade, it was also cheaper, or you were forced, to store 00 instead of a single 0 due to the way that the byte arrays work.

00:22:23.000 --> 00:22:30.000

So Ethereum gas golfing is really, in my opinion, just quite ridiculous, but it's kind of necessary if you don't want to make your users pay

00:22:30.000 --> 00:22:37.000

a ton of gas. But we are slowly working our way through those kinds of edge cases where, instead of asking people to play funny games, we just provide them efficient instructions.

00:22:37.000 --> 00:22:43.000

So hopefully this stuff will get corrected more and more over time and the languages can just consolidate on

00:22:43.000 --> 00:22:59.000

really efficient operations. Cool. This next one is actually only on the consensus layer. This is EIP-7514, which

00:22:59.000 --> 00:23:13.000

is the churn limit, essentially, which means we'll change the amount of folks that can enter proof of stake at any given time per epoch.

00:23:13.000 --> 00:23:24.000

So we've lowered essentially the upper bound from 12 to about 8. So instead of having, every 6 minutes or so, 12-plus new validators come online, we lowered that to 8.

00:23:24.000 --> 00:23:34.000

This was a kind of stopgap solution to address the size increase of the validator set, which is getting quite large and causing a lot of issues on the consensus layer and networking stack.

00:23:34.000 --> 00:23:44.000

So this is a short-term alleviation. We really need to fix the problem that proof of stake is somewhat inefficient when it comes to signature aggregation and networking latency,

00:23:44.000 --> 00:23:53.000

because when we reach numbers on mainnet like a million-plus validators, we really need to address that.

00:23:53.000 --> 00:24:09.000

What this EIP does not do is touch the exit queue. So as many exits as were previously allowed are still allowed, I believe the number is either 12 or 16, so we don't care how many people leave, you know, get out of here, because we want to make sure the set doesn't get too large.

00:24:09.000 --> 00:24:27.000

But yeah, this basically creates a linearly growing validator set that we have a little bit more control over. We don't necessarily have a significantly mature solution to this now, but it's being worked on in the next hard fork for the consensus layer, called Electra.
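
A hedged sketch of the capped activation churn calculation is below; the constants are my reading of the consensus specs and EIP-7514 and should be double-checked there.

```python
# Constants as I recall them from the consensus specs / EIP-7514; verify before relying on them.
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65536
MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT = 8   # the new cap introduced by EIP-7514

def activation_churn_limit(active_validator_count: int) -> int:
    # Before the cap, the limit grew with the validator set (roughly 12+ at 2023 set sizes);
    # EIP-7514 clamps activations, but not exits, to the cap.
    uncapped = max(MIN_PER_EPOCH_CHURN_LIMIT,
                   active_validator_count // CHURN_LIMIT_QUOTIENT)
    return min(MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT, uncapped)

print(activation_churn_limit(850_000))   # uncapped this would be 12; with the cap it returns 8
```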

00:24:27.000 --> 00:24:48.000

There will be a lot of EIPs around improving the signature aggregation of mainnet Ethereum proof of stake, improving the attestation process, and decoupling some of these hard assumptions that are being made on the beacon chain.

00:24:48.000 --> 00:24:58.000

We're basically doing tech debt cleanup from the merge at this point, and we are realizing that, you know, we want to make sure that validating

00:24:58.000 --> 00:25:05.000

on proof-of-stake Ethereum is viable long into the future, which means we need to have a validator set that is not unduly straining hardware, network latency, and more.

00:25:05.000 --> 00:25:16.000

So we're working on all those hard problems. And we have the short-term solution.

00:25:16.000 --> 00:25:17.000

Alright.

00:25:17.000 --> 00:25:21.000

Okay, I actually discussed this one before, blob base fee. I'm not gonna do it again.

00:25:21.000 --> 00:25:32.000

It is one of the other EIPs. Essentially, it's an opcode that allows you to query the blob fee market and understand what the base fees are, to know when you want to pay or not pay.

00:25:32.000 --> 00:25:41.000

In reality, this is just gas accounting again. It allows smart contracts, maybe a layer 2 contract on L1, to say, hey,

00:25:41.000 --> 00:25:50.000

what's the price of blobs right now? If the price is too high, maybe they wait until the next slot to bid on data availability space in 4844.

00:25:50.000 --> 00:26:00.000

So this is just a very straightforward opcode that returns the blob base fee, in an almost identical fashion to EIP-1559 transactions.

00:26:00.000 --> 00:26:05.000

And then with that base fee, which fluctuates based on the amount of blobs being used,

00:26:05.000 --> 00:26:10.000

smart contracts can say, hey, I want a blob, or no, I don't necessarily want a blob.

00:26:10.000 --> 00:26:19.000

So it's just a way for us to query the fee market and understand the price at any given point for blobs.
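
As a sketch of how a rollup contract might use that query, and of the 1559-style fee curve behind it: the exponential-update shape and constants below are my reading of EIP-4844, so treat them as illustrative rather than authoritative.

```python
# Illustrative reproduction of the EIP-4844-style blob fee curve; constants are from my
# reading of the EIP and should be checked against the spec before use.
MIN_BLOB_BASE_FEE = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    # Grows exponentially while blob usage stays above the per-slot target.
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)

def should_post_batch(current_blob_base_fee: int, max_acceptable_fee: int) -> bool:
    # The decision a layer 2 contract could make with the blob base fee: post now, or wait a slot.
    return current_blob_base_fee <= max_acceptable_fee
```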

00:26:19.000 --> 00:26:32.000

Okay, so now let's get to those that were left at the station. They did not get on the train.

00:26:32.000 --> 00:26:40.000

Yeah, so I'll talk about these little ones first before we talk about EOF, which was a major potential driver of this fork which was left out.

00:26:40.000 --> 00:26:44.000

It's currently being debated for the Prague hard fork, where it might not be included either.

00:26:44.000 --> 00:26:54.000

Which is a shame. I think it's a huge change to the EVM. Anyway, I'll get there when we get there.

00:26:54.000 --> 00:26:58.000

Okay, we have some grab-bag EIPs. This one's quite old, 663.

00:26:58.000 --> 00:27:07.000

This was a request by the Solidity team that allows you to essentially access items in the stack up to a depth of 256 items.

00:27:07.000 --> 00:27:21.000

Previously you were limited to a depth of 16 for duplicating and swapping stack items.

00:27:21.000 --> 00:27:33.000

So if you're familiar with the way that stacks work in data structures, it's essentially an order-of-operations, last-in-first-out structure, where you do execution frames in the EVM.

00:27:33.000 --> 00:27:43.000

The EVM is stack-based, and this EIP would allow you to access items in the stack up to a depth of 256.

00:27:43.000 --> 00:27:48.000

The stack gets quite large, so instead of being able to only influence the top 16 items, you can now basically traverse the entire stack and swap or duplicate items in there
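
A toy Python stack showing the deeper duplicate/swap idea; the real EVM encoding, gas costs, and stack-depth limits are not modelled here.

```python
class ToyEvmStack:
    def __init__(self):
        self.items = []            # top of stack is the end of the list

    def push(self, value): self.items.append(value)

    def dup_n(self, n: int):
        # DUP1..DUP16 only reach 16 deep; a DUPN-style immediate could reach much deeper.
        self.items.append(self.items[-n])

    def swap_n(self, n: int):
        # Swap the top item with the item n positions below it.
        self.items[-1], self.items[-(n + 1)] = self.items[-(n + 1)], self.items[-1]

s = ToyEvmStack()
for v in range(40):
    s.push(v)
s.dup_n(20)    # reach an item that DUP16 could not
s.swap_n(30)
```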

00:27:48.000 --> 00:28:05.000

efficiently. Cool. It wasn't included. I feel like there were considerations from the EVM side that made this either gas-inefficient or node-inefficient.

00:28:05.000 --> 00:28:11.000

It probably also creates a harder problem for, maybe, static analysis. But I'm actually not quite sure why the debate on this one went south.

00:28:11.000 --> 00:28:22.000

It might have been just that we were doing enough with the EVM as it stands. 2537 is the BLS precompile.

00:28:22.000 --> 00:28:25.000

This one we just need to ship. We've had these elliptic curve precompiles ready forever.

00:28:25.000 --> 00:28:40.000

But there's BLS signature verification kind of starting to be used all over in Ethereum, for, you know, things on the consensus layer, for things in zero knowledge.

00:28:40.000 --> 00:28:50.000

So this EIP would have a precompile for the BLS12-381 curve to allow cryptographic operations on that curve

00:28:50.000 --> 00:28:54.000

by clients in an efficient manner. Again, we didn't include it. I think it's going to get included in Prague.

00:28:54.000 --> 00:29:04.000

We are very much getting precompile-happy. We have, I think, 11 precompiles that are up for debate right now for the next fork.

00:29:04.000 --> 00:29:20.000

I think we'll just do it. Precompiles take like no time, and you don't really need... you need to test them of course, but once they're deployed they're kind of just like additional function handlers and helpers that exist on chain in the clients, excuse me, exist in the clients to help with on-chain operations. Efficiency is good.
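
The "function handlers in the clients" idea can be sketched as a dispatch table: calls to a few reserved addresses run native code instead of EVM bytecode. The address numbers and the BLS placeholder below are illustrative, not the canonical assignments.

```python
import hashlib

def sha256_precompile(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def identity_precompile(data: bytes) -> bytes:
    return data

# Toy dispatch table: a BLS12-381 precompile (EIP-2537) would just be more native entries here.
PRECOMPILES = {
    0x02: sha256_precompile,     # addresses shown only to illustrate the low reserved range
    0x04: identity_precompile,
}

def call_address(address: int, data: bytes, run_evm_bytecode) -> bytes:
    if address in PRECOMPILES:
        return PRECOMPILES[address](data)    # handled natively by the client, priced by a formula
    return run_evm_bytecode(address, data)   # otherwise interpret the contract's bytecode
```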

00:29:20.000 --> 00:29:33.000

I like efficiency. I think we should ship this one. The PAY opcode. This one is essentially a way to

00:29:33.000 --> 00:29:43.000

pay someone that may not be the miner of a block, with gas essentially, or with whatever you're putting into that

00:29:43.000 --> 00:29:50.000

function call. The reason being that we want to enable use cases, kind of like MEV, where maybe I want to

00:29:50.000 --> 00:30:01.000

pay someone to do something kind of out of band that's not involved in proof of stake.

00:30:01.000 --> 00:30:11.000

So I need a way to then pay someone that is not the miner when I'm doing kind of these EVM operations. Again, this is not a transfer per se.

00:30:11.000 --> 00:30:21.000

It's kind of a way to basically just transfer gas within the kind of transaction execution of smart contracts and things.

00:30:21.000 --> 00:30:25.000

So you can imagine a use case where

00:30:25.000 --> 00:30:30.000

I have an MEV bot and I want to pay certain parties or certain searchers during block execution

00:30:30.000 --> 00:30:46.000

in order to maybe reveal underlying data. So maybe there is an oracle that I can essentially bribe to reveal certain data that I can then use in my extraction.

00:30:46.000 --> 00:30:52.000

There are also things around reentrancy attacks and DoS vectors with this one,

00:30:52.000 --> 00:31:04.000

basically to avoid having to do straight transfers within these execution calls, because reentrancy

00:31:04.000 --> 00:31:13.000

attacks can cause some headaches when you're doing things like trying to pay ether before and after

00:31:13.000 --> 00:31:25.000

EVM execution in transfers. That was kind of long-winded. We can probably discuss this one more during the Q&A if needed, but it's a very straightforward opcode that allows

00:31:25.000 --> 00:31:35.000

pseudo-ish transfers without actually calling address functions here.
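
My reading of the proposal, sketched as a toy balance transfer that never executes the recipient's code, which is what sidesteps reentrancy. This is an assumption-laden illustration, not the final spec.

```python
# Toy sketch of a PAY-style transfer: balances move, but no code at `to` is executed,
# so there is no callback for a reentrancy attack to hook into.
def pay(balances: dict, caller: str, to: str, amount: int) -> None:
    if balances.get(caller, 0) < amount:
        raise ValueError("insufficient balance")
    balances[caller] -= amount
    balances[to] = balances.get(to, 0) + amount

# Contrast: a CALL with value attached would also run the recipient contract's fallback code.
```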

00:31:35.000 --> 00:31:42.000

Oh god, okay, SSZ. So this is one that was debated at length. SSZ is a new encoding format.

00:31:42.000 --> 00:31:51.000

If you've heard of RLP, which is Ethereum's kind of main encoding format as of today,

00:31:51.000 --> 00:32:03.000

SSZ is what we would like to use in the future. We chose to delay incorporating SSZ into the Cancun fork because we think it would have caused major delays.

00:32:03.000 --> 00:32:13.000

It probably would have. But in reality there are like 6 EIPs where we would change RLP to SSZ in many cases.

00:32:13.000 --> 00:32:23.000

The consensus layer is already using SSZ for a lot of stuff. So we have kind of big- and little-endianness in Ethereum right now at the same time, which is not good.

00:32:23.000 --> 00:32:30.000

And we also have RLP and SSZ on the same data, which is not so good in many cases.

00:32:30.000 --> 00:32:38.000

So in the future, we want to SSZ everything, which is to say we want to get rid of RLP and have SSZ.

00:32:38.000 --> 00:32:45.000

We will still need to support RLP in node software, because you'll need to verify historical state in RLP.

00:32:45.000 --> 00:32:47.000

So in reality, this doesn't become as fancy as we want it to be until we can do state expiry and history expiry.

00:32:47.000 --> 00:33:02.000

But we chose to defer changing the encoding format just because it's a huge thing that could cause issues.

00:33:02.000 --> 00:33:13.000

So, again, the benefits are here: we have basically more optimized transaction size, so node storage goes down. RLP,

00:33:13.000 --> 00:33:15.000

I'm not sure if it's little- or big-endian, but either way we're storing a lot of leading zeros here.

00:33:15.000 --> 00:33:23.000

So presumably little-endian. I don't know if I have that backwards, whatever.

00:33:23.000 --> 00:33:30.000

We'll figure it out later. Either way, there's a lot of leading information that we could strip, and that means lower node storage for consensus layer and execution layer clients.

00:33:30.000 --> 00:33:41.000

Faster proof verification is a good one. This is because, again, SSZ is simply a faster encoding format and it's not as heavyweight as RLP.
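
To make the encoding contrast concrete, here is a small sketch of how an integer looks in a minimal RLP-style encoding versus a fixed-width little-endian SSZ-style encoding. It covers only short byte strings and unsigned integers; the real formats have more cases.

```python
def rlp_encode_int(n: int) -> bytes:
    # RLP: big-endian with no leading zeros, then a short length prefix (long-string case omitted).
    payload = n.to_bytes((n.bit_length() + 7) // 8, "big")   # 0 becomes the empty string
    if len(payload) == 1 and payload[0] < 0x80:
        return payload
    assert len(payload) <= 55, "long-string prefix not handled in this sketch"
    return bytes([0x80 + len(payload)]) + payload

def ssz_encode_uint64(n: int) -> bytes:
    # SSZ basic type: fixed 8 bytes, little-endian, so offsets are predictable for proofs.
    return n.to_bytes(8, "little")

print(rlp_encode_int(1024).hex())      # 820400 (length prefix + 2 payload bytes)
print(ssz_encode_uint64(1024).hex())   # 0004000000000000 (always 8 bytes)
```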

00:33:41.000 --> 00:33:50.000

Yeah, and the other trade-off is that we need to see how to

00:33:50.000 --> 00:33:57.000

bring the community up to speed on this format. I think we see a lot of RLP verification being done on the execution layer now.

00:33:57.000 --> 00:34:02.000

So what does it look like when we get rid of that or modify it? We don't quite know.

00:34:02.000 --> 00:34:08.000

That's partially why we delayed this. We don't know the extent of the contracts using RLP to do basically hard-coded transaction validation, which isn't great, but you will need to decode and encode RLP.

00:34:08.000 --> 00:34:25.000

So having separate mechanisms, where you don't necessarily know in advance what you have, is complex, and we need to find a way to mitigate that weird risk.

00:34:25.000 --> 00:34:36.000

Okay, now let me get into some of the crazier EVM ones. We'll talk about EOF on the next couple of slides, but to start, there are 2 EIPs for something called EVMMAX, which is modular arithmetic extensions.

00:34:36.000 --> 00:34:49.000

We have 2 EIPs: one that is compatible with EOF, which I'll discuss after this slide, and one that is compatible with the existing legacy EVM implementation.

00:34:49.000 --> 00:35:05.000

Basically, this is a way for us to speed up even more cryptographic operations within the EVM, reducing gas costs by up to 90 to 95% for 256-bit

00:35:05.000 --> 00:35:27.000

operations. This is, again, like a precompile that has more math within it. Like I mentioned on the previous slides, precompiles exist so that the nodes and the EVM are able to do certain cryptographic things a lot quicker, like hash functions or cryptographic functions.

00:35:27.000 --> 00:35:32.000

Yeah, there are, I think, 9 precompiles on mainnet right now. I encourage you to go look them up.

00:35:32.000 --> 00:35:43.000

They're kind of interesting, but they basically reduce cost for a lot of complex operations in the EVM by putting those complex operations directly into the clients themselves.

00:35:43.000 --> 00:35:46.000

So instead of adding weird opcodes or weird EVM instructions to do extremely complex elliptic curve cryptography,

00:35:46.000 --> 00:36:07.000

we've put that into the kind of implementations of the clients themselves, and we've addressed those precompiles to say, hey, if you want modular arithmetic extensions, which help with really big math,

00:36:07.000 --> 00:36:16.000

you can go to this smart contract address and your node will run those computations natively, probably using a Java, Rust, or Go library.

00:36:16.000 --> 00:36:22.000

Cool. All that's to say it reduces gas cost and it reduces a lot of,

00:36:22.000 --> 00:36:31.000

you know, overhead on nodes, which is again gas cost. Yeah, EOF, I think, is the next big one.

00:36:31.000 --> 00:36:39.000

Maybe we pause for questions here.

00:36:39.000 --> 00:36:40.000

Yeah, we have a few questions.

00:36:40.000 --> 00:36:45.000

I know we have, okay, EIP 51... Okay, yeah, maybe we can just go through them all at the very end.

00:36:45.000 --> 00:36:51.000

I'll just go through EOF quickly. I think we only have 3 more slides and then we're ready for Q&A.

00:36:51.000 --> 00:36:58.000

Alright. EOF, as I mentioned, is a huge bucket of work that's been worked on for quite a while.

00:36:58.000 --> 00:37:10.000

It's actually implemented in all of the clients, if not most. I think all, probably most. It is a new container format for smart contracts.

00:37:10.000 --> 00:37:14.000

So the

00:37:14.000 --> 00:37:17.000

initial EIP is to basically set out a set of rules for what objects can look like within the EVM.

00:37:17.000 --> 00:37:45.000

And when I say objects, I mean essentially smart contract execution containers. This is a fun way to say that today we have kind of smart contracts that live in essentially their own little bubble, but they can really touch almost anything in Ethereum state, and they can really touch, you know, a lot of different things.

00:37:45.000 --> 00:37:53.000

EOF provides a very logical separation between smart contracts and the rest of the Ethereum environment.

00:37:53.000 --> 00:38:02.000

It also tells you when you're deploying those contracts exactly where the code of that smart contract will live and exactly where the data of that smart contract will live.

00:38:02.000 --> 00:38:08.000

That's why it's containerized, right? Because I'm saying I have code that lives in these chunks of memory.

00:38:08.000 --> 00:38:13.000

I have data that lives in these chunks of memory. They also can't be bigger than this.

00:38:13.000 --> 00:38:24.000

So my code can't exceed this size, my data can't exceed this size, and then you have really valuable static analysis that can be done and code validation that can be done, because you know exactly

00:38:24.000 --> 00:38:34.000

where the data is gonna live and where the code is going to live, so we can do, again, things like JUMPDEST analysis, static relative jumps.

00:38:34.000 --> 00:38:47.000

We can do a lot more code validation upfront and we can also lower gas costs, because instead of having an unbounded contract deployment where we don't necessarily know upfront how much code and how much data space we're going to need,

00:38:47.000 --> 00:38:57.000

we can enumerate that stuff upfront, which means I'm charged for what I'm using, as opposed to being charged for a blanket CREATE statement.
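
A toy container along the lines described: sizes are declared up front and validated before deployment. The field names and bounds are invented for illustration and are not the actual EOF header layout.

```python
from dataclasses import dataclass

@dataclass
class ToyContainer:
    declared_code_size: int
    declared_data_size: int
    code: bytes
    data: bytes

MAX_CODE_SIZE = 0x6000   # illustrative bound, not necessarily the spec value

def validate(container: ToyContainer) -> None:
    # With declared sections, validation (and pricing) can happen before deployment,
    # instead of discovering problems while executing an unstructured blob of bytecode.
    if len(container.code) != container.declared_code_size:
        raise ValueError("code section does not match its declared size")
    if len(container.data) != container.declared_data_size:
        raise ValueError("data section does not match its declared size")
    if container.declared_code_size > MAX_CODE_SIZE:
        raise ValueError("code section exceeds the maximum size")
```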

00:38:57.000 --> 00:39:07.000

We get a lot of new opcodes, so static relative jumps, or RJUMP, replacing the JUMP instructions, because JUMP instructions can kind of

00:39:07.000 --> 00:39:25.000

go wherever. Like I said, there's that JUMPDEST analysis that's done on contracts today that says you are only allowed to jump within these ranges, but that JUMPDEST analysis is costly, because we don't want smart contracts to be able to jump into essentially the memory of other contracts, because they can

00:39:25.000 --> 00:39:48.000

pull data that's incorrect, they can run functions that might live at the wrong place, which means we have unpredictable results that are essentially meaningless. So today we do JUMPDEST analysis to make sure that any jump that's performed is valid. And if you don't know what a jump operation is, it's quite straightforward if you've done a comp sci algorithms class; it basically allows you to

00:39:48.000 --> 00:39:54.000

move from where you are in, kind of, if execution was a row from left to right,

00:39:54.000 --> 00:40:05.000

it allows you to move around in those execution operations so that, you know, the Turing machine that is the EVM can keep going correctly.

00:40:05.000 --> 00:40:17.000

Anyway, static relative jumps remove all this. Since we know with EOF where things live, we can jump really easily; we don't have to worry about what's happening because we know the ranges in advance.
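
For reference, the JUMPDEST analysis being replaced is roughly this linear scan over the bytecode, skipping PUSH immediates so data bytes are never treated as valid jump targets; this is a simplified sketch.

```python
PUSH1, PUSH32, JUMPDEST = 0x60, 0x7F, 0x5B

def valid_jumpdests(code: bytes) -> set:
    """Scan legacy bytecode once and collect every position a JUMP may land on."""
    dests, pc = set(), 0
    while pc < len(code):
        op = code[pc]
        if op == JUMPDEST:
            dests.add(pc)
        if PUSH1 <= op <= PUSH32:
            pc += op - PUSH1 + 1    # skip the push's immediate data bytes
        pc += 1
    return dests

# With EOF's static relative jumps, targets are known at validation time,
# so this per-contract scan (and the runtime target check) goes away.
```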

00:40:17.000 --> 00:40:21.000

Let's keep going. I'll answer a ton of questions at the end because there's even more stuff.

00:40:21.000 --> 00:40:37.000

There are basically more EIPs that disable, like I said, the jump stuff. And then we have an EIP for stack validation that basically does a lot of that static analysis stuff in advance by enforcing rules at the container level.

00:40:37.000 --> 00:40:51.000

So instead of basically trying to deploy a smart contract and saying afterwards, hey, this is no good, with the EOF kind of container rules we will have a lot of this validation

00:40:51.000 --> 00:40:59.000

in advance and for free, because we have contracts with, basically, code that is disallowed in certain instances.

00:40:59.000 --> 00:41:06.000

So it's a fancy way of saying we're doing different kinds of static analysis that are cheaper and use fewer resources on the machines,

00:41:06.000 --> 00:41:21.000

because we have, again, a much clearer understanding of where things live and how things work in the isolated containers of EOF EVM execution.

00:41:21.000 --> 00:41:31.000

I don't know if there's a part 3. Okay, here we go. Yeah, in conclusion for EOF, bytecode is traditionally an unstructured sequence of instructions.

00:41:31.000 --> 00:41:38.000

So, hey, I have this bytecode, I'm gonna deploy it. You know, if you've heard of ABIs and bytecode, it's kind of just like taking compiled contracts and dumping them on chain.

00:41:38.000 --> 00:41:53.000

Whereas with EOF, instead of doing that, you would have a container which very much structures the bytecode, the ABIs, and all the deployment of the contract into a logical unit, and that logical unit is a lot easier to reason about.

00:41:53.000 --> 00:42:09.000

It's cheaper to deploy and cheaper to use. And it isn't really dependent on anything outside of the EVM, which is kind of nice, because it means we don't have to test the implementation alongside

00:42:09.000 --> 00:42:20.000

client state stuff, so Merkle Patricia trie stuff. It's kind of like a black box, as in the client is agnostic to what's happening in these EOF contracts, just the same way as it is with the EVM.

00:42:20.000 --> 00:42:23.000

So we have EVM inputs and outputs. We store all the inputs already in the client.

00:42:23.000 --> 00:42:31.000

And we store the outputs after, you know, blocks. So it's the same process that we have today.

00:42:31.000 --> 00:42:35.000

It's just a much cleaner way of interacting with these contracts. It also doesn't

00:42:35.000 --> 00:42:44.000

disallow legacy contracts, so you can continue to deploy legacy contracts, or you can have EOF contracts if you want the gas cost savings.

00:42:44.000 --> 00:42:52.000

I think this is another good example of where things just get weird in Ethereum because we have implementations built in a lot of the major clients.

00:42:52.000 --> 00:42:58.000

However, it doesn't mean that this one's getting pushed through, primarily because the testing burden is such,

00:42:58.000 --> 00:43:10.000

and making changes to the EVM philosophically disagrees with certain people, because this sort of allows EVM upgradability in a way that's unique.

00:43:10.000 --> 00:43:25.000

So imagine you can target container versions of EOF. So this is EOF v1 that I described, but eventually there might be an EOF v2 on chain which has different validation rules, different code checks, things like that.

00:43:25.000 --> 00:43:39.000

People don't necessarily like a versioned EVM, because it opens the door to, kind of, code is no longer law, and we have many, many versions of the EVM running at the same time doing many, many different things.

00:43:39.000 --> 00:43:45.000

So I think with this one, even though the tech is built, the tech has been tested, and the tech is in clients already,

00:43:45.000 --> 00:43:56.000

the philosophical debate around this, as mentioned in our previous lectures, is so controversial that it's not been accepted into a hard fork, even though it's basically done.

00:43:56.000 --> 00:44:11.000

So at the end of the day, even if you do do all of the hard work to build something, you have to make sure that you have the community sentiment and support you need to push through an EIP, because EOF is being stonewalled for one reason or another, primarily philosophical challenges.

00:44:11.000 --> 00:44:21.000

And, yeah, it just goes to show that things are not necessarily guaranteed, even if you do a lot of the hard work and research in advance.

00:44:21.000 --> 00:44:24.000

This is like years of research.

00:44:24.000 --> 00:44:25.000

Okay. I think this is a perfect time. Yeah.

00:44:25.000 --> 00:44:42.000

Let's just jump into Q&A. We have a little over 10 minutes, very much on pace. Let's start with, we're gonna order these just to make them kind of make sense. So regarding EIP-5920, is it possible to pay for the service of an oracle that doesn't receive

00:44:42.000 --> 00:44:49.000

ether, for example in a token like LINK, as part of the EIP?

00:44:49.000 --> 00:44:50.000

Yeah.

00:44:50.000 --> 00:44:55.000

I believe the answer is no. This is specifically stating... because, think about it this way.

00:44:55.000 --> 00:45:07.000

In order to use an ERC-20 token, you have to interact with that contract. And if you want to do something like that within the EVM, you need to

00:45:07.000 --> 00:45:15.000

interact with a separate contract, as opposed to using the PAY opcode, which operates within a certain contract.

00:45:15.000 --> 00:45:18.000

Because when you're mutating Ethereum state, if I want to mutate the USDC contract, for example, I have to go out and touch that

00:45:18.000 --> 00:45:43.000

contract to do stuff with instructions, and then that comes back to me with the results. There's this notion of something called a pre-image in Ethereum where, before a block is actually mined, there's a huge set of instructions that go in it that encapsulates all the transaction changes, all of the smart contract calls, everything.

00:45:43.000 --> 00:45:56.000

And then we have, not post-image, but... the post-image is basically a block. That block states all of the changes to state, and it has all the outputs and the new balances of ether in everyone's wallet.

00:45:56.000 --> 00:46:02.000

Because the funny part about it is that ERC-20 tokens aren't actually stored, like, at your address.

00:46:02.000 --> 00:46:07.000

There is an index of your address somewhere on a contract that says I'm owed this many tokens.

00:46:07.000 --> 00:46:20.000

Whereas ether itself is maintained by the state of Ethereum. So the PAY opcode allows you to change that state of Ethereum, but not the contract that you might be interacting with.

00:46:20.000 --> 00:46:26.000

I don't control the USDC contract. I don't control the, you know, LINK token.

00:46:26.000 --> 00:46:31.000

I don't necessarily control that token. And the Ethereum state also does not control that token.

00:46:31.000 --> 00:46:42.000

It's only the smart contract that can influence that state. So PAY is limited to ether, but it allows you to specify who you want to pay and how you want to pay people in contract execution that's not a transfer.

00:46:42.000 --> 00:46:50.000

So it's kind of a unique way to, again, I don't like the word bribe, but it allows you to basically pay folks during certain things that you're doing that are not straight transfers.

00:46:50.000 --> 00:47:03.000

So if you want to say, hey, you know, if you're an MEV searcher and you want to pay people out of band, this is a way to do that.

00:47:03.000 --> 00:47:10.000

Okay, so for our next question: an earlier slide, I think it was, I'm gonna try and find it real quick,

00:47:10.000 --> 00:47:19.000

mentioned gossip network simulation. Could you elaborate on what was done and what question it answered?

00:47:19.000 --> 00:47:20.000

Alright.

00:47:20.000 --> 00:47:27.000

Yes, I think this was a previous lecture, actually. So the gossip network simulation that we did was around coupled and decoupled blocks.

00:47:27.000 --> 00:47:37.000

We found in the testing of 4844 that when you couple the blobs and the blocks together,

00:47:37.000 --> 00:47:55.000

there is a huge spike in latency, because you're requiring nodes to move large packets of data around the network simultaneously, which can create latency bottlenecks, as opposed to decoupling blobs and blocks, which means I can gossip them separately and in different priority queues.

00:47:55.000 --> 00:48:06.000

Cool, whatever. When you use coupled blocks and blobs at the existing 3/6 target, so again, target of 3 blobs per slot, maximum of 6,

00:48:06.000 --> 00:48:22.000

if you couple those together, as the slide in the previous lecture mentioned, you know, there was kind of a right-hand skew where there's more latency around just how fast blocks and messages are propagated in the network overall. Because we can

00:48:22.000 --> 00:48:45.000

test that by having our CL basically do a ton of extra work. We can find a super beefy machine and we can say, hey, listen to literally everything that happens on the network, connect to as many peers as you can, and try to get on all the subnets, all the sync committees, all these things. There's a lot of minutiae of proof of stake in this

00:48:45.000 --> 00:48:53.000

case, but in reality we can measure all of those things by creating essentially a super node that listens to everything, instead of just some things like normal nodes.

00:48:53.000 --> 00:49:12.000

The network simulation there showed basically a significant increase of latency in gossip, block propagation, and message propagation when using coupled blocks and blobs, because you're pushing around a full block and a blob as one message, which can create latency in the system.

00:49:12.000 --> 00:49:25.000

If we split those up, allowing the nodes to gossip them independently, it reduces the network latency overall, because nodes can process smaller messages more quickly and they can send them back out more quickly.

00:49:25.000 --> 00:49:31.000

Can I say a few words on the analysis and the EIP process? Yes. So,

00:49:31.000 --> 00:49:42.000

this analysis was done by a researcher at Consensys named Anton, who essentially, you know, works on the beacon chain as a researcher looking for outstanding topic areas.

00:49:42.000 --> 00:49:50.000

In reality, when we're doing this kind of hard fork testing, a lot of these things become apparent as, like, hunches, for example.

00:49:50.000 --> 00:49:55.000

So we noticed that network latency was increasing in the testnets versus kind of mainnet Ethereum,

00:49:55.000 --> 00:49:59.000

And then we use that to drive research exercises that say, hey, you know, we are noticing this.

00:49:59.000 --> 00:50:01.000

Let's actually go get hard data. So we had a researcher say, hey, you know, the network latency is getting really big.

00:50:01.000 --> 00:50:16.000

They created that kind of gossip scraper, the p2p node that I mentioned, that accepts every incoming connection it can and accepts every duty that it can as a node.

00:50:16.000 --> 00:50:22.000

And then we measure the latency of those block propagation messages and those gossip propagation messages.

00:50:22.000 --> 00:50:24.000

And that was the result that we had there. So it was absolutely made public. I can try to dig it up.

00:50:24.000 --> 00:50:39.000

But in reality, it was kind of just round-trip time on overall gossip messages in the network, because there are many reliable tools to

00:50:39.000 --> 00:50:40.000

measure the propagation of these. So nodes have a notion of block propagation time.

00:50:40.000 --> 00:50:50.000

Basically, since we are running a universal computer in Ethereum, we know the time that certain things are supposed to happen.

00:50:50.000 --> 00:51:00.000

Everyone is supposed to be on the same clock. And frankly, you get penalized if you're not running on the same clock that Ethereum uses, so people often have to tweak very finely

00:51:00.000 --> 00:51:10.000

their system clocks in order to not be missing attestations, not missing signatures. Fine.

00:51:10.000 --> 00:51:20.000

But in the case of this... kinda lost my train of thought here.

00:51:20.000 --> 00:51:33.000

Oh yes, so since we know in advance, kind of, in Ethereum when things are supposed to happen, how fast blocks are supposed to propagate to the network, we can measure the results of expected outcomes versus latency,

00:51:33.000 --> 00:51:42.000

using those slot timings. So we have a 12-second slot on the beacon chain.

00:51:42.000 --> 00:51:53.000

There are about 4 things that are supposed to happen in that 12-second slot. We can measure at what point each of those happens, we can also compare them to other nodes, and then we can get some latency metrics there.

00:51:53.000 --> 00:52:01.000

So for example, if I have a block that is filled with goodies, and I am the producing node,

00:52:01.000 --> 00:52:09.000

I am the block proposer. I mine the block on my internal machine. I check all of the values of that mined block

00:52:09.000 --> 00:52:15.000

in the EVM; that's why we have the execution layer client. If the consensus layer didn't need to validate the blocks,

00:52:15.000 --> 00:52:21.000

we wouldn't need any of that stuff. But we have a consensus layer node that is producing blocks.

00:52:21.000 --> 00:52:29.000

The execution layer will check the work of that block to make sure it's totally valid. It'll then propagate it to its EL peers and the consensus layer will propagate the beacon block with a reference to the EL to its peers.

00:52:29.000 --> 00:52:47.000

And we can measure the round-trip time to all those peers to say this is the overall network latency for block propagation. We can do the same thing for blobs, for transaction gossip on the EL, and for some other p2p messages, but that's the gist.

00:52:47.000 --> 00:53:00.000

I don't know if that served to make anything more or less confusing.

00:53:00.000 --> 00:53:04.000

So I think we'll end there for today just because that's a nice wrapping point. We will continue to answer.

00:53:04.000 --> 00:53:09.000

Questions on Friday at office hours and we can go into depth on any EIPs that you have questions about.

00:53:09.000 --> 00:53:19.000

So basically, we'll come back with the same set of slides, but if we want to dive in deeper on any of that.

00:53:19.000 --> 00:53:29.000

Matt and I have not had a chance to talk about how we might modify some of our guest speaker dates given that we now actually have dates for the testnets

00:53:29.000 --> 00:53:37.000

going live, but we don't want to miss the opportunity for potentially an observation session. So I would just say look out for that.

00:53:37.000 --> 00:53:54.000

Matt and I are going to confer and figure out what makes sense. But we might do a special guest session where we're actually just observing one of those forks as they occur on the testnets, depending on if we can make the timing work.

00:53:54.000 --> 00:53:58.000

So that's just another announcement.

00:53:58.000 --> 00:54:01.000

This is kind of a unique aspect of doing this course live, that we have those forks happening.

00:54:01.000 --> 00:54:05.000

And so that isn't necessary, and it isn't required

00:54:05.000 --> 00:54:09.000

for you to attend, but we think it could be really, really helpful to give people a sense.

00:54:15.000 --> 00:54:22.000

So we'll stop there today. And thank you everyone for coming. We'll see you back at office hours and we'll talk about guest speakers and things coming up.


Office Hours

▼Office Hours Transcript

The following transcript was auto-generated, and may contain syntactical errors.

00:00:05.000 --> 00:00:16.000

I think today, because we covered so much material, it's basically like 50 minutes of material all at once, and we did get to answer some questions.

00:00:16.000 --> 00:00:24.000

We wanted to just basically come back to the same deck and sort of see what questions this group had

00:00:24.000 --> 00:00:35.000

Based off of what we talked about. So, you know, this is going to be like a true, a true office hours where it's almost like AMA and we can get into the details and go deeper.

00:00:35.000 --> 00:00:37.000

We have 2 questions already, amazing.

00:00:37.000 --> 00:00:45.000

Oh, this is awesome. Okay, nice. Well, I do see that you're peeking at the deck as well.

00:00:45.000 --> 00:00:52.000

So I'm very appreciative that you peeked at the deck and we're able to get in there and do some stuff.

00:00:52.000 --> 00:00:55.000

Okay, so let's start with

00:00:55.000 --> 00:01:03.000

What the first one and I'll

00:01:03.000 --> 00:01:04.000

Yeah, we can.

00:01:04.000 --> 00:01:10.000

Okay, so I'll just read it out. EIP-4788: the ability of a contract

00:01:10.000 --> 00:01:21.000

to reference the beacon state seems open to new attack vectors. I love, first of all, let me just point out, you should definitely

00:01:21.000 --> 00:01:32.000

love that you're thinking that way, because that is definitely how we need to think about this. Could you share how the impact

00:01:32.000 --> 00:01:36.000

of this EIP on security was tested?

00:01:36.000 --> 00:01:45.000

Yeah, so ironically enough, they did not actually fill out the security considerations section of the EIP.

00:01:45.000 --> 00:01:56.000

So who knows what's going on there. I'm curious what you think this attack would be, primarily because we're only exposing the block root.

00:01:56.000 --> 00:02:00.000

Which doesn't really actually give you

00:02:00.000 --> 00:02:08.000

I suppose if you were off on a fork, or if you're fed an erroneous block root by your peers,

00:02:08.000 --> 00:02:18.000

it could cause you to make assumptions within your smart contracts that would be incorrect.

00:02:18.000 --> 00:02:28.000

But we, or there could be something like a reorg. I think these are more kind of edge cases as opposed to necessarily attack vectors.

00:02:28.000 --> 00:02:37.000

Because your consensus layer will still be gossiping with its peers. To determine the heaviest head of the chain.

00:02:37.000 --> 00:02:39.000

And what that means, you know, I'm presuming based on this question that you are familiar with how the fork choice rule works at a basic level.

00:02:39.000 --> 00:02:54.000

But it's kind of a round robin where validators propose blocks, and the blocks with the most votes by the consensus layer peers, and by votes I mean attestations,

00:02:54.000 --> 00:03:01.000

will be the new tip of the chain. So you're only supposed to have one block proposal per slot.
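
A minimal sketch of that "heaviest branch wins" idea, heavily simplified: real fork choice (LMD-GHOST / Gasper) also handles justification, finality, proposer boost, and equivocation, none of which appear here. The block tree and weights are made up for illustration.

```python
# Toy GHOST-style head selection: follow the child whose subtree carries the
# most attestation weight. Not the actual protocol, just the core intuition.
blocks = {"B": "A", "C": "A", "D": "B"}          # child -> parent
attestation_weight = {"B": 10, "C": 7, "D": 5}   # stake-weighted votes per block

def subtree_weight(block: str) -> int:
    children = [c for c, p in blocks.items() if p == block]
    return attestation_weight.get(block, 0) + sum(subtree_weight(c) for c in children)

def get_head(root: str) -> str:
    head = root
    while True:
        children = [c for c, p in blocks.items() if p == head]
        if not children:
            return head
        head = max(children, key=subtree_weight)  # heaviest subtree becomes the tip

print(get_head("A"))  # -> "D": the A->B->D branch outweighs A->C
```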

00:03:01.000 --> 00:03:09.000

There's something called equivocation in the Ethereum protocol. So if I propose 2 blocks in a slot, I can be slashed.

00:03:09.000 --> 00:03:15.000

And if I, you know, In theory, there should be no validator that proposes out of turn.

00:03:15.000 --> 00:03:27.000

because your block will immediately be orphaned, and no one knows where to look for your block to create attestations in order to progress the chain.

00:03:27.000 --> 00:03:31.000

In the context of 4 7 8 8 I suppose you could provide a beacon block root of a block that would be reorged out of the chain.

00:03:31.000 --> 00:03:44.000

The consensus layer hasn't seen a deep reorg, or a reorg of a depth of more than maybe

00:03:44.000 --> 00:03:59.000

low double-digit blocks. I'm talking like 7, 8, 9, 10. I don't think we've really experienced reorgs beyond that, because the fork choice rule is actually pretty good at what it does and it's really expensive to try to reorg the chain.

00:03:59.000 --> 00:04:13.000

And by that I mean you need to control enough of the validators to arbitrarily reorganize the chain, which means you control between 33 and 50% of the validators in the entire network, depending on what you're doing.

00:04:13.000 --> 00:04:26.000

That's why the number with Lido being at around 33% is so crucial, and people are talking about it a lot, because there could be arbitrary halting of finality at 33%, with the ability to do weird reorg things

00:04:26.000 --> 00:04:48.000

around 50%. So, I don't know exactly how people will use the beacon state, but my presumption is that they will not operate on an unsafe head, because there's a concept of a safe head and an unsafe head

00:04:48.000 --> 00:04:57.000

in the beacon chain. And by that, basically, a safe head is something we presume won't get reorged out based on enough

00:04:57.000 --> 00:05:07.000

time having passed in an epoch since the block was proposed, and an unsafe head is the tip of the chain, like I mentioned, the very head, that might not have enough

00:05:07.000 --> 00:05:15.000

attestations to avoid being reorged out. Or there could be a late block proposal, for example, that might cause issues.

00:05:15.000 --> 00:05:22.000

So there's the notion of a safe head and unsafe head on the consensus layer blocks. My guess.

00:05:22.000 --> 00:05:30.000

semi-educated guess, is that when you're writing a smart contract to take advantage of 4788, you will only use a safe head or a finalized block.

00:05:30.000 --> 00:05:52.000

And you won't operate on an unsafe head, because then, like you're saying, it could open you to a vector where you have the improper block root, or somebody reorgs your block out and you are making assumptions about the state of the beacon chain that are no longer valid in your smart contract.
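
To illustrate the off-chain side of that advice, here is a minimal sketch of preferring the "safe" and "finalized" block tags over the raw head. It assumes a post-merge execution client and a recent web3.py version that accepts those tags; the RPC URL is a placeholder.

```python
# Hedged sketch: read against "safe"/"finalized" rather than the unsafe head.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint

unsafe_head = w3.eth.get_block("latest")     # tip of the chain, can still be reorged
safe_head   = w3.eth.get_block("safe")       # unlikely to be reorged
finalized   = w3.eth.get_block("finalized")  # will not revert absent a mass-slashing event

print(unsafe_head["number"], safe_head["number"], finalized["number"])
```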

00:05:52.000 --> 00:05:53.000

Yeah.

00:05:53.000 --> 00:05:58.000

Which is interesting. If you had an attack vector in mind and I'm missing the question specifically, please let me know.

00:05:58.000 --> 00:06:16.000

I'd love to try to answer a very specific question if you have something in mind. But that would be my assumption, that it's actually on application developers to avoid using those kinds of blocks to make assumptions, and to rely on the notion of either a safe head or a finalized block

00:06:16.000 --> 00:06:41.000

to progress whatever they're doing, whether that's bridging or, you know, restaking and moving things around, because there's a lot of weird validating games that are played on the CL that are intended primarily to do things with MEV or to compound interest,

00:06:41.000 --> 00:06:50.000

or to create new liquidity within liquid staking. So my guess is that it's on the smart contract developers.

00:06:50.000 --> 00:06:59.000

I think it's funny that they haven't actually filled in the security considerations on the EIP.

00:06:59.000 --> 00:07:08.000

But I'm gonna share in the chat a blog post around 4788 that Consensys just put out

00:07:08.000 --> 00:07:17.000

that has a lot of information on the flows of data surrounding the beacon block.

00:07:17.000 --> 00:07:27.000

But again, the beacon block root itself is the hash of the previous block's header.

00:07:27.000 --> 00:07:37.000

So presumably they're operating only on the safe head, or not even on the head block at all, and we're using the root of the previous block to make a state assumption about

00:07:37.000 --> 00:07:40.000

What's going on?

00:07:40.000 --> 00:07:45.000

This would be a good one. This is a great one. I mean, if we want to just kind of.

00:07:45.000 --> 00:08:05.000

talk about how participants in the course can actually start getting involved. I'm not gonna dox lightclient, but I suspect lightclient actually might not be too far from where you are right now,

00:08:05.000 --> 00:08:06.000

Oh yeah, he's in Colorado somewhere.

00:08:06.000 --> 00:08:09.000

Matt, given the time of year. And. Yeah, so.

00:08:09.000 --> 00:08:19.000

You know, I think we kind of know all these people, but I personally know lightclient, as an author, the best.

00:08:19.000 --> 00:08:27.000

And so, one of the things that I would be very much willing to open up as part of the final project in this class, you know, we have 2 tracks.

00:08:27.000 --> 00:08:54.000

One is a non-developer track and one is a developer track, but there's probably an additional track that's like, hey, if you're willing to go and, you know, participate in the discussion on the EIP and drive it forward, you can just cite that you did that.

00:08:54.000 --> 00:09:02.000

That also counts as a final project, because that would be kind of like the researcher / EIP contributor track.

00:09:02.000 --> 00:09:10.000

So, we can open that up. And so it's interesting to kind of look into this and maybe that's already been brought up, but if it hasn't.

00:09:10.000 --> 00:09:18.000

I would encourage you to do that and depending on the response we get, we might actually be able to convince one of the authors.

00:09:18.000 --> 00:09:30.000

to, probably lightclient, but maybe Danny Ryan, maybe a couple of others, to actually come on and try and address that. Because what you articulated, Matt, that is where my mind

00:09:30.000 --> 00:09:36.000

went. It's not just, oh, here's the PEEPanEIP on it, which is really good.

00:09:36.000 --> 00:09:40.000

We already heard from Pooja. So this is why

00:09:40.000 --> 00:09:52.000

we love the Ethereum Cat Herders, because they are able to create such great content. But, yeah, I think this, to me,

00:09:52.000 --> 00:10:04.000

I get worried because, coming from the side of having been more involved initially on the smart contract side,

00:10:04.000 --> 00:10:21.000

there's a lot of things and behaviors that I've seen on the smart contract side that make me now feel like, oh, with everything you're saying, I was like, oh my gosh,

00:10:21.000 --> 00:10:22.000

Hmm.

00:10:22.000 --> 00:10:30.000

I see exactly where this could be a problem, because smart contract developers, or folks who, whether intentionally or unintentionally, use the unsafe head versus the safe head,

00:10:30.000 --> 00:10:49.000

could be susceptible to, you know, a malicious actor, or even not a malicious actor but a proposer who, I think, would end up getting slashed.

00:10:49.000 --> 00:10:57.000

They may fall into a situation where they could get the wrong beacon block root, and that could have

00:10:57.000 --> 00:11:10.000

some effect. And I don't know, I think the challenge that you're seeing with any sort of network is that, like,

00:11:10.000 --> 00:11:15.000

It's hard. You can design a system and the system can operate as it's supposed to.

00:11:15.000 --> 00:11:25.000

But then If there is an incentive to hunt the edge cases. And MEV is a really good example of an edge case.

00:11:25.000 --> 00:11:46.000

there could be cascading impacts, particularly if, you know, there is a staking pool that ends up having 33% of the, we wouldn't call it hash power, I guess the stake, 33% of the stake.

00:11:46.000 --> 00:11:55.000

Yeah, and I think also, since you're using kind of the parent state, which essentially lags one block behind the unsafe head,

00:11:55.000 --> 00:12:00.000

you have to be careful, like Tom is saying, if you're making assumptions about what occurs in the next block,

00:12:00.000 --> 00:12:05.000

like, for example, did no one get slashed, or if you're like EigenLayer

00:12:05.000 --> 00:12:12.000

and you're trying to make assumptions about stuff in the block before the one where you have the beacon root exposed.

00:12:12.000 --> 00:12:22.000

It's kind of interesting. Yeah, I think we can move on here. If there's not any other questions, cause I think question number 2 is awesome.

00:12:22.000 --> 00:12:25.000

And we can also come back to this one. Again, if that's not enough. But.

00:12:25.000 --> 00:12:30.000

The one thing I'll say about this one is remember we kind of asked like a discussion question early on.

00:12:30.000 --> 00:12:39.000

It's like, can any one person know everything about it? You know, that was a general question, but practically,

00:12:39.000 --> 00:13:01.000

what that means is exactly what Matt and I are doing right now as we're thinking about this question. The first thing I think about, because no one person could know it all, is reaching out to the EIP authors to understand what they were thinking and why they didn't fill that in, and then thinking about, okay, well,

00:13:01.000 --> 00:13:10.000

because they didn't fill that in, like, what are they basing this information on? And you know, you can kind of see.

00:13:10.000 --> 00:13:17.000

what Protolambda says here. Protolambda, who, you know, proto-danksharding is named after,

00:13:17.000 --> 00:13:26.000

talks about Optimism and about the predeploy, not precompile, and bridge usage.

00:13:26.000 --> 00:13:40.000

And so you kind of see what I would call the references to prior art, which become really important. You kind of have to trace down the thinking behind this and then also

00:13:40.000 --> 00:13:53.000

different suggestions, so the discussion that we have on Ethereum Magicians. So now you're starting to see, hopefully, this is a great question because it ties together a lot of things that we've referenced, like how this actually works in process.

00:13:53.000 --> 00:14:01.000

Like this becomes really important for then thinking about like, okay, well, how did like, how was this?

00:14:01.000 --> 00:14:08.000

thought of, and what prior art is being cited, and why we think this is gonna work. So, we'll stop there.

00:14:08.000 --> 00:14:14.000

Okay, so ChainSecurity did do an audit. Let's just pop into that real quick.

00:14:14.000 --> 00:14:28.000

Yeah, I think this one's fun because it's not a normal precompile. It's actually a system-privileged contract, because it touches multiple layers of the chain and it's actually independent of the EVM in its own way.

00:14:28.000 --> 00:14:38.000

So it's very fun semantics. But we're basically having a smart contract process that root and store it within the contract storage,

00:14:38.000 --> 00:14:49.000

and, you know, provide it to folks like a temporary database on chain. But it's a system contract, which is a little different than a normal precompile,

00:14:49.000 --> 00:14:52.000

Because

00:14:52.000 --> 00:15:01.000

it's not like a function of the client that it's deployed to. So typical precompiles basically tell your client to leave

00:15:01.000 --> 00:15:09.000

The fundamentals of the EVM behind and just go to a special place to get new functions like cryptographic functions that are very expensive.

00:15:09.000 --> 00:15:21.000

Or Yeah, it's basically just cryptographic functions right now for the most part. There's other pre compiles, but yeah, it helps you get complex math and things and complex functionality into the EVM.

00:15:21.000 --> 00:15:25.000

But in this case, it's meant as a data store across 2 layers. So it uses specific system

00:15:25.000 --> 00:15:35.000

interactions that don't necessarily exist within the clients as, like, a precompile address. It's a fun one.
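
As a rough illustration of reading that "temporary database on chain", here is a minimal sketch of an eth_call against the EIP-4788 system contract. The contract address and the "32-byte timestamp in, 32-byte root out" calling convention are taken from the EIP draft at the time of writing; double-check both against the final spec, and treat the RPC URL as a placeholder.

```python
# Hedged sketch: read a parent beacon block root from the EIP-4788 system
# contract with a plain eth_call (assumes a recent web3.py).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint

# Address per the EIP-4788 draft; verify against the final EIP text.
BEACON_ROOTS_ADDRESS = Web3.to_checksum_address(
    "0x000f3df6d732807ef1319fb7b8bb8522d0beac02"
)

def get_parent_beacon_root(block_timestamp: int) -> bytes:
    """Return the parent beacon block root recorded for the given EL block timestamp."""
    calldata = "0x" + block_timestamp.to_bytes(32, "big").hex()
    return bytes(w3.eth.call({"to": BEACON_ROOTS_ADDRESS, "data": calldata}))

latest = w3.eth.get_block("latest")
print(get_parent_beacon_root(latest["timestamp"]).hex())
```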

00:15:35.000 --> 00:15:43.000

It's, I think we only have 2 of these, the first being the And then this one.

00:15:43.000 --> 00:15:53.000

But yeah, this is an interesting EIP because it does a lot of interesting stuff, and I'm cracking up that they didn't fill out the rest of that EIP and it's already proposed.

00:15:53.000 --> 00:15:59.000

It's already being put on chain, so.

00:15:59.000 --> 00:16:00.000

Yeah.

00:16:00.000 --> 00:16:05.000

They should at least drop in the security audit. I mean, I think even if they just dropped in this summary, that's a pretty good overview.

00:16:05.000 --> 00:16:06.000

Yeah.

00:16:06.000 --> 00:16:10.000

It's interesting. So there's all, remember, there's always humans behind this, so.

00:16:10.000 --> 00:16:19.000

Sometimes stuff you think is gonna happen doesn't happen just because. Someone forgets. Okay, great question.

00:16:19.000 --> 00:16:23.000

Let's move on to number 2. I'd like to clarify what might be my own confusion.

00:16:23.000 --> 00:16:36.000

Could you please say a bit more about the difference between gossip networks on the consensus layer and the execution layer? Are the gossip networks totally independent?

00:16:36.000 --> 00:16:41.000

Are they running now with the same?

00:16:41.000 --> 00:16:42.000

Good.

00:16:42.000 --> 00:16:46.000

Kademlia algorithm, or are they separate? What about the beacon and execution blocks?

00:16:46.000 --> 00:16:52.000

Are they propagated on the same network? Or an independent one.

00:16:52.000 --> 00:16:59.000

Yeah, I can answer this one. So the answer is no. Excuse me. Yes, they are completely separate networks.

00:16:59.000 --> 00:17:04.000

The CL uses something called Node Discovery Protocol v5, or just discv

00:17:04.000 --> 00:17:11.000

5 for short. We are looking to adopt it on the execution layer. But they peer and gossip separately.

00:17:11.000 --> 00:17:20.000

So the CL does not have awareness of EL peers. They're using a different table.

00:17:20.000 --> 00:17:31.000

They have the same Kademlia algorithm, to answer the question specifically, but they're using a different hash table to determine the node records and peers and stuff like that.
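
For context, here is a toy sketch of the Kademlia idea both discovery layers share: distance between two node IDs is their XOR, and peers are bucketed by the highest differing bit. Real discv5/devp2p discovery adds signed node records, liveness checks, and more; the hashing scheme below is illustrative only.

```python
# Hedged sketch: XOR distance and k-bucket index, the core of Kademlia routing.
import hashlib

def node_id(pubkey: bytes) -> int:
    """Toy node ID: hash of a public key, read as a 256-bit integer."""
    return int.from_bytes(hashlib.sha256(pubkey).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def bucket_index(a: int, b: int) -> int:
    """Which k-bucket peer b falls into from a's point of view (0..255)."""
    d = xor_distance(a, b)
    return d.bit_length() - 1 if d > 0 else 0

me, peer = node_id(b"my-node"), node_id(b"some-peer")
print(bucket_index(me, peer))
```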

00:17:31.000 --> 00:17:38.000

The blocks. Are propagated separately as well. The execution payloads and things.

00:17:38.000 --> 00:17:54.000

Oftentimes in Ethereum, we show a picture, which I have shown in this very lecture series, of the post-merge block that has the things neatly encapsulated within each other.

00:17:54.000 --> 00:18:02.000

It actually doesn't really look like that in practice. You have beacon blocks and you have execution blocks that are referenced to each other based on the slot, but they are not the same.

00:18:02.000 --> 00:18:08.000

So if you look at, you know, beaconcha.in, whatever, you can't really pronounce it.

00:18:08.000 --> 00:18:10.000

It's like, beaconcha.in.

00:18:10.000 --> 00:18:33.000

The beacon chain explorer. If you click into one of the recent blocks, you'll see you have different block numbers and different slot numbers.

00:18:33.000 --> 00:18:34.000

Yeah.

00:18:34.000 --> 00:18:35.000

The reason being is because we have 18 million blocks on mainnet. We're getting very close to 19 million blocks, actually. If you scroll down, Tom, just pick any of those blocks.

00:18:35.000 --> 00:18:42.000

So we're only at slot 8 million, because with the launch of the beacon chain we started the slot numbers at 0. And, you're in the epoch view actually, we want to go

00:18:42.000 --> 00:18:44.000

I'm in the epoch view. Yes, sorry. My

00:18:44.000 --> 00:18:53.000

So these blocks and slots are tracked separately. And if you click in there, you can see information

00:18:53.000 --> 00:19:05.000

on the attestations, withdrawals, and transactions. So the explorer presents the information to you in a nice, clean and tidy way, but they will have kind of different block roots.

00:19:05.000 --> 00:19:13.000

So if you see here exactly beacon block root. And then you have just a regular, if you go back to the previous.

00:19:13.000 --> 00:19:19.000

Page we have a beacon block root and then we have a regular block root which is actually different.

00:19:19.000 --> 00:19:23.000

Because again, the beacon block exists to gather all of the attestations and the proposal payload and that's it.

00:19:23.000 --> 00:19:42.000

It's very straightforward as far as like what does the consensus layer care about and the answer is sync committees, which is out of scope of this discussion, basically, attestations and the proposed block itself.

00:19:42.000 --> 00:19:50.000

So the proposal and the attestations. And they link all this information together in the block explorer, but in reality you need to query 2 separate clients to get that information in full.

00:19:50.000 --> 00:20:01.000

So what beaconcha.in likely does is they have an archive node on the execution layer and an archive node on the consensus layer, and each time a block is processed,

00:20:01.000 --> 00:20:15.000

they dump it into a database somewhere. But yeah, so if you look here at the execution payload, you have an execution block number, which is different than that slot number, because the beacon slot and block will be different.

00:20:15.000 --> 00:20:22.000

We have a different block hash. We have a different parent hash and the fee recipient is unrelated to what we're discussing right now.
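
To make the "query 2 separate clients" point concrete, here is a minimal sketch that fetches the head beacon block from a consensus client and then looks up the execution block it carries. The Beacon API path and response field names follow the standard beacon-node REST API as I understand it, and both URLs are placeholders.

```python
# Hedged sketch: the same slot seen from both clients.
import requests
from web3 import Web3

BEACON_API = "http://localhost:5052"                    # placeholder CL REST endpoint
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # placeholder EL RPC endpoint

# Consensus layer: the head beacon block, which wraps the execution payload.
beacon_block = requests.get(f"{BEACON_API}/eth/v2/beacon/blocks/head").json()
message = beacon_block["data"]["message"]
slot = int(message["slot"])
payload_block_number = int(message["body"]["execution_payload"]["block_number"])

# Execution layer: the same payload, fetched by its own block number.
el_block = w3.eth.get_block(payload_block_number)

print(f"slot {slot} carries execution block {el_block['number']} ({el_block['hash'].hex()})")
```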

00:20:22.000 --> 00:20:28.000

Yeah, so to answer the question more simply, they're totally independent for the most part.

00:20:28.000 --> 00:20:43.000

The peers on the CL and the EL are almost completely unaware of what each other is doing, which is by design, because what we don't wanna end up with is one monolithic client again that is bogged down by

00:20:43.000 --> 00:20:48.000

Truly a tremendous amount of networking latency. So we've separated those out.

00:20:48.000 --> 00:20:54.000

We've separated those networks out and it allows us to have more speedy block propagation on both sets.

00:20:54.000 --> 00:21:02.000

Because typically people logically separate these clients, which means that the processes that may be starved for resources in one mechanism or another are able to kind of defer some of that to the other client.

00:21:02.000 --> 00:21:12.000

So instead of having to do all the block gossip, all the transaction gossip. That's the really big one, right?

00:21:12.000 --> 00:21:23.000

The EL has to take care of the mempool, which includes transaction gossip. And that causes a ton of overhead, because there's a huge public mempool at any given point in time.

00:21:23.000 --> 00:21:30.000

Whereas the CL is less concerned with the transactions and is more concerned with attestations and BLS signatures.

00:21:30.000 --> 00:21:39.000

But the CL has a different networking problem, where they have to aggregate a ton of signatures within a roughly 4-second window in the slot.

00:21:39.000 --> 00:21:49.000

We have 12 s slots, but for some reason we really only care about what happens in 5, 6 s of that slot and the rest of it is a little bit of empty space.

00:21:49.000 --> 00:22:02.000

So when you propose a block the execution layer has about 4 s from the beginning of the slot to propagate the block without having a penalty incurred

00:22:02.000 --> 00:22:13.000

The consensus layer has the remaining time in the slot, about 8 seconds, to aggregate all of the BLS signatures that are used to attest to that block and then propagate those to the network.
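
Here is a small sketch of those intra-slot windows, using the rough split described above (block by about 4 seconds, attestation and aggregation in the remainder). The genesis constant is an assumed mainnet value and the window boundaries are approximations, not spec deadlines.

```python
# Hedged sketch: where are we inside the current 12-second slot?
import time

GENESIS_TIME = 1606824023   # assumed mainnet beacon genesis (Unix seconds)
SECONDS_PER_SLOT = 12

def slot_phase(now: float | None = None) -> tuple[int, float, str]:
    now = time.time() if now is None else now
    elapsed = now - GENESIS_TIME
    slot = int(elapsed // SECONDS_PER_SLOT)
    into_slot = elapsed % SECONDS_PER_SLOT
    if into_slot < 4:
        phase = "block proposal / propagation window"
    elif into_slot < 8:
        phase = "attestation window"
    else:
        phase = "aggregation window"
    return slot, into_slot, phase

print(slot_phase())
```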

00:22:13.000 --> 00:22:18.000

So I have my peers on the CL and they come to me and they say, Okay, I like your block.

00:22:18.000 --> 00:22:24.000

Your block looks really good. It's valid. And I'm going to vote on that, and I vote with

00:22:24.000 --> 00:22:32.000

My consensus layer and my proof of stake validator that this is correct. But I also have to reprocess the block.

00:22:32.000 --> 00:22:42.000

Like there's a lot of stuff that happens and a lot of latency. So we've separated these 2 networks out so that the literal network latency isn't so great that we can't.

00:22:42.000 --> 00:22:49.000

Propagate transactions, blocks, and attestations all at the same time. On the same client, cause it becomes.

00:22:49.000 --> 00:22:59.000

Pretty I/O bounded and network bounded very fast. The funny part is that most people run these machines on the same drives anyway.

00:22:59.000 --> 00:23:07.000

which is also totally fine, but the way that the scheduler in the OS works typically handles some of this

00:23:07.000 --> 00:23:17.000

Stuff as far as like. Again, networking latency and other latencies. Cool.

00:23:17.000 --> 00:23:26.000

Did I answer that question? There's probably something I'm missing, so if you have any follow-ups, please feel free to go ahead and pop them over.

00:23:26.000 --> 00:23:38.000

I'll ask a follow-up. So, because the EL right now manages the mempool, and the mempool

00:23:38.000 --> 00:23:53.000

consists of submitted transactions. It's technically more than that, but we'll just simplify it to say that the mempool consists of submitted transactions that have not been added to a block yet.

00:23:53.000 --> 00:24:05.000

So it's like a pre-state. It is like the universe of all transactions that could be submitted, but each individual client actually keeps their own version of the mempool.

00:24:05.000 --> 00:24:28.000

So there is, in theory, one overall mempool, but in practice, clients only have access to the transactions that are submitted or gossiped to them, and I believe that has to do with the fact that that's literally a product of the gossip protocol, right?

00:24:28.000 --> 00:24:29.000

What peers you have.

00:24:29.000 --> 00:24:34.000

It's like you're going to basically. Yeah, exactly. The transaction submitted by someone.

00:24:34.000 --> 00:24:41.000

So those are gossiped to the peers. And so, if we were to look

00:24:41.000 --> 00:24:54.000

at any 2 Ethereum clients, yeah, execution layer clients, we would not be surprised if they did not have the same mempool.

00:24:54.000 --> 00:24:55.000

I may be.

00:24:55.000 --> 00:24:58.000

This is further complicated by, I'm not even taking into account private mempools and all of that. So I say all that

00:24:58.000 --> 00:25:12.000

because, is there going to be, with EIP-4844 and its accompanying EIP,

00:25:12.000 --> 00:25:13.000

I don't know if it's in here. Oh yeah, blob base fee.

00:25:13.000 --> 00:25:26.000

Yeah, 7516. Is there going to be a blob pool with similar properties to the execution layer

00:25:26.000 --> 00:25:28.000

transaction mempool?

00:25:28.000 --> 00:25:38.000

Yeah, there is a blob pool. It's entirely dependent on client implementation. I believe in Besu

00:25:38.000 --> 00:25:46.000

We don't actually do that much transaction ordering because again, only 3 to 6 blobs can be included in a block.

00:25:46.000 --> 00:25:54.000

So you basically just take a look at what's happening in your mempool and you pick the most profitable blobs and you don't really think too much about it.

00:25:54.000 --> 00:26:01.000

In the regular transaction mempool on the execution layer, there's a ton more to consider, things like nonce gaps,

00:26:01.000 --> 00:26:17.000

DoS vectors, priority fees. We have something we call the layered transaction pool in Besu, which essentially sorts the transactions into layers of most profitable and immediately executable,

00:26:17.000 --> 00:26:26.000

Followed by profitable but long tail so maybe they can't be executed right now or maybe they have a low priority fee but they're very profitable transactions.

00:26:26.000 --> 00:26:34.000

And then we have kind of unexecutable, nonce-gapped transactions at the bottom in a layer that we call purgatory.

00:26:34.000 --> 00:26:38.000

But that is literally the way that Besu implements it. It's different in Geth.

00:26:38.000 --> 00:26:43.000

It's different in Nethermind. And, yeah, it's interesting.
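
To give a flavor of that layering idea, here is a hedged sketch that buckets pending transactions into "immediately executable", "long tail", and "purgatory" layers. This is an illustration only, not Besu's actual implementation; the nonce-gap threshold and field names are made up.

```python
# Hedged sketch: a toy layered transaction pool.
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    nonce: int
    priority_fee: int  # wei per gas

def layer_transactions(pool: list[Tx], account_nonces: dict[str, int]):
    executable, long_tail, purgatory = [], [], []
    for tx in sorted(pool, key=lambda t: t.priority_fee, reverse=True):
        expected = account_nonces.get(tx.sender, 0)
        if tx.nonce == expected:
            executable.append(tx)    # can go into the next block as-is
        elif tx.nonce > expected + 16:
            purgatory.append(tx)     # large nonce gap: unexecutable for now
        else:
            long_tail.append(tx)     # profitable later, once the gap closes
    return executable, long_tail, purgatory

pool = [Tx("0xA", 5, 2_000_000_000), Tx("0xA", 6, 1_000_000_000), Tx("0xB", 42, 3_000_000_000)]
print(layer_transactions(pool, {"0xA": 5, "0xB": 0}))
```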

00:26:43.000 --> 00:26:54.000

But the blob one, since we can only include a few, again, you may have a hundred blobs, but that's different than the mempool, where we have thousands and thousands of transactions that we're consistently evaluating against heuristics.

00:26:54.000 --> 00:27:09.000

But the blob pool is kinda. Mini and it hangs out. Those blobs are propagated on both networks, I believe, because the consensus layer needs to be aware of the blob payloads.

00:27:09.000 --> 00:27:16.000

The execution layer needs to be aware of what people are paying in ETH to get those blobs on chain,

00:27:16.000 --> 00:27:27.000

because the data availability is what's stored, not necessarily the state of the blob. So, if you remember, earlier we talked about transaction type 0x05.

00:27:27.000 --> 00:27:31.000

You need to submit a transaction to tie the blobs. To the chain essentially on the execution layer, but the consensus layer is where the blobs themselves stay.

00:27:31.000 --> 00:27:44.000

So we need both to be aware of what's going on. And you can only pay for things on the execution layer.

00:27:44.000 --> 00:27:52.000

Which is the fun part because all the state that ever was all the ether that ever was, that state is propagated and stored on the execution layer.

00:27:52.000 --> 00:27:59.000

Since we've decided, and I believe this is a good choice, we've put the blobs on the consensus layer from a data storage perspective

00:27:59.000 --> 00:28:09.000

And a like retrieval perspective. Because it's arbitrary data, right? We don't want it to be in the EVM.

00:28:09.000 --> 00:28:17.000

We don't want it to bloat Ethereum state. That's literally the whole point of 4844: to avoid bloating state by having ephemeral data.

00:28:17.000 --> 00:28:22.000

So it's perfect for the consensus layer where we don't need to keep track of the actual data itself.

00:28:22.000 --> 00:28:31.000

We just need to have consensus that that data is available and it exists and it's valid. So.

00:28:31.000 --> 00:28:38.000

Two sides of the same coin, but if you think about it, on the execution layer, you know, you've got to pay for stuff and that requires state writes,

00:28:38.000 --> 00:28:57.000

which means you have to have the two tied together. But the actual arbitrary data lives on the consensus layer, and we use those BLS signatures to aggregate, and we have the KZG commitments, as I mentioned previously, which tie it to the execution blocks and the beacon blocks.

00:28:57.000 --> 00:29:09.000

So we have commitments to the blobs that tie them back to the chain. We have ephemeral blobs on the CL and we have a record of payment and basically burned ether on the execution layer to pay for some of that stuff because we also burn the blob base fee.

00:29:09.000 --> 00:29:22.000

So we burn the regular base fee in EIP-1559 transactions and we also burn the blob base fee on the execution layer.

00:29:22.000 --> 00:29:26.000

So even more burn: the more people use blobs, the more we're going to burn.
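
For reference, here is a sketch of how the blob base fee (all of which is burned) is derived from the running "excess blob gas", following the fake_exponential pseudocode in EIP-4844. The constants are the values in the EIP at the time of writing; confirm them against the final spec.

```python
# Hedged sketch: blob base fee per EIP-4844's pseudocode.
MIN_BLOB_BASE_FEE = 1                   # wei, per the EIP at time of writing
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477 # per the EIP at time of writing

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)

# At zero excess the fee sits at the 1 wei floor; it rises exponentially as
# blocks keep landing above the blob target.
print(blob_base_fee(0), blob_base_fee(10 * 393216))
```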

00:29:26.000 --> 00:29:29.000

All around good stuff.

00:29:29.000 --> 00:29:40.000

Gotcha. Thanks for going into depth on that, Matt. I think one more follow-up, just to check for

00:29:40.000 --> 00:29:55.000

an additional understanding on this. Because there's a limit on

00:29:55.000 --> 00:30:04.000

the transactions that can be included in a block, based off of

00:30:04.000 --> 00:30:14.000

I have to make sure that my mind is not floating away, based off of the gas limit, if I'm remembering my terminology correctly.

00:30:14.000 --> 00:30:15.000

Sorry, can you repeat the question?

00:30:15.000 --> 00:30:24.000

On the execution layer, so just to complete the analogy, there's a limit on the size.

00:30:24.000 --> 00:30:35.000

There's a limit on block size. The block gas limit is what enforces that, because we put in, you know, a certain number of transactions.

00:30:35.000 --> 00:30:41.000

I think You can kinda see. Here.

00:30:41.000 --> 00:30:42.000

Okay.

00:30:42.000 --> 00:30:49.000

Actually, I don't think that's correct. I don't think the block gas limit has anything to do with the blobs, because they're propagated separately from the blocks.

00:30:49.000 --> 00:30:50.000

Gotcha.

00:30:50.000 --> 00:30:53.000

They're decoupled. They're propagated on the consensus layer, which doesn't have a notion of gas.

00:30:53.000 --> 00:31:09.000

So there's no limit in terms of gas, but there is a limit. But I think, like, isn't there, you can add up to basically 6 blobs per block?

00:31:09.000 --> 00:31:11.000

Yeah. There's a limit. There's a hard limit of 6. Yeah.

00:31:11.000 --> 00:31:15.000

And that is not capped by gas. Okay.

00:31:15.000 --> 00:31:21.000

And then a fee that fluctuates based on the target of 3. It's the same way you think about the gas limit and then the target gas.
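
Here is a short sketch of that target-versus-max mechanic as specified in EIP-4844, to the best of my understanding: each blob counts as 2**17 blob gas, the target is 3 blobs and the hard cap is 6, and the running "excess" that drives the blob base fee only grows while blocks stay above the target.

```python
# Hedged sketch: excess blob gas accounting per EIP-4844.
GAS_PER_BLOB = 2**17                     # 131072
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MAX_BLOBS_PER_BLOCK = 6

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess blob gas carried into the next block (never negative)."""
    total = parent_excess + parent_blob_gas_used
    return max(total - TARGET_BLOB_GAS_PER_BLOCK, 0)

# Full blocks (6 blobs) push the excess up; empty blocks let it drain back down.
excess = 0
for blobs in [6, 6, 6, 0, 0]:
    assert blobs <= MAX_BLOBS_PER_BLOCK
    excess = calc_excess_blob_gas(excess, blobs * GAS_PER_BLOB)
    print(blobs, excess)
```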

00:31:21.000 --> 00:31:22.000

Gotcha.

00:31:22.000 --> 00:31:28.000

So I know gas limit has been a very big topic of discussion the last 3 days. And like.

00:31:28.000 --> 00:31:37.000

Yeah, I think there was just a thread or post from Vitalik about increasing the gas limit recently.

00:31:37.000 --> 00:31:38.000

Yeah.

00:31:38.000 --> 00:31:47.000

Yeah, and the consensus layer isn't totally, unaware of the concept of gas. It's just that since you're not executing Ethereum transactions.

00:31:47.000 --> 00:31:54.000

Like in a way that can fluctuate based on the compute. They don't really need gas to enforce DOS protection.

00:31:54.000 --> 00:32:09.000

That's why we have stake. They just enforce the rules that would normally have been enforced via forcing people to pay a ton of gas to get something crazy done.

00:32:09.000 --> 00:32:10.000

Yeah.

00:32:10.000 --> 00:32:13.000

Because you can grief the chain if you're willing to spend a ton of money, right? You can do that on the execution layer. On the consensus layer,

00:32:13.000 --> 00:32:24.000

You cannot necessarily do that. So that's why we have slashing protection. Because there's no concept of fluctuating gas for a degree of work on the CL.

00:32:24.000 --> 00:32:30.000

The CL just executes stuff as fast as it can in the Gasper algorithm, which is just the fork choice algorithm.

00:32:30.000 --> 00:32:37.000

It's the proof-of-stake algorithm, blah, blah, blah. It just runs through the rules as quickly as it can on the machines that it's given.

00:32:37.000 --> 00:32:40.000

And then it stops when it's done, when it's done with the work that it has to do.

00:32:40.000 --> 00:32:47.000

So we in theory have a bounded amount of work that we can fully reason about and we use the stake to enforce that.

00:32:47.000 --> 00:32:58.000

Whereas on the execution layer, you have clients that can do unbounded amount of work. In theory, the work is bounded by the 30 million gas limit on each block.

00:32:58.000 --> 00:33:08.000

If we raise that, that changes. But you can grief the chain over repeated blocks, right? Like I can pay all of the gas in the world to fill up every single block with nonsense.

00:33:08.000 --> 00:33:09.000

If I fill up enough blocks in a row with nonsense, I can actually finalize the nonsense.

00:33:09.000 --> 00:33:23.000

And if I control 50% of the validators, I can basically arbitrarily rewrite previous data in the chain and do crazy long range attacks.

00:33:23.000 --> 00:33:35.000

Bad stuff. Don't let anyone get to 50% of stake. Yeah, I don't know if I'm serving to confuse folks more here, but it seems we had someone join and then leave, sadly.

00:33:35.000 --> 00:33:45.000

Didn't get another question. Do you have any other questions in the chat? I can keep babbling.

00:33:45.000 --> 00:33:50.000

We can even do general stuff. I mean, we, we got time here. We don't have to touch on.

00:33:50.000 --> 00:33:57.000

Dencun, perhaps. I'm happy to go over anything I know about the 2 layers

00:33:57.000 --> 00:34:06.000

Or the protocol itself.

00:34:06.000 --> 00:34:17.000

And feel free to come off mute as well, if you're comfortable, if you want to ask a question.

00:34:17.000 --> 00:34:42.000

And we can also give folks 20 min back. If needed, I'm not gonna ramble for no reason, but I'm happy to do so.

00:34:42.000 --> 00:34:54.000

Okay, we'll do one last call for questions. If not, we can. Wrap a little bit early today.

00:34:54.000 --> 00:35:00.000

There we go. Okay. Awesome. Those were some great questions.

00:35:00.000 --> 00:35:01.000

Yeah, they were questions.

00:35:01.000 --> 00:35:11.000

Yeah. So we'll be back. Next week, on our Wednesday time, talking about testing.

00:35:11.000 --> 00:35:16.000

We'll have a guest speaker, Justin Florentine, talking about testing. And then

00:35:16.000 --> 00:35:19.000

2 weeks problem. On him.

00:35:19.000 --> 00:35:22.000

And in 2 weeks, yeah, and then we'll have office hours. And so. Thanks everyone.

00:35:22.000 --> 00:35:28.000

I'll go ahead and stop the recording


Supplemental Resources

Proceeds from collectibles editions of this post will be distributed as follows:

100% to Education DAO Optimism Treasury (0xbB2aC298f247e1FB5Fc3c4Fa40A92462a7FAAB85)

#ethereum #lectures/seminars #courses