Cover photo

Understanding Ethereum Network Upgrades: Lecture 3

EIP-4844: Proto-Danksharding

EIP-4844 provides cheap data availability for L2s by extending blocks with short-lived data blobs. This lecture explains the significance of EIP-4844 and provides the context needed to understand it.

Index

  1. Lecture

  2. Office Hours

  3. Supplemental Resources


Lecture

Lecture Transcript

The following transcript was auto-generated and may contain syntactical errors.

00:00:03.000 --> 00:00:13.000

Awesome. Thanks everyone. This is an exciting week of our sessions. Week 3 is on EIP-4844.

00:00:13.000 --> 00:00:16.000

The reason being is that we are finally getting to the network upgrade in question. This is the Cancun-

00:00:16.000 --> 00:00:27.000

Deneb upgrade. And today we are going to discuss the primary driver of the upgrade, which is EIP-4844.

00:00:27.000 --> 00:00:37.000

As I mentioned in previous sessions, these upgrades tend to have one big EIP or change that is accompanied by a bunch of smaller changes.

00:00:37.000 --> 00:00:43.000

Today we're discussing the real big one, the meat of it, that's taking up the bulk of the development time.

00:00:43.000 --> 00:00:52.000

It brings the bulk of the benefit of the hard fork. And then next week's session, which I believe actually is in January, since we'll be taking a break for the holidays.

00:00:52.000 --> 00:00:55.000

Is that correct, Tom? We'll be taking a break.

00:00:55.000 --> 00:01:03.000

Yeah, so just to let everyone know, we'll actually be taking quite a sizeable break.

00:01:03.000 --> 00:01:14.000

But we're gonna give you a ton of links and extension materials so that you can continue to sort of keep up, and then we'll be coming back on January tenth.

00:01:14.000 --> 00:01:22.000

Awesome. Yep, so next week we will discuss the remaining EIPs of the fork, what did not make it in, and how that scoping process looks.

00:01:22.000 --> 00:01:25.000

But let's get into the meat of it. There's a lot of stuff to discuss today.

00:01:25.000 --> 00:01:40.000

I'm going to start with layer 2s in general for those on the call. I'm not going to break them down too, too much, but layer 2 solutions are the preferred approach to scaling Ethereum network transactions.

00:01:40.000 --> 00:01:57.000

And there are multiple different types of layer 2s. We won't cover them all necessarily. But they're all based around the Ethereum Virtual Machine, and the reason being is that we want smart contracts and code to be able to move around between these networks, so that in reality a user will eventually not know where they're transacting.

00:01:57.000 --> 00:02:10.000

They'll just have lower fees. So the real mechanism for this process to work is that you have, again, a rollup of transaction execution that happens on different types of networks.

00:02:10.000 --> 00:02:19.000

So you have multiple different kinds of networks. You have things like Linea, Optimism, Arbitrum, Starknet, and other ZK rollups.

00:02:19.000 --> 00:02:28.000

These are all layer 2 solutions that look a little bit different. And they leverage the Ethereum main net layer one for interoperability and security.

00:02:28.000 --> 00:02:29.000

So you have, again, a bunch of different transactions happening essentially off chain from the layer one perspective.

00:02:29.000 --> 00:02:42.000

These rollups will batch together transactions and execute them on different kinds of sequencers and environments.

00:02:42.000 --> 00:02:49.000

And then they'll take the results of that transaction execution and put them on layer 1. The reason being is that we don't necessarily have decentralization or maximal security on layer 2.

00:02:49.000 --> 00:02:59.000

We don't have maybe hundreds of thousands of nodes. We don't have all of that.

00:02:59.000 --> 00:03:01.000

Whereas on Ethereum mainnet, we have, you know, tons and tons of nodes working in proof of stake with economic security.

00:03:01.000 --> 00:03:12.000

Finality, all that good stuff. So, you know, layer one is a little bit slow.

00:03:12.000 --> 00:03:21.000

Gas can be a little bit expensive because block space on layer 1 is very coveted. So these layer 2s will take all of those transactions that normally would have been

00:03:21.000 --> 00:03:28.000

single digit or even double digit, you know, dollar amounts to process on layer 1, and they'll make them cheap, cheap.

00:03:28.000 --> 00:03:39.000

Or, you know, cents on the dollar on layer 2, because they can kind of defer some of their security assumptions to the layer 1 network and make things cheaper by

00:03:39.000 --> 00:03:46.000

centralizing a bit. Although many layer 2s are working on progressive decentralization, which is awesome.

00:03:46.000 --> 00:03:54.000

All that is to say, we have kind of these two networks. Layer 2s take advantage of Ethereum layer 1.

00:03:54.000 --> 00:04:07.000

They help scale Ethereum layer 1 by offloading transactions to these other networks. And eventually they'll interoperate with each other and with layer 1, so that assets are extremely portable.

00:04:07.000 --> 00:04:17.000

Cool. Many times layer 2s have their native gas denominated in ETH. The difference with a side chain or a kind of

00:04:17.000 --> 00:04:27.000

you know, alternative layer 1 is that gas there is not denominated in ETH. On layer 2s, for the most part, gas is denominated in ETH, and there may also be a native token that determines governance of that chain.

00:04:27.000 --> 00:04:35.000

But that doesn't have much to do with gas. And the reason being is we want asset portability.

00:04:35.000 --> 00:04:44.000

We want interoperability with Ethereum mainnet, and we want those assets to be able to move back and forth, and also between the layer 2s, very easily.

00:04:44.000 --> 00:04:51.000

Amazing. This is all quickly becoming a lot more robust. There's also a lot of kind of cross chain things happening on layer 2.

00:04:51.000 --> 00:05:04.000

But again, since the underlying Ethereum network is the means of security for these layer 2s, the interoperability comes somewhat for free, and it's really great.

00:05:04.000 --> 00:05:10.000

Why is this an issue? It sounds great. We don't need to change this process, do we? It's not that bad.

00:05:10.000 --> 00:05:19.000

It sounds great. The real reason is calldata. Ethereum's layer 1 network is becoming kind of bloated.

00:05:19.000 --> 00:05:25.000

There's around 150 to 200 GB purely of world state information.

00:05:25.000 --> 00:05:32.000

The world state of Ethereum is a snapshot in time of every smart contract, every account balance,

00:05:32.000 --> 00:05:44.000

every kind of interaction at the current block. And it is big. The reason is that we shove things into calldata, and contracts get large.

00:05:44.000 --> 00:05:56.000

So calldata is really arbitrary data storage on chain at a high gas cost. It was initially conceived to allow smart contracts to kind of

00:05:56.000 --> 00:06:14.000

store and interact with data between sets of transactions that needed to be long-lived. But as rollups proliferated, they started making heavy use of this calldata, because they needed to put arbitrary data on layer 1 that proves the execution that happened on layer 2.

00:06:14.000 --> 00:06:27.000

So on the right-hand side of the screen here we have a Linea post transaction, where the L1 Linea messaging service is receiving calldata essentially from a smart contract,

00:06:27.000 --> 00:06:35.000

which is a connection to that Linea layer 2, and it's using the calldata, if you will, the input data down here (I'll go through this link at the bottom), to say: hey, I have all these transactions.

00:06:35.000 --> 00:06:46.000

They were executed successfully. These are the state transitions associated with those transactions. And now I'm going to put the results on layer one.

00:06:46.000 --> 00:06:53.000

To live in perpetuity because I need to be able to prove my state on the layer 2 at any given point as well.

00:06:53.000 --> 00:07:02.000

So it inherits all the security properties of Ethereum layer 1, because you have the layer 1 chain attesting to the state of the

00:07:02.000 --> 00:07:08.000

smart contract that handles account balances and ether balances on the layer 2. That might be a little confusing.

00:07:08.000 --> 00:07:17.000

I'll break it down a little bit when I go into this page, but the problem with this is that this data lives forever.

00:07:17.000 --> 00:07:21.000

It lives forever on chain. It bloats Ethereum state, which means it makes the state data larger.

00:07:21.000 --> 00:07:35.000

It makes node operators have to have more expensive storage, even more storage. Like I said, the state kind of grows at a steady rate

00:07:35.000 --> 00:07:42.000

because of the use of this calldata. It's not the only reason the state grows: as more and more users come on chain,

00:07:42.000 --> 00:07:46.000

their account balances and their smart contract interactions will also bloat the state and make it larger.

00:07:46.000 --> 00:07:52.000

And calldata is not only used by layer 2 solutions; it's used by tons of different apps for a ton of different reasons.

00:07:52.000 --> 00:08:06.000

And it is a good thing, but at the same time, it's not an ideal solution for layer 2 because many times they don't need that data to live forever.

00:08:06.000 --> 00:08:12.000

They only need that data to live for maybe 14 days where there's a challenge period that says, hey, I don't think that.

00:08:12.000 --> 00:08:19.000

I don't think that state transition looks right, or that those account balances on layer 2 are correct.

00:08:19.000 --> 00:08:23.000

So if you've heard of things called fraud proofs in layer 2, that's essentially what we're talking about.

00:08:23.000 --> 00:08:26.000

So this data lives forever. We only really need it for a little while to make sure that the chain looks good as we progress forward.

00:08:26.000 --> 00:08:36.000

But it's costly in terms of gas and it's costly in terms of the data that we store forever.

00:08:36.000 --> 00:08:45.000

So I'm going to break open a transaction here. This one happened an hour and 38 minutes ago on mainnet, block 18 million and something.

00:08:45.000 --> 00:08:52.000

And what was this call? So the Linea contract has a function called finalizeBlocks,

00:08:52.000 --> 00:09:00.000

Where they roll up all those blocks that are happening on layer 2. They batch them and they post the data on layer one.

00:09:00.000 --> 00:09:08.000

So if you go to More Details, this input data is the calldata. This is what the layer 2 is

00:09:08.000 --> 00:09:19.000

calling into that smart contract and posting as data to live in the Ethereum world state. So if you look down, it's calling the finalizeBlocks function, which takes in block data,

00:09:19.000 --> 00:09:30.000

proof data, a type of proof, and then a state root hash, which is presumably the state root on the layer 2 that says: hey, all these state transitions were valid.

00:09:30.000 --> 00:09:37.000

So within that, there's a lot of stuff here. You can see that there is around 2,000 fields.

00:09:37.000 --> 00:09:42.000

These are all maybe 5, 6, 7, 8 bytes in length. And it gets costly.

00:09:42.000 --> 00:09:53.000

This is just one transaction on Ethereum mainnet. It cost around 50 gwei in terms of gas price, which is not a lot if you look at it, but it can be a lot.

00:09:53.000 --> 00:10:07.000

It cost a hundred dollars in transaction fees. So it's not only expensive to the node operators; it's expensive to the layer 2s to post data on layer 1.
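
To make that cost concrete, here is a minimal sketch of how the calldata portion of an L1 fee is charged under the post-EIP-2028 rules (4 gas per zero byte, 16 gas per nonzero byte); the payload, gas price, and ETH price below are made-up illustrative values, not figures from the transaction shown on screen:

```python
# Minimal sketch: estimating the calldata portion of an L1 transaction fee.
# Post-EIP-2028 pricing: 4 gas per zero byte, 16 gas per nonzero byte.
GAS_PER_ZERO_BYTE = 4
GAS_PER_NONZERO_BYTE = 16
TX_BASE_GAS = 21_000  # flat cost of any transaction

def calldata_gas(data: bytes) -> int:
    """Gas charged just for carrying `data` as calldata."""
    zeros = data.count(0)
    nonzeros = len(data) - zeros
    return zeros * GAS_PER_ZERO_BYTE + nonzeros * GAS_PER_NONZERO_BYTE

# Hypothetical rollup batch of ~100 kB of nonzero bytes.
batch = b"\xff" * 100_000
total_gas = TX_BASE_GAS + calldata_gas(batch)

# Illustrative fee at an assumed 30 gwei gas price and $2,000 ETH.
fee_eth = total_gas * 30e-9
print(f"total gas: {total_gas}, fee: {fee_eth:.4f} ETH (~${fee_eth * 2000:.0f})")
# -> roughly 1.6M gas and on the order of $100, in line with the example above.
```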

00:10:07.000 --> 00:10:14.000

So why do we care about this? Well, we want to make a better solution if we want Ethereum

00:10:14.000 --> 00:10:25.000

to scale using layer 2s. It shouldn't cost a hundred dollars every time you want to put data on layer 1, and that data also shouldn't live forever and bloat the state indefinitely.

00:10:25.000 --> 00:10:32.000

As more layer 2s come online, that'll kind of get us into hot water. And we will, you know, have issues, right?

00:10:32.000 --> 00:10:39.000

So calldata is very important for Ethereum as it stands now, but we need to get rid of it in terms of rollup usage,

00:10:39.000 --> 00:10:46.000

because it's just gonna cause a lot of headaches and cost. I'll pause for any questions on the calldata stuff.

00:10:46.000 --> 00:10:51.000

We have one: has anyone ever challenged a rollup during the challenge period? That is a great question.

00:10:51.000 --> 00:10:54.000

Hang on, I'm getting ahead of myself. I believe for ZK rollups the mechanism is a little bit different.

00:10:54.000 --> 00:11:02.000

I'm not sure if anyone's actually challenged Arbitrum or Optimism in their challenge period.

00:11:02.000 --> 00:11:12.000

My guess would be yes, because you're monetarily rewarded for that kind of thing. It's a similar mechanism to slashing in proof of stake.

00:11:12.000 --> 00:11:13.000

So maybe Tom can do a little googling in the background while I continue, but I think the answer is probably yes.

00:11:20.000 --> 00:11:25.000

And they use things again called fraud proofs to make sure that that execution is completed in the right way.

00:11:25.000 --> 00:11:26.000

Okay, that was calldata. Now, what are we doing about it? We're building Cancun.

00:11:26.000 --> 00:11:36.000

So this is the Ethereum Network upgrade that this whole series has been about. It's only focused on shipping this one primary feature.

00:11:36.000 --> 00:11:48.000

EIP-4844. There are, again, several ride-along EIPs that we'll talk about.

00:11:48.000 --> 00:11:52.000

And we have a new transaction type, 0x05, that says: hey, I'm posting blobs on chain,

00:11:52.000 --> 00:11:55.000

Blah, blah, blah. I'll get into the nitty gritty.

00:11:55.000 --> 00:12:03.000

Don't focus too much on this. Many other big primary EIPs were considered for inclusion, something called EOF or EVM object format.

00:12:03.000 --> 00:12:14.000

The core development community tends to only favor one large change at a time, followed by several smaller changes. This can cause debate,

00:12:14.000 --> 00:12:27.000

hot, you know, conversations back and forth around what should be included. I think as we go into the Prague fork, we're going to see a big debate about EOF again, with Verkle tries on the other side of that debate.

00:12:27.000 --> 00:12:38.000

We'll see how it shakes out. I am not confident in EOF, unfortunately, because I think it's a great upgrade that is actually primarily ready, but I don't think the community appetite is there.

00:12:38.000 --> 00:12:45.000

Okay, let's get into it specifically. Whoa. That, oh, I went all the way to the end somehow.

00:12:45.000 --> 00:12:53.000

Sorry. Cool. Yeah, EIP-4844, blob space, something called proto-danksharding.

00:12:53.000 --> 00:13:01.000

It's just a name. It was primarily created by two folks named protolambda and Dankrad.

00:13:01.000 --> 00:13:04.000

Hence the name, proto-danksharding. It's the first step toward data availability sharding.

00:13:04.000 --> 00:13:12.000

Sharding is kind of a misnomer here. We're not really creating multiple shards of Ethereum.

00:13:12.000 --> 00:13:24.000

We are creating blob space. And this blob space is cheap data availability on Ethereum layer 1 for L2s, and technically for anyone that wants to make use of the blob space,

00:13:24.000 --> 00:13:33.000

by extending our existing block infrastructure on chain with short-lived data blobs.

00:13:33.000 --> 00:13:40.000

The reason that this was pushed earlier on is that there was strong lobbying from layer 2s and others to get this shipped ASAP.

00:13:40.000 --> 00:13:47.000

As I mentioned in the EIP session previously, there were some strong champions that pushed this EIP. There was a website created.

00:13:47.000 --> 00:13:55.000

There was community work done. There were exchanges that were talked to, client teams that were talked to, prototype implementations built.

00:13:55.000 --> 00:14:02.000

Amazing. We chose to include it in the Cancun fork because we needed to scale layer 2 as soon as possible,

00:14:02.000 --> 00:14:11.000

to get a lot of the pressure off of Ethereum mainnet and to drive prices down across all of the EVM-like chains, in reality.

00:14:11.000 --> 00:14:21.000

Cool. The blob space was created for arbitrary data storage. I'll get to this specifically on the next slide, but in reality it's arbitrary storage,

00:14:21.000 --> 00:14:30.000

Just like the call data. But nodes in the network are only required to store these blobs for about a month, after which they can be pruned and discarded.

00:14:30.000 --> 00:14:38.000

which means we can do stuff very cheaply, because we don't need to store data indefinitely on disk, which would escalate the price.

00:14:38.000 --> 00:14:46.000

So the nodes in the network, like Ethereum validating nodes participating in proof of stake, only need to store these blobs for about a month.

00:14:46.000 --> 00:14:53.000

It totals about 30 to 60 gigs on disk for a consensus layer client. That is a far cry from

00:14:53.000 --> 00:15:02.000

indefinitely expanding calldata, which can balloon by tens of gigs, you know, in a month and stays on chain forever.
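
As a rough back-of-the-envelope check on those disk numbers, here is a sketch that assumes the EIP-4844 target of 3 blobs per block, 12-second slots, and roughly 18 days (about 4096 epochs) of mandatory retention; actual retention settings and realized blob usage vary:

```python
# Back-of-the-envelope blob storage estimate at target usage.
BYTES_PER_BLOB = 4096 * 32          # 131,072 bytes = 128 KiB per blob
TARGET_BLOBS_PER_BLOCK = 3
SLOTS_PER_DAY = 24 * 60 * 60 // 12  # 7,200 twelve-second slots per day
RETENTION_DAYS = 18                 # ~4096 epochs of mandatory retention

per_day = BYTES_PER_BLOB * TARGET_BLOBS_PER_BLOCK * SLOTS_PER_DAY
total = per_day * RETENTION_DAYS
print(f"{per_day / 1e9:.1f} GB/day, ~{total / 1e9:.0f} GB retained at target")
# ~2.8 GB/day and ~51 GB retained; roughly double that if every block is full,
# which is consistent with the 30-60 GB figure mentioned above.
```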

00:15:02.000 --> 00:15:08.000

So we've settled on this implementation. And we also did something called the KZG ceremony.

00:15:08.000 --> 00:15:14.000

I'll explain it now purely because it becomes relevant to how blobs are actually created.

00:15:14.000 --> 00:15:21.000

Alongside this hard fork, we ran something called the KZG ceremony, which is essentially a way for us to get trusted randomness

00:15:21.000 --> 00:15:31.000

and a commitment scheme for data availability by collecting entropy from over 140,000 contributors.

00:15:31.000 --> 00:15:41.000

All you need to know is that this is essentially a very, very secret random number that only requires one person in the chain of 140,000 to not collude and reveal their number.

00:15:41.000 --> 00:15:46.000

So everybody else in the whole wide world could know all of the secret values; they build on each other, basically, in randomness.

00:15:46.000 --> 00:15:50.000

But as long as one person is honest and does not share their randomness, then we're good.

00:15:50.000 --> 00:15:59.000

I think we're very good because a lot of people actually built randomness that they themselves did not know.

00:15:59.000 --> 00:16:14.000

The most interesting one that I saw was someone who basically created a little ball with a proximity sensor and an orientation sensor in it; they would play catch with it, it would collect data, and eventually they uploaded it to their computer in an obfuscated way that went into the KZG ceremony.

00:16:14.000 --> 00:16:17.000

I thought that was pretty interesting there.

00:16:17.000 --> 00:16:25.000

Not that important to the concept of what we're talking about here, but in reality it's a super secret random number that helps us do something called KZG commitments.

00:16:25.000 --> 00:16:32.000

These commitments are important because they allow us to do things with the blob that are trustless.
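
For the mathematically inclined: a blob is interpreted as 4096 evaluations of a polynomial $p$, and a standard KZG commitment and opening proof (with $\tau$ the secret from the ceremony, notation simplified) look roughly like this:

```latex
C = [p(\tau)]_1 \quad \text{(commitment, computed from the public powers } [\tau^i]_1\text{)}

\pi = \left[\frac{p(\tau) - p(z)}{\tau - z}\right]_1 \quad \text{(proof that } p(z) = y\text{)}

e\!\left(C - [y]_1,\ [1]_2\right) = e\!\left(\pi,\ [\tau - z]_2\right) \quad \text{(pairing check run by the verifier)}
```

The key point is that verifying the pairing check requires only the small commitment and proof, not the full 128 kB blob.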

00:16:32.000 --> 00:16:37.000

Now let's get into the nitty gritty a little bit. What do the specs say?

00:16:37.000 --> 00:16:39.000

Why do we need to do this? This is a screenshot from the exact EIP.

00:16:39.000 --> 00:16:47.000

I won't dive into this link now; we can actually go in there later, because there's a lot of good stuff.

00:16:47.000 --> 00:16:56.000

But these are the parameters that really make everything sing. The implementation details of what we have here are a lot more complex.

00:16:56.000 --> 00:17:01.000

But we have a blob transaction type. So a new type of transaction that allows us to post blobs to the network, propagate them, and have nodes understand what's going on.

00:17:01.000 --> 00:17:15.000

You don't have to worry too much about what this value is. We have bytes per field element and field elements per blob.

00:17:15.000 --> 00:17:21.000

So we have 32 bytes allowed in a field element and we have 4,096 field elements in a blob,

00:17:21.000 --> 00:17:30.000

which totals about 128 kB per blob. This is great because, again, we use a full blob as a user or as an L2.
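
Spelled out, that blob size is just:

```latex
\underbrace{32\ \text{bytes}}_{\text{per field element}} \times \underbrace{4096}_{\text{field elements per blob}} = 131{,}072\ \text{bytes} = 128\ \text{KiB per blob}
```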

00:17:30.000 --> 00:17:45.000

So say I want 120 kB of arbitrary data. I submit a transaction, I take part in a gas auction, similar to mainnet, and I get to post my data.

00:17:45.000 --> 00:17:54.000

It's shared with everyone. Amazing. We have, again, 128 kB blobs.

00:17:54.000 --> 00:18:07.000

And we have blob gas. The reason that we have blob gas is because we have a target limit and a maximum limit of number of blobs per block.

00:18:07.000 --> 00:18:14.000

The reason for this is network latency. We are targeting, initially at least (we can raise this later on),

00:18:14.000 --> 00:18:23.000

3 blobs per block, with a target of 3 and a maximum of 6, which means we can have about three quarters of a megabyte per block

00:18:23.000 --> 00:18:32.000

of data availability that lives on chain for a month. Which is tremendously larger than what we have now as far as calldata is concerned, and it's tremendously cheaper.

00:18:32.000 --> 00:18:50.000

The cost of gas is a lot lower. I could compare it to the transaction from Linea that I showed you, but I don't really need to do that.

00:18:50.000 --> 00:18:59.000

But basically we have 786,000 gas versus, blah, blah, blah, 4.9 million gas.

00:18:59.000 --> 00:19:09.000

So, much cheaper. Awesome. What else do you need to know about these specifications? As I mentioned, there's a gas auction.

00:19:09.000 --> 00:19:15.000

There is a fee market designed around these blobs that is very similar to EIP-1559,

00:19:15.000 --> 00:19:23.000

which we discussed earlier on. The reason being is that, again, we want a target of 3 blobs with a maximum of 6.

00:19:23.000 --> 00:19:34.000

So if we are going over that target limit of 3 blobs per block, the price to include a blob in the next block will go up to disincentivize the usage of blobs

00:19:34.000 --> 00:19:42.000

and have people wait for that price to come back down. It's similar to the base fee burn mechanism we have on mainnet, which, in times of high usage of the network,

00:19:42.000 --> 00:19:50.000

raises prices in order to drive usage down, and in times of lower congestion on the network, lowers prices to encourage usage.

00:19:50.000 --> 00:20:03.000

Same thing with blobs here. We want to limit the bandwidth and the throughput a little bit because we don't want, you know, maximum usage at all times to burden nodes with bandwidth constraints or bandwidth issues.

00:20:03.000 --> 00:20:11.000

And we don't want too much data, basically, flying around in the network, because it will cause consensus layer

00:20:11.000 --> 00:20:21.000

hiccups and issues. So this gas price auction is determined by two fields: gas per blob and the blob gas price update fraction.

00:20:21.000 --> 00:20:29.000

All you need to know is that the gas per blob is the kind of base fee, and then it fluctuates up and down based on that fraction, determined by the network's
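
Here is a sketch of how that blob base fee is computed, closely following the pseudocode and constants in the EIP-4844 specification (GAS_PER_BLOB = 2**17, a target of 3 blobs, and an update fraction of 3,338,477); treat it as illustrative rather than consensus code:

```python
# Blob base fee sketch, modeled on EIP-4844's pseudocode.
GAS_PER_BLOB = 2**17                      # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MIN_BASE_FEE_PER_BLOB_GAS = 1             # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477   # the "Q" discussed below

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Carry over how far past the target the chain has been running."""
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i, output, numerator_accum = 1, 0, factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```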

00:20:29.000 --> 00:20:33.000

Needs and things there.

00:20:33.000 --> 00:20:34.000

Hi.

00:20:34.000 --> 00:20:38.000

It is exponential, correct? With each additional blob.

00:20:38.000 --> 00:20:51.000

I believe so. In our testing, at least on devnets and testnets, this price fluctuation can be a little bit wild.

00:20:51.000 --> 00:21:00.000

But I think that's good, because we did a lot of networking analysis that showed that using 6 blobs per block is actually extremely bandwidth intensive.

00:21:00.000 --> 00:21:14.000

We contemplated even lowering this to 2 target blobs with 4 maximum. I think as we evolve this technology, we will raise this limit actually, but for now this is what it is.

00:21:14.000 --> 00:21:24.000

We found a lot of issues with bandwidth, which I'll get to, honestly, in the next couple of slides, to explain why and how this came to be.

00:21:24.000 --> 00:21:32.000

Any questions here initially before I dive into some of the more architectural mechanics as opposed to the specs?

00:21:32.000 --> 00:21:43.000

I think, just to put that in a formula: if a block's parent has an excess of 3 blobs, the relative increase in price will be e

00:21:43.000 --> 00:21:48.000

to the power of 3 divided by Q, where Q is the blob gas price update fraction.
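
In other words, per the EIP-4844 fee rule (where the excess is measured in blob gas, i.e. the number of excess blobs times GAS_PER_BLOB):

```latex
\text{blob\_base\_fee} \approx \text{MIN\_BASE\_FEE} \cdot e^{\,\text{excess\_blob\_gas}/Q},
\qquad Q = \text{BLOB\_BASE\_FEE\_UPDATE\_FRACTION} = 3{,}338{,}477
```

So a fully used block, 3 blobs over target (an excess of $3 \times 131{,}072 = 393{,}216$ blob gas), moves the fee by at most a factor of $e^{393216/3338477} \approx 1.125$, roughly 12.5% per block, mirroring the EIP-1559 base fee bound.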

00:21:48.000 --> 00:22:01.000

So, we can go into formulas in office hours. Maybe we won't do it here.

00:22:01.000 --> 00:22:03.000

Yeah.

00:22:03.000 --> 00:22:04.000

Yeah.

00:22:04.000 --> 00:22:14.000

Could be a good use of time. So, we have the blobs, we have the amount of money they cost, we have the amount of data they can hold.

00:22:14.000 --> 00:22:22.000

Now, where do we put them on the network? Initially, we bolted them on or excuse me.

00:22:22.000 --> 00:22:34.000

Initially they were decoupled from the blocks. Or, excuse me, I might have this backwards. Excuse me: initially they were coupled with the blocks. With this architecture,

00:22:34.000 --> 00:22:43.000

We have our existing Ethereum block. Within that, you might have some transactions that add a specific blob to that block.

00:22:43.000 --> 00:22:49.000

and you have a KZG commitment to the inclusion of those blobs within that block.

00:22:49.000 --> 00:23:00.000

Like I said, this is based on a random secret that essentially allows us to make commitment proofs on chain about what is occurring with the blobs and that they were indeed tied to this block.

00:23:00.000 --> 00:23:12.000

The reason for this is blob equivocation, which I will discuss in a little while, but the net is we want to make sure that folks aren't proposing duplicated blocks or they're not signing multiple blocks to try to get block rewards.

00:23:12.000 --> 00:23:19.000

for multiple blobs as things fly around. Cool. We have the KZG commitments.

00:23:19.000 --> 00:23:29.000

It says: I have only proposed this block with these blobs, it looks like this, and I'm gonna propagate it to the network alongside my commitment to show that I am not being untrustworthy.

00:23:29.000 --> 00:23:38.000

In reality, we have, again, the blockchain on top. More specifically, this is kind of the beacon chain.

00:23:38.000 --> 00:23:45.000

We have the beacon chain blocks and the execution payloads within them. We have regular transactions, like maybe an ETH transfer.

00:23:45.000 --> 00:23:55.000

And then we have the data transactions, that type 5 transaction that I mentioned, where we have a blob and a commitment, the blob commitment.

00:23:55.000 --> 00:24:06.000

This KZG commitment proves that the data that I have in here matches what I put in here. And I also send the commitment across the network as well, so everyone can verify that.

00:24:06.000 --> 00:24:10.000

So we have arbitrary data: 128 kB in this example, 256 kB in this one block of

00:24:10.000 --> 00:24:23.000

Data. We have commitments and then we have a chain of blocks. So this one only has one data transaction, 128 kB in this block.

00:24:23.000 --> 00:24:31.000

This one has 3. And again, nodes don't need to actually download all of the data themselves.

00:24:31.000 --> 00:24:42.000

They can read the KZG commitments. They can sample some of the data and use very complicated vector math to be sure that that stuff looks good.

00:24:42.000 --> 00:24:50.000

I am not a mathematician or a cryptographer, but I know that this vector math from what I have gathered is very complex.

00:24:50.000 --> 00:24:59.000

And it again, it's a proof of the fact that they have the entire data set within the data blob and the commitment matches.

00:24:59.000 --> 00:25:05.000

Nodes can still download this data if they want. They can also discard it after one month.

00:25:05.000 --> 00:25:22.000

I have a feeling that once we are deployed on mainnet, we will see archival services store every blob that ever was and ever will be, because, again, they're only around 128 kB. And as long as we don't need them available on the network (a property called liveness),

00:25:22.000 --> 00:25:27.000

It becomes very cheap to store them on disk off chain, but you have to trust centralized entities.

00:25:27.000 --> 00:25:33.000

So when they're on chain, it's trustless. When they're pruned, you have to trust somebody.

00:25:33.000 --> 00:25:44.000

But that one month period is crucial because we want one month to be able to attest to this blob data and use it to potentially do things like recreate a layer 2 state.

00:25:44.000 --> 00:25:51.000

Replay state transitions that happen on layer 2 to ensure correctness. Or we just want to read it for one reason or another.

00:25:51.000 --> 00:26:04.000

Maybe we're not using a layer 2, but we want to store some data that is short-lived, and we want to, you know, know that that data hasn't been changed or modified in any way.

00:26:04.000 --> 00:26:11.000

I will pause again. Before we get into equivocation and some other topics.

00:26:11.000 --> 00:26:24.000

We have one question here: in the event that blob space becomes very expensive, more expensive than calldata, could an L2 rollup sequencer still post data as calldata, at least in theory, if it were cheaper to do so?

00:26:24.000 --> 00:26:31.000

I think the answer is absolutely. I have a feeling we'll see a multiplexed solution for most of these layer 2s,

00:26:31.000 --> 00:26:39.000

where they look at data availability solutions across the ecosystem at any given point in time and choose the cheapest one.

00:26:39.000 --> 00:26:40.000

That could be something like EigenLayer DA. It could be something like mainnet blob space.

00:26:40.000 --> 00:27:08.000

It could be something like mainnet calldata. It could be something like posting calldata on another layer 2, which would be, in my opinion, super interesting, but opens a kind of can of worms around where the real source of truth is. But I have a feeling that as these solutions mature, we will see essentially a gas API that determines the cheapest way to store a given piece of data.

00:27:08.000 --> 00:27:18.000

And then ways for it to cascade back up and back down. So as long as the data is available, it doesn't really matter where it goes because we're pruning it after a month anyway.

00:27:18.000 --> 00:27:23.000

We just need to make sure that that data is available to both the layer 2 and Ethereum layer one.

00:27:23.000 --> 00:27:32.000

So if Ethereum layer 1 can verify information that's on EigenDA via people participating in EigenLayer proof of stake,

00:27:32.000 --> 00:27:39.000

Then that's great. And we can trust that that data has slashing rules against it and it uses the economic security of Ethereum.

00:27:39.000 --> 00:27:50.000

That's the real key, right? It doesn't matter where the data lives, as long as it's taking advantage of Ethereum's economic security and it's available for a portion of time that's long enough for the challenge window to be valuable.

00:27:50.000 --> 00:28:02.000

In the case of optimistic rollups, that challenge window is about 14 days. In the case of ZK rollups, the challenge mechanism is a little bit different because you're using zero-knowledge proofs.

00:28:02.000 --> 00:28:22.000

However, the blobs become very important in recreating state for zero-knowledge rollups. Which is to say, if, for example, all of the nodes in Linea were to go down tomorrow, I would still be able to look at the commitment data that is posted on L1 and recreate the entirety of the chain.

00:28:22.000 --> 00:28:34.000

Which is a phenomenal property of this kind of data availability and is super useful for making sure that chains don't explode and that we have long lived immutable chains even if they're centralized or on layer 2.

00:28:34.000 --> 00:28:38.000

So that's a great question.

00:28:38.000 --> 00:28:41.000

Okay, another question here from Dustin: do we foresee blob space falling under a similar category as post-EIP-4444 storage solutions for pruned data through centralized parties?

00:28:41.000 --> 00:28:54.000

I absolutely do. I think archival blob data will fall into something like that, where it's incentivized,

00:28:54.000 --> 00:29:02.000

Probably only for third parties unless we can come up with a mechanism in protocol to incentivize Super nodes.

00:29:02.000 --> 00:29:12.000

So there are some discussions around this where, basically, anyone who's willing to store the entire state would receive more protocol rewards than anyone storing a portion of state.

00:29:12.000 --> 00:29:20.000

So if I have a full node that just has some data and the world state information, I can still participate in proof of stake.

00:29:20.000 --> 00:29:28.000

But if I have a mega node which stores all the blobs and makes them available to everyone, plus all of the chain data in a post-4444 world,

00:29:28.000 --> 00:29:34.000

we're discussing mechanism design on how to make that more profitable, essentially, to encourage archival data storage.

00:29:34.000 --> 00:29:42.000

I think in the interim period between deployment of 4844 and deployment of 4444, we will see centralized parties,

00:29:42.000 --> 00:29:50.000

potentially folks like, you know, Infura or Alchemy, these kinds of RPC providers that we're already making trust assumptions about.

00:29:50.000 --> 00:30:01.000

I expect to see them provide some kind of archival data solutions. Because for the blobs, especially on ZK rollups, you will need to keep most if not all of the blobs around

00:30:01.000 --> 00:30:16.000

somewhere. But again, the liveness property is what's useful for the data on chain; that is kind of the point-in-time look that I need to make sure the state is valid as both chains chug along.

00:30:16.000 --> 00:30:22.000

The liveness period is really important for optimistic rollups, because the only thing that makes that challenge period viable is having the data available within that window

00:30:22.000 --> 00:30:38.000

of 14 days, from when a state transition occurs to when the fraud proof is no longer applicable.

00:30:38.000 --> 00:30:41.000

I'm gonna take 1 s, drink some water.

00:30:41.000 --> 00:30:48.000

And I'll just say, I think that actually quite a few, I mean, this is sort of speaking in

00:30:48.000 --> 00:31:11.000

my day job role, but I think quite a few RPC node providers and indexers will provide specialized services around blobs. I don't think it's like a monolith; I think there are just a lot of opportunities and data needs that different groups have, and so you'll start to see some generalized ones but also specialization where specific data may be

00:31:11.000 --> 00:31:19.000

extracted. Similar to the way we've seen certain bespoke APIs pop up in order to support certain functions, you know, like an NFT API,

00:31:19.000 --> 00:31:41.000

a token API, etc. So I think it's something that will be interesting, an area where you'll continue to see specialization. I expect to see new dashboards popping up on Etherscan and Dune Analytics as well.

00:31:41.000 --> 00:31:54.000

So it's gonna be, I think the sort of data environment is really going to see a period of incredible innovation and growth.

00:31:54.000 --> 00:32:04.000

As this has happened in other spaces. I'm not saying this like I'm shooting from the hip, yeah, I'm not saying this like I'm

00:32:04.000 --> 00:32:14.000

sort of trying to prognosticate the future based on no example; data availability is something that we've kind of seen play out

00:32:14.000 --> 00:32:25.000

Over the past 15 years in other spaces and this has followed a sort of similar pattern.

00:32:25.000 --> 00:32:28.000

Awesome, Tom.

00:32:28.000 --> 00:32:37.000

Alright, let's keep it going. We will continue to answer questions in chat. So, some more specifics of the blob design.

00:32:37.000 --> 00:32:47.000

This is a measure of latency, essentially, with coupling versus decoupling of blobs and blocks, and how it impacted network performance.

00:32:47.000 --> 00:32:55.000

If the blobs are coupled with blocks and propagated around the network, it actually causes additional latency

00:32:55.000 --> 00:33:06.000

in the number of messages that can be delivered over the consensus layer peer-to-peer network. So we,

00:33:06.000 --> 00:33:15.000

excuse me, so we are adding the blobs to the block when you have coupling. This is what we started with. Excuse me,

00:33:15.000 --> 00:33:24.000

my slides are in a strange order. We had initially coupled those blobs to the blocks with the KZG commitments, like I said.

00:33:24.000 --> 00:33:32.000

We had, okay, a target of 3 blobs per block. We propagate the blobs and the blocks together

00:33:32.000 --> 00:33:46.000

across the network. Again, that's like three quarters of a megabyte per block, and having the consensus layer duties of those nodes working in this kind of fashion was a little bit expensive on the networking stack.

00:33:46.000 --> 00:33:56.000

So we had multiple plans to kind of address those issues. We essentially decided we could either circulate the blobs and blocks together,

00:33:56.000 --> 00:34:03.000

or we could circulate the blobs and blocks separately. We're using the same KZG commitment scheme.

00:34:03.000 --> 00:34:14.000

We are using the same approach. The only difference is that instead of a signature on the block, which has essentially a Merkle-Patricia

00:34:14.000 --> 00:34:21.000

hash and signature for this block here, with all the KZG commitments and with all the blobs together,

00:34:21.000 --> 00:34:33.000

we had another plan, actually, to use the signature on the block and then individually sign the blobs with the KZG commitments and circulate them separately.

00:34:33.000 --> 00:34:37.000

You know, I'll get to the specifics later on, but why does this pose a problem?

00:34:37.000 --> 00:34:45.000

Why didn't we do this upfront, if we had really heavy circulation of blocks and blobs

00:34:45.000 --> 00:34:49.000

that need to be done together? Well, the reality is that we don't want equivocation.

00:34:49.000 --> 00:34:57.000

It's a slashable offense on the network. Equivocation basically means, as a proposer of one of those blocks, with or without blobs,

00:34:57.000 --> 00:35:05.000

if I propose 2 blocks for the same slot, I can be slashed. The reason being is that we want to have predictable state transitions.

00:35:05.000 --> 00:35:16.000

We also need everybody to be able to witness the block that I've published on the network and attest to the block's validity by processing it on their own nodes and saying, hey, that looks good.

00:35:16.000 --> 00:35:22.000

This is bread-and-butter proof of stake. We have attestations for blocks and we have production of blocks.

00:35:22.000 --> 00:35:29.000

And they all work together to say, okay, we agree that this proposer at this slot produced a valid block,

00:35:29.000 --> 00:35:36.000

and 64 people looked at it and said this looks good. That's the way that proof of stake essentially works on Ethereum.

00:35:36.000 --> 00:35:46.000

Slashing makes sure that those folks stay within those guardrails. And if I propose 2 separate blocks in the same slot, how are we gonna be able to handle that?

00:35:46.000 --> 00:35:54.000

We'd have attesters signing different blocks. If someone notices that I'm doing this, it's a slashable offense; we don't want equivocation.

00:35:54.000 --> 00:36:03.000

We don't want equivocation on blobs either. So we don't want an actor in the middle publishing, you know, 50 blobs with their block when in reality we can only push 3 to the network.

00:36:03.000 --> 00:36:19.000

We don't want this, because it'll cause huge amounts of latency. Again, with those 128 kB blobs,

00:36:19.000 --> 00:36:21.000

if we are having equivocation where a proposer can send 30 into the network, it's a huge DoS vector.

00:36:21.000 --> 00:36:36.000

It causes network latency. It causes headaches for block production and attestation. And it causes, again, economic kind of challenges as far as gas is concerned.

00:36:36.000 --> 00:36:54.000

So we invented something called inclusion proofs. I wanna return to my initial thought here, where we need decoupling of blocks and blobs because of the network latency that's required to ship them together to all of my peers in the peer-to-peer layer.

00:36:54.000 --> 00:37:03.000

It's too expensive and costly to move those packages as one unit. So we've decoupled them, because, again, we can send a lot more messages with a lot less latency,

00:37:03.000 --> 00:37:14.000

which is the red graph. We have, you know, an average of about maybe 1,000 ms and we have a lot more nodes, whereas if we couple the blobs and the blocks together, we have a lot more latency in the network.

00:37:14.000 --> 00:37:17.000

It's a lot more resource constrained, and it will cause issues with the proof of stake algorithm because of that latency.

00:37:17.000 --> 00:37:29.000

So we decouple the blobs and blocks; plan B, we've taken plan B. We want to make sure that equivocation isn't a thing.

00:37:29.000 --> 00:37:37.000

So we've created something called an inclusion proof. This makes blob equivocation slashable.

00:37:37.000 --> 00:37:45.000

So we have a block header, and an inclusion proof within that header that says, hey, the KZG commitments that I made

00:37:45.000 --> 00:37:52.000

for those blobs will be in the header now, and they correspond to what I have in these blobs.

00:37:52.000 --> 00:38:06.000

Sorry, blobs and blocks are so poorly named for the purpose of demonstration. But in the Ethereum block, if you're familiar with Ethereum block headers, they have a bunch of essential metadata and a bunch of essential proof data.

00:38:06.000 --> 00:38:16.000

So we're adding an inclusion proof, that is, a KZG commitment saying: hey, these are the blobs that I'm going to include with this block.

00:38:16.000 --> 00:38:32.000

I don't need to gossip them at the same time. I know that I've made a commitment to include maybe 3 blobs; those blobs are flying around the network separately with their own kind of metadata proofs that tie them to the block header at a specific block.

00:38:32.000 --> 00:38:47.000

The reason for this is because again, we want to separate those things, keep network latency low. But now I have essentially an inclusion proof on the block side and then I have commitments on the blob side that those blobs will be included within the block based on that inclusion proof.

00:38:47.000 --> 00:38:54.000

And if I lie about my inclusion proof, I can be slashed. And people can check these pretty easily.
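
For reference, in the decoupled design each blob is gossiped in its own "sidecar" object. Here is a simplified sketch of what that container carries, loosely modeled on the Deneb consensus specs (field names are paraphrased and the exact shape has shifted across spec versions, so treat this as an illustration, not the literal spec types):

```python
# Simplified sketch of a Deneb-style blob sidecar (illustrative, not spec code).
from dataclasses import dataclass
from typing import List

@dataclass
class BlobSidecar:
    index: int                                   # position of the blob within its block
    blob: bytes                                  # the 128 KiB data blob itself
    kzg_commitment: bytes                        # commitment the blob must match
    kzg_proof: bytes                             # proof used to check blob vs. commitment
    signed_block_header: bytes                   # header of the block this blob claims to belong to
    kzg_commitment_inclusion_proof: List[bytes]  # Merkle branch tying the commitment into that header

def looks_valid(sidecar: BlobSidecar) -> bool:
    """Nodes check the header signature, the Merkle inclusion proof, and the KZG proof.
    Two conflicting signed headers from the same proposer remain slashable, which is
    what makes blob equivocation punishable even though blobs travel separately."""
    # Placeholder: real clients call into their SSZ and KZG libraries here.
    return True
```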

00:38:54.000 --> 00:39:00.000

So we're taking the economic security of proof of stake and we're applying it to blob equivocation.

00:39:00.000 --> 00:39:07.000

So that if I'm a proposer of a block and I lie about the blobs I'm going to include, I can be slashed.

00:39:07.000 --> 00:39:19.000

Yeah, fake proofs are easy to filter. So a proposer publishing more than 6 blobs needs to create a new block header, and publishing more than one block header is slashable.

00:39:19.000 --> 00:39:31.000

So, again, these inclusion proofs are very easy to create, fakes are pretty easy to spot, and block headers can be verified very quickly and cheaply by other nodes in the network.

00:39:31.000 --> 00:39:42.000

And if I see some wrongdoing, I can go ahead and slash that person, get some rewards from the protocol and ensure that everyone kind of goes about their day.

00:39:42.000 --> 00:39:49.000

Let me pause here for more explanation. That wraps up the actual slide mechanics.

00:39:49.000 --> 00:39:58.000

I know there's a lot of information. So I think we should have a little bit of a freewheeling discussion for maybe 5, 10 minutes and then we can wrap up.

00:39:58.000 --> 00:40:03.000

So if there are more questions, please include them in the chat. This can also be something very simple that I've discussed.

00:40:03.000 --> 00:40:12.000

We can go back to any topics. We can discuss any of the mechanics in more detail.

00:40:12.000 --> 00:40:25.000

Yeah, let me know. If not, we can also end a little early. I actually was impressed with the speed you went through that.

00:40:25.000 --> 00:40:38.000

There's no way I did that perfectly on the first try.

00:40:38.000 --> 00:40:51.000

Now's the time to check your understanding live. This week is the most complex week of the course.

00:40:51.000 --> 00:40:52.000

Yeah, by far.

00:40:52.000 --> 00:41:02.000

So we're peaking in the middle. Okay.

00:41:02.000 --> 00:41:11.000

Hey, I have a quick one, maybe just on this last bit about inclusion proofs and how that all works.

00:41:11.000 --> 00:41:22.000

So in the event that maybe 6 separate rollups publish blobs, and they're each paying

00:41:22.000 --> 00:41:33.000

like whatever the top amount of gas for the blob, if there are additional blobs beyond that

00:41:33.000 --> 00:41:49.000

initial set of 6 that are also willing to pay the top amount, how does the block proposer decide which blobs to include? Is it like a first-in, highest-fee situation?

00:41:49.000 --> 00:42:06.000

Yeah, so I mean, each consensus client implementation is probably slightly different in this regard. You're supposed to do the same EIP-1559 auction, essentially, which is to say, as a proposer I'm incentivized to collect as much of that blob gas as I can.

00:42:06.000 --> 00:42:21.000

If you reach the point in the slot, though, where you're at the very last millisecond before you can propose a block, you probably won't have time to create new inclusion proofs and basically rewrite your block header.

00:42:21.000 --> 00:42:24.000

So as the blobs come in and live within the mempool, they have a lot of this data

00:42:24.000 --> 00:42:39.000

associated with them already. The issue is the block header, especially when you consider MEV, which may be out of the scope of this discussion purely because my scope of understanding might be low, because the block header is really where all the goodness comes from.

00:42:39.000 --> 00:42:49.000

MEV searchers that create essentially the new block header will have to find ways to tie those blobs to the block header as well.

00:42:49.000 --> 00:43:07.000

So, in my understanding, you are incentivized to take as much of the blob gas as you can, but again, there's only about a 4-second propagation window in the slot to produce that block and share it.

00:43:07.000 --> 00:43:21.000

This gets more complicated when you have things like intentional block delay to extract more. But in my mind, since you're continuously computing that block header, you need to make sure that you have the amount of time you need to do

00:43:21.000 --> 00:43:32.000

the inclusion proofs to tie these to the block, produce the block, and emit it within the slot.

00:43:32.000 --> 00:43:45.000

You're also constantly being fed new blobs from the mempool. So at least on the execution layer side, as these transactions come in, we're consistently evaluating them to put the highest value first.

00:43:45.000 --> 00:43:55.000

So in theory, if you receive a very juicy blob at like, the eleventh hour it's up to the implementation of the execution client.

00:43:55.000 --> 00:44:14.000

how you will produce this. In Besu, I believe we do what we do in our normal transaction pool, which is to say we're consistently filtering them, but there is a cutoff point as far as block production is concerned, where we will just include what we have, because we don't want to risk missing our production slot.

00:44:14.000 --> 00:44:29.000

I really don't know how this works with MEV, unfortunately. My hunch is that the block header we receive as a payload from the searchers will have to include these proofs as well,

00:44:29.000 --> 00:44:38.000

because in theory there will be blob MEV, which is like annoying and kind of lame, in my opinion.

00:44:38.000 --> 00:44:45.000

But there could potentially be MEV opportunities in delaying blob inclusion to, I don't know, front-run somebody else's blobs.

00:44:45.000 --> 00:44:56.000

I frankly don't know how that will work. But since the MEV searchers are the ones generating the block headers that have to be verified by the consensus layer client,

00:44:56.000 --> 00:45:01.000

there is definitely complexity in there. Flashbots, for sure, I know that they're on our testnets right now for this, and that it works.

00:45:01.000 --> 00:45:13.000

So presumably they have a mechanism that both is compatible with MEV and not getting slashed.

00:45:13.000 --> 00:45:26.000

My guess is that, since they're not relying on the execution layer transaction pool, the searchers will have to determine which blobs from either the public mempool or the private mempool to include in the block,

00:45:26.000 --> 00:45:35.000

And then deal with it from there. I don't know if that answered your question, but that's probably as detailed as I can get.

00:45:35.000 --> 00:45:38.000

Awesome. Yeah, that was great. Thanks, man.

00:45:38.000 --> 00:45:42.000

Yeah, no worries. Okay, we have another question here in the chat. Is there a way to persist a blob for a period longer than a month?

00:45:42.000 --> 00:45:59.000

Say a year, beyond, of course, rolling them over. So there are settings in the consensus layer clients where you can do that; there's a minimum value, but you can specify a longer value to retain blobs.

00:45:59.000 --> 00:46:06.000

You can retain every blob that ever was and ever will be in your consensus layer node. If you have the space for it.

00:46:06.000 --> 00:46:14.000

So if you don't wanna rely on a centralized third party, you can witness all the blob data that's within the network

00:46:14.000 --> 00:46:21.000

as long as your node is running, and you can keep it around yourself. It's just gonna take disk space, right?

00:46:21.000 --> 00:46:36.000

If we have 30 gigs, 60 gigs a month's worth of blobs, you can extrapolate that out to a year: you know, a terabyte, actually a lot less than that, but somewhere under a terabyte of data per year.

00:46:36.000 --> 00:46:50.000

So you're absolutely more than welcome to do that on your own node if you want to. And again, there are probably going to be decentralized ways that this archival storage is designed.

00:46:50.000 --> 00:46:56.000

Since it's not in protocol, there will, you know, be creative things coming up.

00:46:56.000 --> 00:47:03.000

My guess is that there will be some way to prove on chain that the blobs that you're holding for longer than a month are indeed applicable and valid and they were on the network at 1 point.

00:47:03.000 --> 00:47:13.000

That seems relatively simple to do with something like, you know, an inclusion proof or a Merkle proof there.

00:47:13.000 --> 00:47:17.000

Yeah, it will come, is my point. And you can either do it yourself on your node, as long as you're running in the network;

00:47:17.000 --> 00:47:35.000

there are archive settings for that in the clients. Or you can trust a third party. And my third point is, I think as we go along there will be ways to write proofs that show inclusion of blobs on the network at one point,

00:47:35.000 --> 00:47:49.000

and then you'll have the blob data.

00:47:49.000 --> 00:47:58.000

Any other questions? We had Dustin share a presentation in the chat; maybe, Tom, we can find that and share it within the Discord and the course readings.

00:47:58.000 --> 00:48:04.000

Yeah, we'll share it in the Discord. We'll also try and share some information on,

00:48:04.000 --> 00:48:13.000

well, I consider this a side quest, it's not part of the class, but on MEV-related

00:48:13.000 --> 00:48:21.000

implications, and order-flow-related implications. So you can take the time,

00:48:21.000 --> 00:48:51.000

if you want to dive into some of that, and we'll share that as well.

00:48:56.000 --> 00:49:21.000

Okay, if there are no more questions, I think we can kind of run through what else we're gonna do this week. This is a packed week, so tomorrow we're gonna have a guest speaker, Michael, who is a researcher at Consensys.

00:49:21.000 --> 00:49:35.000

He has authored EIPs. He's gonna speak to his process with the EIPs, but he's also gonna be available for any questions that you have on EIP-4844.

00:49:35.000 --> 00:49:42.000

He's another great resource to talk to.

00:49:42.000 --> 00:49:43.000

It will be at 7 a.m. Pacific time tomorrow. This will be recorded.

00:49:43.000 --> 00:49:54.000

We'll share out the recording like the other one, but he's kind of both a week 2 guest speaker as well as a week 3 guest speaker.

00:49:54.000 --> 00:50:05.000

Then we'll have office hours on Friday, our same time.

00:50:05.000 --> 00:50:12.000

And. That will be.

00:50:12.000 --> 00:50:18.000

That will be at 9 a.m. Pacific time, and we'll go into just answering more questions, since this is pretty dense material

00:50:18.000 --> 00:50:30.000

That we went through.

00:50:30.000 --> 00:50:32.000

Cool. I think that is it for today. We have a little bit of time left over about 6 min.

00:50:32.000 --> 00:50:44.000

If there are any other questions people have, please feel free to jump in; otherwise, we'll pause for today.

00:50:44.000 --> 00:50:56.000

We'll post the recording. We didn't post the slide deck in advance, but we'll be posting that as well along with the recording, and we highly encourage you to go back and review this.

00:50:56.000 --> 00:51:10.000

I would also ask if anyone is.

00:51:10.000 --> 00:51:24.000

if anyone's doing any writing on the upcoming upgrade and wants to share that with the class, you are highly encouraged to, and we could share that.

00:51:24.000 --> 00:51:30.000

I know there are a couple folks out in the audience who authored some really awesome articles about the upcoming.

00:51:30.000 --> 00:51:40.000

Dencun upgrade, and I would encourage you if you're creating anything. Because part of, you know, this course

00:51:40.000 --> 00:51:52.000

is a project-based part: you can take a sort of developer or a researcher track, and the researcher track is to write a bit of material and publish it out there that helps

00:51:52.000 --> 00:52:05.000

improve understanding. So, those of you who might be doing that writing, if you want to share it, it's a good example to everyone else of what that could look like in terms of your final project.

00:52:05.000 --> 00:52:09.000

Alright.

00:52:09.000 --> 00:52:10.000

Yeah, well.

00:52:10.000 --> 00:52:16.000

Yeah, so, I mean, we wrote 4 pieces so far, and the final piece

00:52:16.000 --> 00:52:22.000

should come out soon, which is on 4844. Do you want us to post it right here? We can post the 4 published links.

00:52:22.000 --> 00:52:30.000

Yeah, you can publish it here and then we'll share it out as examples. If it's okay with you.

00:52:30.000 --> 00:52:44.000

Yeah, of course. Yeah.

00:52:44.000 --> 00:52:50.000

I got, perfect. I got the other ones for TJ.

00:52:50.000 --> 00:53:08.000

Okay, thanks.

00:53:08.000 --> 00:53:28.000

Perfect. Thanks so much. Alright, well, thank you everyone for attending today. We'll see you tomorrow, if you're able to join, for Michael, who will be our guest speaker, and then see you again on Friday.

00:53:28.000 --> 00:53:29.000

Thanks, everyone.

00:53:29.000 --> 00:53:33.000

Cool. I'll stop the recording.


Office Hours

Office Hours Transcript

The following transcript was auto-generated and may contain syntactical errors.


Supplemental Resources

Proceeds from collectibles editions of this post will be distributed as follows:

100% to Education DAO Optimism Treasury (0xbB2aC298f247e1FB5Fc3c4Fa40A92462a7FAAB85)
