Full report chapter 2: Ethereum Cancun Upgrade Brings Potential Investment Opportunities
Cancun Upgrade
Content summary: The Cancun upgrade is currently mainly focused on EIP 4844 (Deneb) and the KZG ceremony, with a possibility of EOF appearing in this upgrade.
Recent weekly progress of Ethereum development
- 2/18/2023:
KZG Ceremony:
Currently has 47,000+ contributors, 4000+ waiting to join, the first regular contribution period has 23 days left
Coinbase added an ad encouraging participation to its products
Video conference for EIP-4844 implementers, mentioning that blobs will be decoupled from blocks
- 2/25/2023:
KZG Ceremony:
Currently has 54,000+ contributors, 4000+ waiting to join, the first regular contribution period has 16 days left
Video conference for EIP-4844 implementers, stating that client teams will decouple blobs from blocks, and a development network is expected to launch within a few weeks after decoupling
- 3/4/2023:
KZG Ceremony:
Currently has 61k+ contributors, 20k+ waiting, the first regular contribution period has 9 days left
Progress on the JavaScript client implementation
Video conference on progress in SSZ (SimpleSerialize): the EIP's specification for union and regular transaction representations was updated
Upgrade Content
EIP4844 Proto-Danksharding (Deneb)
- Background
According to the latest conference call, Ethereum developers agreed to rename the EIP-4844 upgrade Deneb, after a first-magnitude star in the constellation Cygnus. Going forward, references to EIP-4844 in the related GitHub repositories will be updated to Deneb;
March 8 Ethereum conference call contents:
Validator blob signature process: the blinded flow currently applies only to builder workflows, but extending it to the validator path would reduce the amount of data transmitted. Another major problem is stickiness: validators can currently load-balance across different beacon nodes, and developers are discussing whether to cut down this load-balancing design space for the sake of a simpler process;
Validator blob publishing process: the next issue is how blobs are published. Publishing blobs separately would affect the HTTP status codes 202 and 204; the rough decision so far is to merge the publishing;
Missing blob requests: for decoupled blobs, a tricky design question is when to request missing blobs. When a node receives a block, should it request missing blobs immediately, or wait a few seconds until closer to the attestation deadline? A balance must be found between the two approaches, but this question does not block near-term development progress;
Cryptography library: the cryptography library is making significant progress. All bindings now support the new KZG interface; the remaining task is to polish them;
SSZ format of blob transactions: the complexity of the SSZ definition of blob transactions keeps growing. Unless everyone agrees on the SSZ format, developers do not want to switch to a new transaction type;
4844 is advancing independently, and SSZ discussions will remain decoupled from 4844.
- Intro:
EIP-4844, aka proto-danksharding, sits within Ethereum's rollup-centric roadmap, of which Danksharding is the full sharding design. EIP-4844 is a proposal to implement most of the logic and "scaffolding" (e.g. transaction formats, verification rules) that make up a full Danksharding spec.
-The main feature introduced by proto-danksharding is a new transaction type, called a blob-carrying transaction.
A blob-carrying transaction is like a regular transaction, except it also carries an extra piece of data called a blob. Blobs are fairly large (~125 kB). Each transaction can carry up to two blobs (~256 KB); each block has a target of 8 blobs (~1 MB) and can carry a maximum of 16 blobs (~2 MB).
Each blob is referenced by the hash of its KZG commitment, used for data validation, playing a role similar to a Merkle root.
After a blob transaction is synchronized on chain by nodes, the blob portion expires and is deleted after a period of time (currently about 30 days). A simplified sketch of the transaction structure follows.
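To make the structure concrete, here is a minimal illustrative sketch in Python of what a blob-carrying transaction adds on top of a regular transaction. Field names loosely follow EIP-4844 and the recipient address is hypothetical; this is not a wire-exact encoding:

```python
# Illustrative sketch of a blob-carrying transaction (simplified; field
# names loosely follow EIP-4844, not a wire-exact encoding).
from dataclasses import dataclass, field
from typing import List

FIELD_ELEMENTS_PER_BLOB = 4096            # each field element is 32 bytes
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * 32  # 131,072 bytes, the ~125 kB cited above

@dataclass
class BlobTransaction:
    # Regular transaction fields (abridged)
    to: str
    value: int
    gas: int
    max_fee_per_gas: int
    # Blob-specific fields: the execution layer only sees versioned hashes
    # of the KZG commitments; the blob data itself travels in a sidecar
    # held by the consensus layer and expires after ~30 days.
    max_fee_per_blob_gas: int = 0
    blob_versioned_hashes: List[bytes] = field(default_factory=list)

tx = BlobTransaction(to="0xRollupInbox", value=0, gas=21000,
                     max_fee_per_gas=30, max_fee_per_blob_gas=1,
                     blob_versioned_hashes=[b"\x01" + b"\x00" * 31])
print(f"blobs carried: {len(tx.blob_versioned_hashes)}, "
      f"max payload: {len(tx.blob_versioned_hashes) * BLOB_SIZE} bytes")
```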
-Blob's role: raise TPS while lowering cost.
Both EIP-4488 and proto-danksharding lead to a long-run maximum usage of ~1 MB per slot (12 s). This works out to about 2.5–5 TB per year, a far higher growth rate than the roughly 1 TB per year that Ethereum requires today.
In the case of proto-danksharding, the consensus layer can implement separate logic to auto-delete the blob data after some time (eg. 30 days).
Both strategies limit the extra disk load of a consensus client to at most a few hundred gigabytes, as the arithmetic below confirms. In the long run, adopting some history-expiry mechanism is essentially mandatory: full sharding would add about 40 TB of historical blob data per year, so users could realistically store only a small portion of it for some time. Hence it is worth setting expectations about this sooner.
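A quick back-of-the-envelope check of these figures, assuming a 12-second slot and a sustained 1 MB per slot:

```python
# Back-of-the-envelope check of the growth figures above.
SLOT_SECONDS = 12
MB_PER_SLOT = 1                 # long-run max under EIP-4488 / proto-danksharding
slots_per_year = 365 * 24 * 3600 // SLOT_SECONDS        # 2,628,000 slots
tb_per_year = MB_PER_SLOT * slots_per_year / 1_000_000
print(f"{tb_per_year:.1f} TB/year")                     # ~2.6 TB/year sustained
# With 30-day expiry, the standing blob load stays bounded:
slots_retained = 30 * 24 * 3600 // SLOT_SECONDS
print(f"{MB_PER_SLOT * slots_retained / 1_000:.0f} GB retained")  # ~216 GB
```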
• EIP-4844: Proto-Danksharding
-High scalability: EIP-4844 provides an initial scaling step for L2, and the full version of Danksharding can expand the blob data in EIP-4844 from 1–2 MB to 16–32 MB, achieving higher scalability while preserving decentralization and security.
-Low cost: Bernstein analysts estimate that proto-danksharding can reduce Layer 2 network costs to 1/10 to 1/100 of current levels
• Data availability solutions: if Danksharding keeps scaling, nodes may become overloaded (16 MB+ per block) and data availability may be insufficient (30-day validity). Two designs address this.
-Data Availability Sampling
Proto-danksharding instead creates a separate transaction type that can hold cheaper data in large fixed-size blobs, with a limit on how many blobs can be included per block. These blobs are not accessible from the EVM (only commitments to the blobs are), and the blobs are stored by the consensus layer (beacon chain) instead of the execution layer.
The design combines erasure coding with a KZG commitment stack; the sampling logic is sketched below.
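A small sketch of why sampling gives strong guarantees. With data erasure-coded to 2x, a block whose data has been withheld beyond recoverability fails each random sample with probability at least 1/2, so confidence compounds quickly. This is illustrative arithmetic only:

```python
# Why random sampling works: with 2x erasure-coded data, a block is
# reconstructible iff >= 50% of chunks are available. If a malicious
# producer withholds enough chunks to block reconstruction, each random
# sample misses with probability >= 1/2, so k samples catch the attack
# with probability at least 1 - 2^-k.
def das_confidence(k: int) -> float:
    return 1 - 0.5 ** k

for k in (10, 20, 30):
    print(f"{k} samples -> {das_confidence(k):.10f} confidence")
# ~30 samples already give overwhelming confidence without downloading the blob.
```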
-proposer/builder separation (PBS).
A specialized class of actors called block builders bid on the right to choose the contents of the slot, and the proposer need only select the valid header with the highest bid.
Only the block builder needs to process the entire block (and even there, it is possible to use third-party decentralized oracle protocols to implement a distributed block builder); all other validators and users can verify blocks very efficiently through data availability sampling.
- Anti-censorship list (crList): solves the problem that builders (packagers), thanks to their power over block contents, can intentionally ignore certain transactions and arbitrarily reorder or insert their own transactions to extract MEV.
-Before the builder packs the block's transactions, the proposer first publishes an anti-censorship list (crList) containing all transactions in the mempool.
-The builder can only select and order transactions from the crList. This means the builder can neither insert its own private transactions to extract MEV nor intentionally exclude a transaction (unless the block's gas limit is reached); see the sketch after this list.
-After packing the transactions, the builder broadcasts the hash of the final version of the transaction list to the proposer, who chooses one of the lists, generates the block header, and broadcasts it.
-When nodes synchronize, they obtain the block header from the proposer and the block body from the builder, ensuring the body is the final selected version.
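A minimal sketch of the crList rule described above, with hypothetical field names; real designs differ in detail:

```python
# Minimal sketch of the crList rule: a builder's block satisfies the list
# only if every omitted crList transaction could not have fit, i.e. the
# block's remaining gas is too small for it. (Hypothetical helper; actual
# crList designs differ in detail.)
def satisfies_crlist(block_txs, cr_list, gas_limit):
    included = {tx["hash"] for tx in block_txs}
    gas_used = sum(tx["gas"] for tx in block_txs)
    for tx in cr_list:
        if tx["hash"] not in included and gas_used + tx["gas"] <= gas_limit:
            return False   # builder censored a tx that would have fit
    return True

cr_list = [{"hash": "a", "gas": 50_000}, {"hash": "b", "gas": 30_000}]
block = [{"hash": "a", "gas": 50_000}]
print(satisfies_crlist(block, cr_list, gas_limit=60_000))  # True: "b" cannot fit
print(satisfies_crlist(block, cr_list, gas_limit=100_000)) # False: "b" was censored
```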
- Two-slot PBS: solving the centralization problem of builders capturing MEV
-Using a bidding system to determine block production:
The builder creates a block header for its transaction list after receiving the crList and places a bid.
The proposer selects the winning block header and builder, and receives the bid fee unconditionally (regardless of whether a valid block is produced).
The validator committee confirms the winning block header.
The builder discloses the block body of the winning block.
The validator committee confirms the winning block body and votes to validate it (if it passes, the block is added to the chain; if the builder deliberately withholds the body, the block is treated as non-existent).
-Significance:
Firstly, the builder has more power over packing transactions, but the crList described above limits its ability to insert transactions at will. Furthermore, if the builder wants to profit by reordering transactions, it must pay a high bid upfront to win the block header. As a result, its MEV profit shrinks, affecting both the means and the real value of its MEV extraction.
However, in the early stages only a small number of participants may become builders (given node-performance requirements), while most will only be proposers, which may further centralize the network. Moreover, with builders already few in number, experienced actors with MEV capability will be more likely to win bids, further weakening the effectiveness of MEV mitigation.
This has certain implications for MEV solutions that use MEV auctions.
- Benefits of EIP-4844
-Reduce transaction fees on L2 by an order of magnitude or more. For example, on Optimistic Rollups, fees are expected to drop below $0.01, cutting transaction costs to less than 1/100 of current levels.
-Combined with upgrades such as PUSH0 from the Shanghai upgrade, large contracts with low fees become possible in the future.
-It is a necessary prerequisite for Danksharding, enabling straightforward data sharding in the future. EIP-4844 is forward-compatible with future consensus-layer changes, making upgrades easy for L2 developers.
-Introduce a multi-dimensional fee market for L1: distinguish the usage and fees of different resource types, such as EVM execution, block data, witness data, and state size. All of these resources have different capacity limits, so pricing each resource separately would allocate them efficiently. However, Ethereum L1 currently uses a single metric, gas, to price the usage of all these resources, which is highly inefficient.
-Proto-danksharding introduces a two-dimensional EIP-1559 fee market with two resources, gas and blobs, each with an independently floating gas price and an independent limit, as sketched below.
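A minimal sketch of the idea, assuming two independent EIP-1559-style controllers with the simplified linear update rule (the EIP-4844 spec actually prices blobs via an exponential "excess blob gas" formula, so treat this as illustrative only):

```python
# Sketch of two independent EIP-1559-style basefee controllers, one per
# resource. This uses EIP-1559's linear update rule for both; mainnet
# EIP-4844 prices blobs with an exponential "excess blob gas" formula.
def update_basefee(basefee: int, used: int, target: int,
                   max_change_denominator: int = 8) -> int:
    delta = basefee * (used - target) // target // max_change_denominator
    return max(1, basefee + delta)

gas_basefee = update_basefee(basefee=100, used=18_000_000, target=15_000_000)
blob_basefee = update_basefee(basefee=80, used=12, target=8)  # blobs priced independently
print(gas_basefee, blob_basefee)  # 102 85: each resource floats on its own
```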
-Possible issues
-Since all Ethereum L1 validators and clients need to download the full blob contents, running such nodes becomes more expensive, which may raise the barrier to running them.
-In combination with other proposals such as EIP-4444, nodes/clients may be required to store blob data only for a limited period (1–3 months).
Reference Content
- Ethereum researcher @protolambda shared his predictions for the crypto world in 2023 on Twitter, including: two months after withdrawals are activated, the deposit queue will become longer than the withdrawal queue; the EOF series of EIPs will be deployed, but competition for adoption between L1 and L2 and tooling development will take more than six months; at least one zk-rollup will undergo a deep redesign; and more. https://twitter.com/protolambda/status/1608870209460502528?s=20 (December 31, 2022)
- Ethereum's new sharding scheme Danksharding and the 10,000-word research report on EIP-4844: has a new public-chain narrative arrived? A plain-language interpretation of the revolutionary solution to the "blockchain trilemma". https://research.web3caff.com/zh/archives/6259 (February 14, 2023)
- Interpretation of the four EIPs to be activated in Ethereum's Shanghai upgrade. https://www.chaincatcher.com/article/2082141 (November 5, 2022)
- The role and benefits of Ethereum's Shanghai upgrade. https://www.chaincatcher.com/article/2082266 (November 8, 2022)
- In-depth explanation of polynomial commitments: reshaping the entire blockchain? https://foresightnews.pro/article/detail/17988
- Detailed explanation of how KZG is applied to zk-rollups and Ethereum's DA scheme. https://www.defidaonews.com/article/6784542
- Fox Capital research analysis: what can we predict about the Cancun upgrade?
- How did Vitalik interpret EIP-4844, which has a critical impact on Layer 2? https://www.theblockbeats.info/news/29882?search=1
- What is Vitalik's explainer of "Danksharding"? https://www.theblockbeats.info/news/29262?search=1
EOF Upgrade
-Introduction:
○EOF is an upgrade to the EVM that introduces a new contract standard and some new opcodes. Traditional EVM bytecode is an unstructured sequence of instructions. EOF introduces the concept of a container, which implements structured bytecode: the container consists of a header and several sections that structure the bytecode. The upgraded EVM will be able to perform version control and run multiple sets of contract rules at the same time.
○EOF is a new set of contract rules and upgrades the EVM's interpreter, allowing the EVM to run two sets of contract rules in parallel: one for EOF contracts and one for legacy contracts.
The EOF upgrade will be implemented with 5 EIPs: EIP-3540, EIP-3670, EIP-4200, EIP-4750 and EIP-5450.
-Meaning:
○ EVM versioning. This makes it easier to introduce or remove features and prevents the EVM from becoming ever more complex and inelegant. Removing features from the EVM is currently very difficult because a large ecosystem/application layer relies on particular EVM behavior, so removal can cause incompatibilities at the application layer. Thus, once a feature is added to the EVM, we must assume it will probably be there forever.
○ Adding new control-flow operations and dropping dynamic jumps and runtime JUMPDEST analysis altogether is more cost-effective (and makes code translation easier, among other benefits).
○ Shifting checks the EVM performs at runtime (e.g. stack underflow, overflow) to deployment time. This reduces EVM overhead and makes contract code more secure (code with such errors is never deployed on Ethereum).
○ Separation of code and data. There will be an executable but non-readable code part, and a readable but non-executable data part.
-Proposal Description:
○ EIP-3540:
▪ EVM bytecode deployed on chain today has no predefined, homogeneous structure, and code is validated only just before it runs in the client, which is both an overhead cost and a disincentive for developers to introduce new features or deprecate old ones.
The EIP introduces an extensible, version-controlled container for the EVM and declares the format of an EOF contract (sketched in simplified form after this item), as a basis for validating code at the time an EOF contract is deployed, i.e. creation-time validation, meaning contracts that do not conform to the EOF format can be prevented from being deployed. This change enables EOF versioning, which will help disable existing EVM instructions or introduce large features (e.g. account abstraction) in the future
▪ Significance: the distinction between data and code is very useful for on-chain code validators, saving gas for validators
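For illustration, a simplified Python parser for an EOF container following the early EIP-3540 draft layout (magic, version, (kind, size) section headers, a 0x00 terminator, then section bodies); the live spec has since evolved:

```python
# Simplified parser for the EOF container described in EIP-3540 (early-draft
# layout; kind 1 = code, kind 2 = data).
def parse_eof(code: bytes):
    assert code[:2] == b"\xEF\x00", "missing EOF magic"
    version = code[2]
    i, headers = 3, []
    while code[i] != 0x00:                    # section header list
        kind = code[i]
        size = int.from_bytes(code[i + 1:i + 3], "big")
        headers.append((kind, size))
        i += 3
    i += 1                                    # skip the 0x00 terminator
    sections = {}
    for kind, size in headers:                # section bodies, in header order
        sections[kind] = code[i:i + size]
        i += size
    return version, sections

container = bytes.fromhex("ef000101000402000200") + b"\x60\x01\x60\x01" + b"\xab\xcd"
print(parse_eof(container))  # (1, {1: b'`\x01`\x01', 2: b'\xab\xcd'})
```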
○EIP-3670: builds on EIP-3540; the goal is to ensure that EOF contract code is well-formed and valid, and that malformed contracts are prevented from deployment, without affecting legacy bytecode
○ EIP-4200:
▪ The first EOF-specific opcodes are introduced: RJUMP, RJUMPI, and RJUMPV, which encode their destinations as signed immediate values. Compilers can use these new jump opcodes to cut the gas cost of deploying and executing jumps, because they eliminate the runtime jumpdest analysis required by the existing JUMP and JUMPI opcodes. This EIP complements dynamic jumps rather than replacing them.
Unlike traditional JUMP operations, these opcodes do not take an absolute destination; they take a relative offset (moving from dynamic jumps to static jumps), since static jumps are sufficient in many cases. A deploy-time validation sketch follows.
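A toy sketch of the deploy-time check this enables: walk the code once to record instruction boundaries, then verify that every RJUMP immediate lands on one. Opcode coverage and values are deliberately minimal and illustrative:

```python
# Sketch of deploy-time validation of relative jumps (the EIP-4200 idea):
# every RJUMP target must land on an instruction boundary, so no runtime
# jumpdest analysis is needed. Opcode set simplified for illustration.
RJUMP, PUSH1, STOP = 0xE0, 0x60, 0x00

def instr_len(op: int) -> int:
    return 3 if op == RJUMP else 2 if op == PUSH1 else 1

def validate_rjumps(code: bytes) -> bool:
    starts, i = set(), 0
    while i < len(code):              # pass 1: record instruction boundaries
        starts.add(i)
        i += instr_len(code[i])
    i = 0
    while i < len(code):              # pass 2: check each RJUMP target
        if code[i] == RJUMP:
            offset = int.from_bytes(code[i + 1:i + 3], "big", signed=True)
            target = i + 3 + offset   # relative to the next instruction
            if target not in starts:
                return False          # reject at deploy time, not runtime
        i += instr_len(code[i])
    return True

print(validate_rjumps(bytes([RJUMP, 0x00, 0x01, STOP, STOP])))  # True: target is offset 4
```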
○ EIP-4750: takes EIP-4200 one step further: by introducing two new opcodes, CALLF and RETF, it provides an alternative for the cases that RJUMP, RJUMPI and RJUMPV cannot cover, thereby eliminating the need for JUMPDEST in EOF contracts and prohibiting dynamic jumps
○ EIP-5450:
▪ Background: Ethereum contracts are currently deployed without checks; only at runtime is a series of checks performed, such as whether the stack overflows (capped at 1024) or whether there is enough gas.
What: another validity check is added for EOF contracts, this time around the stack. This EIP prevents the deployment of EOF contracts that could cause stack underflows or overflows, letting clients reduce the number of validity checks performed during execution of an EOF contract.
Significance: a major improvement is to make the checks that occur at execution time as few as possible, moving more of them to contract deployment time, as the sketch below illustrates.
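A toy sketch of the kind of deploy-time stack analysis EIP-5450 describes, using a simplified (pops, pushes) table and straight-line code only (the real EIP also validates across control flow):

```python
# Sketch of EIP-5450-style deploy-time stack analysis: walk the code
# tracking stack height; reject if it could underflow or exceed 1024.
# Opcode effects reduced to a (pops, pushes) table for illustration.
EFFECTS = {0x60: (0, 1),   # PUSH1
           0x01: (2, 1),   # ADD
           0x50: (1, 0),   # POP
           0x00: (0, 0)}   # STOP
IMMEDIATES = {0x60: 1}     # PUSH1 carries one immediate byte

def validate_stack(code: bytes, limit: int = 1024) -> bool:
    height, i = 0, 0
    while i < len(code):
        pops, pushes = EFFECTS[code[i]]
        height -= pops
        if height < 0:
            return False   # would underflow at runtime: reject at deploy time
        height += pushes
        if height > limit:
            return False   # would overflow: reject at deploy time
        i += 1 + IMMEDIATES.get(code[i], 0)
    return True

print(validate_stack(bytes([0x60, 0x05, 0x60, 0x03, 0x01, 0x50, 0x00])))  # True
print(validate_stack(bytes([0x01, 0x00])))  # False: ADD on an empty stack
```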
KZG Ceremony
-The KZG commitment is named after the initials of Kate, Zaverucha and Goldberg, the authors of the original paper.
-Content:
○ KZG is a polynomial commitment scheme: the prover computes a commitment to a polynomial and can later open it at any point, proving that the polynomial's value at a particular position equals a specified value.
○ It is called a commitment because once the commitment value (a point on an elliptic curve) has been sent to the verifier, the prover can no longer change the polynomial it was computed from.
○ The prover can only provide a valid proof for that one polynomial; when trying to cheat, they either cannot produce a proof or the proof is rejected by the verifier. A toy sketch of the underlying algebra follows.
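A toy sketch of the algebra underneath, over a small prime field and without the elliptic-curve layer that makes real KZG commitments succinct and binding. It shows only the key identity: p(z) = y if and only if (x − z) divides p(x) − y:

```python
# Toy sketch of a KZG opening's algebra over a small prime field. Real KZG
# checks the same identity "in the exponent" with one pairing, using the
# trusted-setup powers of a secret s; here everything is in the clear.
P = 101  # toy field modulus

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def poly_divide_by_linear(coeffs, z):
    """Divide p(x) - p(z) by (x - z) via synthetic division."""
    q, carry = [0] * (len(coeffs) - 1), 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * z) % P
        q[i - 1] = carry
    return q

p = [3, 1, 4, 1]                  # p(x) = 3 + x + 4x^2 + x^3
z = 5
y = poly_eval(p, z)               # claimed opening value p(z)
q = poly_divide_by_linear(p, z)   # the proof polynomial
s = 17                            # verifier-side check at a point s
assert (poly_eval(q, s) * (s - z)) % P == (poly_eval(p, s) - y) % P
print("opening verified:", y)
```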
-Function: to help implement data availability sampling (DAS) in combination with erasure coding.
○Erasure coding is a fault-tolerant encoding technique: the data is cut into coded fragments such that any Ethereum node can reconstruct the original data from just over 50% of the fragments.
○ The KZG commitment solves the data-integrity problem of erasure coding: since nodes only sample the erasure-coded fragments, they cannot tell whether a fragment really comes from the blob's original data, so the party doing the encoding must also produce a KZG polynomial commitment proving that each erasure-coded fragment really is part of the original data
-Together, the two reduce the node burden when blobs are expanded to carry an additional 16 MB to 32 MB of data
-The KZG polynomial scheme requires less bandwidth and less computation for sampling; a toy recovery sketch follows
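A toy Reed-Solomon-style sketch of the recovery property: treat k data chunks as evaluations of a degree-(k−1) polynomial, publish 2k evaluations, and recover everything from any k of them:

```python
# Toy erasure-coding sketch: interpret k chunks as evaluations of a
# degree-(k-1) polynomial, publish 2k evaluations, then recover the data
# from ANY k survivors via Lagrange interpolation over a prime field.
P = 257  # toy field modulus

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

data = [10, 20, 30, 40]                                    # k = 4 original chunks
pts = [(i, d) for i, d in enumerate(data)]
extended = [(x, lagrange_eval(pts, x)) for x in range(8)]  # 2k chunks published
surviving = extended[3:7]                                  # any 4 of the 8 suffice
recovered = [lagrange_eval(surviving, x) for x in range(4)]
print(recovered)  # [10, 20, 30, 40]
```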
-Application scope:
○ Data availability (the ETH Surge upgrade, Ethereum danksharding, L2 cost reduction, the modular data-availability project Avail)
○ Data-structure optimization (migrating the MPT tree to a Verkle tree, the ETH Verge upgrade, stateless clients, lighter verification nodes for Ethereum)
○ Zero-knowledge proof systems (zkSync, ZKSwap, Scroll, PSE, which apply polynomial commitment schemes that greatly improve chain scalability)
-KZG ceremony — a novel multi-party secure computation:
○ As Todd noted on Twitter, a downside of a KZG commitment is that it must initially have a "seed", and whoever knows that seed could in theory control the outcome;
○ To guarantee the initial security, the Ethereum Foundation started the KZG ceremony: like links in a chain, one person injects a secret, the next person computes over the result and attaches a new secret, and so on, accumulating more and more secrets;
○ Eventually, a seed is formed that no single party controls. A very useful feature enabled by polynomial commitment schemes over data blobs is data availability sampling (DAS).
○ Using DAS, the verifier can check the correctness and availability of a data blob without downloading the whole blob. A toy sketch of the chained contribution follows.
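A toy sketch of the chained contribution, with plain modular exponentiation standing in for the real elliptic-curve group (the actual ceremony uses BLS12-381 and per-contributor proofs of correct computation):

```python
# Toy sketch of the KZG ceremony's chained updates. The setup publishes
# powers of a secret s: [g^(s^0), g^(s^1), ...]. Each contributor raises
# the i-th element to t^i for their own random t (turning s into s*t) and
# then discards t, so the final s is unknown unless EVERY contributor
# colluded. Modular exponentiation stands in for the elliptic-curve group.
import secrets

P = 2**127 - 1     # toy prime; the real ceremony uses BLS12-381
G = 3
ORDER = P - 1

def contribute(srs):
    t = secrets.randbelow(ORDER - 1) + 1   # contributor's secret, then discarded
    return [pow(elem, pow(t, i, ORDER), P) for i, elem in enumerate(srs)]

s0 = 12345                                 # initial (insecure) seed
srs = [pow(G, pow(s0, i, ORDER), P) for i in range(5)]
for _ in range(3):                         # three contributors chain their updates
    srs = contribute(srs)
print(srs[:2])  # powers of a combined secret that nobody individually knows
```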
Investment Opportunities: L2 Track, Data Track and Rollup as a Service
L2 track
-Summary: the scaling and low fees will benefit every L2 ecosystem; the zk track is favorable long term, and small and mid-sized rollups will have more opportunities to overtake on the curve
-Project:
The current rollup schemes are ZK Rollup and Optimistic Rollup. The Optimistic Rollup frontrunners are Arbitrum and Optimism (TVL of $1–2B); the scaling and low fees brought by EIP-4844 may give more opportunities to lower-ranked small and mid-sized optimistic rollups such as Metis and Boba;
○ Good for the future ZK ecosystem: according to a tweet by Jordi Baylina (Polygon zkEVM developer), zkRollup scalability is determined by data availability, and data availability accounts for the larger share of total costs, so zkRollups urgently need the data-availability problem solved. The future Danksharding will implement data availability sampling (DAS), expanding the extra blob data to 16–32 MB and greatly increasing data throughput, which benefits the zk ecosystem. And once data availability is solved, the next bottleneck will be keeping the chain synchronized, where an ecosystem of multiple rollups running in parallel becomes very relevant;
○ Projects like Aztec, ZKSwap, ZKSpace, StarkNet and Polygon Hermez may see richer ecosystems and more users as the cost of posting data falls
Data Availability Track:
-Context:
○ Definition:
▪ "Data availability" in blockchain refers to a specific problem faced by many blockchain scaling schemes: when nodes produce new blocks, is all the data contained in those blocks actually broadcast to the network? The difficulty is that if a block producer withholds part of a block's data, no one can discover whether malicious transactions are hidden in it. To fully understand how data availability works on a blockchain, it helps to understand block composition and the function of blockchain nodes.
○ Data availability issues:
▪ The "impossible triangle": at any given moment, a decentralized network can provide only two of the three properties of decentralization, security, and scalability, which all trade off against each other. Before a transaction can be finalized, the network must agree on its legitimacy, and in a large system reaching agreement takes time
▪ Problem: how do nodes determine that all data contained in a newly created block has actually been broadcast to the network? How do peer nodes in the network determine that all the data associated with a newly proposed block is actually available?
-Example: a data availability attack occurs when a rogue node broadcasts a block header but withholds the part of the block containing invalid transactions. Honest full nodes that download and store the entire chain know some data is unavailable, but they lack a formal mechanism to prove this to light nodes, which have limited resources and no access to the full chain data. As a result, both sidechain and sharded-blockchain designs are vulnerable to data availability attacks: in this attack, a rogue node on the sidechain (or shard) submits the block's hash to the trusted chain without transmitting the block data to the other nodes.
○ Implications for the rollup track:
▪ If the data-availability solution/layer a rollup uses cannot keep up with the volume of data the rollup's sequencer wants to post to it, then the sequencer (and the rollup) cannot process more transactions even if it wants to, leading to the kind of gas-fee spikes we see on Ethereum today. Hence the data availability problem must be addressed;
-A Layer 2 scaling solution requires a data availability layer. Layer 2, as the execution layer, leverages Layer 1 as the consensus layer: besides posting the resulting state of batched transactions to Layer 1, it must also guarantee the availability of the raw transaction data, so that the Layer 2 network's state can still be reconstructed when no prover is willing to generate proofs, avoiding the extreme case of user assets being locked in Layer 2. However, storing the raw data directly on Layer 1 conflicts with Layer 1's role as the consensus layer in a modular blockchain network. It is therefore more reasonable, and the longer-term trend, to store the data in a dedicated data availability layer and record only the Merkle roots computed over that data in the consensus layer;
Rollup execution is cheap, but data availability is expensive. Developers have long used data-compression algorithms to reduce the cost of these data calls; now proto-danksharding's blobs cut the cost dramatically, but because blob data is retained only briefly, accessing historical data becomes a problem, so data storage gains a new avenue of development.
-Related Projects:
○Polygon Avail: a project designed to address data availability for Ethereum scaling solutions. It leverages KZG polynomial commitments, erasure codes, and other techniques to let light clients efficiently and randomly sample small portions of block data to prove its availability, without downloading the entire block. Unlike Celestia, which uses fraud proofs to ensure the erasure coding is correct, Polygon Avail uses KZG commitments. Celestia is relatively simple to implement but requires slightly more communication bandwidth, because its erasure code and the data sampled by light nodes are larger; Avail is slightly harder to implement but has a smaller erasure code, smaller light-node sample sizes, and lower bandwidth requirements.
○ Swarm: Swarm provides a complete DStorage infrastructure that allows people from all over the world to become storage providers and get paid for it. Its creators designed Swarm to be highly scalable and resilient, and to serve applications that require strong security and censorship resistance. Swarm's development is funded primarily by the Ethereum Foundation.
○Storj: Storj is an Ethereum-based distributed cloud-storage protocol developed by the for-profit company Storj Labs. Users purchase storage on the Storj platform with its token $STORJ; much as with Airbnb and Uber, providers offer their unused storage space and receive $STORJ in return. Compared with centralized cloud storage, Storj's distributed cloud storage can offer users better security and privacy and lower prices by tapping idle storage resources.
○EthStorage: a Layer 2 (L2) solution providing programmable, dynamic storage on top of Ethereum's data availability.
▪EthStorage will significantly reduce the overhead of storing large amounts of data on Ethereum, cutting costs by a factor of 100 to 1000 and better supporting a fully decentralized network in the future.
▪EthStorage is deeply integrated with the EVM and fully compatible with Ethereum tooling such as Solidity, Remix, Hardhat and MetaMask.
▪Problem-oriented: the front end of Web3, including DNS, front-end pages and node service providers, is centralized.
-Solution: an end-to-end trustless, decentralized network. EthStorage focuses on Ethereum's dynamic-storage problem and can provide a programmable storage Layer 2 solution (L2) at a lower storage cost on top of Ethereum data availability (DA)
-Conclusion:
○ Favorable for storage protocols such as Swarm, Filecoin and Storj;
○ Good for L1 storage-extension networks such as EthStorage, a Layer 2 solution focused on Ethereum's dynamic-storage problem, providing programmable storage at a lower storage cost on top of Ethereum data availability. EthStorage will greatly reduce the overhead of storing large amounts of data on Ethereum, cutting costs by a factor of 100 to 1000.
Rollup as a service
-Introduction: Rollup-as-a-Service (RaaS) aims to revolutionize rollup deployment by providing easy-to-use tools and services that abstract away the complex technical details and make rollups available to everyone. After EIP-4844, every kind of rollup service will see its market expand, and RaaS is no exception.
-Customized RaaS could be the biggest winner
○ From a value-creation perspective, RaaS can provide features that are hard to implement today, such as custom-designed ZK circuits, or features that general-purpose scaling handles inefficiently, such as rollup-level privacy customization or even rollups of rollups.
○ The current value of RaaS lies in customization more than in cost and efficiency alone, so customized RaaS is likely to be a hot topic in the future.
-RaaS will see full competition between ZK-based multi-layer network designs and OP's ecosystem building.
○ The current advantage of OP-based RaaS is fast build-out, ahead of the ZK ecosystem's development, especially with the OP Stack's Bedrock upgrade plus EIP-4844 going live; but the modest improvement from cost and efficiency alone was not attractive enough before EIP-4844;
○ By contrast, ZK-based multi-layer designs can stack the advantages of low cost and customization, which looks more competitive in the long run.
○Gaming may have more demand to move to L3; DeFi will remain on L2; social applications may keep daily activity on L3 or off-chain, with core data and relationships kept on L2
-RaaS offerings are highly homogeneous today, but more than one leading RaaS may eventually share the market
○The rise of RaaS depends heavily on ecosystem building, and a mature RaaS must be able to meet every project's customized rollup needs, so multiple RaaS providers may occupy the market in the end.
-RaaS brings a paradigm of modular-blockchain infrastructure innovation
○RaaS lowers the threshold for developers to build a rollup from 0 to 1. The lower barrier to entry brings competition, which in turn forces innovation in rollups.
-The rollup track has not yet fully taken off, and the RaaS track is earlier still
○At present, the ORUs represented by Arbitrum and Optimism hold more than 80% of the track's market share, while the zk-series projects are still awaiting development.
Both zkSync and StarkNet are on mainnet, but they have not been live for long and their on-chain ecosystems have yet to take off. It is therefore too early to call the rollup race decided, and there is more potential for development.
High gas fee applications
-Deneb's launch reduces on-chain fees, which will benefit applications that previously paid high gas fees:
○ Full on-chain games:
▪ Dark Forest: a space MMORTS game in which thousands of players simultaneously explore a randomly generated universe, with all players' actions and state updated on chain without being revealed to others;
▪ OP Craft: developed by Lattice (a fully on-chain game development team) and demoed live to great fanfare at Devcon in Bogotá; a fully on-chain version of Minecraft built on the OP Stack;
▪Realms: Eternum: Loot Eco Incubation Project, a sandbox-like strategy simulation game similar to Civilization where players need to hold at least 1 Realms in order to play;
▪Realms: Adventurers: a Loot eco-incubation project that plans to become a full-chain RPG platform with tools to allow developers to quickly access it;
▪ Isaac: StarkNet's first full-chain game, an online multiplayer physics simulation set in the world of The Three-Body Problem novels;
▪ GoL2: Conway's Game of Life on StarkNet;
▪ Redline: a deep strategy and drama game built with UE5, incorporating robotics, racing, and engineering elements; the participating robots are NFTs purchasable on StarkNet NFT marketplaces such as Aspect and MintSquare;
▪ Eykar: a war sandbox game combining multiplayer online, role-playing and other elements, somewhat similar to Imperium
▪ NoGame: a space-themed MMORPG with NFT planets and ERC20 tokens as resources;
▪ Dope Wars metaverse: a DAO-managed, hip-hop-styled P2E metaverse project that is open source, community-driven and fully decentralized.
○ Application chains: e.g. dYdX, Axie Infinity, etc.
○ Complex derivatives and contracts:
○ Cartridge: a chain-game integration platform on StarkNet, similar to Steam in Web2, but able to provide a richer on-chain experience by integrating with crypto wallets. dID, community, a game-item market, and a financial market can also be implemented in Cartridge, extending the benefits of crypto-game composability beyond the game itself; currently in development.
Multi-dimensional fee market
-Differentiate the usage and fees of different resource types and price each resource efficiently: gas and blobs each get an independently adjustable gas price and limit. Blobs are still charged in gas, with the charged amount floating with traffic so as to maintain the target of an average of 8 blobs per block: https://github.com/ethereum/EIPs/pull/5707
-Multi-dimensional fee market references:
-Original text: https://notes.ethereum.org/@vbuterin/proto_danksharding_faq#What-does-the-proto-danksharding-multidimensional-fee-market-look-like-
-Chinese translation: https://www.8btc.com/article/6737384
-Proto-danksharding introduces a multidimensional EIP-1559 fee market with two resources, gas and blobs, each with its own floating gas price and its own limit. That is, there are two variables (the gas basefee and the blob basefee) and four constants (the gas target, gas limit, blob target, and blob limit).
-Blob fees are charged in gas; the number of blobs per block is variable, but over the long run the average equals the target, which is set to 8.
Example
-Assume a gas limit of 70 and a blob limit of 40. The mempool has enough transactions to fill the block, of two types (tx gas includes per-blob gas):
○Priority fee 5 per gas, 4 blobs, 4 total gas (intuitively: a tip of 5 per unit of gas, carrying 4 blobs, using 4 gas in total)
○Priority fee 3 per gas, 1 blob, 2 total gas (intuitively: a tip of 3 per unit of gas, carrying 1 blob, using 2 gas in total)
-If the miner simply wants to maximize tips and orders transactions purely by tip rate, it fills the block with 10 transactions of the first type (40 blobs, hitting the blob limit of 40, with 40 gas used) and earns 5 × 40 = 200. The block is then full, but the optimal strategy is actually to take 3 transactions of the first type and 28 of the second, yielding a block with 40 blobs and 68 gas and revenue of 5 × 12 + 3 × 56 = 228, as the code below reproduces.
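Reproducing the arithmetic in code; a brute-force search is enough for this two-type example:

```python
# The example above: naive tip-sorting vs. optimizing over the
# two-dimensional (gas, blob) constraint.
GAS_LIMIT, BLOB_LIMIT = 70, 40
TYPES = [  # (priority fee per gas, blobs, total gas)
    {"fee": 5, "blobs": 4, "gas": 4},
    {"fee": 3, "blobs": 1, "gas": 2},
]

def revenue(counts):
    return sum(n * t["fee"] * t["gas"] for n, t in zip(counts, TYPES))

def feasible(counts):
    return (sum(n * t["gas"] for n, t in zip(counts, TYPES)) <= GAS_LIMIT and
            sum(n * t["blobs"] for n, t in zip(counts, TYPES)) <= BLOB_LIMIT)

# Naive miner: fill with the highest-tip type until a limit binds.
naive = (10, 0)  # 10 type-1 txs hit the blob limit (40 blobs, 40 gas)
# Exhaustive search over the small two-type space:
best = max(((a, b) for a in range(11) for b in range(36) if feasible((a, b))),
           key=revenue)
print(revenue(naive), best, revenue(best))  # 200 (3, 28) 228
```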
-Analysis: does this mean miners need a multi-dimensional optimization algorithm right away? Not yet:
○ Only a few blocks need to consider this issue: EIP-1559 ensures most blocks do not reach either limit, so only the few blocks that hit both actually face the multidimensional optimization problem;
○ In the usual case, the mempool does not contain enough (sufficiently paying) transactions to reach both limits, and any miner can earn near-optimal revenue simply by including every transaction it sees;
○ In practice there are many simple ways to approach the optimum: according to Ansgar's EIP-4488 analysis, there are naive strategies (sorting by tip only), backlog strategies (when a transaction with oversized calldata is encountered, it is parked in a "backlog" group and later used to fill out the block based on calldata size), and an optimal strategy (using a knapsack solver to find the best combination of transactions); these three strategies handle most block-production cases;
○ Compared with MEV, multi-dimensional pricing is not the largest revenue source: MEV revenue represents a significant portion of total extractable revenue beyond priority fees, with dedicated MEV revenue averaging about 0.025 ETH per block against total priority fees of typically around 0.1 ETH per block;
○ Proposer-builder separation (PBS) reduces how often such situations arise: PBS turns block building into an auction in which professional participants bid for the privilege of creating blocks and regular validators simply accept the highest bid. It is intended to mitigate the MEV problem to some extent, but it also somewhat simplifies the block-construction problem.