OP+ZK, will Hybrid Rollup become the ultimate future of Ethereum expansion?

Original post by @kelvinfichter

Original compilation: Jaleel, BlockBeats

I've recently become pretty convinced that the future of Ethereum Rollup is actually a hybrid of the two main approaches, ZK and Optimistic. In this post, I'll try to lay out the basics of what I imagine this architecture to be, and why I believe this is the direction we should be heading. Note that I spend most of my time working on Optimism, aka Optimistic Rollup, but I'm not a ZK expert. If I've made any mistakes in talking about ZK, feel free to reach out to me and I'll correct it.

I don't intend to describe the operation principle of ZK and Optimistic Rollups in detail in this article. If I spend time explaining the essence of Rollups, then this article will be too long. So this article is based on the fact that you already have a certain understanding of these technologies. Of course, you don’t need to be an expert, but you should at least know what ZK and Optimistic Rollups are and their general operating mechanism. Anyway, please enjoy reading this article.

Let's start with Optimistic Rollup

The hybrid ZK/Optimistic system I have in mind starts out as an Optimistic Rollup built on Optimism's Bedrock architecture. Bedrock is designed for maximal compatibility with Ethereum ("EVM-equivalence") by running an execution client that is almost identical to Ethereum's. Bedrock mirrors Ethereum's upcoming consensus/execution client split, which keeps its divergence from the EVM to a minimum (there will always be some changes along the way, but we can handle those).


Like all good Rollups, Optimism pulls block/transaction data from Ethereum, orders that data in some deterministic way in the consensus client, then feeds it to the L2 execution client for execution. This architecture solves the first half of the "ideal Rollup" puzzle and gives us an EVM-equivalent L2.

Of course, the problem we still need to solve is telling Ethereum what happened inside Optimism in a verifiable way. Without this, smart contracts on Ethereum cannot make decisions based on the state of Optimism, which would mean users could deposit into Optimism but never withdraw their assets. Although one-way Rollups are useful in some cases, two-way Rollups are far more useful in most.

We can communicate the state of a Rollup to Ethereum by publishing some kind of commitment to that state, along with a proof that the commitment is correct. In other words, we are proving that the "Rollup program" was executed correctly. The only substantial difference between ZK and Optimistic Rollups is the form of this proof. In a ZK Rollup, you must provide an explicit zero-knowledge proof that the program executed correctly. In an Optimistic Rollup, you assert the commitment without providing explicit evidence; other users can then challenge your assertion and force you into a back-and-forth challenge "game" that determines who is ultimately correct.
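To make the contrast concrete, here is a minimal sketch of how L1 might accept a state commitment under each model. Everything here is illustrative: the function names, the `verify` callback, and the challenge-tracking flag are all hypothetical, not real Optimism or zkEVM APIs.

```python
import time

# 7-day challenge window, as on the current Optimism mainnet
CHALLENGE_WINDOW = 7 * 24 * 3600

def accept_zk(commitment, proof, verify):
    # ZK Rollup: the commitment is final the moment the
    # validity proof checks out.
    return verify(commitment, proof)

def accept_optimistic(commitment, posted_at, challenged):
    # Optimistic Rollup: the commitment is assumed valid; it only
    # becomes final if nobody successfully challenges it before the
    # window closes.
    window_passed = time.time() - posted_at >= CHALLENGE_WINDOW
    return window_passed and not challenged
```

The asymmetry is the whole point: the ZK path pays an up-front proving cost for instant finality, while the Optimistic path is cheap to post but must wait out the challenge window.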

I'm not going to go into detail about the Optimistic Rollup challenge game. It's worth noting that the state of the art here is to compile your program (in Optimism's case, the Geth EVM plus some surrounding pieces) down to a simple machine architecture like MIPS. We do this because we need to build an on-chain interpreter for the program, and it's much easier to build a MIPS interpreter than an EVM interpreter. The EVM is also a moving target (we have regular upgrade forks) and doesn't fully cover the program we want to prove (there's some non-EVM stuff in there too).
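To give a feel for why a MIPS interpreter is so much easier to build than an EVM interpreter, here is a toy single-step interpreter for a tiny MIPS-like subset. The instruction encoding is deliberately simplified (pre-decoded tuples rather than real 32-bit MIPS words), so treat this as a sketch of the idea, not real Cannon-style code.

```python
# Toy interpreter for a tiny MIPS-like subset. A real on-chain
# interpreter decodes actual 32-bit instruction words; here each
# instruction is a pre-decoded (op, a, b, dst) tuple for clarity.
def step(state):
    """Execute one instruction; state = (pc, regs, mem, program)."""
    pc, regs, mem, program = state
    op, a, b, dst = program[pc]
    if op == "ADDU":
        # regs[dst] = regs[a] + regs[b], wrapping at 32 bits
        regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF
    elif op == "LW":
        # load word from address regs[a] + b
        regs[dst] = mem.get(regs[a] + b, 0)
    elif op == "SW":
        # store regs[dst] to address regs[a] + b
        mem[regs[a] + b] = regs[dst]
    elif op == "BEQ":
        # branch to pc = dst if regs[a] == regs[b]
        if regs[a] == regs[b]:
            return (dst, regs, mem, program)
    return (pc + 1, regs, mem, program)
```

The whole machine is a handful of fixed-format instructions over registers and memory; contrast that with an EVM interpreter, which must track gas, a stack, call frames, precompiles, and a fork schedule that changes regularly.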

Once you've built an on-chain interpreter for your simple machine architecture and created some off-chain tooling, you have a fully functional Optimistic Rollup.

Turn to ZK Rollup

Overall, I firmly believe that Optimistic Rollups will dominate for the next few years. Some people think ZK Rollups will eventually overtake Optimistic Rollups, but I disagree. The relative simplicity and flexibility of today's Optimistic Rollups means they can be gradually transformed into ZK Rollups. If we can find a path that enables this transition, then instead of trying to build out a less flexible, more fragile ZK ecosystem from scratch, we can simply deploy into the Optimistic Rollup ecosystem that already exists.

My goal, therefore, is to create an architecture and migration path that enables an existing modern OP ecosystem (like Bedrock) to transition seamlessly to a ZK ecosystem. I believe this is not only possible, but a way to go beyond the current zkEVM approach.

Let's start with the Bedrock architecture described above. I've (briefly) explained that Bedrock uses a challenge game to verify the validity of an execution of the L2 program (a MIPS program running the EVM plus some extras). A major downside of this approach is that we need to give users a window of time in which to detect and successfully challenge an invalid proposed result. This adds considerable time to the withdrawal process (7 days on the current Optimism mainnet).

However, our L2 is ultimately just a program running on a simple machine like MIPS, and it is entirely possible to build a ZK circuit for such a simple machine. We can then use this circuit to unambiguously prove the correct execution of the L2 program. Without making any changes to the current Bedrock codebase, you could start publishing validity proofs for Optimism. It really is that simple.

Why is this method reliable?

A quick clarification: although I say "zkMIPS" throughout this section, I'm really using it as shorthand for any generalized, simplified zero-knowledge virtual machine (zkVM).

zkMIPS is easier to build than zkEVM

Building a zkMIPS (or any other zk-VM) has one major advantage over a zkEVM: the target machine architecture is simple and static. The EVM changes frequently: gas prices adjust, opcodes change, things get added or removed. MIPS-V, by contrast, hasn't changed since 1996. By focusing on zkMIPS you're working in a fixed problem space; you don't need to change, or even re-audit, your circuit every time the EVM is updated.

zkMIPS is more flexible than zkEVM

Another key insight is that zkMIPS is more flexible than a zkEVM. With zkMIPS you can modify the client code at will, adding optimizations or improving the user experience, without any corresponding circuit update. You could even build a core component that turns any blockchain into a ZK Rollup, not just Ethereum.

Your problem becomes proving time

Zero-knowledge proving time scales along two axes: the number of constraints and the size of the circuit. By focusing the circuit on a simple machine like MIPS (rather than a more complex machine like the EVM), we can significantly reduce the circuit's size and complexity. However, the number of constraints depends on the number of machine instructions executed. Each EVM opcode breaks down into many MIPS opcodes, so the number of constraints grows significantly, and with it your overall proving time.
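A back-of-the-envelope calculation makes the trade-off visible. Every number below is an invented placeholder purely for illustration; real expansion factors and per-step constraint counts depend heavily on the specific circuit design.

```python
# Illustrative constants only; none of these are real measurements.
MIPS_PER_EVM_OP = 50            # assumed average expansion: EVM op -> MIPS ops
CONSTRAINTS_PER_MIPS_STEP = 1_000   # assumed cost per interpreted MIPS step
CONSTRAINTS_PER_EVM_STEP = 20_000   # assumed cost per step in a direct zkEVM

def zkmips_constraints(evm_ops):
    # zkMIPS pays the interpretation overhead: each EVM opcode
    # becomes many MIPS steps, each with its own constraints.
    return evm_ops * MIPS_PER_EVM_OP * CONSTRAINTS_PER_MIPS_STEP

def zkevm_constraints(evm_ops):
    # A direct zkEVM circuit proves each EVM opcode natively,
    # at a higher per-step cost but with no expansion factor.
    return evm_ops * CONSTRAINTS_PER_EVM_STEP
```

Under these made-up numbers, a block executing 1,000 EVM opcodes costs the zkMIPS prover 2.5x the constraints of the zkEVM prover. That factor is exactly what the next paragraph argues can be clawed back through aggressive, one-time optimization of a fixed target.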

However, reducing proving time is a problem with deep roots in the Web2 world. Given that the MIPS machine architecture is unlikely to change anytime soon, we can heavily optimize the circuit and prover without worrying about future changes to the EVM. I'd feel far more confident hiring a strong hardware engineer to optimize a well-defined problem than hiring ten or even a hundred engineers to build and audit a moving zkEVM target. A company like Netflix probably has plenty of hardware engineers who've spent years optimizing transcoding chips, and who would happily take a pile of venture capital to work on an interesting ZK challenge like this.

The initial proving time for such a circuit may well exceed the 7-day Optimistic Rollup withdrawal window, but it will only decrease over time. By introducing ASICs and FPGAs we can speed up proving dramatically, and with a static target we can build ever more optimal provers.

Eventually the proving time for this circuit will drop below the current 7-day Optimism withdrawal window, and we can begin to consider removing the Optimistic challenge process. Running a prover for 7 days is probably still too expensive, so we may want to wait a bit longer, but the point stands. You can even run both proof systems simultaneously, so that we start using ZK proofs as soon as possible and fall back to Optimistic proofs if the prover fails for any reason. When ready, the Optimistic proofs can be removed in a way that is completely transparent to applications, and your Optimistic Rollup becomes a ZK Rollup.
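The run-both-systems transition can be sketched as a single finalization rule. As before, the function and parameter names are hypothetical, not a real contract interface:

```python
# Sketch of the hybrid transition: prefer the ZK fast path when a
# validity proof is available, otherwise fall back to the Optimistic
# challenge-window rules. All names are illustrative.
def finalize(commitment, zk_proof, verify_zk, window_elapsed, challenged):
    if zk_proof is not None and verify_zk(commitment, zk_proof):
        # Fast path: validity proof accepted, finalize immediately.
        return True
    # Fallback path: if the prover failed or never produced a proof,
    # the ordinary Optimistic rules still protect the commitment.
    return window_elapsed and not challenged
```

Because applications only observe whether a commitment is finalized, flipping off the fallback branch later (once the prover is fast and cheap enough) is invisible to them, which is exactly the "transparent removal" described above.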

You can focus on other important problems

Running a blockchain is a difficult problem that involves far more than writing a lot of backend code. At Optimism, much of our work focuses on improving the user and developer experience through useful client-side tooling. We also put a lot of time and energy into the "soft" problems: talking with projects, understanding their pain points, and designing incentives. The more time you spend on chain software, the less time you have for everything else. You can always try to hire more people, but organizations don't scale linearly, and each new hire adds to internal communication overhead.

Since the zero-knowledge circuit work can be applied directly to a chain that is already running, you can build out the core platform and develop the proving software in parallel. And since the client can be modified without touching the circuit, your client and proving teams can work independently. An Optimistic Rollup taking this path could be years ahead of its zero-knowledge competitors in terms of actual on-chain activity.

In conclusion

Quite frankly, I don't see any significant downside to the zkMIPS prover, unless it turns out that it can't be meaningfully optimized over time. The only real impact on applications I can see is that the gas costs of certain opcodes may need to be adjusted to reflect their increased proving time. If this prover truly can't be optimized to a reasonable level, then I'll admit I was wrong. But if it can, the zkMIPS/zkVM approach may well displace the current zkEVM approach entirely. That may sound like a radical claim, but it wasn't so long ago that single-round optimistic fault proofs were completely displaced by multi-round fault proofs.
