Advances in Zero Knowledge Proofs

One way to view technological advancement is through the lens of hardware: as new needs and use-cases emerge, chip manufacturers design special-purpose GPUs, FPGAs, and ASICs optimized for specific functions and software. All major industries in tech, from cloud computing to computer graphics to artificial intelligence and machine learning, have evolved to demand hardware that lets computations run faster and more efficiently. Often, the chips for an initial function (be it storing memory, rendering graphics, or running large-scale simulations) start out quite simple before a generalizable pattern is identified and special-purpose hardware is developed. Ideally, this hardware becomes cheaper and more accessible to consumers over time.

A good historical example of this phenomenon is the evolution of the digital camera. In the 1960s, semiconductors were integrated into film cameras to automate simple functions like metering shutter speed or adjusting the size of an aperture depending on the quality of light a person was trying to capture, but capturing an image in-memory was not yet possible. The first experiments with digital cameras in the 1970s grew out of the realization that you could take the concept of magnetic bubbles (a primitive way of storing single bits of data in-memory) and architect a charge-coupled device (CCD) to absorb and store light in the form of electrons on silicon. Initial outlines of digital camera technology don’t even invoke the concept of megapixels, and camera resolution (not to mention speed and storage) was quite poor due to the limitations of semiconductors at the time: the first cameras had a resolution of around 0.01 megapixels, and it took about 23 seconds for an image to pass from the buffer into memory. The tradeoff between megapixel count and camera memory persisted until the 1990s, when a newer sensor, the complementary metal-oxide-semiconductor (CMOS), became cheaper to manufacture and more mainstream (by comparison, modern iPhones use CMOS sensors and offer resolutions of around 12 megapixels).

Over the course of several decades, digital cameras evolved from hacked-together contraptions engineered by researchers with access to expensive semiconductors, to devices that cost tens of thousands of dollars, to being embedded in every mobile phone, available to anyone for a few hundred or a few thousand dollars.

Other fields follow a similar trajectory from general to application-specific hardware. A more recent example of hardware optimization within the crypto space specifically is cryptocurrency mining: when Bitcoin launched in 2009, anyone could mine by running the SHA-256 hashing algorithm on a standard multi-core CPU. Over time, as mining grew more competitive, block rewards dropped, and the case for a global, censorship-resistant currency grew more mainstream, an entire industry developed around more efficient mining hardware. First came the transition to GPU mining, which scaled mining parallelism from single digits to five digits and sped up the process considerably. And today, an ASIC rig for mining Bitcoin can compute around 90-100 terahashes per second, many orders of magnitude more than a CPU.
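
To make the computation those chips race through concrete, here is a minimal sketch of the proof-of-work loop: hash a block header with an incrementing nonce until the double-SHA-256 output falls below a difficulty target. The header bytes and target here are illustrative stand-ins, not Bitcoin’s actual serialization or difficulty encoding.

```python
import hashlib

# Toy proof-of-work loop. Header format and target are illustrative only;
# Bitcoin's real header layout and difficulty encoding differ.
def mine(header: bytes, target: int) -> int:
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce             # a valid "block" has been found
        nonce += 1                   # ASICs run this loop trillions of times per second

nonce = mine(b"example-block-header", target=1 << 240)   # easy demo difficulty
print("found nonce:", nonce)
```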

In other words, you might view mining as just the beginning: a proof-of-concept that decentralized currencies are not only possible, but desirable. Even if mining ASICs are at an advanced stage, we’re at the very beginning of what hardware for web3 will become. As blockchains have attracted millions of users, and the applications they host have grown more complex, two key demands have emerged: privacy and scalability.

A crucial trend to identify is that, while special-purpose hardware is being developed for many of these applications, there is also a movement to optimize algorithms for consumer-grade hardware in an effort to preserve decentralization and privacy. One area that exemplifies this trend particularly well is zero-knowledge proofs.

A brief overview of zero-knowledge proofs today

Zero-knowledge proofs provide a way to cryptographically prove knowledge of a particular set of information or data without actually revealing what that information is. Without getting too in the weeds, zero-knowledge proof constructions involve a “prover” and a “verifier”: the prover creates a proof from knowledge of a system’s inputs, while the verifier can confirm that the prover evaluated the computation honestly without learning the private inputs or re-running the computation. Zero-knowledge proofs have a variety of use-cases in blockchains today. The ones encountered most commonly are in the field of privacy (examples include Iron Fish, Tornado Cash, Worldcoin, and Zcash) or in scaling Ethereum by verifying state transitions off-chain (examples include Polygon’s suite of zero-knowledge rollups, Starknet, and zkSync). Some, like Aleo and Aztec, propose to solve both privacy and scalability.
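
To give the prover/verifier dance some shape, here is a minimal sketch of a classic interactive proof of knowledge (a Schnorr-style protocol): the prover convinces the verifier it knows a secret exponent x behind a public value y, without ever revealing x. The group parameters are toy-sized assumptions for illustration; no production system uses them.

```python
import secrets

# Toy interactive Schnorr-style proof: prove knowledge of x with y = g^x mod p.
# Parameters are illustrative; real deployments use standardized ~256-bit groups.
p = (1 << 127) - 1   # a Mersenne prime, fine for a demo
g = 5

x = secrets.randbelow(p - 1)      # the prover's secret
y = pow(g, x, p)                  # the public statement: "I know log_g(y)"

# Round 1: the prover commits to a random nonce r.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# Round 2: the verifier sends a random challenge c.
c = secrets.randbelow(p - 1)

# Round 3: the prover responds; the randomness r masks x in the response s.
s = (r + c * x) % (p - 1)

# The verifier checks g^s == t * y^c (mod p) without ever learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

The verifier learns only that the equation checks out, which could not happen (except with negligible probability) unless the prover really knew x.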

It’s worth taking a look under the hood at the cryptographic advancements, just in the past decade, that have made all of these applications feasible, faster, and, perhaps most importantly, censorship-resistant and decentralized. Through a combination of advancements in algorithms and hardware, generating and verifying proofs has become cheaper and less computationally intensive. In many ways, these advancements mirror the democratization of technologies like the digital camera: you start out with an expensive and inefficient process before figuring out how to make things cheaper and faster. Perhaps most critically, advancements in zero-knowledge algorithms are beginning to provide alternatives to generating proofs on servers and in other centralized contexts.

Proof systems represent programs as arithmetic circuits: the program is compiled into a set of gates encoded as polynomial constraints, and those constraints grow more complex as the computation you want to prove grows larger. Ideally, you want the space of possible prover outputs to be very large, to decrease the likelihood that a cheating prover can brute-force its way to the same value the verifier is anticipating. (This is a concept known as collision resistance.) Increasing these numbers increases the probabilistic security of the proof, just as in proof-of-work mining. A large output space, however, can make proofs very expensive and computationally slow to generate. This is where advancements in proving algorithms and hardware come in.
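
As a rough illustration of what “compiling a program into gates” means, here is a toy flattening of the statement x^3 + x + 5 = 35 into single-operation gates, the kind of constraint system a SNARK would then encode as polynomials. This follows a standard textbook example, not any particular framework’s representation.

```python
# Toy "arithmetization": the statement x**3 + x + 5 == 35 broken into
# single-operation gates, the way a circuit compiler flattens a program.
# Each gate is one constraint over the witness vector.

def build_witness(x):
    sym1 = x * x          # gate 1: sym1 = x * x
    sym2 = sym1 * x       # gate 2: sym2 = sym1 * x
    sym3 = sym2 + x       # gate 3: sym3 = sym2 + x
    out = sym3 + 5        # gate 4: out  = sym3 + 5
    return [x, sym1, sym2, sym3, out]

def check_constraints(w):
    x, sym1, sym2, sym3, out = w
    return (sym1 == x * x and sym2 == sym1 * x
            and sym3 == sym2 + x and out == sym3 + 5 and out == 35)

print(check_constraints(build_witness(3)))   # True: 3 is a valid witness
print(check_constraints(build_witness(4)))   # False: constraints unsatisfied
```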

zkSNARKs, first introduced in 2011, are a key ingredient of these advancements. zkSNARKs essentially made it possible to efficiently scale the size of the computations that can be gated and proven, unlocking speed and more complex potential applications for zero-knowledge proofs.

The “SNARK” part of zkSNARK stands for “Succinct Non-Interactive Argument of Knowledge,” and the words most crucial here in the context of web3 are “succinct” and “non-interactive.” A zkSNARK proof is only a few hundred bytes, which makes it easy for a verifier to quickly check that a proof is correct (though the proof itself may take a long time to generate, for reasons explained below). The non-interactive component is also critical: a non-interactive proof saves verifiers from needing to challenge statements submitted by provers; in the blockchain context, interactivity would require clients to go back and forth with a verifier, which would be time-intensive and difficult to architect. It’s important to note that when zkSNARKs were first introduced, the idea of using them for privacy-preserving blockchains or to scale transactions was not mentioned; the original paper suggests use-cases like a third party efficiently running computations over a large dataset without needing to download or compile the data itself. While this example is conceptually similar to the privacy and scaling use-cases, it took a few years for people in the field to apply zkSNARKs to cryptocurrencies.
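
To see what “non-interactive” buys you, here is a minimal sketch, building on the toy Schnorr example above, of the Fiat-Shamir transform: the verifier’s random challenge is replaced by a hash of the prover’s own transcript, so the proof can be published once and checked by anyone with no back-and-forth. Again, a toy under the same illustrative parameters, not a production transcript format.

```python
import hashlib
import secrets

p = (1 << 127) - 1
g = 5

def hash_challenge(*values):
    # Fiat-Shamir: derive the "verifier's" challenge from the transcript itself.
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(x, y):
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)
    c = hash_challenge(g, y, t)          # no verifier needed for the challenge
    s = (r + c * x) % (p - 1)
    return (t, s)                        # a self-contained, publishable proof

def verify(y, proof):
    t, s = proof
    c = hash_challenge(g, y, t)          # anyone can recompute the same challenge
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(p - 1)
y = pow(g, x, p)
print(verify(y, prove(x, y)))            # True, with no interaction required
```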

Zero-knowledge proofs hit the blockchain

The first crypto protocol to implement zkSNARKs was Zcash, a private-payments cryptocurrency based on the 2014 Zerocash paper. Zcash is a proof-of-work mining network built on Bitcoin’s UTXO model, and it is a particularly good example to look at because its upgrades illustrate the way improvements in cryptography have led to more scalable forms of privacy. The original protocol implemented by Zcash, the Sprout protocol, used the SHA-256 compression function inside its proving circuit; while this was cryptographically secure, it was also time- and memory-intensive: generating a proof could take several minutes and required around 3GB of memory. Several years later, the Zcash core team developed a new hash, Bowe-Hopwood-Pedersen, to replace SHA-256 in the circuit, and transitioned Zcash from Sprout to the Sapling protocol in 2018. In addition to the newer hash, the team adopted the Groth16 proving system and rearchitected the way accounts are treated in the network. This brought proving times down to around 2.6 seconds and memory usage to around 40MB, making it possible to generate a proof from a mobile phone.
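
Why did swapping the hash matter so much? SHA-256’s bitwise operations are expensive to express as arithmetic-circuit constraints, while Pedersen-style constructions are built from the same algebra the circuit natively speaks. As a loose illustration of that algebraic shape, here is a toy Pedersen commitment in a multiplicative group mod a prime; Sapling’s actual Pedersen hashes operate over the Jubjub elliptic curve, so treat this strictly as a sketch.

```python
import secrets

# Toy Pedersen commitment. Zcash Sapling uses Pedersen hashes over the
# Jubjub elliptic curve; this sketch only shows the algebraic shape.
p = (1 << 127) - 1
g, h = 5, 7     # real schemes need independent generators with no known dlog relation

def commit(message: int, blinding: int) -> int:
    # C = g^m * h^r: binding under discrete log, hiding thanks to the blinding r.
    return (pow(g, message, p) * pow(h, blinding, p)) % p

m = 42
r = secrets.randbelow(p - 1)
C = commit(m, r)

# Opening the commitment later reveals (m, r); anyone can re-check it.
assert C == commit(m, r)
print(hex(C))
```

Because the commitment is just exponentiations and multiplications, it translates into far fewer circuit constraints than a bit-twiddling hash like SHA-256.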

Upgrades to Zcash illustrate two interesting concepts that persist across improvements in zero-knowledge proving systems. The first is that you can combine different curves, pairings, and proof systems to unlock efficiency. One might view the libraries of proving circuits, curves, constraint systems, and commitment schemes as ingredients that can be interchanged to create “zero-knowledge recipes” with varying speed, efficiency, and security assumptions. The second is that privacy is a motivating force in these improvements: if a proof is not generated on-device (on a computer or mobile phone, for example), it needs to be sent to a third party to be generated, which may leak the private information in question because your “private inputs” have to be sent in the clear. We can look at Zcash as an early indication that it’s possible to optimize for user-friendliness and decentralization very quickly with algorithmic improvements. Newer projects like the privacy-preserving cryptocurrency Iron Fish push this value of decentralization even further by making it possible for anyone to mine and run a node directly from their web browser.

PLONK enters the field (no pun intended)

In 2019, Ariel Gabizon, Zac Williamson, and Oana Ciobotaru published a paper proposing PLONK, a new proof system with several key advancements. The first big breakthrough was that PLONK requires only a single, universal trusted setup: the initial ceremony that generates the common reference string used by provers and verifiers for a given zero-knowledge proof system.

As Vitalik Buterin explains in his “Understanding PLONK” article, a single trusted setup is desirable because “instead of there being one separate trusted setup for every program you want to prove things about, there is one single trusted setup for the whole scheme after which you can use the scheme with any program.” While Zcash had to perform a trusted setup for each instantiation of its proving system (both Sprout and Sapling), a PLONK setup can be performed once and used in perpetuity by any number of projects. In 2019, Aztec Network performed a trusted setup ceremony with 176 participants; this scheme is used not only by Aztec, but also by other teams pursuing zero-knowledge-proof-based solutions, including Matter Labs/zkSync, Mina, and an upcoming update to Zcash.
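
To see why “universal” matters mechanically, here is a toy “powers of tau” setup: one structured reference string, generated once, that can then commit to any polynomial (and hence, conceptually, any circuit) up to a size bound. Real ceremonies are multi-party precisely so that no one ever learns the secret tau, and real schemes such as KZG add pairing-based opening proofs; this sketch, with toy parameters, omits all of that.

```python
import secrets

# Toy "powers of tau" universal setup: one reference string, reused for
# any polynomial up to a degree bound. Generated in the open here, so it
# is illustrative only; a real ceremony keeps tau secret.
p = (1 << 127) - 1
g = 5

def universal_setup(max_degree: int):
    tau = secrets.randbelow(p - 1)                 # the "toxic waste"
    srs = [pow(g, pow(tau, i, p - 1), p) for i in range(max_degree + 1)]
    del tau                                        # a real ceremony never exposes tau
    return srs

def commit(srs, coeffs):
    # Commitment to f(X) = sum(c_i * X^i) is prod(srs[i]^c_i) = g^f(tau).
    acc = 1
    for c, s in zip(coeffs, srs):
        acc = (acc * pow(s, c, p)) % p
    return acc

srs = universal_setup(max_degree=8)    # run the ceremony once...
c1 = commit(srs, [5, 0, 3])            # ...reuse it for f(X) = 3X^2 + 5
c2 = commit(srs, [1, 2, 0, 7])         # ...and for g(X) = 7X^3 + 2X + 1
print(hex(c1), hex(c2))
```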

PLONK is helpful for another reason: it offers relatively fast prover times. Tests performed by the paper’s authors found that a consumer-grade computer (in this case, a Surface Pro 6 with 16GB of RAM) could generate a proof in 23 seconds. A big caveat: these are just benchmarks, and PLONK proofs as implemented today may take much longer to generate. That’s because many of the teams implementing PLONK are applying it to zero-knowledge rollups, which need to aggregate thousands of off-chain transactions into a single proof. These transactions are usually processed by compute-heavy provers, which then send records of those transactions to a sequencer for publication on Ethereum’s mainnet.
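
As a generic illustration of the batching idea (not any specific rollup’s data format), here is a sketch that commits to thousands of transactions with a single Merkle root; one succinct proof can then attest to the validity of everything under that single 32-byte commitment.

```python
import hashlib

# Sketch of rollup-style batching: many transactions, one Merkle root.
# A succinct proof would attest to everything committed under the root.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(tx) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [f"tx-{i}".encode() for i in range(10_000)]
root = merkle_root(txs)
print(root.hex())    # 32 bytes standing in for 10,000 transactions
```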

When looking at rollups, interesting questions emerge around how and where you target decentralization. One approach Matter Labs is taking is zkPorter, a second account type for the rollup whose data availability is kept off-chain. When zkPorter is live, people will be able to choose between transacting on zkSync, which offers the full security of L1 Ethereum (and throughput of up to 2,000 transactions per second), or transacting on zkPorter, which can reach upwards of 20,000 transactions per second. Crucially, zkPorter is architected as a proof-of-stake network that will use “Guardians” who stake tokens to keep track of the off-chain state, which should save several orders of magnitude in transaction costs while still providing a strong guarantee of security. While Matter Labs isn’t yet targeting prover decentralization, network-level decentralization is another key way that rollups can prioritize neutrality (while also unlocking speed). Aztec, the privacy-preserving rollup, has spoken about a method to federate its prover network, allowing proofs to be generated from a mobile phone or computer. It’s important to note that all of these proposals are still early, and teams are still iterating on their approaches.

Other hardware-focused approaches to blockchain-based privacy include Worldcoin, which is using the zero-knowledge proving system Semaphore to create a decentralized, Sybil-resistant currency. To do this, Worldcoin recipients have their iris scanned by an orb that verifies that an individual has signed up for Worldcoin only once. Crucially, Worldcoin doesn’t store or leak users’ private information. To sign up, a person generates a Semaphore public key on their phone, presents the key in the form of a QR code to the orb, and has their iris scanned; the orb outputs a hash rather than raw biometric data. Worldcoin then verifies that the hash doesn’t match one that has already been generated, ensuring that a person goes through the sign-up process only once. By storing hashes instead of biometric data, Worldcoin is able to use zero-knowledge proofs to preserve user privacy.
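
Here is a highly simplified sketch of the uniqueness check described above: store only hashes, never raw biometrics, and reject repeat sign-ups. The real design layers Semaphore’s zero-knowledge set-membership proofs and nullifiers on top of this; the function names and flow here are illustrative assumptions, not Worldcoin’s actual code.

```python
import hashlib

# Simplified sign-up uniqueness check: keep a set of hashes, never the
# underlying biometric data. Names and flow are hypothetical.
registered_hashes = set()

def sign_up(iris_code: bytes, public_key: bytes) -> bool:
    digest = hashlib.sha256(iris_code).hexdigest()
    if digest in registered_hashes:        # this iris has been seen before
        return False
    registered_hashes.add(digest)          # store the hash, not the biometric
    # ...credit the account tied to public_key...
    return True

print(sign_up(b"iris-sample-A", b"pk1"))   # True: first sign-up succeeds
print(sign_up(b"iris-sample-A", b"pk2"))   # False: duplicate is rejected
```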

So what can and will be built?

It can be very easy to stand at the tail end of a technological revolution and declare the massive economic and social changes it brought about inevitable; someone holding an iPhone today, with all of its astounding capabilities (photography, storage, internet access, communication), is probably not thinking about all of the advancements required to make that technology possible. It’s just as difficult to stand at the unresolved beginning of a massive social and economic shift, with little clarity about how long it will take for changes to be fully realized.

We’re currently at a very early moment in what will be a long series of advancements in zero-knowledge proving schemes, but the improvements in speed, efficiency, user-friendliness, and decentralization over just the past decade have been astounding. In a very short period of time, we’ve gone from very few consumer-facing applications in the zero-knowledge space to an entire ecosystem of applications and blockchains for privacy and scalability. One of the most exciting things about new technologies like these is that it’s very difficult to predict what the other side looks like. What will happen when everyone has the assurance of completely private transactions they can prove from a mobile phone and settle on a trustless blockchain that plays host to a multitude of decentralized applications? Or a world where everyone has the right to redeem the same borderless, decentralized currency? While we live through the shift, it’s important to keep in mind the core values that have guided this space from the beginning: accessibility, trustlessness, and most importantly, decentralization.

Thanks to Sam Ragsdale, Eddy Lazzarin, Guy Wuollet, Ali Yahya, and Dan Boneh for their feedback and/or discussions that informed this piece.
