Web3: A Neutral Take
Written and published March 2022.
Note: I wrote almost all of this in a single shot while running a 103°F fever from COVID. I split it up into three parts for ease of reading, but most of the cursing is still in place. YMMV.
Web3 is the new polarizing thing. Everyone has opinions about it — it feels like everyone has to have opinions about it — and so lots of people are saying lots of things and most of it is noise.
I'm not a Web3 enthusiast. Full disclosure, I have about 10-15k invested in eth, most of that coming from an investment I made back in 2017. I tend to swing back and forth between bearish and bullish on the overall idea. I have some very smart friends who I respect deeply on both sides of Web3, so I guess it makes sense that I would end up being more or less neutral.
But man. People on every side of this thing are just spouting insane bullshit! Enough that I wanted to write down my own opinions, so that I can keep some sense of what's real and what isn't.
One caveat before diving in: I don't think any of the takes below are straw men, but I'm happy to be proven wrong and will update/issue corrections as needed.
This is Part 3. See also: Part 1 (let's talk about the hype men) and Part 2 (let's talk about the haters).
Part 3: What do I think?
Before diving into my thoughts on Web3, let me tell you a story about the history of the web.
The first versions of the internet, back in the 60s through the 80s, had little to no standardization. Computers physically connected to other computers over dedicated cables, mostly on small private networks maintained by universities, companies, or other organizations. In order for the computers to talk to each other, they had to know how to parse incoming data. This mostly wasn't a problem, however, because each organization could mandate top-down how its computers would communicate. I call this Web1. Web1 is characterized by being bespoke at every level: data storage, data transfer, and data processing were all customized, and therefore not interoperable.
A few smart people realized that standardizing internet infrastructure would massively increase the scope and scale of the internet. These folks started developing transfer protocols, and tools that could automatically parse data presented using those transfer protocols. The most popular of these protocols was HTTP, first proposed in 1989, which rapidly became the only game in town thanks to intense network effects. The mass adoption of HTTP ushered in the era of Web2. Web2 is characterized by having a common language — in this case, a shared information protocol — but having isolated data storage and bespoke data processing. Most of the web that we interact with today, including Facebook, Google, Amazon, Netflix, etc, is set up in this image.
Unfortunately, this setup isn't ideal for the end consumer. The problem with Web2 is that the data is intrinsically tied to the application. If Google were to shut down one of their services, like Answers, Buzz, Code Search, Talk, Reader, Orkut, Picasa, Map Maker, Spaces, G+, or, say, Youtube, all of the Youtube video data that Google had collected over the years would be gone too. Even if Google open sourced the data itself, it wouldn't matter without the tools to run applications that process the data. So Google is now a single point of failure. (Never mind that this is obviously great for Google.)
A few smart people realized that decentralized data had some pretty important advantages over the Web2 model. It was more resilient to removals or censorship; it opened the possibility of sustainably transferring large amounts of data; it allowed interoperability beyond the protocol level. A decade after HTTP, Napster (and later, BitTorrent) came into existence. In a world where artificial scarcity was being enforced by parasitic middlemen, these new technologies promised a global revolution in how we thought about computing.
That revolution didn't happen. A decentralized file system requires an initial host (a seeder) who can share data with many other recipients. Ideally, the recipients stay on the network and continue to share, providing more seeders until the entire network has all the necessary data. In practice, individual users have no incentive to continue participating in the network once they have their data. In other words, decentralized file transfers fail because they depend on charity. Every torrent eventually goes dark because there is no one left to seed. And you can't build infrastructure (or a company) on top of data that eventually disappears.
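The argument above can be sketched as a toy simulation (all numbers here are invented for illustration): downloaders briefly seed after they finish, but with no incentive to stay, each one leaves with some probability every round, and interest in the file fades over time. The seeder count eventually hits zero and the torrent goes dark.

```python
import random

random.seed(0)

def simulate_swarm(leave_prob=0.4, rounds=1000):
    """Toy swarm: downloaders become seeders, but with no incentive to
    stay, each seeder leaves with probability leave_prob per round.
    Interest in the file (new arrivals) also fades over time."""
    seeders = 1       # the initial host
    arrivals = 8.0    # new downloaders per round, decaying
    for t in range(rounds):
        if seeders == 0:
            return t  # the torrent has gone dark
        seeders += int(arrivals)   # finished downloaders seed for now
        arrivals *= 0.9            # fewer people want the file over time
        # every seeder independently decides whether to stick around
        seeders = sum(1 for _ in range(seeders) if random.random() > leave_prob)
    return None  # still alive after all rounds

dark_at = simulate_swarm()
print(f"torrent went dark at round {dark_at}")
```

Under these assumptions the swarm survives only while new downloaders keep arriving; the moment demand fades, the multiplicative decay of seeders finishes it off.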
I like this story, even though it's not quite correct. Most of the motivations of the actors involved were not as neatly packaged as I presented them. But it's a nice narrative, and it fits well with the underlying technological trends. And it sets up how I think about Web3.
So, Web3
I think most of the current talk about Web3 is bullshit, spread by people with a weak understanding of the central technology: people motivated at best by excitement for individual applications, and at worst by in-group validation and social signaling.
I set out to understand what building blocks Web3 provides, and why these are unique or valuable compared to other existing technologies. And after talking to a lot of people and reading a bunch of papers, I came away with three things.
Web3 offers:
- A mechanism for storing public state, coupled with robust permissions for managing that state.
- A financial incentive to make that state publicly available for everyone, always.
- A mechanism for splitting ownership of state into arbitrarily small pieces.
I think these three pieces have unique value only when bundled together. Any one of these pieces alone would be near useless. To understand why, it's helpful to think about how Web3 differs from torrents.
In Web3 world, all data is hosted on the same chain. The big tech titans of Web3 are using the same decentralized data store as every small mom-n-pop application out there. That means that as long as someone somewhere wants to use the chain, all other applications on the chain survive. In the language of torrents, every new torrent seeds all previous torrents; this means that as long as someone is seeding some torrent, all previously existing torrents will continue to seed.
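A minimal sketch of that difference, with invented names and a drastically simplified chain: every node replicates one global store, so a single surviving node preserves every application's data, big and small alike.

```python
# Toy model of a shared on-chain data store: every node replicates the
# whole chain, so any one surviving node preserves every app's data.
class Chain:
    def __init__(self):
        self.data = {}  # app name -> list of records

    def write(self, app, record):
        self.data.setdefault(app, []).append(record)

shared = Chain()
shared.write("big_tech_app", "video metadata")
shared.write("mom_n_pop_app", "inventory row")

# Three independent nodes, each holding a full replica of the chain.
nodes = [dict(shared.data) for _ in range(3)]

# Two nodes go offline; the last node still has *everyone's* data.
surviving = nodes[-1]
print(sorted(surviving))  # ['big_tech_app', 'mom_n_pop_app']
```

Contrast this with per-application hosting, where the mom-n-pop app's data dies with its last dedicated host.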
But why would anyone host data that isn't theirs? In Web3 world, you get paid to do so. Every crypto miner doubles as a validator and host of the underlying data. The more folks using Web3, the more money miners make; so miners themselves are incentivized to keep all Web3 data public all the time. This is like getting paid for seeding. Some miners will make this more explicit by also offering Web2 compatible APIs that other services can pay to use.
But Web3 needs to be a closed loop, so you can't pay miners in fiat currencies — otherwise someone would have to provide an external source of capital, and would have outsize control over the public data. Instead, miners get paid in tokens that constitute ownership shares. Those ownership shares determine who is allowed to edit which data. The more you have, the more you can make changes (I'm glossing over some technically complex ideas here, but this is the basic gist of Proof of Stake and, to an extent, Proof of Work). If the data on a chain is valuable, the right to modify it is also valuable. This is sufficient incentive for miners in the closed ecosystem.
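The "more tokens, more say" idea can be sketched as stake-weighted selection (a drastic simplification of real Proof of Stake, with made-up stakes): the chance that a participant validates, and therefore gets to modify, the next block is proportional to the tokens they hold.

```python
import random

random.seed(42)

# Token holdings: ownership shares double as edit rights.
stakes = {"alice": 60, "bob": 30, "carol": 10}

def pick_validator(stakes):
    """Choose who validates (and may modify) the next block, weighted by stake."""
    holders = list(stakes)
    weights = [stakes[h] for h in holders]
    return random.choices(holders, weights=weights, k=1)[0]

# Over many blocks, validation share tracks ownership share.
counts = {h: 0 for h in stakes}
for _ in range(10_000):
    counts[pick_validator(stakes)] += 1
print(counts)  # roughly 60% / 30% / 10%
```

Real protocols layer slashing, randomness beacons, and committee selection on top, but the core economic loop is the same: the right to edit valuable data is itself the valuable reward.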
Even though the technology for public databases has been around forever, the incentives weren't there, which made all previous public databases unusable. By contrast, I think Web3 provides a complete implementation of an actually usable, globally persistent public database. This is a unique engineering primitive: a technical building block that can be used for many other things.
One application — the one we're most familiar with — is cryptocurrencies. Even though cryptocurrencies have many problems, they clearly provide unique economic value (banking the unbanked, circumventing failed states, or enabling illegal transactions). Their value comes from the fact that no single government can effectively shut down the network.
A much more fascinating application is Dark Forest, a MMO-RTS built on Ethereum. I'd argue that there was no meaningful reason the initial implementation of Dark Forest had to have public data, but it grew into something else. Because all player data was on the chain, random players (or anyone, really) could create their own clients for the game without having to interact with, or even know about, the core gameplay code. This could all happen while the game was running. People ended up developing unique modules (e.g. a bounty system) that meaningfully changed gameplay, without any interaction from the core devs at all; the development, and even the motivation behind development, were all organic from the game itself.
Obviously, I think this is cool.
But more importantly, I don't see how this happens in Web2. In a more general sense, developers in the Web2 world share code, but not infrastructure. Developers in the Web3 world share both, always. This means rapid change and constant deployment, resulting in more interesting and better applications. (Importantly, this isn't always good for the original developer!)
While I'm listing applications, I'm also interested in seeing applications that require a lot of offline management: problem areas like supply chain logistics or remote rescue that do not have reliable access to a centralized store, but still require distributed access from a variety of individuals.
Finally, I do think there is real ideological value in decentralized data. Companies should not be able to take the data we created with them when they disappear. Other people should be allowed to pick up where they left off. Only a useful idiot (or someone employed by Metamask or OpenSea) claims that Web3 'cannot be centralized'. Everyone paying attention to the space knows that OpenSea is highly centralized and effectively dominates the NFT marketplace, but also (hopefully) knows that's not the point. The point is that OpenSea cannot control the data by fiat. If OpenSea the application stops being the best existing tool, people will leave the platform; consumers aren't bound to it by sunk costs. And if OpenSea the company stops operating, a new marketplace will pop up without any data loss. Essentially, OpenSea has made a promise to their users that they cannot build a company off a data-moat alone. That kind of promise is impossible to make in Web2. (I think this points to an interesting sociological effect: if public data is the norm, companies have to make their data public, else no one will trust them.)
So far I've spoken a lot about why I think Web3 is interesting and, in some cases, valuable. But engineering is about tradeoffs; it's not at all obvious that the benefits of Web3 outweigh the costs. And there are meaningful engineering costs.
Decentralized data stores have few consistency guarantees, so any application that requires near-real-time read-write consistency has to be pretty clever about how it interacts with any blockchain. Further, because code is run publicly, there is a significant increase in attack surface that needs to be accounted for. These costs are immediately felt by developers, while many of the benefits of public data are hypothetical, long-term, and accrue to the consumer. As a career SWE, most things I've built or seen or even heard of would be much harder to implement in Web3, with questionable short-term benefit for the end user. Frankly, even the things that have been built on Web3 were harder to build than they needed to be. There just aren't enough abstractions between a dev and the data store yet to make that process easy. I suspect that improvements in the underlying technology will mitigate these issues eventually, but I have no idea what the timeline might look like.
There are meaningful social costs as well.
The mining model that supports the Web3 ecosystem is computationally expensive, creating a constantly growing demand for power. That demand has led to more investment in renewables — many crypto miners are located near cheap hydro, for example. But cheap power is cheap power, regardless of whether it comes from the sun or from dead dinosaurs. Miners in poorer countries commonly burn coal to fuel their crypto mining rigs, causing a ton of environmental damage. Supposedly the energy is needed to secure transactions, but that need is an artificial constraint imposed by Web3's own design. If you aren't a believer in Web3 tech, the energy consumption is just another slap in the face.
Meanwhile, tying monetary gain to computational cycles has created a ton of perverse incentives. Hidden in-browser crypto miners have become a popular way of siphoning energy from unwitting consumers. Many cloud companies struggle to provide free tiers that aren't immediately abused by automated crypto bots. And providing a cheap, anonymous way to facilitate illegal transactions is obviously socially harmful — cryptocurrencies in particular have driven a significant increase in ransomware attacks.
These costs are externalized. No part of the Web3 ecosystem has tools to handle these failure modes; they exist outside the Web3 view of the world. These costs have to be considered when the technology is being adopted. After all, once you adopt a decentralized technology, you can't easily put it back.
Closing thoughts
So overall, I'm neutral on Web3. I'm interested in learning more and seeing where it goes. Probably I will continue my cycle of being bearish one month and bullish the next. I think the best thing for the Web3 community would be for the current hype train to stop, and for the speculative bubble to burst catastrophically. That would clean out everyone who is polluting the conversation with dollar signs, allowing the developers with a true passion for the space to continue building.