All software is web software now

Here’s a theory: browsers, by sheer force of adoption, become the standard-bearer for every domain of software they touch, even usurping incumbent standard-bearers. As browsers grow more capable, their territory has expanded to cover a large fraction of software standards. In a certain sense, (nearly) all software is now web software.

Take HTTP, the underlying protocol of the web. It started as an application-level protocol for browsers to exchange hypertext with web servers, but in modern software it’s become a generic data transport. I’m writing this post in a desktop app that syncs it to a cloud drive over HTTP. I’m listening to music in another desktop app that streams it over HTTP. Servers speak to other servers in HTTP. Heck, even my light switch speaks HTTP.

Using HTTP for more than hypertext is not a new idea. A W3C note from 1998 described the benefits of building an RPC system on top of HTTP:

  The transport mechanism is already provided by HTTP; no need for another wire protocol. Likewise, this reduces the number of parsers needed on the client. In general less code means less software errors, lighter clients, and greater interoperability.

Notably, the justification wasn’t that HTTP was the best possible protocol for the job. HTTP was good enough for the job, clients were already ubiquitous, and it allowed the developer to spend time building rather than bikeshedding yet another wire protocol.
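That tradeoff still holds today. As a minimal sketch (the endpoint and method names are hypothetical), here’s what an RPC-over-HTTP call looks like when the client leans entirely on infrastructure that already exists, in this case the fetch() built into browsers, Node 18+, and Deno:

```ts
// A sketch of RPC over HTTP: one JSON POST, no custom wire protocol,
// no extra parser. The URL and method names here are made up.
async function call(method: string, params: unknown): Promise<unknown> {
  const res = await fetch("https://api.example.com/rpc", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ method, params }),
  });
  if (!res.ok) throw new Error(`RPC failed: HTTP ${res.status}`);
  return res.json();
}

// Transport, framing, caching, and error signaling all come from HTTP
// itself; the application only has to define the payload.
const result = await call("lights.toggle", { room: "office" });
```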

Similar stories have played out across various domains: JavaScript grew from a page-scripting language into a general-purpose server and tooling language via Node.js; browser rendering engines escaped to power desktop apps through Electron; and AV1, a codec developed with the web in mind, is spreading to video delivery of every kind.

Why did these technologies succeed outside the browser? I can think of a few reasons:

  1. Because it’s hard to change web standards once they ship, they tend to be designed defensively: robust to evolving requirements and general enough to anticipate the future. It’s a testament to HTTP’s flexibility that it’s managed to shape-shift into an RPC carrier, a streaming video server, and a full-duplex connection facilitator (see the handshake sketch after this list) while still resembling its original form.

  2. The later stages of the browser wars fueled massive investment in high-quality open-source software. Google built the V8 runtime to make Chrome fast, which gave Ryan Dahl a basis for Node.js. Google’s Blink rendering engine (descended from Apple’s WebKit, itself descended from KDE’s KHTML) provides the basis for the Electron desktop UI framework. Node.js and Electron are both open-source projects that would have been gigantic technical efforts for the small teams that built them, had they not had web technologies to build upon.

  3. Web standards tend to be patent-unencumbered as a design requirement. AV1 is a prime example: the economics of browsers funded codec R&D that anyone can use, in contrast to prior video coding standards, which were funded by licensing fees.

  4. Applications that piggyback on web protocols benefit from the fact that most network setups allow ordinary outgoing HTTP requests without extra fuss, such as firewall configuration.
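On the full-duplex point, it’s worth seeing how small the trick is. A WebSocket begins life as an ordinary HTTP request; this sketch (with a hypothetical URL) shows the browser API and, in comments, roughly what crosses the wire:

```ts
// HTTP as a full-duplex connection facilitator: the client sends a
// regular GET with Upgrade headers, and on a "101 Switching Protocols"
// response the same TCP connection stops speaking HTTP and starts
// exchanging WebSocket frames. The URL here is hypothetical.
const ws = new WebSocket("wss://example.com/feed");

// What the opening request looks like on the wire, approximately:
//   GET /feed HTTP/1.1
//   Host: example.com
//   Upgrade: websocket
//   Connection: Upgrade
//   Sec-WebSocket-Key: <random base64 nonce>
//   Sec-WebSocket-Version: 13

ws.onopen = () => ws.send("hello");
ws.onmessage = (event) => console.log("received:", event.data);
```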

Which web technologies are next to break out of the browser? I can think of some candidates.

  • WebAssembly - The effort to make WebAssembly a full-fledged runtime outside the browser is already well underway, with the emergence of WASI and a growing list of companies building on server-side WASM. The Envoy Proxy uses .wasm modules as an extension system, the same way JVM-based apps used to do with .jar files (I assume some still do, but they used to, too). A minimal loading sketch follows this list.

  • WebGPU - The first generation of browser GPU API, WebGL, was based on the existing OpenGL standards. For the upcoming generation, WebGPU, browser makers set out on their own to create a new API that takes advantage of modern GPU capabilities. WebGPU looks poised to become a general-purpose, cross-platform graphics layer for native applications as well as in-browser ones (a sketch of the API surface also follows this list).

  • QUIC - The protocol that HTTP/3 sits on top of, QUIC is a neat innovation in its own right with real potential for carrying non-HTTP traffic; it was intentionally decoupled from HTTP/3 with this in mind. Because it stands to transport a large chunk of global internet traffic, firewalls that blanket-block unknown UDP connections will have reason to make an exception for it. Protocols that piggyback on QUIC won’t just inherit QUIC’s own features (multiplexing, congestion control, encryption, client connection migration); they’ll also be more likely to sail through firewalls in a way that hasn’t been possible for new UDP-based protocols. The WebTransport sketch after this list shows one way applications are already riding on QUIC.
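For the WebAssembly bullet, here’s a hedged sketch of WASM as a plugin format on the server, using Node’s built-in WebAssembly API (the same one browsers expose). The file name and the transform export are hypothetical; a real extension system like Envoy’s also hands modules host functions and, often, WASI system interfaces:

```ts
// Load a compiled WebAssembly module and call into it, outside any
// browser. "plugin.wasm" and its "transform" export are made up.
import { readFile } from "node:fs/promises";

const bytes = await readFile("./plugin.wasm");
const { instance } = await WebAssembly.instantiate(bytes, {
  // Host functions the plugin is allowed to import would go here;
  // the module can touch nothing the host doesn't hand it.
});

// Call into the sandboxed module like an ordinary function.
const transform = instance.exports.transform as unknown as (x: number) => number;
console.log(transform(42));
```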
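For the WebGPU bullet, this is roughly what the API surface looks like. The same calls work against a browser’s navigator.gpu or a native implementation (Dawn, wgpu) exposed through bindings; that portability is the point. A minimal sketch, assuming a runtime with WebGPU enabled:

```ts
// Acquire a device and compile a trivial compute shader. In TypeScript
// this assumes WebGPU type definitions (e.g. @webgpu/types) are present.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error("WebGPU not available");
const device = await adapter.requestDevice();

// WGSL is WebGPU's shading language; this kernel is a do-nothing stub.
const shaderModule = device.createShaderModule({
  code: `
    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
      // real work would read and write storage buffers here
    }
  `,
});

const pipeline = device.createComputePipeline({
  layout: "auto",
  compute: { module: shaderModule, entryPoint: "main" },
});
```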
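And for the QUIC bullet, WebTransport is one protocol already piggybacking on it: it rides on HTTP/3, so it inherits QUIC’s streams, encryption, and firewall treatment for free. A sketch with a hypothetical endpoint (browser support still varies):

```ts
// Open a QUIC-backed transport and one bidirectional stream. Many such
// streams multiplex over a single connection without head-of-line
// blocking between them. The URL is hypothetical.
const transport = new WebTransport("https://example.com:4433/wt");
await transport.ready;

const stream = await transport.createBidirectionalStream();
const writer = stream.writable.getWriter();
await writer.write(new TextEncoder().encode("hello over QUIC"));
await writer.close();
```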

One consequence of all this is a fundamental shift in how software is funded. Companies used to build profitable businesses around things like cross-platform rendering engines and language runtimes. It’s increasingly hard to compete with companies that can give those away for free as a byproduct of building a browser, which is ultimately funded by revenue from search ads or hardware sales. The founder of MPEG has lamented that this will stymie progress in video compression technology.

I’m mostly optimistic, though, because I remember the alternative. I started poking around the web when generating a GIF-based hit counter in your cgi-bin (which was all the rage) could land you afoul of patent lawyers. A system that makes codec development a viable business model also puts those codecs out of reach of hobbyists and open-source developers. I’m not eager to go back to that.

For more like this, subscribe to our Browsertech Digest or follow @JamsocketHQ on Twitter.