The read-write web in 2021, or “A Young Man’s Illustrated Primer”

The web feels a bit more malleable than it did just a couple of years ago. Why might that be?

Knowledge management software has been with us for decades but only recently feels like it’s become a fixture of how early adopters use the web¹. Networking one’s own personal notes might just be a step towards having the web be authored by a much larger subset of its users, doing so from rich text environments rather than IDEs. Great products like Roam Research² have started to popularize the benefits of contributing nodes and edges to a knowledge graph, making their creation feel as frictionless as bolding or italicizing. How does networked note-taking lead to web authoring? I’m confident that we’ll see this new class of tools increasingly lead to our “notes” becoming both collaboratively authored and publicly readable, at which point the lines will have become quite blurry³.

In Neal Stephenson’s The Diamond Age, our young protagonist comes to possess a copy of The Young Lady’s Illustrated Primer, an interactive storybook designed to teach its owner everything they need to know through choose-your-own-adventure edutainment. The more I thought about Roam’s choice to include the word “research” in their product name, the more it helped me reframe my own relationship with the web. “Browsing”⁴ started to feel passive or reactive (if not a bit wasteful), while researching—reading but crucially also writing—felt proactive and enriching, a way to earn compound interest on your thoughts.

Researching made me feel like I was meaningfully investing in myself, not unlike how writing had in the past but now with tools tailor-made to empower. Networked note-taking became my Primer—tangents could be sought out instead of avoided, with backlinks turning the anecdotal into the indexed, the random into the patterned, the hazy into the coherent. Unsurprisingly, the web feels more pliable when you feel like a main character as opposed to an outside observer.

Next, the explosion of blockchains and distributed applications feels like a significant lowering of the barrier to contribution and participation. Yes, open source software predates smart contracts, and traditional REST APIs can already be built upon. But the composability and interoperability inherent in these new ecosystems bring increased opportunity to leverage existing code and data, whether to adapt it in new ways or to wrap it in higher levels of abstraction.

Building on top of existing code, data, and authentication affords the benefit of focusing on novel logic and UX alone, and unsurprisingly has led to an incomprehensible number of new projects. The overwhelming majority won’t succeed, but we can repurpose a Ben Thompson quote about Shopify to understand why this can be a reason for optimism:

I would argue that for Shopify a high churn rate is just as much a positive signal as it is a negative one: the easier it is to start an e-commerce business on the platform, the more failures there will be. And, at the same time, the greater likelihood there will be of capturing and supporting successes.

When I think of the web, I think of a series of siloed databases and applications: built by engineers, stitched together by crawlers, and served to the masses via search. Too often, this doesn’t feel particularly approachable even to me, someone with a career of experience building exactly these kinds of systems. It’s not purely about having the requisite knowledge⁵—the activation energy currently required to create something out of nothing undeniably keeps far too many good ideas from ever becoming more than ideas.

But I’m optimistic that this can change in a big way. Are networked notes and blockchains really going to be meaningful change agents? Perhaps not. But on this last day of 2021, they’re what come to mind.

  1. Sure, we all use Wikipedia, but how many of us contribute to it—or better yet, maintain wikis of our own? 

  2. As well as others like Obsidian, Logseq, and Reflect, but let’s give credit where credit’s due.

  3. See my friend Orta’s public notes, implemented using Foam.

  4. Or worse, “surfing.” 

  5. No-code tools have long been making inroads here. 

Phish and digital content strategy

Photo by Rene Huemer via Phish: From The Road

Let’s start by getting the tropes out of the way: drugs, hippies, Ben & Jerry’s, “the Grateful Dead were better,” that “Gin and Juice” cover from Napster with the wrong MP3 metadata. And say what you will about their music–love it, hate it, or have no opinion whatsoever¹–this essay isn’t really about music. It’s about a unique–and I have to think incredibly successful–digital content strategy unlike anything else I’ve ever seen, and seemingly nearly impossible to replicate.

Before we dig in, let’s jump back a couple of decades.

In the heyday of the aforementioned Napster and during the transition from physical media to streaming video, the music and film industries attempted to combat piracy by clamping down as hard as possible with restrictive DRM, unskippable DVD preambles, and exorbitant lawsuits. Pundits and industry observers bemoaned this approach, countering that an effective defense wouldn’t come through draconian restrictions but through competing on quality and convenience; that DRM and similarly user-hostile approaches would only push users toward piracy in pursuit of a better experience, regardless of cost. That the way to win was by charging a fair price for a superior product.

If services like Netflix and Spotify have proven this to be true², Phish has taken it to an extreme. While most bands would consider their studio records to be more canonical than their live recordings, the opposite is true for Phish. Yet Phish embraces the recording and sharing of their live music rather than trying to stop it–you can legally listen to hundreds of Phish shows right now just by going to this website.

This isn’t because they don’t want you to pay for their music, however. To the contrary, their LivePhish website competes on quality and convenience by making every show since 2002 available:

  • In lossless audio quality, straight from the soundboard
  • Literally within minutes of each concert ending³
  • To either download or stream (via web, iOS/Android/Apple TV apps, Sonos, etc.)
  • To purchase either à la carte or through a monthly subscription
  • With beautiful, thoughtful artwork unique to that show or tour

Oh, and they also broadcast the live video streams of these shows. In high-definition. With extremely high-quality camerawork. For a fair price, naturally.

At this point, you might be thinking that this doesn’t actually sound particularly lucrative. After all, if you’ve heard one live show, haven’t you heard them all?

This would be the case for most bands, but not for Phish. Phish plays with an improvisational style, which means no two shows are ever the same. If this sounds like an exaggeration, it isn’t. All four band members are virtuosic in their own right, but have also been playing together for almost 40 years as of this writing. This gives them both an enormous repertoire to draw from⁴ and the ability to freely experiment with nearly any song without veering off the rails, to a degree that often feels impossible.

So that’s the playbook, unemulatable as it may be: pick a musical genre that lends itself incredibly well to improvisation, spend four decades mastering it, stand up the requisite streaming/recording/distribution infrastructure, then turn on the recurring SaaS revenue faucet. Most bands simply play more or less the same set for the entirety of a tour, and there’s nothing wrong with that. Those bands just can’t expect their fans to pre-order a whole tour’s worth of MP3s and/or live-streamed webcasts. Phish can, and they unabashedly lean into it. In 2017, they played thirteen straight shows at Madison Square Garden without repeating a single song⁵.

And Phish fans happily capitulate. Here’s a Mashable article from 2018 that, in addition to detailing the evolution of Phish’s digital streaming empire, nicely summarizes the fan mentality:

Each one is unique, and if you’re a fan, you want to hear every possible version of your favorite song, and to collect them all. As a fan of the band for the last 23 years, I’ve been hell-bent on trying.

The author isn’t alone. Phish fans crowdsource song statistics with the fervor of sabermetricians studying nascent baseball defensive metrics.

So while the band is clearly embracing the opportunity in front of them–each performance capitalizing on the chance to produce new differentiated content–they’re only able to do so because they happen to perform in a style that makes their music uncommoditizable. And that’s, frankly, lucky. When Junta debuted in 1989, foreshadowing their trademark lack of brevity with five tracks stretching beyond the nine-minute mark, they certainly didn’t foresee each bespoke incarnation becoming an elaborate snowflake sold to voracious adorers over HTTP. This would be a lot harder if they played, say, three-chord punk rock instead.

Like most software companies with high multiples, Phish benefits from what Stratechery’s Ben Thompson sums up as the effective elimination of marginal distribution and transaction costs brought about by the Internet. Software companies traditionally have high P/E multiples because of this dynamic–there’s a fixed cost inherent in developing an application but no substantive additional cost to onboarding as many new users as possible over time, amortizing the original expenditure. Phish benefits similarly–while there are certainly some ongoing costs to film and edit each show, the infrastructure exists and they clearly have it down to a science at this point (see: the truly-hard-to-believe turnaround time).

Is Phish’s audience big enough, though? Of course, when the total addressable market is “anyone with an Internet connection.” Ben Thompson, again:

While quality is relatively binary, the number of ways to be focused — that is, the number of niches in the world — are effectively infinite; success, in other words, is about delivering superior quality in your niche — the former is defined by the latter.

Source: Never-Ending Niches, Stratechery

And if you’re lucky enough to be the type of person who enjoys Phish’s music, the quality of their offering is undeniable. So much so that they’re able to charge just as much as Spotify does for Spotify Premium. Put another way by Marco Arment:

If you’ll permit a pretty rough analogy, imagine a world in which the vast majority of published fiction was in the form of 3,000-word short stories, and most people had never read anything longer. Phish is the one outlier publishing novels, and they’re pretty weird, complex novels. No effort to condense such novels into bite-sized short stories will truly capture the appeal.

But if you’re one of just a handful of novel publishers in this rough metaphor, you’re going to slowly accumulate a hell of a fanbase from the people who actually like novels, even if yours get a bit too weird sometimes, because almost nobody else is creating what these fans want and love.

Taking a look at Nathan Baschez’s Why Content is King, it’s pretty remarkable how many of the seven power boxes Phish is able to check⁶:

  • Scale economics (free distribution)
  • Network economies (crowdsourced cataloging by the fan community)
  • Counter positioning (producing new differentiated content in a way that few other bands can)
  • Branding
  • Cornered resource (a monopoly on soundboard-quality Phish recordings)
  • Process power (filming/recording/editing/distribution infrastructure)

And if you find yourself thinking that a shrewd move would be to reuse the underlying LivePhish platform for other bands as well, not unlike how Amazon sells access to AWS despite being its first and best customer, the folks behind nugs.net would agree⁷.

At the end of the day, it’s not lost on me that no amount of strategic novelty will change your mind if you simply can’t stand Phish’s music (and I truly understand why many cannot). But like them or not, they undoubtedly occupy a unique space amongst their peers. The technologist in me can’t help but also take satisfaction in the alignment of their digital offerings with their strong suits as musicians.


Many thanks to Ben Reubenstein for his feedback on a draft of this post.

  1. If you’re looking for a more general introduction, I recommend this one by Marco Arment.

  2. To be clear, I don’t feel knowledgeable enough to opine on whether or not Spotify/Apple Music/etc. really charge “fair prices” in terms of how artists end up getting compensated.

  3. Phish also gives all concert attendees a free copy of that show’s MP3s via their ticket stub, which I think is a delightful gesture. 

  4. They also have a proclivity for cover songs, covering a different album in full each Halloween. This only adds to the gigantic bucket of songs that might make a surprise appearance on any given night. 

  5. If seeing a totally different show each night weren’t enough, the band also went as far as to provide donuts.

  6. They do not have high switching costs but that’s clearly in the interest of competing on convenience (allowing MP3s to be downloaded as opposed to purely streamed). 

  7. It’s not clear to me exactly how or if Phish directly profits from other artists on nugs.net, but it is run by the same team behind LivePhish.

On specialism vs. generalism

“You’re basically not going to be an iOS engineer anymore?”

When my good friend Soroush asked this upon hearing that I had taken a new job at Stripe, I doubt he thought very much about how it’d be received. It really threw me for a loop, however. I didn’t exactly consider myself to be undergoing a career change, but was I? It’s true that I’m not going to be developing for iOS in my new role, but I hadn’t always worked in this capacity at previous jobs either. Did spending the better part of five years focused on iOS make me an “iOS engineer”? If so, when exactly did I become one and when did I subsequently cease to be? Should this kind of designation be descriptive, based on one’s actual day-to-day, or prescriptive, aspirationally describing the work one primarily seeks out and anticipates?

Work as a software engineer for long enough, and it’s highly likely that you’ll end up having a say over whether or not you go deep on any particular sub-discipline (and if so, which), or choose to primarily float around the surface, swimming back and forth and maybe holding your breath to go under for a bit here and there, but not taking many dives necessitating special training or equipment.

There’s no right answer here, and it’s really not a strict dichotomy anyway.

While programmers can undeniably be either specialists or generalists, there’s a whole lot of grey in the middle. As opposed to inherently being a specialist, it’s also very common to specialize over a period of time. Perhaps this is a subtle difference, but I think it’s one worth teasing apart; one can act in a specialist capacity when the situation dictates - and I presume that effectively every “generalist” does, from time to time - without self-identifying as such for the long haul.

There isn’t a right answer because one isn’t better than the other, but also because many teams should contain both specialists and generalists in order to perform their best work. The best products are often brought to fruition through a combination of generalist thinking and specialist expertise. Only specialists have the domain knowledge necessary to build best-in-breed software that takes full advantage of the platform being built for; given how advanced the various platforms that we build for in 2019 have gotten, it’d be nearly impossible to sweat all the right details without having first dedicated yourself to fundamentally understanding a particular one’s intricacies. We’re all very lucky that many have.

At the same time, specialists run the risk of “only having a hammer,” and as such, having every possible project “look like a nail.” With only one tool in your belt - a deep but relatively narrow area of expertise - it’s easy to inadvertently build an app that really should’ve been a website or vice versa. Or to have a great idea that you can’t quite realize, despite your excitement, due to it requiring both frontend and backend work. Said idea might be exactly the provocation that can prompt one who has historically specialized to start branching out a bit. But after having done so, are they still “a frontend developer” or “a backend developer”? Clearly, such labels start to lose their significance as we tear down the boundaries defining what we’re able to do, and perhaps more importantly, what we’re interested in doing.

In the Twitter, Slack, and GitHub circles that modern software developers often travel in, it’s easy for a discrepancy to form between how one is best known vs. how they actually view themselves. Tumblr was quite popular during the time that I led iOS development there, which gave me the opportunity to write and speak about the work that we were doing, and even release some of it as open source. These slide decks and blog posts neglected to mention that I was actually hired to be a web developer and only moved over to iOS as needs arose, subsequently parking myself there for a few years to come. I built Rails backends and React frontends at my next job, but at an early-stage company with a much smaller platform, where we primarily worked heads-down without much outward-facing evangelism for our technology. Few knew.

I’m not unique in this regard. One of the best mobile developers from my time at Tumblr has since switched over to the web. Another, an expert in animations, gestures, and UI performance, is now a designer. Since acting as a specialist at a high-profile company can cement your status as such well after you’ve stopped working in that capacity, it’s crucial not to let outside perception prevent you from shaping your career however you see fit.

In August 2014, I gave a talk entitled Don’t be “an Objective-C” or “a Swift Developer” to a room full of new programmers who were learning how to build iOS applications at the Flatiron School. The Swift programming language had been unveiled only two months prior, and reactions amongst iOS developers were divided, to say the least. Many felt as though it was finally time for a modern language to replace Objective-C, and that such a change was long overdue, while others didn’t believe that Objective-C needed fixing, and would’ve preferred if Apple’s resources and the focus of its community were directed elsewhere. My goal was to try and convince these new engineers that they shouldn’t aspire to land in one camp or the other, but rather to learn the underlying, transferable programming concepts, and to expose themselves to many different ways of concretely building software. Without understanding what’s out there, how can one make an informed decision as to how they should spend their time? Even if you decide to put down roots in a single community, how can you avoid perceiving the way that that community has historically operated as being the way that it should be going forward?

I feel like I could give this same talk today, to a more experienced set of engineers no less, and simply replace “Objective-C and Swift” with “frontend and backend” or “mobile and web.” The idea is the same - technologies move fast and careers are long, and while you may enjoy being a specialist or a generalist for some time, you never really know when your situation could change and when circumstances may warrant otherwise. Or, when you might simply feel like trying something new.

When I write Ruby, it’s painfully obvious to me that I don’t know Ruby to nearly the same extent that I know Swift. On some days, this makes me sad, but it just as often makes me feel empowered. Perhaps I’ll decide to spend the time needed to achieve Ruby mastery, or maybe I’ll end up retreating back to Swift at some point in the future. Or, more realistically, I’ll get slightly better at the former and slightly worse at the latter and come to peace with that, just in time to shift towards learning something different altogether. In any case, how others describe what I do, and more importantly, how I view it myself, remains a fluid work in progress.

I don’t expect this to change, and this I am at peace with.


Originally published on Better Programming.