It has bothered me for years that promotional copy for software, for as long as it has existed, has overwhelmingly fixated on features. It's something I have given a lot of thought to. A feature at once reflects a capability on the part of the user and represents an (unfortunately very flexible) unit of work on the part of the developer. In this sense, features are intra-organizational underpants talk. In other words, you're showing your ass to the customer when you talk about them.
A feature in software is the mirror image of a capability on the part of the user. The thing about capabilities is that they are binary: you can either do the thing, or you can't. My beef with features as an organizing principle is that, given that you can achieve an outcome at all, feature-ese is silent on how excruciating and onerous the experience of achieving that outcome is—or how lobotomized and broken the feature itself is. This spawned my clever little antimetabole: you can define features in terms of behaviour, but you can't define behaviour in terms of features.
Or, at least, it sounds really silly. Like on the order of "has the feature of exhibiting the behaviour of…".
Amusingly I got this one super important point backwards in this video. But then, I go from zero to shipped with these videos in under an hour, so, caveat watch-or.
So one question I asked myself was: is software, or indeed any invention, always going to be inextricably wedded to capability?
I'm pretty confident the answer to that is yes. "Now you can get $OUTCOME you couldn't do before, with our product"—that kind of thing. Why I entertain questions with seemingly obvious answers is to ask other questions, like…
The thing about the binary of "can you get $OUTCOME or not?" is that it also splits into two. This is visible when the answer is no: impossible in principle, versus just really expensive. I'm partial to the thought experiment that involves going back in time (impossible in principle, as far as we know) and giving an Egyptian pharaoh the plans and all supplementary information needed to build a Model T Ford. He could build one, sure, but it would cost him everything, and he probably wouldn't live long enough to see the thing bench-tested, let alone get a chance to drive it. Indeed, it could very well be a multigenerational endeavour—make the pyramids look trivial by comparison. Point being, we know Model T Fords are possible in principle, because we have those and much more sophisticated vehicles besides, but in ancient Egypt it would have taken a quixotic diversion of resources to build even one of them. This of course is because it's not just the car, but all the subsidiary infrastructure that goes into creating all of its inputs, and their inputs, and so on.
The baseline, then, is that over a long enough horizon, anybody is capable of anything, as long as it is possible in principle. This entails that we contemplate a concept of "effective capability", which we can articulate like: you can achieve a certain outcome within a reasonable envelope of resources (time, money, people, whatever). We can moreover use that for the default definition, because the other one is silly. The distinction is important, though, because compressing the economic implications of a capability from "silly" to "affordable" is what inventing things is all about.
Inventions crystallize sets of capabilities one by one and tend to accumulate, but they don't all accumulate on top of each other in the same place. In fact, crystal growth really isn't a half-bad analogy for the process of extending humanity's capability repertoire: a bit of precipitate crystallizes over here, another bit over there, and then a third spans the first two. An orderly structure assembles itself over time from a structureless slurry. We call this accumulation of crystallized capability technology.
This isn't an endorsement for Maldon salt, but it's not not an endorsement for Maldon salt.
Thousands of years of economic and technological development (which I am inclined to argue are almost the same thing) accreted between the pyramids and the Model T, and the pile has swelled even more since then. Bigger crystal means bigger surface area, which means faster-growing crystal. To depart from the analogy—or perhaps to double down on it, I'm not sure which at the moment—crystals grow by depositing a quasi-molecular unit row by row, layer by layer. Humanity's capability repertoire grows by laying down individual inventions, usually (but not always) on top of other inventions. While the basic units of a crystal are all functionally identical (although their respective positions may be consequential), every invention is different. So the consequences of adding each new invention to the repertoire are going to vary enormously.
There is a duality to technology, moreover, insofar as the technical skill required to use it is rarely anything close to as intense as the expertise required to replicate it. This was characterized succinctly by Ursula Franklin, when she proclaimed that technology was both "fish and water". Given that the basic molecular unit of technology is the invention, the quintessential representation of an invention is not an instance of a particular gadget, but rather the formula to recreate it.
Of course, not every invention produces a gadget. A common phrase you see in the titles of patents is "method and apparatus for…", the method and the apparatus being distinct and separate things. Sure, lots of methods that are patented are about how to make the concomitant apparatus, but it's the method that's important. The whole point of a patent—it's in the name after all, Latin for "lay open"—is primarily to put into the public register the method behind a capability. Any apparatus that results is incidental. The essential bargain of the patent, moreover, is that the state grants you a 20-year monopoly over your invention in return for providing instructions, to everybody in the world, in detail, for how to go and make one for themselves.
I mention all this because software is all about method. It's all about procedure—indeed, in object-oriented programming, named procedures are literally called "methods". Procedures that operate over representations of entities, whether those have real physical referents out in the world or not. There is in fact very little else. All software does, and can ever do, is: extract symbols and representations from signals, manipulate those representations, and transform them back into signals for action.
The centre of gravity in software is, in practice, almost never in its hardware periphery (except when it is), so we're usually talking about a feedback loop—extracting symbols and representations from signals and manipulating them, then transforming them back into signals for action—a sort of distillation of capability. A lot of the time (but not always directly), software is tasked to translate human intent into concrete messages, transmit those messages wherever they need to go, and then show the results of those messages and others besides; whatever effect they had. Here I go back to Herbert Simon's eminently quotable aphorism, "solving a problem simply means representing it so as to make the solution transparent". Software is just that, with most—if not all—of the hard work of physical implementation subtracted.
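To make that loop concrete, here is a minimal sketch of the signal-to-symbol-to-signal shape in Python. Everything in it (the names, the toy one-line "protocol") is invented for illustration; it sketches the shape of the loop, not any particular system.

```python
# A toy rendition of the signals -> symbols -> signals loop described
# above. All names and formats here are illustrative inventions.

def sense(raw: bytes) -> dict:
    """Extract a symbolic representation from an incoming signal."""
    text = raw.decode("utf-8")
    verb, _, noun = text.partition(" ")
    return {"verb": verb, "noun": noun}

def decide(symbols: dict) -> dict:
    """Manipulate the representation: translate intent into a message."""
    if symbols["verb"] == "open":
        return {"action": "display", "target": symbols["noun"]}
    return {"action": "ignore", "target": symbols["noun"]}

def act(message: dict) -> bytes:
    """Turn the resulting message back into a signal for action."""
    return f'{message["action"]}:{message["target"]}'.encode("utf-8")

# One pass around the loop: signal in, symbols manipulated, signal out.
print(act(decide(sense(b"open readme.txt"))))  # b'display:readme.txt'
```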
Curiously, the word that gets batted around in the discourse is not invention, but rather innovation. Invention requires actually coming up with a way to do something; all innovation requires is a change in framing. That said, you can invent something useless, and you can invent something by accident, whereas an innovation is always deliberate, and in service of solving some identifiable problem.
Another way to say this is that while innovation leads with a problem, invention leads with a method. An invention might solve many problems, or it could solve zero. The word for an invention that either doesn't solve a problem, or solves a problem not worth solving, is chindogu. Software often looks like chindogu to onlookers: pure distilled method, very often with obscure and/or highly specific use cases, at best inscrutable to the uninitiated, if not outright invisible. As with any invention, you have to attempt to reconcile the method with an actual problem that ordinary people experience, although brace yourself for the possibility that there might not be one.
I had always felt a little bit like this about Intertwingler, but in the past few weeks I've managed to acknowledge that no normal person—nor the median programmer for that matter—is ever going to give a shit about it. To reconcile Intertwingler with the real world, it helps to remind myself why I made it. The answer is a laundry list of things that pissed me off about the Web, and about the conventions around how it's made, that made it take way too much effort to make the kinds of things I wanted to make, such as the IBIS tool. I made Intertwingler so I could make things like that.
Like I wrote elsewhere, "the purpose of this planning tool is to reify the intellectual work of collaborative problem-solving—turning each consideration into something visible, tangible, countable, shareable. I can't do that if I'm bogged down by the minutiae of building a Web app." That's why it sat mostly untouched for over a decade: adding anything new to it was an effort way out of proportion to the effort it took to set up initially. Plus, the whole thing was written in a dead-end language, and would eventually have to be rewritten anyway—from scratch.
When I say "things like that", I mean we're still very much wedded to documents as the basic unit for storing and exchanging knowledge. It is my opinion that documents actually suck for this. By documents I mean, like this essay, any discrete, bounded, information-bearing artifact that has a linear path from start to finish. So a slide deck, a movie, and a podcast are all documents: great for telling stories and okay for making arguments, but absolute hot garbage for anything that resembles a fact.
Anybody who relies on documents for managing fact-like objects knows that they are a pain in the ass, especially if the documents aren't purpose-built for it, like a dictionary or manual. Documents are big, clunky, composite artifacts with poor internal addressing, and they clutter up the scene with multiple redundant copies that are continually going out of date. Digital documents are a little better, given that you can search within them—though that's not as easy with pictures, sound, or video—and networked digital documents accessible by URL are (a little) better still, but better doesn't necessarily amount to good.
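To illustrate the addressing problem with a deliberately toy contrast of my own devising (not Intertwingler's actual data model): the same fact, buried in prose, can only be recovered by scanning; kept as a discrete, addressable statement, it has exactly one place to live and to change.

```python
# Illustrative only: the same fact as document prose versus as a
# discrete, addressable statement.

document = (
    "Chapter 7. Operations. ... The support desk, which moved "
    "buildings last spring, can be reached at extension 4071. ..."
)

# As prose, the fact is only recoverable by searching and reading.
assert "4071" in document

# As a statement, the fact has one authoritative, addressable home.
facts = {
    ("support-desk", "phone-extension"): "4071",
}

# Updating it means changing one record, not hunting down every
# redundant copy of every document that happens to restate it.
facts[("support-desk", "phone-extension")] = "4099"
print(facts[("support-desk", "phone-extension")])
```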
Software, like any invention, will always exhibit a gap between what the artifact actually does—how it behaves—and the social (cultural, political, economic) outcomes it affords. In my experience, you cannot expect people to infer social outcomes from technical capabilities, unless they are extremely well-versed in the tech (and even then). You have to traverse from the outside in.
So the grandiose, outside-in, social/cultural/political/economic framing that has led me on this crusade for so many years is that we rely on information to make decisions and direct our actions, and in a setting where outcomes matter—professional or otherwise—rapid access to accurate information is critical. So we have the dual problem of keeping the information itself accurate and up to date, and of presenting that information in a way that doesn't inhibit its absorption. There is a third problem, moreover: information tends to get presented cut off from the information it is related to, which hinders a more holistic understanding. I can articulate the problem like this:
The value of having good, timely information I have often found hard to communicate, because people want concrete examples, and if you give them one and the actual content of that example doesn't line up perfectly with something they already value, they don't get it. But rather than try to anticipate what somebody values so you can concoct an example on the fly that might resonate with them, you can ask: how different would your life be today if you could have prevented a significant portion of the wrong turns and bad bets you've ever made?
Because that's what we're talking about:
Now: over 30 years ago there appeared an invention, built upon another 30 (or 50, or 90, depending on how you count) years of theory and practice, that has a heap of desirable characteristics for making the information environment a lot better, so you spend less time getting the information you need to make the right moves instead of the wrong ones. At the same time, it suffers from the inertia of how things had been done under the previous regime. I speak of course of the World-Wide Web—still very much calibrated toward pages in disjoint hierarchies, guarded by individual authorities. The page is in the chapter, which is in the book, which is on the shelf, which is at the library. Paper-era thinking.
The kind of information we need to understand situations and make good decisions takes the form of facts, concepts, states of affairs, representative entities, models, and system dynamics. The documents, books, shelves, and libraries that contain them are incidental. We also need to see the connections between these objects, as well as what kinds of connections they are—a capability paper-era thinking barely supports at all. This is possible to achieve with the Web, but it's a lot of overhead, because it rows in a different direction from what is now decades of accumulated tooling and infrastructure. The point of Intertwingler is to act as a plug or shim that eliminates that overhead.
It is perfectly ordinary for people to associate a piece of information with the proximate source they got it from. Nevertheless, the same fact can exist in different books, from different authors. Same goes for the Web or any other medium. The Web, however, has evolved as a platform for delivering software at least as much as it has as an expressive medium for organizing information. The problem stems from the fact that information (well, data) behaves like nouns, while software consists of verbs that operate over the nouns. We tend to split software into operating systems, which do generic things, and applications, which do specific ones. To make things weirder, this designation is recursive, so an "app" in one context is, or is at least analogous to, an "operating system" in another.
The problem manifests with the importation of the language of tools. Everybody understands what a tool is: an artifact that extends your capability repertoire, that enables you to achieve outcomes that weren't possible without it. A tool almost always has a distinct and identifiable shape due to its physical affordances, and that association is how you recognize which tool is for what purpose. But it also means tools can be improvised or exapted for other purposes. What this further means is that a screwdriver, hammer, saw, and tape measure can each be developed in isolation, and don't have to take each other into account.
What makes this collapse in distinction so pernicious is that the analogy is wrong: an app isn't the tool, it's the workbench, and very often the workpiece as well. In other words, instead of a coffee pot that will take any coffee, filter (optional), and hot water, an app is more like a Keurig machine. You can't separate the thing from the thing that gets you to the thing. For apps to behave more like tools, one of two things has to happen:
Apps tend not to acknowledge each other's presence (except when they do, but they generally aren't supposed to). Leaving aside things like plugins and viruses, if apps interact at all, their interactions are mediated by the operating system (or at least something operating-system-like). This usually means files, although it could mean something like a database or message bus, and mobile operating systems have interfaces to shared resources like contacts, photos, and location. It's files, though—or rather data formats—and their close relatives, the network protocols, that are where an app can choose to be gregarious and cosmopolitan, or a miserly monopolist. Just look at the empires of companies like Microsoft, Adobe, and Autodesk, which all swelled in the 1980s due to monopolies over their proprietary file formats. Indeed, one thing the Internet, and the Web in particular, really did was encourage a kind of data cosmopolitanism, and make it gauche to hoard. At least for a while; then it became viable to hoard all the data itself, instead of just the format specs. This phenomenon is known as the platform.
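As a toy rendition of that mediation (the file name and format are invented for the example), here are two "apps" that never call each other directly; they cooperate entirely through a file in an open format, with the filesystem sitting in between.

```python
# Toy illustration: two "apps" interacting solely through a file in an
# open, documented format, with the operating system as the mediator.
import csv
import os
import tempfile

shared = os.path.join(tempfile.gettempdir(), "contacts.csv")

# "App" number one writes the shared resource.
with open(shared, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "extension"])
    writer.writerow(["support desk", "4071"])

# "App" number two, written by somebody else entirely, reads it back.
# All it needs to know is the location and the (open) format.
with open(shared, newline="") as f:
    for row in csv.DictReader(f):
        print(row["name"], "->", row["extension"])
```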
In many ways, the platform is a recapitulation of the operating system—indeed "platform" is a synonym for "operating system" in some contexts—just one level more abstract. Just like an operating system, a platform is associated with an entity, meaning platforms are real estate; they're territory. Contemporary networked platforms are arguably much more powerful entities than their antecedents: not only do they physically possess all the data, they control how it is accessed, who is allowed to access it, and what conduct is permitted on the platform besides. They can introduce new capabilities and eliminate them on a whim, and just like any sovereign, they face few if any consequences for contravening their own policies.
To interface with a platform from an app, you either have to make the app within the platform, or connect to it from the outside. The former is unequivocally sharecropping; the latter is merely likely to be sharecropping. Platforms are likewise getting stingier with their programmatic interfaces (APIs), and have no compunction about stealing your idea to compete with you, while throwing up obstacles if they sense you might be in a position to compete with them. This is a Faustian bargain that you can only really mitigate if your centre of gravity is far enough outside the platform that losing access to it, for whatever reason, won't completely kill you.
One reason you see so many redundant copies of the same tool-like functionality is that every platform needs its own version. A cursory glance at my phone reveals no fewer than ten distinct ways to chat. Why? Because the actual chatting is the easy part. The hard part is in the plumbing that manages the users and routes the messages. There are messaging standards, but if you haven't noticed, tech companies love chat as a place to shove their gimmicks, because chat is something everybody can see and use. So using a standard, where tacking on gimmicks would be hard if not impossible, would mean you'd be less competitive in the first instance. Furthermore, making chat permeable, so a person on one platform can address somebody on another platform, goes against the essential logic of platforms: if (outsider) Alice wants to talk to (our) Bob, she can damn well create an account.
To bring this massive digression back to capability, though, I've always grated against phrases like "our app lets you…", or "our product allows users to…". I know it's (at least historically) shorthand for "makes it possible", but these days it could just as well mean "permits". I remember thirty, or even twenty years ago, when software vendors didn't have the power to "allow" anything; at best they were facilitators. Now, achieving this or that outcome very often absolutely is a matter of permission, rather than capability. You have reached your quota, please insert a quarter to continue. No, not like that, that's against our terms of service. We have no choice but to suspend your account.
How you get back to a place, in software, where outcomes are governed primarily by capability rather than permission, is by pledging to operate 100% over open data, designed to resist being captured by platforms. One aspect of open data is that it is indifferent to whose database it lives in. What concerns me here is the perception that almost nobody cares about this.
An "app" that "lets you…" may be the only language people possess to articulate your offering. Don't correct them. Mind you, this doesn't mean that you have to use this language yourself. The language you do use, however, has to resolve to something an ordinary person understands. Most people, moreover, may not care about the political economy of platforms and the ability to resist them, but the people who do care, really care. It's like how Coca-Cola is kosher: over 99% of their global customer base is indifferent to this fact, but a select few are absolutely keeping an eye out for that certification (which costs the company a pittance to obtain). So if your product has a feature that is make-or-break but only for a minority of your potential customers, by all means mention it, but you don't have to hammer on the point in your main messaging stream.
As a maker of software, you can be pretty confident that an ordinary person understands the equation tool * computer = app. That is, if you want to do something or other with a computer, then what you need is An App For That™. They don't seem to be fussy as to whether the app (tool) is delivered as native code or via the Web. It isn't clear, moreover, that the average person makes much of a distinction anymore between "(web) app" and "website" (to the extent that they ever did); that is an empirical question that can only be settled through research.
I imagine people must understand at some level that various popular apps call out to the network, just like they understand that their television set does not have tiny people inside. The objects on their phones branded TikTok, Instagram and Twitter are merely portholes and control surfaces to networked resources, and if Alice posts something on one of them, Bob, wherever he is (assuming he follows Alice), can see it. This much seems to be within the grasp of the general population.
The thing about real, physical tools, though, is they have toolness. Tools have a part you hold, and they have a business end. Just looking at the business end of a tool is enough to get some sense of what the tool is for. At the very least, you can identify that a tool is indeed a tool. "Software tools" have a semblance of toolness, but I'd argue that lots of software that claims to be a tool is actually an appliance. The difference is subtle: a tool extends your capability, while an appliance renders a service. A dishwasher is an appliance, not a tool, because it doesn't make you better at washing dishes; it washes the dishes for you. And when it starts to fail at its job, you throw it out and get a new one.
We think of tools as artifacts that extend our capabilities to affect our environment, but I want to draw attention to a class of tools that affect us. Consider the breadth of technology invented over thousands of years for sensing and measuring: telescopes, microscopes, radar, rulers, calipers, scales, and so on. Closely related is the family of representational artifacts, a decidedly inside-baseball term referring to the kinds of objects that help us understand situations. Here, the essential example is the map. What can we say about maps? They represent a region in space, and their purpose is to help you understand that region, often including where you or other things are situated within it. The content of a map is thus prepared in advance, and you operate a map by reading it. A map is a certain kind of model, and the purpose of a model is comprehension.
A model that includes dynamics can be construed as a simulation. Simulations can be extremely complex, hand-crafted artifacts that require deep expertise and copious resources to create, or a simulation can be a spreadsheet. While spreadsheets are used for all sorts of things, the thing that really sold them in the late 70s and early 80s was the ability to ask "what if…?" questions of the financial variety. You set up the context and leave a few of the elements free to twiddle around, and the spreadsheet shows you each possible future by computing it. For the first time in history, making a speculative map of your business was not a silly proposition, but an affordable, if not trivial, one.
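Here is a toy version of that kind of what-if exercise, in Python rather than a spreadsheet grid; every name and figure is invented for illustration. The structure of the model is fixed, one input is left free, and each scenario is a computed "possible future".

```python
# A toy what-if model in the spirit of an early spreadsheet.
# All figures are invented for the example.

FIXED_COSTS = 10_000.0   # per month
UNIT_COST   = 4.0        # to make one widget
UNIT_PRICE  = 9.0        # to sell one widget

def monthly_profit(units_sold: float) -> float:
    """Profit as a function of the one free variable."""
    return units_sold * (UNIT_PRICE - UNIT_COST) - FIXED_COSTS

# Twiddle the free cell and watch the possible futures come out.
for units in (1_000, 2_000, 3_000, 4_000):
    print(f"{units:>6} units -> profit {monthly_profit(units):>10,.2f}")
```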
It is noteworthy that an individual instance of a spreadsheet itself behaves like a map, simulation, or tool, meaning that an app like Excel is a tool for making tools. It's pretty remarkable that in the 46 years since the inception of the first spreadsheet program, there hasn't been another class of software product—that isn't for programmers, that is—in this category.
When I look at the (IBIS) planning tool I created, it very much is on the order of a tool-making tool, or more specifically, a map-making tool. In contrast to the financial maps afforded by a program like Excel, this thing helps you create qualitative maps depicting entities of interest and the relationships between them. The IBIS framework in particular concerns itself with mapping out the things in the world you want to do something about, what you want to do about them, and why, and is just one slice of that domain.
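To give a feel for the raw material of such a map, here is a minimal sketch of an IBIS-style structure: issues, positions that respond to them, and arguments bearing on those positions. This is my own toy encoding for illustration, not the planning tool's actual data model.

```python
# A toy IBIS-style map: typed nodes connected by typed edges. This is
# an illustrative encoding, not the planning tool's actual data model.

from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str   # "issue", "position", or "argument"
    text: str

@dataclass
class IbisMap:
    nodes: list[Node] = field(default_factory=list)
    edges: list[tuple[Node, str, Node]] = field(default_factory=list)

    def add(self, kind: str, text: str) -> Node:
        node = Node(kind, text)
        self.nodes.append(node)
        return node

    def link(self, src: Node, relation: str, dst: Node) -> None:
        # Relations are first-class: the map records not just that two
        # considerations are connected, but how.
        self.edges.append((src, relation, dst))

m = IbisMap()
issue    = m.add("issue",    "How should we store our knowledge?")
position = m.add("position", "Model it as a graph, not documents.")
argument = m.add("argument", "Documents have poor internal addressing.")
m.link(position, "responds-to", issue)
m.link(argument, "supports",    position)

for src, rel, dst in m.edges:
    print(f"[{src.kind}] {src.text!r} --{rel}--> [{dst.kind}] {dst.text!r}")
```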
The purpose of this sprawling odyssey has been to establish a basis for thinking and talking about the product I'm making—products, rather. The obvious first step is to separate any talk of Intertwingler from whatever goes on top of it, like how Ruby on Rails relates to Basecamp. There might be some audience overlap there, but I'm not going to bank on it. As for the thing I'm calling "the planning tool", I am inclined to de-emphasize the IBIS component, since it really is only one aspect of a broader organizational cartography kit.
That is the product I'm creating: a set of map-making tools. The capability it confers is analogous—albeit qualitatively different—to that of a spreadsheet, insofar as it makes it possible to create a map that represents various aspects of your business—org chart, products, markets, competitors, technology, concepts, content, whatever—and you can use that map to plan, communicate, and comprehend.