I only worked at an agency once, and it was a very long time ago. But until recently I never thought to question the business model: a fixed-bid contract to build a website, app, or whatever. I've circled this topic before, but never with enough conviction to commit to a conclusion. I'm beginning to suspect, though, that software, and more conspicuously the Web, is fundamentally the wrong shape for the archetype of the construction project.

People have been constructing buildings for thousands of years, so it's a pretty familiar metaphor. Business managers love this model because it's easy to understand: a capital asset in trade for a capital asset. They can do ROI calculations and everything. Agencies like it because it's well-defined. The model is so popular that it's used internally in established businesses (through procurement processes), as well as startups (by way of the various flavours of venture finance).

But anybody who has tried this model can tell you that it glosses over enormous risk, which has to be hedged somehow. Despite the construction project's veneration, even building contractors have trouble with it, and they have advantages we lack. One, they have a mile-long spec to work against; two, they have building materials they can arbitrage; three, everybody knows what progress on a building looks like.

But we've heard all these arguments before. They haven't been enough to sway the practice. Even members of the vanguard are apparently not Agile enough to get away from the construction project metaphor, ostensibly because it's the only language many businesspeople understand.

I Like the Whooshing Noise They Make As They Fly By

When you sign the contract for the construction project, you are agreeing to make a Thing—app, website, whatever. And you will have agreed to deliver this Thing on a certain date, also known as a deadline. From this point forward, the goals of shipping the Thing on time and actually solving the client's problem will be in competition with each other.

Now, I understand deadlines. I understand that the plane will take off whether or not I'm on it, or the importance of beating the holiday retail rush, or that the show must go on. It is perfectly clear to me how people use timekeeping technology to coordinate social activity. It's actually quite remarkable when you step back and look at it. But, over the years, I have observed that there is a difference between those examples and the ones around the delivery of Things, which tend to be completely arbitrary. When you wrap an arbitrarily complex endeavour up in a neat launch date, the goal seems to be more about coercing the people beneath you to absorb the overhead of all the details you left out—that or sweating it yourself. As a tool for coordinating human activity, I have come to believe that the Thing-deadline calculus is, considering more sophisticated alternatives, unnecessarily crude.

Even When You Take Into Account Hofstadter's Law

First, it will always take longer than you think. The question is how much. I have seen some amusing estimate-padding formulae in my day. They seldom go so far as to account for the fact that time on the meter is not the same thing as dates on the calendar, which leaves them far from reliable enough to stake a business on.

There is one property common to all those endeavours for which it is possible to produce reliable estimates of cost and time: data. Copious amounts of quantitative, empirical data about how long it takes and how much it costs to carry out a certain task. Which entails that the task itself be well-defined.
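
To make the point concrete, here is a minimal sketch of quoting from data rather than from a padded guess. The task name, the historical figures, and the confidence level are all hypothetical; the technique is just an empirical percentile over past observations:

```python
import statistics

# Hypothetical historical durations (in days) for one well-defined,
# repeatable task, e.g. "build and style a standard marketing page".
history = [3.0, 4.5, 2.5, 6.0, 3.5, 4.0, 8.0, 3.0, 5.0, 4.5]

def quote(durations, confidence=0.9):
    """Return the duration you can expect to hit `confidence` of the
    time, read directly off the sorted historical record."""
    ranked = sorted(durations)
    index = min(len(ranked) - 1, int(confidence * len(ranked)))
    return ranked[index]

print("typical case:", statistics.median(history))  # 4.25
print("90% quote:", quote(history, 0.9))            # 8.0
```

Note the spread: the safe quote is nearly double the typical case, and no amount of padding arithmetic will tell you that without the data.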

This means that in order to meet your targets under the construction model, and thus earn a profit and stay in business, you need to systematize your process. There are two strategies I am aware of for doing this: algorithmic and statistical. The algorithmic strategy is simply prescribing the process from top to bottom, with little attention to what would actually solve the client's problem, as manifested in the Web's infestation of inscrutable restaurant websites. The statistical strategy effectively reduces to bathing the project in money, in the anticipation that valuable results will emerge—which is essentially what the non-trivial scope and design phases present in the higher-end agencies and internal processes are made of.

Though in my experience, wherever the construction model was applied, there was always a pressure to prescribe, no matter how much money there was to play with. It's a simple matter of cost accounting. The more you can keep the project within the boundaries of well-defined processes, the higher the profit margin. And this fundamental incentive is what I mean by software and the Web being the wrong shape for the model.

A Tree is a Graph but a Graph is not a Tree

I have reason to believe, though for brevity's sake I will make the argument elsewhere, that the most complex class of processes and structures we humans can consciously prescribe reduces mathematically to a tree. A tree has a top, bottom, left and right. Its branches fan out from the trunk and they don't intersect with one another. They are discrete, contiguous, identifiable objects which persist across time. Trees are Things.

Software and websites, however, reduce to arbitrarily more complex structures: they are graphs. A graph has no meaningful orientation whatsoever. No sequence, no obvious start or end—at least none that we can intuit. It is better considered not as one Thing, but as a federation of Things, like the brain or a fungus network, or perhaps a composite artifact left behind from an ongoing process, like an ant colony or human city.
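
The distinction is checkable, not just rhetorical: every tree is a graph, but a graph only qualifies as a tree under strict conditions. A minimal sketch, with a hypothetical three-page site map as the example:

```python
def is_tree(nodes, edges):
    """True iff the directed graph is a tree: one parent per non-root
    node, exactly one root, and every node reachable from that root.
    `edges` is a set of (parent, child) pairs."""
    children = [c for _, c in edges]
    if len(children) != len(set(children)):  # some node has two parents
        return False
    roots = set(nodes) - set(children)
    if len(roots) != 1:                      # zero roots implies a cycle
        return False
    reached, frontier = set(), list(roots)
    while frontier:
        n = frontier.pop()
        reached.add(n)
        frontier.extend(c for p, c in edges if p == n and c not in reached)
    return reached == set(nodes)             # unreachable nodes sit on a cycle

# A simple site map is a tree...
print(is_tree({"home", "about", "work"},
              {("home", "about"), ("home", "work")}))   # True
# ...but one cross-link later it is merely a graph.
print(is_tree({"home", "about", "work"},
              {("home", "about"), ("home", "work"),
               ("about", "work")}))                     # False
```

One stray hyperlink, one shared module, one cross-cutting dependency, and the tidy hierarchy is gone. Real sites and real software accumulate those links constantly.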

True, a software class hierarchy is a tree, and the Web was designed on top of a POSIX file system, which is also a tree, but these are two pernicious red herrings. These structures conceal both the myriad interactions between their components and the intensive processes that got them to such a tidy state. It is precisely structures like these that we have in mind when we prescribe features or sections.

I have already outlined a process for extracting trees from graphs—thus identifying shapes that do conform to the construction project—in a hairball of considerations, but there still remains the problem of composing such a graph in the first place.

Pickle Juice in Place of Chocolate Syrup

The most important consideration for any software or web excursion is content: the content of the text and other communicative media, as well as the content of the code that executes the business processes. The ability to tick off a page or piece of functionality as being done only produces a nominally successful result; the careful crafting of what one of these objects says produces a real one. And both resist the concept of done—the single most important criterion for the smooth execution of the construction project—with great intransigence.

Content, by definition, has no substitute. Content is what conveys meaning. Content is data, and any scientist can tell you that if you want a certain data set, you're going to have to be prepared to do whatever it takes to procure it—be that interviewing undergrads or building particle accelerators.

The problem of acquiring content is from the same family of problems as the scavenger hunt. Each new clue is liable to send you back to where you found some other clue before it. If only you had known at the time, you could have just picked it up then. Well, tough luck: Making this process behave predictably requires information from the future.

If you want a mental image of the process of content acquisition, the best one is probably Brownian motion. It's a fractal process that can be found just about anywhere, from bee foraging patterns to eye movements to the way particles of dust float around in the air. If we can't beat Nature, perhaps we should join it.
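
For the curious, the discrete cousin of Brownian motion, the random walk, takes only a few lines to simulate, and its meandering trace is a fair caricature of a content hunt. The step count and the fixed seed here are arbitrary choices for illustration:

```python
import random

def random_walk(steps, seed=0):
    """Trace a 2-D random walk: each step moves one unit in a random
    cardinal direction. Returns the list of visited points."""
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = random_walk(1000)
# The walk covers a lot of ground but makes little net progress:
# after N steps the expected distance from the origin is on the
# order of sqrt(N), not N. Plans assume N; reality delivers sqrt(N).
print(len(path) - 1, path[-1])
```

That square-root relationship is the mathematical version of the scavenger hunt above: lots of motion, frequent backtracking, and no way to shortcut it from a schedule.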

We may not be able to predict the future, but we can take advantage of the assets around us at any given time. Wherever your mission brings you, there will almost certainly be something worth picking up or exploring. The construction project, however, treats this as a liability. Once again, a preoccupation with staying on-task, otherwise known as cost accounting, will subvert any opportunity to create serendipitous value. If, however, you make content generation an ongoing, meandering process, rather than part of a discrete project, you will soon find yourself with material you neither knew you wanted, nor dreamed of asking for.

Doneness is Overrated

Done-ishness is the new jam. Done is enormously important for the world of Things. Done-ish is sufficient for a medium that affords instantaneous, worldwide updates to federations of Things. Done-ish makes much more room for subtlety, complexity and nuance. Done-ish enables the continual improvement of value that the Agile people evangelize about. But I go one further and submit that done-ish entails an entirely different kind of relationship than what a client would have with a building contractor.

The contractor needs done to get paid. The sooner the contractor can prove done, the more profit they earn. There is no incentive to explore; indeed a strong disincentive to deviate from the plan. There is a hazard of legal hair-splitting: that which isn't perceptible as done to the client must be discharged in the fine print, or eaten by the contractor. Change orders may abound, and are a serious money-maker, but the contractor's behaviour will always be biased toward getting clear of their obligation to their client.

A hypothetical done-ish agent would only need one directive: use your abilities to make awesome stuff every day and we'll see where that goes. The challenge is getting the incentives right.

The goal of done-ish is the intersection of internal consistency and understandability, which together can be understood as conceptual integrity. When an object (or a piece of one) is internally consistent, it means it doesn't fall apart, either physically or logically, wherever either is applicable. Internal consistency goes a long way toward understandability if you make all the parts visible and make it obvious what they're for.

Internal Consistency + Understandability = Conceptual Integrity

The idea behind the focus on internal consistency is the principle that a random component that works—today—is more valuable than a specific one that doesn't. The idea behind the focus on understandability is that the progenitor of one component or another may not be available to work on it in the future. Not only does this performance criterion facilitate the division of labour, but it also eliminates the risky and potentially contentious mutual dependency between the client and the agent.

It is obviously impossible to negotiate a project fee for results that have yet to be imagined. It can likewise get very expensive for a client if their agent bills hours to follow self-directed hunches, and negotiating permission to spend on every new turn reduces to the same general problem as negotiating a project fee. Realistically, though, such an agent needs no more compensation than the range between what covers their expenses and the market demand for their attention, billable as a flat monthly fee, with no obligation to continue. The client's costs are thus capped, and their risk exposure is limited to a single installment.

The final consideration is that of motivating these agents to produce their best material. The solution is simple enough: as part of their fee, they get to keep what they come up with.

The intellectual property generated in the process of acquiring software and websites tends to decompose into a specific part, which is of direct strategic business interest to a client organization, and a generic part, which they certainly need access to, but which it is not essential for them to own. The generic part does have strategic value to the agent, however, because it makes them better at what they do. If a client and agent can negotiate an agreeable mix between cash and intellectual property rights, this model may indeed prove to be a viable alternative to the construction project for acquiring complex new assets.

Countdown to Launch

A launch is an exciting PR event and a great excuse to throw a party. But for software and especially websites, it is important to understand that a launch is a near-complete simulacrum: Very little truly gets launched.

When we launch something, like a ship, rocket or even a book, we completely lose our influence over it. It's gone—we can't get it back. But we have our hooks firmly sunk into virtually every aspect of a networked, digital artifact. Except one small but significant detail: the relationship our users have with whatever it is we delete in order to replace with whatever it is we're launching.

The biggest mistake I've witnessed so many times around launching is leverage—the selling downstream of results that don't exist yet. The bet is, naturally, that the results will be ready when you need them. Often enough, despite a valiant, weeks-long, round-the-clock, all-hands work binge, they aren't. But there's no reason, at least theoretically, that all updates must be tied to a launch, nor is it essential that launches be leveraged.

User-visible changes can be grouped into two rough categories: appearance and behaviour. I say rough because they are more like two facets of the same thing: the propensity for an individual either to exhibit situation awareness or to fumble around in a daze. The purpose of splitting up the phenomena is to narrow down where to go looking for them. Changing appearance is like coming home to discover that somebody has rearranged your living room; changing behaviour is like having all the one-way streets randomly inverted along your route back there.

About the only change of appearance or behaviour that genuinely calls for a launch is one that replaces old stuff. That way, the fanfare enables people to brace for any jarring changes. Non-destructive updates, by contrast, can happen any time. Even if some updates are intended to supplant others, it's often possible, if not useful, to deploy more than one at a time. Moreover, on a sophisticated enough system, updates to appearance and behaviour can often move independently of one another. The gist of this is that by the time you arrange the caterer and notify the investors, the new stuff you have to show them can actually be stuff that's been up and working for a while.
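
One common mechanism for this kind of decoupling is the feature flag: code ships whenever it is ready but stays dormant until a switch turns it on, for a trial cohort or for everybody. A minimal sketch, with made-up flag names and rollout percentages:

```python
import hashlib

# Deployed-but-dormant features, each with a rollout percentage.
# 0 means dark, 100 means fully launched; anything in between is
# an opt-in-style trial cohort.
FLAGS = {
    "new-navigation": 100,  # appearance change, live for everyone
    "new-checkout": 10,     # behaviour change, 10% trial cohort
}

def is_enabled(flag, user_id):
    """Deterministically bucket a user into or out of a rollout, so
    the same user always sees the same experience between changes."""
    rollout = FLAGS.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout

print(is_enabled("new-navigation", "alice"))  # True: rolled out to all
print(is_enabled("retired-flag", "alice"))    # False: unknown flags stay off
```

Under a scheme like this, "launch" is nothing more than editing a number from 10 to 100, long after the code itself went up and started working.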

Hey, this incremental update strategy seems to work for all the big socialnets. They even get extra fancy with an opt-in trial period before shunting everybody over. Why can't it work that way for everybody?

Okay wrap it up, I want to go to sleep

Software, and by induction, the Web, are fundamentally new media with fundamentally different properties than their predecessors. These media exhibit different constraints and affordances around what can be done with them, as well as how we can work with them. The construction project, conceived in antiquity and refined through to the late industrial age, imposes an upper bound on the harnessing of the complexity inherent in digital media, while at the same time succumbing to the hazards of that very same complexity. There are better methods, and the advantages will go to the people who master them.