I woke up this morning thinking about schedules and forecasting. I was in a meeting yesterday with a board of directors, one of whom was emphatically concerned about schedules. I understand where he's coming from, as he's in an industry that runs on very strict production schedules, and for good reason. I, however, am not.

I've given considerable thought throughout my career to the problem of resource management as it pertains to the development of software, and I believe my conclusions are generalizable to all forms of work that are dominated by the gathering, concentration, and representation of information, rather than the transportation and arrangement of physical stuff. This includes creative work like writing a novel, painting a picture, or crafting a brand or marketing message. Work like this is heavy on design or problem-solving, with negligible physical implementation overhead. Stuff-based work, by contrast, has copious examples in mature industries like construction, manufacturing, resource extraction, and logistics.

The answer I've come up with, for creative, information-centric work, is to kill the prescriptive development schedule.

If you care more about delivering valuable results than about the order in which they arrive, it is possible to have them fast, cheap, and good.

My suspicion about why stuff-based work can even be put on a predictable schedule is because of the century-old project of Frederick Taylor—no relation—and his ilk, to drive a wedge between the capricious craft of design and the rote process of implementation. I further submit that the capital cost of materials and equipment itself plays a significant role: standardized commodities plus standardized labour equals a predictable schedule, which you can expand to an industrial scale. So much money goes into the implementation that it's easy to forget the cost—and time—of design.

Moreover, screwing up the design of a building, bridge, or vehicle means at best you lose money, and at worst people die. As Frank Lloyd Wright said: You can use an eraser on the drafting table or a sledgehammer on the construction site. Movie scripts and pre-production provide another good example: a drop in the bucket compared to the total cost of the project, but potentially years in development compared to weeks of shooting. They don't dare rush the process because of the huge amount of money at stake.

What Makes Predictable Processes Predictable?

A predictable process has been worked out to the point that it has no remaining endogenous sources of uncertainty: the environment, the inputs, the procedure, and the capacity of the people carrying it out are all known quantities.

Of course, creating these conditions doesn't rule out exogenous sources of uncertainty, leading to excuses like it would have all gone according to plan if not for that event we didn't anticipate, kind of like how economists spoke after the last financial crisis.

Taylor's service to his clients was more or less a kind of algorithm analysis and optimization, but with real people in physical space. He optimized workspaces by arranging tools and materials, and timed even the tiniest actions with a stopwatch. Taylor's colleagues Frank and Lillian Gilbreth, other early adopters of scientific management, used wall-sized grid paper and long-exposure photography with light markers to trace the physical actions people took at their stations. Not only could these analysts trim out wasted effort, but this information also gave them a reference figure for how long a given job should take. Fine-grained detail about the job description would be baked into the work environment, reducing training time, and could be used for insight into what it costs to replace a worker.

If you can reduce a process to an algorithm, then you can make extremely accurate predictions about the performance of that algorithm. Considerably more difficult, however, is defining an algorithm for defining algorithms. Sure, every real-world process has well-defined parts, and those can indeed be subjected to this kind of treatment. There is still, however, that unknown factor that makes problem-solving processes unpredictable. It would be interesting to know how well Frederick Taylor stuck to his own schedules throughout his consulting practice.


I was considering the possibility that only algorithmic processes are amenable to forecasting, but then I remembered hairdressers. I've known a few over the years, and I've always been envious of the concreteness of their professional results, and especially the astonishing regularity with which they deliver them.

For better or worse, a reasonably-practiced hairdresser can expect, with extreme confidence, to take a client from start to finish in just under an hour. What's more, if you were to plot the performance of a bunch of hairdressers, you'd likely see it cluster in a normal distribution, because of the physical constraints of cutting a head of hair: it can't take less than a certain amount of time, and the cases in which it takes much more are rare, with an implicit upper bound. In other words, you'll never see a haircut that takes anywhere near as long as it takes the hair to grow.

Even though it's a deeply complex stochastic process that entails hundreds, if not thousands of detailed, path-dependent design decisions, we can treat a haircut as atomic, because of the statistical properties derived from its physical manifestation, backed up by millennia worth of empirical data.
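
That clustering claim can be sketched numerically. The 49-minute mean and three-minute spread below are assumptions chosen for illustration, not measurements:

```python
import random
import statistics

# Hypothetical figures: cuts cluster around 49 minutes, with a
# spread of about 3 minutes. Both numbers are assumed.
random.seed(1)
times = [random.gauss(49, 3) for _ in range(10_000)]

mean = statistics.mean(times)
sd = statistics.stdev(times)

# The tail is tightly bounded: virtually no appointment spills
# past the hour-long booking slot.
overruns = sum(t > 60 for t in times) / len(times)
print(f"mean={mean:.0f} sd={sd:.1f} overruns={overruns:.4%}")
```

Under these assumptions, the fraction of appointments that blow past the hour slot is a rounding error, which is exactly why the hairdresser can book on the hour with a clear conscience.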

Regarding optimization: suppose a hairdresser finds that her performance time clusters almost perfectly around, say, 49 minutes, so scheduling on the hour means 11 minutes is wasted. It still doesn't make sense to try to maximize productivity by scheduling appointments every 49 minutes. For one, the hour-on-the-hour schedule acts as an interface with clients, and for another, a haircut is a discrete thing, not a fluid quantity. Making this optimization might enable her to squeeze one more client into a day, but would the additional money be worth the extra chaos?

I Have Been Thinking About This For a While

Back in 2006, I came up with two project management techniques. The first was the establishment of a new basic unit of time accounting: the four-hour cell. An hour is too short to do anything meaningful, and a day is too spongy: for instance, does it mean a standard eight-hour work day, or a 16-hour bender? What about breaks for eating, sleeping and the like? Four hours, however, is the Goldilocks zone for creative, problem-solving work. I still use the cell resolutely as the basis for all my project planning.

My design for the four-hour cell initially imagined three hours of work which could wiggle around inside a four-hour allocation, to account for short breaks and minor distractions. Again, much like the hairdresser, the value of the result is not improved much, and even potentially harmed, by trying to push productivity to the limit.

The other technique I came up with was something I called the behaviour sheet, which was an extremely accurate method for estimating the time it would take to produce a small piece of software. I took my inspiration mainly from Donald Knuth's literate programming, which is a technique of weaving expository and argumentative prose together with imperative, executable code, but without the use of Knuth's special equipment.

Knuth's idea was to organize software programs, as they were being written, in a way that was principally amenable to human understanding, with room for arguments about why things ought to be the way they were—a feature programming languages in general do not provide. His claim, which I corroborate, is that literate programming affords a superior level of clarity for working out the problems presented during the software development process. He wrote WEB to do literate programming in Pascal (which nobody uses anymore), followed by CWEB for C (which nobody uses anymore without a good reason).

These tools take literate programs as their input, and produce two documents: a nicely-formatted prose document serving a function much like my behaviour sheets, and source code rearranged internally in the way most appropriate for the computer. Few subsequent languages have followed suit, a notable exception being Haskell, which has two literate modes: light and heavy. In my experience, it's not as sophisticated as Knuth's solution, in particular because you have to manually track references to symbols in the code which also appear in the prose, and you can only segment the code regions by whole subroutine, rather than by arbitrary sequences of statements. It also doesn't rearrange the source code, meaning the source document has to be structured for computers rather than for people. These missing capabilities are present in Knuth's—much older—tools.

Producing a behaviour sheet is a simple analytical process: you sit down in front of an outliner and write out bullet points containing all the specific ways you do and do not want a particular piece of software to behave. You keep going until you've expressed enough detail to confidently parcel up the bullet points and assign them to cells.
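
The parceling step can be sketched mechanically. The behaviours and minute-counts below are invented for illustration; the real technique used an outliner, not code:

```python
# Hypothetical behaviour-sheet entries with rough effort guesses,
# in minutes. Both the items and the numbers are made up.
behaviours = [
    ("rejects a malformed date", 45),
    ("saves a draft on every keystroke", 90),
    ("warns before discarding unsaved work", 60),
    ("exports the record as CSV", 120),
    ("refuses duplicate identifiers", 75),
]

CELL = 240  # one four-hour cell, in minutes

# Greedy parceling: fill the current cell until the next behaviour
# no longer fits, then open a new one.
cells, current, used = [], [], 0
for name, minutes in behaviours:
    if used + minutes > CELL:
        cells.append(current)
        current, used = [], 0
    current.append(name)
    used += minutes
cells.append(current)

for i, cell in enumerate(cells, 1):
    print(f"cell {i}: {cell}")
```

The hard part, of course, is never the parceling; it's producing effort guesses you can trust, which is precisely where the technique falls down.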

I don't use this technique anymore, for one glaring reason: It takes almost as long to do the estimate as it does to do the job. What is the point of knowing that the work will be done by Friday afternoon if it takes until Wednesday morning to figure it out? I didn't even bother trying it with bigger deliverables, because I would clearly need a technique for estimating the estimate, and one for estimating the estimate for the estimate, and so on.

There are good reasons for doing something like a behaviour sheet, however, even if it's futile as a tool for predictive analysis. It smokes out the details that would otherwise pop up as surprises during the job and cause it to take longer. I still do that; I just do it, for the time being, in commentary and in-line documentation.

Finding the Operational Equivalent of a Haircut

Okay, so we can black-box even complex problem-solving processes and assign them to fixed time periods, provided that:

  1. the environment is controlled,
  2. the worker's carrying capacity is known,
  3. the inputs are accounted for, and
  4. the output is defined as a consistent state.

I type symbols into a computer for a living. The results of this activity are what my clients care about. They don't recognize any value created unless those symbols are arranged in the correct order. This could be code, or it could be prose, or it could be a diagram, or any combination thereof. I'll focus on code, though, because it has a direct relation to dollar value.

Environment

The environment is straightforward enough, but essential: a comfortable room free of distractions, a comfortable chair, a serviceable computer. It doesn't even have to be a very new or fast one. The toolchain on the computer also has to be in good working order, but toolchains tend to be pretty mature and don't break once set up, unless you break them.

Capacity

Getting a handle on capacity is more about ruling out what can't happen than setting a baseline for what should. I know, from measuring before, that my maximum carrying capacity for one day of writing code is about 1500 statements. A statement is equivalent to a sentence: it encapsulates a complete thought, one complete instruction to the computer. You can say quite a bit in 1500 statements, especially with modern languages that take care of the housekeeping for you, as well as with the use of time-saving, third-party frameworks. There are at least a few billion-dollar corporations out there that got up and running on maybe 10,000 to 50,000 statements.

Of course, those codebases are all several orders of magnitude larger now.

That figure, though, represents going on a rip: a ten- or twelve-hour day of being in a solid groove. This makes the upper bound for a four-hour cell about 500 statements. Still, you can be quite expressive in 500 statements, and often you don't need to go anywhere near that to produce something of value. A subroutine, analogous to a paragraph, is usually pretty unwieldy by the time it hits a hundred statements, and there are archaic style norms proclaiming that subroutines probably shouldn't exceed 25. The most we can expect from a four-hour cell, therefore, is a handful of interrelated subroutines.
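
Spelled out as arithmetic (these rates are my own measurements, not universal constants):

```python
# Figures from the text: ~1500 statements in a heroic 12-hour day,
# which scales down to an upper bound per four-hour cell.
MAX_DAY_STATEMENTS = 1500
HEROIC_DAY_HOURS = 12
CELL_HOURS = 4

cell_bound = MAX_DAY_STATEMENTS * CELL_HOURS // HEROIC_DAY_HOURS
print(cell_bound)  # 500

# A subroutine is unwieldy by ~100 statements, so one cell yields
# at most a handful of interrelated subroutines.
subroutines_per_cell = cell_bound // 100
print(subroutines_per_cell)  # 5
```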

The classic unit of software project management is the feature, because in many contexts it corresponds almost perfectly to one (potentially very large, composite) subroutine. On the other side, a feature corresponds, less perfectly, to a user goal—or at least a user task—and therefore is also something to advertise on the box. A prospective customer can translate has feature X into I can get something useful done. I don't like project management by feature, because it doesn't say anything about how the feature ought to behave, and coupled with arbitrary milestones, creates incentives to make perfunctory garbage. That's why the behaviour sheet technique was about behaviour that happens while the software is running, not some feature you can point to in the code.

Inputs

Subroutines are composed upward from other subroutines, some of which you write yourself; others you reference, just as you would cite another person's book or article. This third-party material counts as an input. All software has flaws, and those flaws tend to remain hidden like land mines until you step on them. The interface, and even the behaviour, of third-party software can also change deliberately from revision to revision. When you use third-party code—which you can't reasonably avoid, nor would you want to—it really makes sense to audit it for general fitness, even make some little disposable thing with it to see what it's like to work with. Nobody does that in real life, though, because of the counterintuitive nature of going off-task, but they should. Then again, if I had to keep close tabs on all the third, fourth, and Nth parties I use in my code, I'd be doing little else.

I spend a non-trivial percentage of my time fixing bugs in other people's code. I'm really good at finding them, because I tend to use things the way it looks like they ought to work, in ways the author didn't anticipate. At least with open-source software I can fix those bugs, and I'm not dependent on what is, for each third-party author, a natural monopoly.

The other input issue is just knowing up front what to write—knowing what to say and how to say it. When I blast out 500 statements in one sitting, the chances are I have a pretty good handle on both. Just as in writing, drawing, painting, composing music, whatever, this is the exception, not the rule. It may be possible to set up the conditions such that this level of prolificacy is more likely, but that has to be built up over a long period for each project. Just like all those other formats, the easiest thing to write is a rewrite of something that has already been written.

Go back to the hairdresser: the time it takes to complete a client exceeds one standard deviation from the mean rarely enough not to have to care about it. Two standard deviations is unheard of. By contrast, a programmer ought to be surprised if everything does go according to plan. I've experienced only a handful of days like that in my 20-year career. The time it takes to realize a specific, prescribed result can vary by several orders of magnitude. That's why I've argued a hundred times: do not prescribe the schedule. Let the opportunity to get something done dictate your priorities and the order in which you complete the work. The highest-priority task is the one you can complete right now.

The typical push-back to this idea goes something like this:

  1. But Specific Thing X is what we're paying for, or
  2. But we have to get the product out in time for Christmas, or
  3. But our competitors have X and we have to have it too.

For each of which there is a response:

  1. If Specific Thing X is really that specific, then there shouldn't be a problem implementing it. If there is a problem implementing it, then there are issues you haven't accounted for.
  2. I understand you want to make the Christmas rush, but how much do you want to gamble on blowing the Christmas rush in order to try to make the Christmas rush?
  3. Think twice about racing to be second-to-market. There are lots of examples of products—the iPod in particular—that waited patiently to pick the bones of those who were first to market.

These scenarios depict management-manufactured emergencies. Writing software is almost never an appropriate thing to do in an emergency. In an emergency, you can only do emergency things. You can only expect emergency outcomes: usually destruction in one form to save something else, like wrecking a house with water in order to put out a fire. Your options in an emergency are usually simple and drastic. You can often fix existing software in a simple and drastic way without incurring too much collateral damage, but good luck trying to make new software in the context of an emergency.

Output

The final consideration is the output: a consistent state. The result has to be an identifiable being, not a half-done pile of stuff. I don't think people, even in the industry, fully realize the importance of closing out a block of time in a consistent state. If you exit with an inconsistent state, it's almost like not having done anything at all. When you get back to your work, say, the next day, you tend to have to spend a good chunk of your time cleaning up the half-done mess before you can start on anything new.

To increase the probability that a 4-hour cell produces a consistent state, we have to make that the objective of the time investment: do whatever produces a consistent state. If you do that, you'll progress as fast as physically possible.

What that means, though, is going ostensibly off-task. It could mean working on another problem. It could mean doodling on a piece of paper. It could mean going off into the weeds, possibly literally. The only criterion is that you produce some kind of artifact, some kind of receipt of your cogitation, no matter how insignificant. That way we know you actually did something, and weren't just screwing around.

Even better: embed recording capabilities into your work environment and have it collate your output automatically. Make a system which generates the narrative of what you did for that four hours, and how it connects to the whole.

Establishing a Language Gradient

People like to pretend they're building when they're writing software. Okay, that's not really what we're doing, not even close. I've argued in the past that as a central operating metaphor, building is definitely erroneous and probably harmful. What we're actually doing is a lot more like the work of Frederick Taylor, in miniature: we're taking fuzzy, ill-defined processes and sharpening them up into concrete algorithms. Bit by bit we splice detail into our conceptual model of the process under examination, each attempt making it just a little bit clearer, until the description is so precise that it can be executed by machine.

Software is an artifact of language: a formal, logico-mathematical description of a real-world process. I strongly assert that we have much to gain if we discard the cutesy metaphors and treat the problem of writing software at every step as one of linguistic expression.

Revisiting the Behaviour Sheet

While behaviour sheets were pointless as an up-front estimation tactic, they still provided valuable direction as a last step before committing a process to code. What made behaviour sheets effective was a wealth of information that already existed about the code they were meant to map out. Their effective range was filling in the following blank:

That's assuming a lot. Consider everything that has to be already established:

Extending in Both Directions

The behaviour sheet considers fine-grained details about the desired behaviour of the code, but not fine-grained enough to actually be code. It's precisely one step in the direction toward humanity, and away from the computer. We can imagine more steps that form an unbroken path, starting from broad strokes and moving inward to greater detail:

Business ecosystem
The ecosystem is a map of the business entities, the constituents and relevant third parties in the neighbourhood of the client organization, plus the interactions between them. This is our setting. Its purpose is to identify the interactions which may be useful to have mediated by computer. The ecosystem should represent all relevant constituencies, such that if something isn't on the map, it doesn't matter to the organization. While other representations of the ecosystem are surely useful, it should be possible to render it as a one-page document that can go on a presentation slide or be printed as a poster.
Users
Archetypal users are derived from the business ecosystem, whether in persona form or some effective alternative. Like characters in a story, they exist for us to empathize with. These users are the primary focus of any technical intervention, even if the technology employed is centuries old.
Goals
Each user will have one or more goals, which are real-world objectives that produce for them some kind of valuable material result. These are like the plot. We can already visualize here a small number of constituencies in the ecosystem yielding a larger number of prospective users, and an even larger number of goals.
Tasks
A user goal is not directly actionable, so goals must be matched up to tasks, which are abstract sketches of processes we can target for intervention. A common method of expressing a task of this kind is a prose and/or storyboarded scenario. It is worth noting that many different tasks can potentially achieve the same goal, so there may be more than one candidate task up for selection. This further expands our conceptual web. However, a given task may address more than one goal, so it is here that we can start to see some convergence.
Fine-grained, or meta-tasks
These are aspects of a task, or distinct subprocesses that the machine must undertake in order to facilitate the more abstract, real-world tasks that have already been identified. Here we are likely to see considerably more convergence, such as the adoption of a database or a particular data format or network protocol. The industry, however, is astonishingly cavalier when it comes to committing to specific technologies, and especially specific technology products, even though the prudent thing to do is to put off those decisions for as long as you can get away with.
Behaviours
With all the aforementioned information in place, we can start to map out the specific behaviours of those aspects of the selected user task that are handled by the machine, including anticipating error conditions and failure modes which may arise, and specifying how to handle them. Like many of the descriptions at the previous levels of granularity, common behaviours can be catalogued into—and referenced from—a style guide.
Source Code
The code is what does the work and therefore what earns the money: the aforementioned stack isn't worth much without it. But possessing that stack is going to make the code that does get written come together a lot faster and more predictably, with fewer defects and a considerably more humane tenor.

I'm interested in a gradient like this because of two more refinements, which are already performed by the computer after humans have given all the direct consideration they can:

  1. The computer takes human-written source code and unwinds it into optimized assembly language. Assembly was also once written directly by humans, and occasionally still is, in the relatively rare cases where it is appropriate.
  2. The computer takes assembly language and translates it further into directly-executable machine code. The modern convention is often to skip this step, either by compiling source directly into hardware-independent bytecode which is then executed by a virtual machine, or by running the code directly via an interpreter.

In the beginning, in the 1940s, people—usually women—inscribed decimal, then later, binary instruction codes into physical media: switches on the machine itself, or punch cards. Within a decade, through the pioneering work of Grace Hopper—which was met with considerable political resistance at the time—computers began to be used to translate symbolic, marginally human-understandable instructions into opaque machine code. From there, the computer ended up absorbing more and more of the job, from memory management to shorthand for common data structures. It only follows that the role of the computer can be extended even farther up the language gradient to support the human reasoning that needs to take place in order to produce code that actually solves the real-world problems of living, breathing human beings.

The artifacts in the language gradient also exhibit a gradient in temporal sensitivity. The business ecosystem, for instance, would only change in the event of a major real-world change in the organization. Source code, and especially the object code (or whatever analogue the compilation process produces), is virtually disposable, despite being the stuff that's worth the money—provided the precursors remain to regenerate it.

This may sound like heresy, but remember: the easiest thing to write is a rewrite of something that has already been written. Jeez, just look at Hollywood.

Language from one level of specificity is encouraged to find its way into its neighbours and beyond. It may be necessary for more-specific concepts to bubble up, and broader language is almost certain to permeate downward into the more detailed levels. This way, conceptual structures which are understandable to non-specialists make it into the implementation intact.

This gradient is not necessarily a prescription of sequence. Just as you might come up with a really witty sentence and compose an entire essay around it, you could just as easily bash out a little piece of code and grow an entire app around it. This construct is meant to reconcile that little nugget of genius with a coherent whole.

Conceptual Integrity and Conceptual Gaps

The biggest endogenous impediment to the development of coherent software is easily the presence of a gap between one level of specificity and another. Let's imagine that each link of the chain of specificity that I laid out above was in different languages altogether: English, French, Arabic, Mandarin, and Japanese. We would need translators to bring the relevant concepts from one link to the next. Translators, naturally, have to be fluent in at least the two languages they're dealing with. It's easy enough to find somebody who speaks English and French, and French and Arabic, even Mandarin and Japanese. But how about somebody who speaks Arabic and Mandarin?

After writing that paragraph I realized you'd probably have decent luck finding one in Malaysia or Indonesia. Whatever, you get the point.

Point being, there are different concerns at different levels. Clearly, there are business concerns, such as earning a profit or achieving some specific sociopolitical objective, and there are technical concerns about the feasibility of such objectives. Finding businesspeople who can stomach technical minutiae is even harder than finding technologists who understand business. And this, of course, leaves out entirely any concern for the user.

The user was ostensibly once the domain of the technologist, if you look at the writings of Fred Brooks or Barry Boehm from the 70s and 80s. But software development became more ambitious and its practitioners became more specialized, separating that knowledge into different departments. User experience design materialized from a number of different sources to mediate between business and technical concerns and champion the user. I submit, however, that UX is getting specialized to the point that we may need to consider another split.

In order to get coherent software, the following translations must take place:

  1. from the business ecosystem to its users and their goals,
  2. from user goals to candidate tasks,
  3. from tasks to fine-grained behaviours, and
  4. from behaviours to source code.

This means you need somebody in charge who understands the languages on both sides of each translation. It's feasible to have one polyglot do the entire thing, but good luck finding them.

Note as well that there is feedback in this process: a newly-discovered constraint, or indeed a new capability, can bubble up and alter user tasks and goals, even business goals.

Because I can't not mention conceptual integrity

Conceptual integrity is when a system of any kind reflects a unified mental model which is shared among everybody that interacts with that system: business, designers, developers, and users. This means that the germ of the conceptual structure has to be simple enough to fit inside the mind of a single person. It also means that it has to be conceived by a single mind, because a half-formed concept is by definition incommunicable.

Again, Brooks wrote that it is possible in some cases that conceptual integrity can be achieved by two people, but only if the pair is very close, for example the Wright brothers, Gilbert and Sullivan, or Charles and Ray Eames. He maintains that the surest way to prevent the gelling of conceptual integrity is anything reminiscent of design by committee.

Pediatrician and systems theorist John Gall wrote that [a] complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. Simple, both as used by physicists, mathematicians and systems theorists to mean few interacting parts, and in the colloquial sense to mean familiar to the person considering it.

Designing a system prescriptively from the top down is how you get gaps in translation, which leads to garbage, presuming you can even finish it. Designing it from the bottom up is how you get a morass of incoherent parts. There has to be a way to move incrementally from a simple system to a complex one, while simultaneously considering the whole and the details. My suggestion here is to drop the notion of hierarchical orientation, and consider it a network-topological problem of getting from here to there.

Getting from Here to There

The goal here is to muster up enough clarity that a developer can sit down for four hours and produce up to 500 statements which a) work and b) do something useful, and then do it again, and again, and again. This has two implications:

  1. If you do not possess this clarity, there is no material to task a developer with.
  2. If you do possess this clarity, there is no reason to wait for the rest of the system to be designed.

Here is here: where we are right now and everything that we know. There is somewhere else, on the other side of some achievement or other. As those achievements are realized, there becomes the new here. As such, there will always be a here, and there will always be a there.

Finding our way from here to there, for all but the most trivial undertaking, is difficult without the use of a map. It's just as easy to get lost in the details as it is to miss an otherwise obvious next step. To quote Herbert Simon: Solving a problem simply means representing it so as to make the solution transparent. Simply, indeed. We need a way to take the overhead out of representing and re-representing the information we currently have, so we can see clearly what action we need to take next.

Oddly enough, the world came close to a solution to this very problem in the 1960s: Christopher Alexander's hierarchical decomposition method for computer-assisted architecture and industrial design, Douglas Engelbart's NLS, Ted Nelson's Xanadu and ZigZag, and Werner Kunz and Horst Rittel's issue-based information system. What these all have in common is an ability to both address and rearrange small, discrete pieces of information to glean structure and insight that was previously buried. What they also have in common is that few people talk about them or try to carry forward their visions. Every once in a while somebody tries, but doesn't seem to get much farther—certainly never as far as mainstream adoption.

A Simple System that Works

While it may be foolhardy of me to try where so many others have failed, I am not trying to Change the World™. I am not—at least yet—angling for mass adoption. I'm only interested in making a simple system that works.

While I am certainly inspired by Alexander, Engelbart and Nelson, the most actionable of the group is easily Kunz and Rittel's IBIS—which I'll explain in a moment. This is a system which, if you can believe it, was carried out on typed index cards. It was later digitized in the 80s by Jeff Conklin with gIBIS, meant to run on a local network of Sun workstations, and later as a Java-based desktop app from his (emeritus) Compendium Institute. There are even a few Web-based versions. If you don't understand why these haven't caught on, just try using one.

IBIS is a method of structured argumentation, in which stakeholders post issues, much like you would in a software bug tracker, but also respond to those issues with positions about how to solve them, and back those positions up with arguments why they should or should not be taken—something that also happens informally in the comment feed of a bug tracker. Issues, positions, and arguments are all first-class objects in this system, and they are related to one another through a constrained set of possible rhetorical manœuvres.
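To make the structure concrete, here's a minimal sketch in Python of the three element types and a constrained link vocabulary. The relation names and the shape of the data are my own illustration, not the actual IBIS specification:

```python
from dataclasses import dataclass, field

# The three first-class IBIS element types.
ISSUE, POSITION, ARGUMENT = "issue", "position", "argument"

# A constrained set of rhetorical moves: which element types may be
# linked to which, and by what relation. These names are illustrative.
ALLOWED_LINKS = {
    (POSITION, "responds-to", ISSUE),
    (ARGUMENT, "supports", POSITION),
    (ARGUMENT, "opposes", POSITION),
    (ISSUE, "questions", POSITION),
    (ISSUE, "questions", ARGUMENT),
    (ISSUE, "suggests", ISSUE),
}

@dataclass
class Element:
    kind: str                 # one of ISSUE, POSITION, ARGUMENT
    text: str                 # meant to be a single sentence
    links: list = field(default_factory=list)

def connect(src: Element, relation: str, dst: Element) -> None:
    """Refuse any link that isn't a sanctioned rhetorical move."""
    if (src.kind, relation, dst.kind) not in ALLOWED_LINKS:
        raise ValueError(f"{src.kind} may not '{relation}' {dst.kind}")
    src.links.append((relation, dst))

issue = Element(ISSUE, "How should development be scheduled?")
pos = Element(POSITION, "Generate the schedule from available tasks.")
arg = Element(ARGUMENT, "Prescribed sequences get invalidated by new information.")
connect(pos, "responds-to", issue)
connect(arg, "supports", pos)
```

The point of the constraint is that an illegal rhetorical move is rejected outright, which is exactly what the informal comment feed of a bug tracker can't do.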

To those of you who would ask, why not just use a bug tracker? I have, and people do, but I find them inadequate. First, they're really only well-suited to developers working on specific problems with specific software that already exists, not ephemeral issues that non-developers might raise. Second, they don't treat positions and arguments as first-class objects, so there's no way to differentiate them from the other entities in the system. I could see an IBIS tool being appropriated as a bug tracker, but not the other way around.

My main criticism of these systems is that the paper-based implementation is obviously too cumbersome for people used to electronic speed and availability, and the electronic versions all behave like they're in a bubble. In other words: great, you've hashed out a bunch of design decisions. Now you have to wrench them out of the app.


One requirement that was clear at the outset was that a system of this kind has to be able to mingle with other entities, like documents and heterogeneous data structures. Earlier systems provided for attachments, but those attachments were either embedded into the application data or linked on a computer's local file system. This gave rise to problems with keeping attachments synced, moving or losing them, and making sure everybody who needed them could access them. There is also tremendous utility in having argumentation networks dovetail across different business entities. The obvious solution here is to put everything on the Web.

Data Structure and Semantics

It was also important that this system wasn't tied to any particular vendor. Any gains in efficiency from using a system like this are instantly lost if you have to translate between two incompatible systems by hand. People have their preferences for specific products, and it's essential that those products speak the same language. As such, this project started as a data specification, in particular a linked data specification, which makes it both open and flexible.

Clustering and Abridgement

Herbert Simon noted that a system can contain subsystems which are opaque from the outside, and which communicate through specified interfaces. Christopher Alexander made the wise observation that it's foolish to try to prescribe the composition of a system, and instead you should let the system show you its own anatomy. The system I'm referring to here is the system of issues, positions and arguments, rather than the software system for working with them. These are nodes in a mathematical object called a graph, and the links between them represent information-sharing relations. Absence of a link between two nodes means they have nothing significant to do with each other. Using this knowledge, it's possible to perform a mathematical analysis which cuts across the fewest links, splitting what would otherwise be a hairball up into tidy clusters which can then be appropriately labeled, and their details hidden from anybody who isn't concerned with them.
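A toy illustration of the clustering idea, assuming nothing about the real analysis: brute-force over every bipartition of a small graph and keep the cut that severs the fewest links. A real corpus would need a proper community-detection algorithm, but the principle is the same:

```python
from itertools import combinations

# Hypothetical toy graph: two tight clusters joined by a single link.
edges = {("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
         ("x", "y"), ("y", "z"), ("x", "z"),   # cluster 2
         ("c", "x")}                           # the lone bridge
nodes = sorted({n for e in edges for n in e})

def cut_size(group) -> int:
    """Count the links crossing between `group` and everything else."""
    return sum((u in group) != (v in group) for u, v in edges)

# Brute force: try every non-trivial bipartition and keep the one that
# cuts the fewest links. Only feasible at toy sizes.
best = min(
    (frozenset(c) for r in range(1, len(nodes))
     for c in combinations(nodes, r)),
    key=cut_size,
)
print(sorted(best), "| links cut:", cut_size(best))
```

The minimal cut falls on the lone bridge, splitting the hairball into the two tidy clusters, which could then be labeled and abridged.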

Synoptic View

In order for conceptual integrity to congeal in a system, you need to be able to see the whole thing. On one screen. At once. In the original IBIS design, issues could be related to one another in terms of being more generic or specific in conceptual scope. Through testing a prototype, I found it made sense to extend this capability to positions and arguments as well.

For example, you could have an extremely broad issue like world hunger, and an equally broad position like no paternalistic interventions, which effectively acts as a standing principle. Those who work in philosophy would note that arguments—especially fallacious ones—generalize as well.

Issues that are broader than other issues, and issues which have not been raised in response to another issue, position or argument, are therefore potential candidates for being the top from which we can project a top-down visualization. Clusters can be gathered up in abridged form so that everything fits on a conventional computer, tablet, or mobile display. This is very much like how existing IBIS and related systems are organized. They, however, tend to treat the problem as a boxes-and-arrows diagram, and I'm inclined to try something different. This is a work in progress.
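As a sketch of how those top candidates might be computed (the corpus shape and element names here are hypothetical):

```python
# Hypothetical corpus: each element records what it was raised in
# response to, and "generalizes" links point from broader to narrower.
elements = {
    "world-hunger":      {"kind": "issue",    "responds_to": None,
                          "generalizes": ["local-food-prices"]},
    "local-food-prices": {"kind": "issue",    "responds_to": None,
                          "generalizes": []},
    "no-paternalism":    {"kind": "position", "responds_to": "world-hunger",
                          "generalizes": []},
}

def top_candidates(corpus: dict) -> list:
    """Issues that answer nothing and are generalized by nothing:
    candidates for the top of a top-down projection."""
    generalized = {n for e in corpus.values() for n in e["generalizes"]}
    return [name for name, e in corpus.items()
            if e["kind"] == "issue"
            and e["responds_to"] is None
            and name not in generalized]

print(top_candidates(elements))  # only "world-hunger" qualifies here
```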

Alternate Views

One of the big problems I wanted to solve was efficient communication of the need to go down so-called rabbit-holes. It's often necessary in software to do a thing, to do a thing, to do a thing, and so on. It's easy enough to forget why you were doing whatever it is that got you there, let alone explain it to the person footing the bill.

To solve this problem, I extended the IBIS data specification again, to account for endorsements. It's just like a Facebook like: the client, or project manager, can sign off on, in this case, a particular position he or she agrees is valid. Then it's possible to trace the path from any particular position to the closest item either written or endorsed by the client. Rabbit-holes very often lead to the completion of work which is either necessary for the advancement of the project, or serendipitously useful.
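The trace itself is just a walk back through the chain of prompting elements until an endorsed one is found. A minimal sketch, with hypothetical element names:

```python
# Hypothetical rabbit-hole: each element points back to whatever
# prompted it; the client has endorsed only the top-level goal.
prompted_by = {
    "refactor-parser": "fix-encoding-bug",
    "fix-encoding-bug": "import-legacy-data",
    "import-legacy-data": "ship-reporting-feature",
}
endorsed = {"ship-reporting-feature"}

def trace_to_endorsement(start: str) -> list:
    """Walk back from a deep task to the nearest endorsed element,
    returning the justification path, or [] if there isn't one."""
    path, node = [start], start
    while node not in endorsed:
        node = prompted_by.get(node)
        if node is None:
            return []
        path.append(node)
    return path

print(trace_to_endorsement("refactor-parser"))
```

An empty path is itself informative: it flags work that nobody signed off on, however indirectly.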

Allowing for serendipity, and even encouraging it, is the key to this entire method. The key to that is being able to show the value gained for the cost. And the key to that is to be able to show—without crippling overhead—how a trip into the weeds relates back to the real, stated goals of the very people paying for them.

Quantitative Analysis

The elements of an IBIS system are first-class data objects, which means they can be counted. The links between the elements are also first-class objects, which means they can be counted too. The time it takes to reason through the system produces artifacts stored in the system itself, which can be quantified and shown on an activity chart. Each element is tagged with its author and a creation date, which can be collated by day, week, month, or any time period you want. It should be easy for any client to see that the accumulation of elements in the IBIS system reflects real progress, especially since they will have some involvement in creating its contents.
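A rough sketch of the kind of collation I mean, using Python's standard library and made-up authors and dates:

```python
from collections import Counter
from datetime import date

# Hypothetical element log: (author, creation date) per element.
log = [
    ("alice", date(2014, 3, 3)), ("alice", date(2014, 3, 4)),
    ("bob",   date(2014, 3, 5)), ("alice", date(2014, 3, 12)),
]

# Collate by ISO week for an activity chart, and by author.
by_week = Counter(d.isocalendar()[1] for _, d in log)
by_author = Counter(a for a, _ in log)
print(dict(by_week), dict(by_author))
```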

Future Directions

Issues and arguments are not action items. Positions can represent action items, but don't necessarily. A position we can act on graduates into a task, to which we can apply the same process of structured argumentation to work out the details of the task, its resource requirements, and finally, an accurate estimate of how long it will probably take to complete. For this I've written another data specification that extends IBIS with the means to represent this information.

I have also written a third specification for tasks specific to interaction design, and a fourth for content strategy. These are quite a bit farther off.

The Experience

Using the IBIS prototype I wrote is a lot like using Twitter. It only takes a few seconds to write out an issue, position, or argument—each meant to be only one sentence long—and connect it to other elements in the structure. I've consistently been able to populate it with hundreds of elements in trials lasting under an hour.

This is an early prototype of the tool, containing an IBIS corpus relating to the tool's own design. The modified Circos plot on the left was my second attempt at visualizing the structure. The pieces along the edge of the arc are the elements, and the lines are the relations between them. It's okay for me, for now, but it has to be replaced by something better for prime time.


The prototype is still missing most of the attributes I laid out in this section. Coming up with a synoptic visualization, in particular, was difficult due to the threefold nature of issues, positions, and arguments. I've since resolved this by making arguments a special kind of issue, thus making it possible to show a lateral symmetry with a definite problem side and solution side. Unfortunately, unlike the Circos plot (and the previous hive plot), there's nothing off the shelf I can use to get the result I want. I had to dive deep into the mire of graph layout algorithms in order to come up with something that works, and I still haven't quite resurfaced. Once I get that done—in the interstices between Actual Work™—I'll fill in the rest.

Implications for Scheduling

The purpose of this IBIS tool is the same as the behaviour sheet technique: generate action items with enough detail that they can be completed by one person—or team when appropriate—in four hours or less. Reconcile available action items with the availability of those people, and you have a semblance of a schedule. But it's a schedule generated by a mountain of existing information and reconciled with the actual availability of the human beings who are going to do the work, rather than just some made-up guesswork.

Big thanks to Alan Cooper for pointing out that nobody's made a scheduling app that takes people's availability into account. Read up on that and two other head-smacking insights on pages 61 through 64 of The Inmates are Running the Asylum, published in 1999.

It's also important to point out that all we're doing here is matching up well-defined tasks to well-defined chunks of time when people can do them—unsurprised by meetings, dentist appointments, et cetera. In this model, you don't have to care about the order they do the tasks in, just that they get done. Best yet: you're not bullying these people with arbitrary milestones, deadlines, and critical paths.
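Here's a deliberately naive sketch of that matching: greedy first-fit of tasks into people's genuinely free blocks of time, in no particular order. All the names and numbers are invented:

```python
# Tasks measured in hours of effort, each four hours or less, matched
# against blocks of time people actually have free.
tasks = {"write-parser": 4, "style-login": 2, "fix-typo": 1}
free_blocks = {
    "alice": [4, 2],   # hours genuinely clear of meetings, etc.
    "bob": [3],
}

def assign(tasks: dict, free_blocks: dict) -> dict:
    """First-fit: drop each task into the first block big enough."""
    schedule = {}
    blocks = [(person, hours) for person, bs in free_blocks.items()
              for hours in bs]
    for task, effort in tasks.items():
        for i, (person, hours) in enumerate(blocks):
            if hours >= effort:
                schedule[task] = person
                blocks[i] = (person, hours - effort)
                break
    return schedule

print(assign(tasks, free_blocks))
```

Nothing here cares about sequence; it only cares that well-defined tasks land in well-defined chunks of availability.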

The astute reader will pick up on the fact that most of the elements entered into the tool will not be sub-4-hour action items. But, as I mentioned earlier, work on the IBIS corpus itself is still valid work, and the tool is designed to demonstrate that validity. Non-actionable positions, for example, are targets for further analysis. As you enter elements into the corpus, the system collects statistical information about how long it takes—both in time on the clock and the calendar—to turn a non-actionable position into an actionable one. This should give some rough predictive power over outlays and gains for a reasonable period into the future.
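That statistic could be as simple as the elapsed time between a position's creation and its graduation into something actionable. A sketch, with fabricated dates:

```python
from datetime import date
from statistics import median

# Hypothetical history: when each position was created versus when it
# became actionable (i.e., graduated into a task).
positions = [
    (date(2014, 3, 1), date(2014, 3, 4)),
    (date(2014, 3, 2), date(2014, 3, 10)),
    (date(2014, 3, 5), date(2014, 3, 7)),
]

lead_times = [(done - created).days for created, done in positions]
print("median days to actionable:", median(lead_times))
```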


For my entire career I've sought a method of producing good results without compromises. I'm not talking about aesthetic that's-nice-but-in-the-real-world compromises—those really are just a reality. The kinds of compromises I'm talking about are those that materially damage my ability to perform. My job is—and with a few exceptions always has been—to wade into unfamiliar waters and come back with something valuable. It's actually impossible to do that on a prescriptive schedule—at least one that has any truth to it.

Since long contiguous blocks of human attention are the principal factor of production in software development, time and money track closely to one another. The amount of time which can be allocated on the meter is therefore an inevitable function of the budget. How well the time on the meter fits into time on the calendar depends on the real situation on the ground: the ability to sequester human attention and have it directed toward a meaningful result. More importantly, the sequence in which the insight to move forward arrives is unknowable in advance. If you prescribe a sequence of certain deliverables by certain dates, you have to either pad like crazy in order to make good on those promises, or apply a formula. In the former scenario, you're lying about your resource requirements—and heaven help you if you don't lie big enough. In the latter, you're not actually solving the problem. If you neither pad nor apply a formula, a prescriptive schedule is meaningless.

It's meaningless because there's no way you can keep to it. New information continually invalidates the schedule, particularly its sequence. Sure, you can update it, but you'd be doing it so often that the schedule would lose its meaning. This is to say nothing about the overhead generated by keeping an accurate schedule under these conditions.

This is an issue of access. If you pad your milestones by an order of magnitude—which is realistic in this medium—you price yourself out of most markets. If you apply a formula, you necessarily have to ignore the very information about constraints and opportunities that emerges during the process. This means that only the richest clients can afford to have their problems actually solved, and everybody else is stuck with ill-fitting non-solutions. If the development process could be rearranged dynamically, then we could actually solve problems and do it affordably, but the thing we give up in order to do that is a prescribed sequence of milestones and deadlines.

To paraphrase the late, iconic graphic designer Paul Rand, as recounted by the late, iconic CEO Steve Jobs: I will solve your problem for you. And you will pay me. You don't have to use the solution—if you want options, go talk to other people. But I'll solve your problem for you, the best way I know how, and you use it or not, that's up to you—you're the client—but you pay me. That's how I want to work. My job is to solve problems.

It's worth noting that for the last few decades of his life, Paul Rand was famous and could charge blue-chip clients whatever he wanted. He didn't have the access problem.

The operative phrase here—besides you will pay me—is the best way I know how. If you're hiring any creative professional for any reason, it could be because you have better things to do with your time, but it's far more likely because they can do something you can't—at least not as well. If they're competent and acting on good faith—big ifs—you're implicitly trusting them to be doing the best they can, the fastest they can, because they invariably have more information about the problem than you do.

Just ask around: how many creative professionals do how much work off the meter because there's no way I could justify billing for that?

The compromises baked into the prescriptive budget-deadline calculus are the very same that blow said budgets and deadlines. They're land mines laid by your own hand. They're the dissonant mind-bug that tells you that a decision is wrong but you're doing it anyway, and will have to pay for it later. They're the disdain for a piece of work that makes you not want to attach your name to it. They're real-world failures that destroy value for your clients and damage your reputation. The only modus operandi that thrives in this environment is mediocrity. Everybody loses.

I searched for a real-world example, one that was outside my own tiny, insular community of practice, and I found one: the 2008 global financial crisis. This occurred precisely because the system, all the models, everything, was designed with the expectation that all, or at least most, events would go according to plan. Options-trader-cum-professional-flâneur Nassim Taleb made himself fabulously wealthy by betting that events would not go according to plan, and then he wrote four books about it.

The gist of the latest one of the four is simple: you don't have to try to predict the future if you structure your environment so that any possible gain is much larger than any possible loss. That is what I'm trying to harness with this enterprise.

Software is about taking costly human processes and imbuing them with enough detail that they can be executed by machine. Whether it makes an existing process more efficient, or generates an entirely new capability, its value is almost always a function of how much it is used. If you do software wrong—provided you can even complete it—it won't get used, and therefore will not have any value.

I already know how to do software right: if you want a particular outcome, it takes what it takes. Usually what it takes is time, because any problem, no matter how minuscule, hinges on one person getting the solution right. This is a well-documented fact in the industry: adding people to an unsolved problem does not get you a solution any faster; in fact, it makes it slower. Pressing people with made-up deadlines, for tasks beyond a certain complexity, just gives you, at best, perfunctory results that serve no purpose other than to cover their own asses.

To make software that people use, that is, to arrange the environment so that all possible gains far outweigh all possible losses, I am confident that the key is to make the prescriptive budget-deadline calculus a relic of the past. A way to do this that is competitive on price, time, and value with incumbent methods is what I believe I've found: You can have it fast, cheap, and good, as long as you aren't too picky about what it is—or more accurately—the strict sequence in which you receive it.