It has been known for several decades that developing software and/or websites incrementally is the optimal path to success. There is a problem of access, however, both for business entities wanting the work done, and practitioners willing to do it. Big companies and tech startups can afford to do genuine incremental development, because they can afford to maintain a full-time development staff. Since said development staff is being paid no matter what, it's possible for them to fix, augment, and invent things as needed, creating random spikes in value along the way.

Most business entities in the world, however, are neither big companies nor tech startups. Either they don't do enough development to justify the cost of a full-time team, or they don't know the first thing about it, or they simply can't afford it. So they contract it out to an agency, and from the point of view of the commercial transaction, that sticks them more or less squarely into Waterfall-land.

Such Fall, Many Water, Wow

For the three people left in the world who aren't aware, the Waterfall model is an anti-example of how to develop software—and by extension, websites—perennially mistaken for a real one. It is adapted loosely from engineering and construction, which exhibit well-trodden procurement processes. While perhaps soothing as a management fiction of seamless transit from requirements analysis to design, engineering, implementation, testing, and deployment, the reality is that the Waterfall model is a nigh-guaranteed recipe for disappointment. That the project will take longer, cost more, and not deliver on its promises is almost a certainty, because of the monolithic and highly prescriptive nature of the process. Simply put, the universe is indifferent to your Gantt chart.

The tragedy here is that while agencies and independent consultants can be as Agile as they like within the membrane of the contract, they typically have to go through the same bogus process of procurement, feature list bargaining, and the monolithic risk of delivering a massive, complex product on time and on budget. Bigger contracts cost more to qualify for, let alone negotiate, take longer to close, require all kinds of painstaking and intrusive due diligence, import enormous risk, and there are fewer of them. If it were possible to trade on smaller contracts—like the end-to-end solution to a single user goal—not only would we not have to jockey for slivers of the big contracts, but there'd be more work to go around in general, and a higher likelihood of success.

And that's just for practitioners

The value proposition to clients is that they get working results online, ready for use as they are completed; that they own their own data; and that at no point are they exposed to the full financial risk of a Waterfall project. Moreover—and this is important—the foundation of the business relationship is one of mutual benefit, not codependency. We achieve this state by arranging the contract such that either party can walk away at any time, and that there is actually something of value left behind. That takes an unconditional payment schedule, and a few select open-source technical interventions that enable the client to hire a new agent to pick up more or less where the old one leaves off.

The (Post) Geek Stuff

The first of these technical interventions is the scaffolding, an up-and-running framework that enables practitioners to replace an existing website one addressable HTTP resource at a time. If you're sophisticated enough to use composite documents, then we're talking about sub-page granularity. This gets over the technical hurdle of having to wait until a new site is complete before launching it, because there's an inscrutable old site in the way. It's essential, because let's face it: most Web projects are in fact redesigns. Since the scaffolding works at the level of HTTP, you also don't have to care about whatever arcane technology the old site is using, and are free to use whatever platform you want.
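The core of that HTTP-level behavior can be sketched as a routing rule: requests for resources the new platform has taken over go to it, and everything else falls through to the legacy site. This is only a minimal illustration of the idea, not the scaffolding itself; the origin URLs and the path registry are invented for the example.

```python
from urllib.parse import urlsplit

# Hypothetical upstream origins for the example.
LEGACY = "https://old.example.com"   # the inscrutable old site
NEW = "https://new.example.com"      # the new platform

def route(request_url: str, migrated: set) -> str:
    """Return the upstream origin for a request: the new platform if this
    resource has been replaced, otherwise fall through to the legacy site."""
    path = urlsplit(request_url).path
    return NEW if path in migrated else LEGACY

# One resource at a time: only "/" and "/contact" have been replaced so far.
migrated = {"/", "/contact"}
route("https://example.com/contact", migrated)       # served by the new platform
route("https://example.com/archive/2009", migrated)  # proxied to the old site
```

In practice this rule would sit inside a reverse proxy, and the registry would grow one entry at a time as resources are completed—which is exactly what makes incremental launch possible.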

The second technical intervention is an essential control surface for the direction of the project. We all know that software development, and Web development especially, has a tendency to meander. If you prioritize tasks by the opportunity to complete them—which is the most sure-fire way to get useful things done—then without a tool like this, your progress reports to your client won't make any sense. It is likewise important, when you go down the proverbial rabbit hole, to show that your excursion connects to a result the client in fact endorsed. This tool is part burn-down chart, part bug tracker, and part collaborative decision-making tool. It uses a technique developed in the 1970s specifically to model, decide over, and organize action around complex development problems. I currently have a rough prototype of this tool, and expect to develop it to maturity in the coming months.
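The "rabbit hole" justification can be pictured as a walk up a small decision graph: a task is legitimate if its chain of parents terminates in a goal the client endorsed. This is a neutral sketch of that idea only—the actual 1970s technique and the tool's data model are not specified here, and all the node names are invented.

```python
# Hypothetical decision graph: each task points at the item it supports.
edges = {
    "fix-cache-bug": "speed-up-checkout",
    "speed-up-checkout": "increase-conversions",
}
# Goals the client has actually signed off on.
endorsed = {"increase-conversions"}

def justification(task: str) -> list:
    """Walk from a task up to its root; return the chain if it ends in an
    endorsed goal, or an empty list if the excursion is unsanctioned."""
    chain = [task]
    while chain[-1] in edges:
        chain.append(edges[chain[-1]])
    return chain if chain[-1] in endorsed else []

justification("fix-cache-bug")
# -> ["fix-cache-bug", "speed-up-checkout", "increase-conversions"]
justification("pet-refactoring")
# -> [] (no endorsed result upstream)
```

The point of such a structure is that a progress report becomes a set of these chains rather than an opaque task list.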

The third intervention, every bit as essential as the first two, concerns the accumulation of precursor materials in a useful state. I'm talking about research findings, personas, scenarios, sketches, wireframes, mockups, prototypes, taxonomies, content inventories, the whole nine. Despite their enormous and lasting value, nobody reads these things the second time around, or arguably even the first. The ostensible reason is that they tend to be too long and full of fluff, are buried in various slide decks, spreadsheets and Word documents, and further buried in email attachments. They also tend to be disconnected from one another, such that it's rarely worthwhile, if even feasible, to reassemble a cogent story about the design decisions that go into one aspect of a site or another.

A tool that aggregates, abridges, and organizes these artifacts therefore plays an important role beyond simply making it easier to manage them. It enables the practitioner to raise the status of these precursor materials to first-class deliverables, and the client to take possession of them in real time, justifying regular, incremental payment. A prototype of this tool is under active development, and should be released later this year.

It is also important to recognize that each of these technical interventions, as they currently exist, is merely a reference implementation of mature theory and practice: they just haven't, to my knowledge, been applied in this context. While they are usable in their own right, the real goal is to grow an ecosystem of all kinds of products that implement these processes. In particular, the second and third tools began as openly-available linked data vocabularies, so that there would always be a viable exchange format to losslessly move data from any one implementation to another.
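The lossless-exchange claim reduces to a simple property: two implementations with different internal schemas can round-trip a record through the shared vocabulary without losing anything. A minimal sketch of that property, with made-up field names and vocabulary terms (the real vocabularies are not reproduced here):

```python
# Two hypothetical tools map their internal fields onto a shared vocabulary.
VOCAB_A = {"title": "ex:name", "state": "ex:status"}   # tool A's mapping
VOCAB_B = {"label": "ex:name", "phase": "ex:status"}   # tool B's mapping

def export(record: dict, mapping: dict) -> dict:
    """Translate a tool's internal record into shared vocabulary terms."""
    return {mapping[key]: value for key, value in record.items()}

def import_(data: dict, mapping: dict) -> dict:
    """Translate shared vocabulary terms into a tool's internal fields."""
    inverse = {term: key for key, term in mapping.items()}
    return {inverse[term]: value for term, value in data.items()}

# A record leaves tool A and arrives intact in tool B's schema.
a_record = {"title": "Scaffolding", "state": "open"}
b_record = import_(export(a_record, VOCAB_A), VOCAB_B)
# b_record == {"label": "Scaffolding", "phase": "open"}
```

As long as every implementation maps onto the same vocabulary, no single product can hold the data hostage—which is the whole point of publishing the vocabularies openly.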

The Final Piece

The last part of this puzzle is a contract that individuals or small teams can use to put this plan into real commercial action. Ideally, this contract would be a freely-available, Creative Commons-licensed archetype that could grow and mature as new contingencies arise in the field. It would provide the basic terms for the incremental exchange of value, as well as the partitioning of intellectual property, and other ancillary issues peculiar to information services. I have a sketch of this contract, and will be looking for ways to finance its development—with a lawyer—over the course of 2014.

My overarching goal is to lower the exposure to financial risk in Web development, both for practitioners and clients, and increase the likelihood of higher-quality results, for the ultimate purpose of a better Web experience for everybody. To accomplish this, I want to lower the barrier to access to good design by making it both technically and commercially feasible to deliver one good design at a time. My vision isn't just for new entrants and small fry, though: I imagine established agencies could adopt these processes to lower their risk exposure on even their largest projects. The effect, I hope, is that more business gets done on the whole, and that the outcomes perform, and are something to be proud of.