Every business entity is at root an information processing system, which means every business entity must possess an information infrastructure: its organizational chart, accounting system, inventories of every kind, archives of files and documents, knowledge bases of house style and methodology, controls and reports, intra- and interoffice mail, a fire hose of incoming market and other tactical information, marketing and PR channels, a telephone network, etc. Nowadays, much of this infrastructure is mediated by software. Any future improvement to an organization's information infrastructure is exceedingly likely to happen in the domain of software and the information upon which it operates—whether that means building software, buying it, or tuning it. So in 2015, when we talk about information infrastructure, we are almost always talking about software.

The Only Function of Software

The production of software only ever achieves one thing: it converts well-defined tasks from the domain of labour to those which can be executed by machine—that is, it converts labour into capital. Sometimes the result is a direct increase in productivity—a linear boost. Other times software reduces the cost of a set of operations so much that entirely new capabilities arise—exponential gains.

The former of these two outcomes manifests simply as a person, possibly many people, taking incrementally less time to perform a task than they would without the aid of software. The latter outcome is liable to cause major social change. Its quintessential example is the spreadsheet, introduced in 1979. By enabling complex financial calculations to be recomputed dynamically from a change to a single value, the spreadsheet slashed the cost of generating speculative, what-if financial scenarios from months to minutes. This new capability produced real effects: the M&A boom of the mid-1980s owes its feasibility, at least in no small part, to the digital spreadsheet. The spreadsheet is now cemented as an indispensable tool for business.

I was ruminating to myself about the role the spreadsheet must have played in the corporate raids of the 1980s, and then Planet Money promptly did an episode on it. It's like they knew. The article on which the show is based is even more interesting.

Software Applied to Business

If the only function of software is to convert labour into capital, and that conversion manifests as either a linear productivity gain, an exponential one, or both, we can apply these outcomes to the basic calculus of business: any intervention must reduce costs, generate revenue, or both. If the intervention doesn't play a direct role in an improvement to the bottom line, it should at least be possible to show how it contributes. We now have enough to create a matrix:

                     Reduce Costs                  Generate Revenue
Productivity Gain    Time-Saving Infrastructure    Time-Saving Product/Service/Feature
New Capability       Automation                    Killer App

Any software intervention, therefore, can be cast into one or more of these four domains. On the cost-reduction side, time-saving infrastructure makes employees more productive, and automation frees up entire parts of their jobs so they can engage in more valuable activities—or supplants their jobs entirely. On the revenue-generating side, time-savers make up the overwhelming majority of the products and services on offer in the market at large, while Killer Apps, comparatively rare, change the landscape of the market completely.

I will use the term intervention a number of times over the course of this article. By it I mean a deliberate move to make a distinct change, without specifying the kind of change or the manner in which it is carried out.

Behold, the RoboCorp

The idealized business entity is one which incurs no costs, and for which all revenue can be directed to profit—or to generalize to a definition which includes non-profit entities, the material objectives of the entity. Software, plus the Internet, is the unique substrate which enables such an entity to drive its costs toward zero—or rather incur only the costs of maintaining, extending, and running the software. The benefits of the software, be they time-savers or new capabilities, are likewise passed on to its users—be they employees or customers.

In practice, the costs of merely maintaining, extending, and running the software are far from zero, but the relative gains are enormous. Google employs over 55,000 people, but it earns over $260,000 in pure profit per employee. Wal-Mart, by contrast, earns just shy of $7,500 for each of its 2.2 million employees. The difference between Google and Wal-Mart is that the latter has to deal with atoms, while the former only has to deal with atoms as they pertain to bits. In other words, the difference is that Wal-Mart was conceived as a physical store with real products and real, human customers. Google, on the other hand, was conceived from the outset as a pure information service.
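
For a sense of scale, the per-employee figures above imply total profits in the same ballpark; it is the leverage per person that differs by well over an order of magnitude. A quick back-of-the-envelope check, using the article's rough, year-dependent numbers:

```python
# Back-of-the-envelope using the approximate figures cited above;
# these are rough numbers, so treat the output as illustrative only.
google_heads, google_profit_per_head = 55_000, 260_000
walmart_heads, walmart_profit_per_head = 2_200_000, 7_500

print(f"Google:   ~${google_heads * google_profit_per_head / 1e9:.1f}B total profit")
print(f"Wal-Mart: ~${walmart_heads * walmart_profit_per_head / 1e9:.1f}B total profit")
print(f"Profit per head: roughly "
      f"{google_profit_per_head / walmart_profit_per_head:.0f}x in Google's favour")
```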

This potential for the insane profitability of a pure information service, mediated by software, provides the impetus for the Internet startup. There are plenty of examples of these and they aren't particularly interesting—until they hit the jackpot. Much more interesting are those that approach our idealized business entity.

While nearly all tech startups start with a founding team as small as one, only a few begin life with a sustainable revenue model. These tend to follow the pattern of a spare-time project which grows into a full-time job, and then into a successful business.

Craigslist
For Craig Newmark, what started in 1995 as a hobby to support his local San Francisco community was by 1999 his bread and butter. The success of his platform—mostly-free classified ads—had the inadvertent side effect of decimating the newspaper industry. In twenty years, craigslist.org has never grown its employee base past the low double digits.
PlentyofFish
Vancouver resident Markus Frind wrote the no-frills dating site in 2003, ostensibly to test his skills with a new programming language. By 2007, he was clocking ten million dollars a year in advertising revenue. Only then did he hire his first employee.

It is worth conceding that Frind would not have been nearly as successful without standing on the broad shoulders of Google, which even in 2003 had already invested billions of dollars into its advertising platform. But then, Google itself stands on the much humbler shoulders of people like Tim Berners-Lee, Vinton Cerf, and countless others.

Between the time I started this article and the time I finished it, Frind sold his company for $575 million.

Pinboard
Pinboard was written in 2009 by another San Franciscan, Maciej Cegłowski. He wrote the site, a way to store and organize Web bookmarks, in response to what he deemed to be shortcomings of his chief competitor. That competitor, Delicious, had a similar genesis, but had been acquired by Yahoo! and had since gone downhill. A staunch advocate for privacy, Cegłowski has eschewed ads and data tracking services, and instead charges a nominal fee for access to Pinboard. Six years in, Pinboard is still a one-man show.

While there are numerous other companies that got off the ground with only the help of the Internet, they aren't, or at least didn't start out as, pure information services. 37Signals, a most vociferous bootstrapper, did client services for several years before creating its first product. Threadless, another paragon, is a clothing company that happens to use the Web both to sell its t-shirts, and harvest the illustrations to print upon them.

Of all these examples, the closest to our idealized business entity is Pinboard. It is an example, most certainly not of large-cap enterprise, but of the power of software to extend the economic impact of a single person.

Granted, craigslist earns about $5 million a year per employee, but it is something of a ground-floor Internet phenom, unlikely ever to be replicated. A story like Pinboard, on the other hand, is potentially achievable by anybody with the skills who wants to try.

We Are All Information Services Now

In 2015—and arguably much earlier—every business entity, brick-and-mortar or not, for-profit or not, has at least a toe in the information services sector. Many are far more deeply entrenched than they realize. The prevailing attitude appears to be that software is best left to the tech companies, something to be purchased or leased, like a photocopier or other office equipment, intended merely to support the real functions of the organization.

The problem with this view is that it completely forecloses on access to the awesome multiplicative force of software. The Killer App scenario, the fourth quadrant of revenue-generating new capability, is placed right out of reach. The other three quadrants don't fare much better. When you treat software—especially business infrastructure software—like a photocopier, you're stuck with the same ill-fitting dreck everybody else gets. Recall that software only ever does one thing: it makes certain forms of labour disappear. If people stick with the labour a bit of software is intended to replace rather than use the software, that means they prefer the labour, and that bit of software is worthless. Worse than worthless, actually, because you paid for it.

It is worth recognizing that many of the enterprise infrastructure platforms available for lease price their services in terms of bundles of features, analogous to channels in a cable TV package. For every feature you use, you have to pay for a dozen you don't.

Beyond generic tools like the spreadsheet—which, against the backdrop of all software, really don't come along very often—the value of software to a business entity is not in the bullet list of features, but in the degree to which those features fit. You can't get a return out of software if nobody uses it, and nobody will use a piece of software if it doesn't fit. A suit fits that much better when it's tailored—only a few people can wear one right off the rack. The same goes for infrastructure software: unless your organization has exactly the right measurements, it won't fit unless it is at least adjusted, if not crafted bespoke.

Business infrastructure software has been doing more or less the same thing for 60 years: storing valuable information, operating over it to generate even more valuable information, and conveying or otherwise making that information available to the right people, at the right time. These generic tasks are fundamentally neither complex nor especially sophisticated, as their age should imply. What is complex is the peculiar set of needs of your particular organization.

In other words, if we are all information services now, even a little bit, it is likely that we could benefit immensely from even one really well-fitting piece of tailored infrastructure software.

How Much Should We Invest?

I brought up the examples of companies above to imagine a theoretical upper bound. Those entities depend 100% on software for their revenue, and their operations are going to be overwhelmingly geared toward the running, maintenance, and augmentation of software. This implies that it would be reasonable—assuming it was necessary—to spend all the way up to their total revenue to keep their operations growing.

For most organizations in the world, this will not be the case. Not even close to 100%, but not zero either.

The following mathematical model is an attempt to arrive at a rough figure, applicable to either a one-time or an ongoing investment in improving information infrastructure. It begins by charting financial performance and initial expectations of growth over a set period. The effects of a hypothetical intervention are then added, and their net present value is calculated using the discounted cash flow method to produce a figure for the investment. That figure is then divided by an average consulting rate, to show the amount of practitioner time it can buy.

My ultimate goal is to make custom infrastructure development a worthwhile and affordable ongoing investment. The journey to get into a position where that is affordable to a smaller organization is itself a one-time, all-or-nothing capital venture. Consider this model my first attempt at attaching hard numbers to both processes.

In order to estimate the amount a given organization can reasonably invest in tailoring its information infrastructure, I need to introduce a bit of jargon: average revenue per user, or ARPU, which is exactly what you imagine it to be: total revenue divided by the number of paying customers. This is a term used mainly by phone companies to discuss certain performance targets. To generalize again to include member- or donation-funded non-profits, for the purpose of this exercise I will simply refer to the paying user, or just user, as the principal source of revenue.

I have intentionally left out the effects that are strictly cost-saving, as those require copious internal, organization-specific details in order to calculate. For now, there's this.

Our organization has ___ paying users with an ARPU of $___, for a projected annual revenue of $___. This year we expect to lose ___ users through attrition, and gain another ___, for a net gain of ___. To keep things simple, we'll assume the same growth rate for the next ___ years after this one, rounded to the nearest user. This baseline trend represents a total revenue of $___ over the period.

If an intervention this year, or an aggregate thereof, to improve our information infrastructure, can directly or indirectly gain or retain an extra ___ users per year over the following ___ years, then it's worth $___ in increased revenue, or ___ additional business over the whole period.

To account for the risks of tailoring information infrastructure, and to compare it with other investments, we apply a discount rate of ___. We also anticipate that the intervention will generate $___ in new maintenance costs and other overhead each year over the ___-year period.

Therefore, to allocate up to $___ this year toward tailoring our information infrastructure would be a sensible investment. Assuming an average industry rate of $___ an hour, this investment will buy us ___ practitioner-hours this year.
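
For readers who prefer code to fill-in-the-blanks, here is a minimal sketch of the same calculation in Python. Every input value is an invented placeholder, and the assumption that users gained through the intervention accumulate year over year is mine; substitute your own figures and adjust the mechanics to taste.

```python
# A minimal sketch of the investment model described above.
# All inputs are invented placeholders; substitute your own.

def investment_ceiling(users, arpu, lost, gained, years,
                       extra_per_year, discount_rate, overhead, hourly_rate):
    """Return baseline revenue, the NPV of the intervention, and the
    practitioner-hours that NPV buys."""
    net_growth = gained - lost
    baseline = sum((users + net_growth * t) * arpu for t in range(1, years + 1))

    npv = 0.0
    for t in range(1, years + 1):
        lift = extra_per_year * t * arpu   # assumes gained users accumulate
        cash_flow = lift - overhead        # net of new maintenance overhead
        npv += cash_flow / (1 + discount_rate) ** t  # discounted cash flow

    return baseline, npv, npv / hourly_rate

baseline, npv, hours = investment_ceiling(
    users=1_000,          # current paying users
    arpu=200.0,           # average revenue per user, per year ($)
    lost=50, gained=120,  # attrition and acquisition this year
    years=5,              # projection horizon
    extra_per_year=30,    # users gained/retained via the intervention, per year
    discount_rate=0.25,   # deliberately steep, to account for risk
    overhead=2_000.0,     # new annual maintenance and other overhead ($)
    hourly_rate=150.0,    # average practitioner rate ($/hour)
)
print(f"Baseline revenue over the period: ${baseline:,.0f}")
print(f"Sensible investment this year:    ${npv:,.0f}")
print(f"Practitioner-hours that buys:     {hours:,.0f}")
```

With these made-up inputs, the model suggests an investment ceiling of roughly $36,000, or about 240 practitioner-hours.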

Disclaimer: I am not a financial analyst. This model is extremely crude. I am already working on its replacement. Use it at your own risk for anything more serious than to glean a sense of the upside of investing in information infrastructure, and to generate a rough ballpark figure for the size of such an investment.

(Not-So) Brief Digression on Hours

Whether or not the contract to perform the intervention is based on billable hours is irrelevant. What matters is the hours themselves. There is no substitute for hiring seasoned professionals to bring their attention, experience, and expertise to bear on a problem—relevant to your particular organization—and solve it for good.

You can get valuable results by hiring one professional for as few as 10 hours or as many as 100. Information architects and content strategists can markedly improve the effectiveness of an existing website or internal knowledge base. User researchers can obtain information directly useful to sales and marketing. A data visualization specialist can create a compelling presentation graphic or interactive tool. A general user experience strategist can carry out a more in-depth calculation like the one above, to create a more detailed picture of which kinds of interventions are worth going after.

None of these scenarios involves any programming. The ones that do will often, but far from always, be larger interventions, say 200 hours✱ and beyond, and will likely require a team. A team of two is possible; three to five is more likely. Administrative and communications overhead grows steeply as more people get involved (n people have n(n-1)/2 possible lines of communication, so five people already have ten), which makes for pronounced diminishing returns when teams become too large. This overhead can be reduced by strong separations of concerns, but those separations only become evident once you have already gathered a lot of information. A reasonable strategy is to keep interventions small and safe at the beginning; you will build up that very body of knowledge as a byproduct of the process.

✱ That 200-hour figure is, unapologetically, rectally-sourced. I have personally delivered valuable bits of infrastructure software in a fraction of that time, albeit under special circumstances. That figure is more like me imagining the point—from the perspective of the average client—at which a practitioner or team thereof would run out of conspicuously valuable things to do, without touching any code. You could imagine it from the other side as a stab at the minimum groundwork it takes—assuming that it has never been done before—to yield any results. Either way, I am pessimistic that a team could come in cold and reliably produce meaningful results in many fewer hours than this.

For some additional perspective, a standard full-time employee works about 1,900 hours a year. If you project an intervention this large, you might be tempted to hire somebody full-time to take care of it, at a considerable discount from the going rate. There are two problems: one, very few people have all the necessary skills, and those that do earn much more as independents. The other issue is that the model above derives practitioner-hours from a spitball estimate of potential outcomes, and such estimates become extremely unreliable with size. Larger interventions can be thought of as aggregates of smaller ones, each with its own probability of success. Committing wholesale to a large intervention is much riskier than committing individually to the smaller interventions that make it up.
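
To illustrate that last point, here is a hypothetical simulation comparing one wholesale commitment against the same work funded in stages, where a failed stage stops further spending. The probabilities, stage sizes, and rate are all invented for the example:

```python
# Hypothetical comparison: one 1,000-hour commitment versus five
# 200-hour stages funded one at a time. All numbers are invented.
import random

RATE = 150       # assumed practitioner rate, $/hour
P_STAGE = 0.8    # assumed chance any given stage pays off

def big_bet():
    """Commit all 1,000 hours up front; every stage must succeed."""
    cost = 1000 * RATE
    success = all(random.random() < P_STAGE for _ in range(5))
    return cost, success

def small_bets():
    """Fund 200-hour stages one at a time; stop at the first failure."""
    cost = 0
    for _ in range(5):
        cost += 200 * RATE
        if random.random() >= P_STAGE:
            return cost, False   # abort: losses capped at spend so far
    return cost, True

random.seed(1)
trials = 100_000
for name, bet in (("big bet", big_bet), ("small bets", small_bets)):
    runs = [bet() for _ in range(trials)]
    losses = [cost for cost, ok in runs if not ok]
    print(f"{name}: fails {len(losses) / trials:.0%} of the time, "
          f"average loss on failure ${sum(losses) / max(len(losses), 1):,.0f}")
```

The overall odds of failure are identical in both runs; what changes is how much money is on the table when failure happens: roughly half as much, on average, when the commitment is staged.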

Risk and Exposure

There are two distinct risk profiles associated with the kinds of interventions mentioned in this article. One manifests before any given intervention is complete, the other after, and the two behave quite differently.

  1. The biggest threat to any nascent information infrastructure intervention is that it gets killed in its crib. This usually happens because the client perceives that it is taking too long, and therefore costing too much money, without appreciable results. It is ultimately, therefore, a problem of unrealistic expectations, themselves often rooted in the context, or premise, under which the initial deal is forged. The purpose of this article is to provide a framework for setting realistic expectations at the outset, by putting an intervention in the context of the value of its positive outcome.

    Excepting serendipitously beneficial side effects of the process, the upside of an information infrastructure intervention can't be felt until it is deployed. The risk of non-completion itself goes down exponentially as the project moves along, while the risk exposure goes up roughly linearly; I sketch this dynamic in code just after this list. The downside is therefore capped at the total cost of the project before somebody pulls the plug. The best way to mitigate this risk is to educate the client on what progress looks like, and to allocate a significant portion of the budget toward materials that demonstrate such progress. Capping the total spend and diversifying it among several interventions will also put a hard limit on potential losses.

  2. Once a change in information infrastructure is ready for use, the risk profile changes dramatically. Ultimately it depends on the nature of the intervention, as per the table near the top of this article, turning primarily on whether or not it interacts directly with the outside world.

    Again, the value of software and other information systems is a function of the extent to which they get used. If nobody uses the result, its value is the same as if the project had been aborted—that is to say, painfully negative. The way to ensure that the outcome actually gets used, in addition to lowering the risk and exposure of implementation, is to commit an adequate amount of practitioner-hours to research and design. Only then do you really have a shot at the upside, which, as I'll discuss momentarily, can vastly exceed expectations.

    It would be irresponsible not to discuss additional risks and exposures that manifest once an intervention is deployed. These are small, but not impossible, risks of crisis and even ruin. They are associated mostly, but not exclusively, with functionality that interacts directly with paying customers: the risks of outages, data loss, hacking, and related system failures. They can be mitigated with the right policy, such that ruin becomes a mere emergency, and crises become mere hiccups. Furthermore, if an intervention is intended to replace functionality which is already in use, people may hate it and revolt, which may result in lost revenue, or even a mass exodus. This risk can be controlled with up-front design and various testing strategies. It is important to recognize that hedges like these cost money, but are worth, at the very least, some peace of mind.

    With the bad news out of the way, I can talk about the good news: the Killer App scenario. Sometimes a particular software intervention is so useful that people can't live without it. We see this particularly in the runaway success of certain independent games, which earn vastly more money than they cost to create. It also happens with more practical applications, such as the spreadsheet I mentioned at the beginning of this article.

    While it's impossible to tell at the outset whether a particular intervention will result in a Killer App, it is possible to infer which ones may be candidates. Those are the ones that introduce new capabilities, as delineated above. Again, these are impossible to reach without an adequate investment in research, design, and testing, although they draw on much of the same knowledge gained through more conservative interventions. Because of this, an acceptable strategy would be to commit, say, 20% of the budget to risky outcomes and 80% to those less risky; the cost of research and design will then be at least partially shared among all of them.
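
As promised in point 1 above, here is a toy model of the pre-completion risk curve: the odds of abandonment decay as the project proceeds while the sunk cost grows linearly, so the expected loss peaks early and then shrinks. The budget, duration, starting risk, and decay constant are all invented:

```python
# Toy model of risk while a project is in flight. All numbers invented.
import math

BUDGET = 60_000      # total project budget ($)
WEEKS = 12           # planned duration
P_FAIL_0 = 0.5       # assumed odds of abandonment at kickoff
DECAY = 0.4          # assumed weekly decay of that risk

print(f"{'week':>4} {'P(abandoned)':>13} {'exposure':>10} {'expected loss':>14}")
for week in range(0, WEEKS + 1, 2):
    p_fail = P_FAIL_0 * math.exp(-DECAY * week)  # risk decays exponentially
    exposure = BUDGET * week / WEEKS             # spend grows linearly
    print(f"{week:>4} {p_fail:>13.1%} ${exposure:>9,.0f} ${p_fail * exposure:>13,.0f}")
```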

The examples I gave of interventions by information architects, content strategists, etc., are considerably safer bets than software interventions proper. They are greatly insulated from the two major categories of risk I just outlined. Their work, and related work, informs the process and mitigates the risks of more involved projects.

There is a third class of risk, associated with new product development, which I will mention for completeness, despite it being less of a factor for infrastructure. That risk is that of the product not being ready in time to coincide with some external, uncontrollable future event. The same rules, however, apply for products as they do for infrastructure: The more hours of research and design you buy, the more predictable are both the process and the outcome.

A Series of Small Bets

It is important to recognize that software is like any other media production: a book, a movie, an album, an art piece, whatever. In other words, don't bet the farm on it. If the model I provided spits out a number that is too high for your comfort, even after applying a cartoonishly huge discount rate, then lower it. The important thing is that the investment is big enough—that it buys enough practitioner-hours—to give the work a real chance of success. I will deal with a more accurate risk model in a future article; my goal here, which I hope I have achieved, is to convince you that hiring professionals to tailor your information infrastructure can be both an affordable and a worthwhile risk to take.

So there is a risk—of transformational success. Much better than a lottery, because the process, by its nature, shifts the odds in your favour as it goes forward. The outcomes of safer bets likewise make the riskier ones less risky. In order to play, you have to buy a ticket. Buy enough tickets, and at least one will win.
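
As a closing back-of-the-envelope, assume (generously, and purely for illustration) that each small intervention is an independent bet with some modest chance of an outsized payoff. The chance of at least one hit is 1 - (1 - p)^n, which climbs quickly:

```python
# "Buy enough tickets": chance of at least one big win among n
# independent bets. The 30% per-bet probability is invented.
p = 0.30
for n in (1, 3, 5, 10):
    print(f"{n:>2} interventions -> {1 - (1 - p) ** n:.0%} "
          f"chance of at least one big win")
```

And as argued above, these bets aren't really independent: each one feeds knowledge into the next, so the true odds should be better still.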