I woke up this morning thinking about schedules and forecasting. I was in a meeting yesterday with a board of directors, one of whom was emphatically concerned about schedules. I understand where he's coming from, as he's in an industry that runs on very strict production schedules, and for good reason. I, however, am not.
I've given considerable thought throughout my career to the problem of resource management as it pertains to the development of software, and I believe my conclusions generalize to all work dominated by the gathering, concentration, and representation of information, rather than the transportation and arrangement of physical stuff. This includes creative work like writing a novel, painting a picture, or crafting a brand or marketing message. Work like this is heavy on design or problem-solving, with negligible physical implementation overhead. Stuff-based work, by contrast, has copious examples in mature industries like construction, manufacturing, resource extraction, and logistics.
The answer I've come up with, for creative, information-centric work, is to kill the prescriptive development schedule.
If you care more about delivering valuable results than the order they came in, it is possible to have them fast, cheap, and good.
My suspicion is that stuff-based work can be put on a predictable schedule at all because of the century-old project of Frederick Taylor—no relation—and his ilk, to drive a wedge between the capricious craft of design and the rote process of implementation. I further submit that the capital cost of materials and equipment itself plays a significant role: standardized commodities plus standardized labour equals a predictable schedule, which you can expand to an industrial scale. So much money goes into the implementation that it's easy to forget the cost—and time—of design.
A predictable process is one that has been worked out to the point that it has no remaining endogenous sources of uncertainty.
Taylor's service to his clients was more or less a kind of algorithm analysis and optimization, but with real people in physical space. He optimized workspaces by arranging tools and materials, and timed even the tiniest actions with a stopwatch. Taylor's colleagues Frank and Lillian Gilbreth, fellow early adopters of scientific management, used wall-sized grid paper and long-exposure photography with light markers to trace the physical actions people took at their stations. Not only could these analysts trim out wasted effort, but this information also gave them a reference figure for how long a given job should take. Fine-grained detail about the job description would be baked into the work environment, reducing training time, and could be used for insight into what it costs to replace a worker.
If you can reduce a process to an algorithm, then you can make extremely accurate predictions about the performance of that algorithm. Considerably more difficult, however, is defining an algorithm for defining algorithms. Sure, every real-world process has well-defined parts, and those can indeed be subjected to this kind of treatment. There is still, however, that unknown factor that makes problem-solving processes unpredictable. It would be interesting to know how well Frederick Taylor stuck to his own schedules over the course of his consulting practice.
I was considering the possibility that only algorithmic processes are amenable to forecasting, but then I remembered hairdressers. I've known a few over the years, and I've always been envious of the concreteness of their professional results, and especially the astonishing regularity with which they deliver them.
For better or worse, a reasonably practiced hairdresser can expect, with extreme confidence, to take a client from start to finish in just under an hour. What's more, if you were to plot the performance of a bunch of hairdressers, you'd likely see it cluster in a normal distribution, because of the physical constraints of cutting a head of hair: it can't take less than a certain amount of time, and the cases in which it takes much more are rare, with an implicit upper bound. In other words, you'll never see a haircut that takes anywhere near as long as it takes the hair to grow.
Even though it's a deeply complex stochastic process that entails hundreds, if not thousands, of detailed, path-dependent design decisions, we can treat a haircut as atomic, because of the statistical properties derived from its physical manifestation, backed up by millennia's worth of empirical data.
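The statistical claim above can be sketched with a toy simulation; the figures here (a 55-minute mean, a five-minute spread, a 30-minute physical floor) are invented purely for illustration:

```python
import random
import statistics

random.seed(42)

def haircut_minutes():
    """One simulated haircut: normally distributed around 55 minutes,
    with a hard physical floor (no head of hair finishes in 10 minutes)."""
    t = random.gauss(55, 5)
    return max(t, 30)  # illustrative lower bound

samples = [haircut_minutes() for _ in range(10_000)]
mean = statistics.mean(samples)
sd = statistics.stdev(samples)

# How often does a haircut exceed one standard deviation above the mean?
over_1sd = sum(1 for t in samples if t > mean + sd) / len(samples)

print(f"mean={mean:.1f} min, sd={sd:.1f} min, P(>1sd)={over_1sd:.2%}")
```

The point of the sketch is the shape, not the numbers: the distribution stays tightly clustered, and excursions past one standard deviation are rare enough to ignore for planning purposes.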
Back in 2006, I came up with two project management techniques. The first was the establishment of a new basic unit of time accounting: the four-hour cell. An hour is too short to do anything meaningful, and a day is too spongy: for instance, does it mean a standard eight-hour work day, or a 16-hour bender? What about breaks for eating, sleeping and the like? Four hours, however, is the Goldilocks zone for creative, problem-solving work. I still use the cell resolutely as the basis for all my project planning.
The other technique I came up with was something I called the behaviour sheet, which was an extremely accurate method for estimating the time it will take to produce a small piece of software. I took my inspiration mainly from Donald Knuth's literate programming, which is a technique of weaving expository and argumentative prose together with imperative, executable code, but without the use of Knuth's special equipment.
Producing a behaviour sheet is a simple analytical process: you sit down in front of an outliner and write out bullet points containing all the specific ways you do and do not want a particular piece of software to behave. You keep going until you've expressed enough detail to confidently parcel up the bullet points and assign them to cells.
I don't use this technique anymore, for one glaring reason: It takes almost as long to do the estimate as it does to do the job. What is the point of knowing that the work will be done by Friday afternoon if it takes until Wednesday morning to figure it out? I didn't even bother trying it with bigger deliverables, because I would clearly need a technique for estimating the estimate, and one for estimating the estimate for the estimate, and so on.
There are good reasons for doing something like a behaviour sheet, however, even if it's futile as a tool for predictive analysis. It smokes out the details that would otherwise pop up as surprises during the job and cause it to take longer. I still do that; I just do it, for the time being, in commentary and in-line documentation.
Okay, so we can black-box even complex problem-solving processes and assign them to fixed time periods, provided that we account for the environment, the capacity, the inputs, and the output.
I type symbols into a computer for a living. The results of this activity are what my clients care about. They don't recognize any value created unless those symbols are arranged in the correct order. This could be code, or it could be prose, or it could be a diagram, or any combination thereof. I'll focus on code, though, because it has a direct relation to dollar value.
The environment is straightforward enough, but essential: a comfortable room free of distractions, a comfortable chair, a serviceable computer. It doesn't even have to be a very new or fast one. The toolchain on the computer also has to be in good working order, but toolchains tend to be pretty mature and don't break once set up, unless you break them.
Getting a handle on capacity is more about ruling out what can't happen than setting a baseline for what should. I know, from measuring before, that my maximum carrying capacity for one day of writing code is about 1500 statements. A statement is equivalent to a sentence: it encapsulates a complete thought, one complete instruction to the computer. You can say quite a bit in 1500 statements, especially with modern languages that take care of the housekeeping for you, as well as with the use of time-saving, third-party frameworks. There are at least a few billion-dollar corporations out there that got up and running on maybe 10,000 to 50,000 statements.
That figure, though, represents going on a rip: a ten- or twelve-hour day of being in a solid groove. This makes the upper bound for a four-hour cell about 500 statements. Still, you can be quite expressive in 500 statements, and often don't need to go anywhere near that to produce something of value. A subroutine, analogous to a paragraph, is usually pretty unwieldy by the time it hits a hundred statements, and there are archaic style norms that proclaim it probably shouldn't exceed 25. The most we can expect from a four-hour cell, therefore, is a handful of interrelated subroutines.
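The arithmetic above, sketched out; the constants are the rough figures from the text, not measurements:

```python
# Back-of-envelope capacity figures from the text.
MAX_STATEMENTS_PER_DAY = 1500   # a 10-12 hour day in a solid groove
HOURS_PER_RIP = 12
CELL_HOURS = 4

statements_per_hour = MAX_STATEMENTS_PER_DAY / HOURS_PER_RIP   # 125
cell_ceiling = statements_per_hour * CELL_HOURS                # 500

# A subroutine gets unwieldy around 100 statements; strict style
# norms cap it nearer 25.
print(cell_ceiling / 100)  # a handful of large subroutines per cell
print(cell_ceiling / 25)   # or a couple dozen small ones
```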
Subroutines are composed upward from other subroutines, some of which you write yourself; others you reference, just as you would cite another person's book or article. This third-party material counts as an input. All software has flaws, and those flaws tend to remain hidden like land mines until you step on them. The interface, and even the behaviour, of third-party software can also change deliberately from revision to revision. When you use third-party code—which you can't reasonably avoid, nor would you want to—it really makes sense to audit it for general fitness, even make some little disposable thing with it to see what it's like to work with. Nobody does that in real life, though, because of the counterintuitive nature of going off-task, but they should. Although, if I had to keep close tabs on all the third, fourth, and Nth parties I use in my code, I'd be doing little else.
The other input issue is just knowing up front what to write—knowing what to say and how to say it. When I blast out 500 statements in one sitting, the chances are I have a pretty good handle on both. Just as in writing, drawing, painting, composing music, whatever, this is the exception, not the rule. It may be possible to set up the conditions such that this level of prolificacy is more likely, but that has to be built up over a long period for each project. Just like all those other formats, the easiest thing to write is a rewrite of something that has already been written.
Go back to the hairdresser: the probability that the time it takes to complete a client will exceed one standard deviation from the mean is unlikely enough not to have to care about it. Two standard deviations is unheard of. By contrast, a programmer ought to be surprised if everything does go according to plan. I've only experienced a handful of days like that in my 20-year career. The time it takes to realize a specific, prescribed result can vary by several orders of magnitude. That's why I've argued a hundred times: do not prescribe the schedule. Let the opportunity to get something done dictate your priorities and the order in which you complete the work. The highest-priority task is the one you can complete right now.
The final consideration is the output: a consistent state. The result has to be an identifiable being, not a half-done pile of stuff. I don't think people, even in the industry, fully realize the importance of closing out a block of time in a consistent state. If you exit with an inconsistent state, it's almost like not having done anything at all. When you get back to your work, say, the next day, you tend to have to spend a good chunk of your time cleaning up the half-done mess before you can start on anything new.
To increase the probability that a 4-hour cell produces a consistent state, we have to make that the objective of the time investment: do whatever produces a consistent state. If you do that, you'll progress as fast as physically possible.
What that means, though, is going ostensibly off-task. It could mean working on another problem. It could mean doodling on a piece of paper. It could mean going off into the weeds, possibly literally. The only criterion is that you produce some kind of artifact, some kind of receipt of your cogitation, no matter how insignificant. That way we know you actually did something, and weren't just screwing around.
People like to pretend they're building when they're writing software. Okay, that's not really what you're doing, not even close. I've argued in the past that as a central operating metaphor, building is definitely erroneous and probably harmful. What we're actually doing is a lot more like the work of Frederick Taylor, in minuscule: we're taking fuzzy, ill-defined processes and sharpening them up into concrete algorithms. Bit by bit we splice detail into our conceptual model of the process under examination, each attempt making it just a little bit clearer, until the description is so precise that it can be executed by machine.
Software is an artifact of language: a formal, logico-mathematical description of a real-world process. I strongly assert that we have much to gain if we discard the cutesy metaphors and treat the problem of writing software at every step as one of linguistic expression.
While behaviour sheets were pointless as an up-front estimation tactic, they still provided valuable direction as a last step before committing a process to code. What made behaviour sheets effective was a wealth of information that already existed about the code they were meant to map out. Their effective range was filling in the following blank:
That's assuming a lot. Consider everything that has to be already established:
The behaviour sheet considers fine-grained details about the desired behaviour of the code, but not fine-grained enough to actually be code. It's precisely one step in the direction toward humanity, and away from the computer. We can imagine more steps that form an unbroken path, starting from broad strokes and moving inward to greater detail:
I'm interested in a gradient like this because of two more refinements, which are already performed by the computer after humans have given all the direct consideration they can:
In the beginning, in the 1940s, people—usually women—inscribed decimal, then later, binary instruction codes into physical media: switches on the machine itself, or punch cards. Within a decade, through the pioneering work of Grace Hopper—which was met with considerable political resistance at the time—computers began to be used to translate symbolic, marginally human-understandable instructions into opaque machine code. From there, the computer ended up absorbing more and more of the job, from memory management to shorthand for common data structures. It only follows that the role of the computer can be extended even farther up the language gradient to support the human reasoning that needs to take place in order to produce code that actually solves the real-world problems of living, breathing human beings.
The artifacts in the language gradient also exhibit a gradient in temporal sensitivity: The business ecosystem, for instance, would only change in the event of a major real-world change in the organization. Source code, and especially object code or whatever analogue, produced in the compilation process, despite being the stuff that's worth the money—and provided the precursors remain to regenerate it—is virtually disposable.
Language from one level of specificity is encouraged to find its way into its neighbours and beyond. It may be necessary for more-specific concepts to bubble up, and broader language is almost certain to permeate downward into the more detailed levels. This way, conceptual structures which are understandable to non-specialists make it into the implementation intact.
The biggest endogenous impediment to the development of coherent software is easily the presence of a gap between one level of specificity and another. Let's imagine that each link of the chain of specificity I laid out above was in a different language altogether: English, French, Arabic, Mandarin, and Japanese. We would need translators to bring the relevant concepts from one link to the next. Translators, naturally, have to be fluent in at least the two languages they're dealing with. It's easy enough to find somebody who speaks English and French, or French and Arabic, even Mandarin and Japanese. But how about somebody who speaks Arabic and Mandarin?
Point being, there are different concerns at different levels. Clearly, there are business concerns, such as earning a profit or achieving some specific sociopolitical objective, and there are technical concerns about the feasibility of such objectives. Finding businesspeople who can stomach technical minutiae is even harder than finding technologists who understand business. And this, of course, leaves out entirely any concern for the user.
The user was ostensibly once the domain of the technologist, if you look at the writings of Fred Brooks or Barry Boehm from the 70s and 80s. But software development became more ambitious and its practitioners became more specialized, separating that knowledge into different departments. User experience design materialized from a number of different sources to mediate between business and technical concerns and champion the user. I submit, however, that UX is getting specialized to the point that we may need to consider another splice.
In order to get coherent software, the following translations must take place:
This means you need somebody in charge who understands the languages on both sides of each translation. It's feasible to have one polyglot do the entire thing, but good luck finding them.
Conceptual integrity is when a system of any kind reflects a unified mental model which is shared among everybody who interacts with that system: business, designers, developers, and users. This means that the germ of the conceptual structure has to be simple enough to fit inside the mind of a single person. It also means that it has to be conceived by a single mind, because a half-formed concept is by definition incommunicable.
Pediatrician and developmental theorist John Gall wrote that "[a] complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
Simple, both as used by physicists, mathematicians and systems theorists to mean few interacting parts, and in the colloquial sense to mean familiar to the person considering it.
Designing a system prescriptively from the top down is how you get gaps in translation, which leads to garbage, presuming you can even finish it. Designing it from the bottom up is how you get a morass of incoherent parts. There has to be a way to move incrementally from a simple system to a complex one, while simultaneously considering the whole and the details. My suggestion here is to drop the notion of hierarchical orientation, and consider it a network-topological problem of here to there.
The goal here is to muster up enough clarity such that a developer can sit down for four hours and produce up to 500 lines of code which a) works and b) does something useful, and then do it again, again, and again. This has two implications:
Here is here: where we are right now and everything that we know. There is somewhere else, on the other side of some achievement or other. As those achievements are realized, there becomes the new here. As such, there will always be a here, and there will always be a there.
Finding our way from here to there, for all but the most trivial undertaking, is difficult without the use of a map. It's just as easy to get lost in the details as it is to miss an otherwise obvious next step. To quote Herbert Simon: "Solving a problem simply means representing it so as to make the solution transparent."
Simply, indeed. We need a way to take the overhead out of representing and re-representing the information we currently have, so we can see clearly what action we need to take next.
Oddly enough, the world came close to a solution to this very problem in the 1960s: Christopher Alexander's hierarchical decomposition method for computer-assisted architecture and industrial design, Douglas Engelbart's NLS, Ted Nelson's Xanadu and ZigZag, and Werner Kunz and Horst Rittel's issue-based information system. What these all have in common is an ability to both address and rearrange small, discrete pieces of information to glean structure and insight that was previously buried. What they also have in common is that few people talk about them or try to carry forward their visions. Every once in a while somebody tries, but doesn't seem to get much farther—certainly never as far as mainstream adoption.
While it may be foolhardy of me to try where so many others have failed, I am not trying to Change the World™. I am not—at least yet—angling for mass adoption. I'm only interested in making a simple system that works.
While I am certainly inspired by Alexander, Engelbart and Nelson, the most actionable of the group is easily Kunz and Rittel's IBIS—which I'll explain in a moment. This is a system which, if you can believe it, was carried out on typed index cards. It was later digitized in the 80s by Jeff Conklin with gIBIS, meant to run on a local network of Sun workstations, and later as a Java-based desktop app from his (emeritus) Compendium Institute. There are even a few Web-based versions. If you don't understand why these haven't caught on, just try using one.
IBIS is a method of structured argumentation, in which stakeholders post issues, much like you would in a software bug tracker, but also respond to those issues with positions about how to solve them, and back those positions up with arguments why they should or should not be taken—something that also happens informally in the comment feed of a bug tracker. Issues, positions, and arguments are all first-class objects in this system, and they are related to one another through a constrained set of possible rhetorical manœuvres.
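A minimal sketch of that data model in Python; the relation names and the exact set of permitted moves below are an illustrative subset of my own devising, not the canonical IBIS vocabulary:

```python
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    ISSUE = "issue"
    POSITION = "position"
    ARGUMENT = "argument"

@dataclass
class Element:
    kind: Kind
    text: str                   # one sentence, tweet-sized
    author: str
    links: list = field(default_factory=list)  # (relation, target) pairs

# The constrained rhetorical moves: which relations may point
# from which kind of element to which. Illustrative subset only.
ALLOWED = {
    (Kind.POSITION, "responds-to", Kind.ISSUE),
    (Kind.ARGUMENT, "supports", Kind.POSITION),
    (Kind.ARGUMENT, "objects-to", Kind.POSITION),
    (Kind.ISSUE, "questions", Kind.POSITION),
    (Kind.ISSUE, "generalizes", Kind.ISSUE),
}

def link(src: Element, relation: str, dst: Element):
    """Connect two elements, rejecting moves outside the constrained set."""
    if (src.kind, relation, dst.kind) not in ALLOWED:
        raise ValueError(f"{src.kind.value} may not {relation} {dst.kind.value}")
    src.links.append((relation, dst))

issue = Element(Kind.ISSUE, "How should the schedule be set?", "pm")
pos = Element(Kind.POSITION, "Let completable work dictate priority.", "dev")
arg = Element(Kind.ARGUMENT, "Prescribed sequences force padding.", "dev")
link(pos, "responds-to", issue)
link(arg, "supports", pos)
```

The design point is that the constraint table, not free-form commentary, is what makes issues, positions, and arguments first-class: an argument can support a position, but it cannot float unattached the way a bug-tracker comment can.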
My main criticism of these systems is that the paper-based implementation is obviously too cumbersome for people used to electronic speed and availability, and the electronic versions all behave like they're in a bubble. In other words: great, you've hashed out a bunch of design decisions. Now you have to wrench them out of the app.
One requirement that was clear at the outset was that a system of this kind has to be able to mingle with other entities, like documents and heterogeneous data structures. Earlier systems provided for attachments, but those attachments were either embedded into the application data or linked on a computer's local file system. This gave rise to problems with keeping attachments synced, moving or losing them, and making sure everybody who needed them could access them. There is also tremendous utility in having argumentation networks dovetail across different business entities. The obvious solution here is to put everything on the Web.
It was also important that this system wasn't tied to any particular vendor. Any gains in efficiency from using a system like this are instantly lost if you have to translate between two incompatible systems by hand. People have their preferences for specific products, and it's essential that those products speak the same language. As such, this project started as a data specification, in particular a linked data specification, which makes it both open and flexible.
Herbert Simon noted that a system can contain subsystems which are opaque from the outside, and which communicate through specified interfaces. Christopher Alexander made the wise observation that it's foolish to try to prescribe the composition of a system, and instead you should let the system show you its own anatomy. The system I'm referring to here is the system of issues, positions and arguments, rather than the software system for working with them. These are nodes in a mathematical object called a graph, and the links between them represent information-sharing relations. Absence of a link between two nodes means they have nothing significant to do with each other. Using this knowledge, it's possible to perform a mathematical analysis which cuts across the fewest links, splitting what would otherwise be a hairball up into tidy clusters which can then be appropriately labeled, and their details hidden from anybody who isn't concerned with them.
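The cluster-cutting idea can be sketched on a toy graph: brute-force the balanced split that severs the fewest links. Real tools use smarter partitioning algorithms, and the graph below is invented for illustration:

```python
from itertools import combinations

# A toy argumentation graph: nodes are element ids, edges are
# information-sharing relations.
edges = {("a", "b"), ("b", "c"), ("a", "c"),   # tight cluster 1
         ("d", "e"), ("e", "f"), ("d", "f"),   # tight cluster 2
         ("c", "d")}                            # one bridge

nodes = sorted({n for e in edges for n in e})

def cut_size(group) -> int:
    """Number of links crossing between `group` and everything else."""
    return sum(1 for a, b in edges if (a in group) != (b in group))

# Exhaustively find the balanced split that cuts the fewest links.
best = min((frozenset(g) for g in combinations(nodes, len(nodes) // 2)),
           key=cut_size)
print(sorted(best), cut_size(best))  # one triangle peeled off, cut of 1
```

Once the cut is found, each resulting cluster can be labeled and collapsed, hiding its internals from anybody not concerned with them.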
In order for conceptual integrity to congeal in a system, you need to be able to see the whole thing. On one screen. At once. In the original IBIS design, issues could be related to one another in terms of being more generic or specific in conceptual scope. Through testing a prototype, it made sense to expand this capability to positions and arguments likewise.
Issues that are broader than other issues, and issues which have not been raised in response to another issue, position, or argument, are therefore potential candidates for being the top from which we can project a top-down visualization. Clusters can be gathered up in abridged form so that everything fits on a conventional computer, tablet, or mobile display. This is very much like how existing IBIS and related systems are organized. They, however, tend to treat the problem as a boxes-and-arrows diagram, and I'm inclined to try something different. This is a work in progress.
One of the big problems I wanted to solve was efficient communication of the need to go down so-called rabbit-holes. It's often necessary in software to do a thing, to do a thing, to do a thing, and so on. It's easy enough to forget why you were doing whatever it is that got you there, let alone explain it to the person footing the bill.
To solve this problem, I extended the IBIS data specification again, to account for endorsements. It's just like a Facebook like: the client, or project manager, can sign off on, in this case, a particular position he or she agrees is valid. Then it's possible to trace the path from any particular position to the closest item either written or endorsed by the client. Rabbit-holes very often lead to the completion of work which is either necessary for the advancement of the project, or serendipitously useful.
Allowing for serendipity, and even encouraging it, is the key to this entire method. The key to that is being able to show the value gained for the cost. And the key to that is to be able to show—without crippling overhead—how a trip into the weeds relates back to the real, stated goals of the very people paying for them.
The elements of an IBIS system are first-class data objects, which means they can be counted. The links between the elements are also first-class objects, which means they can be counted too. The time it takes to reason through the system produces artifacts stored in the system itself, which can be quantified and shown on an activity chart. Each element is tagged with its author and a creation date, which can be collated by day, week, month, or any time period you want. It should be easy for any client to see that the accumulation of elements in the IBIS system reflects real progress, especially since they will have some involvement in creating its contents.
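Because each element carries an author and a creation date, the collation is a one-liner per axis. A sketch with invented records:

```python
from collections import Counter
from datetime import date

# Illustrative records: every element is tagged with author and creation date.
elements = [
    {"author": "dev", "created": date(2016, 3, 1)},
    {"author": "dev", "created": date(2016, 3, 1)},
    {"author": "client", "created": date(2016, 3, 2)},
    {"author": "dev", "created": date(2016, 3, 8)},
]

# Collate by ISO (year, week) to produce the activity chart's buckets;
# any other period (day, month) works the same way.
by_week = Counter(e["created"].isocalendar()[:2] for e in elements)
by_author = Counter(e["author"] for e in elements)

print(dict(by_week))    # elements created per (year, week)
print(dict(by_author))  # contributions per author
```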
Issues and arguments are not action items. Positions can represent action items, but don't necessarily. A position we can act on graduates into a task, to which we can apply the same process of structured argumentation to work out the details of the task, its resource requirements, and finally, an accurate estimate of how long it will probably take to complete. For this I've written another data specification that extends IBIS with the means to represent this information.
I have also written a third specification for tasks specific to interaction design, and a fourth for content strategy. These are quite a bit farther off.
Using the IBIS prototype I wrote is a lot like using Twitter. It only takes a few seconds to write out an issue, position, or argument—each meant to be only one sentence long—and connect it to other elements in the structure. I've consistently been able to populate it with hundreds of elements in trials lasting under an hour.
This is an early prototype of the tool, containing an IBIS corpus relating to the tool's own design. The modified Circos plot on the left was my second attempt at visualizing the structure. The pieces along the edge of the arc are the elements, and the lines are the relations between them. It's okay for me, for now, but it has to be replaced by something better for prime time.
The prototype is still missing most of the attributes I laid out in this section. Coming up with a synoptic visualization, in particular, was difficult due to the threefold nature of issues, positions, and arguments. I've since resolved this by making arguments a special kind of issue, thus making it possible to show a lateral symmetry with a definite problem side and a solution side. Unfortunately, unlike the Circos plot (and a previous hive plot), there's nothing off the shelf I can use to get the result I want. I had to dive deep into the mire of graph layout algorithms in order to come up with something that works, and I still haven't quite resurfaced. Once I get that done—in the interstices between Actual Work™—I'll fill in the rest.
The purpose of this IBIS tool is the same as the behaviour sheet technique: generate action items with enough detail that they can be completed by one person—or team when appropriate—in four hours or less. Reconcile available action items with the availability of those people, and you have a semblance of a schedule. But it's a schedule generated by a mountain of existing information and reconciled with the actual availability of the human beings who are going to do the work, rather than just some made-up guesswork.
It's also important to point out that all we're doing here is matching up well-defined tasks to well-defined chunks of time when people can do them, undisturbed by meetings, dentist appointments, et cetera. In this model, you don't have to care about the order they do the tasks in, just that they get done. Best yet: you're not bullying these people with arbitrary milestones, deadlines, and critical paths.
The astute reader will pick up on the fact that most of the elements entered into the tool will not be sub-4-hour action items. But, as I mentioned earlier, work on the IBIS corpus itself is still valid work, and the tool is designed to demonstrate that validity. Non-actionable positions, for example, are targets for further analysis. As you enter elements into the corpus, the system collects statistical information about how long it takes—both in time on the clock and the calendar—to turn a non-actionable position into an actionable one. This should give some rough predictive power over outlays and gains for a reasonable period into the future.
For my entire career I've sought a method of producing good results without compromises. I'm not talking about aesthetic that's-nice-but-in-the-real-world compromises—those really are just a reality. The kinds of compromises I'm talking about are those that materially damage my ability to perform. My job is—and with a few exceptions always has been—to wade into unfamiliar waters and come back with something valuable. It's actually impossible to do that on a prescriptive schedule—at least one that has any truth to it.
Since long contiguous blocks of human attention are the principal factor of production in software development, time and money track closely to one another. The amount of time which can be allocated on the meter is therefore an inevitable function of the budget. How well the time on the meter fits into time on the calendar depends on the real situation on the ground: the ability to sequester human attention and have it directed toward a meaningful result. More importantly, the sequence in which the insight to move forward arrives is unknowable in advance. If you prescribe a sequence of certain deliverables by certain dates, you have to either pad like crazy in order to make good on those promises, or apply a formula. In the former scenario, you're lying about your resource requirements—and heaven help you if you don't lie big enough. In the latter, you're not actually solving the problem. If you neither pad nor apply a formula, a prescriptive schedule is meaningless.
To paraphrase the late, iconic graphic designer Paul Rand, as recounted by the late, iconic CEO Steve Jobs: I will solve your problem for you. And you will pay me. You don't have to use the solution—if you want options, go talk to other people. But I'll solve your problem for you, the best way I know how, and you use it or not, that's up to you—you're the client—but you pay me.
That's how I want to work. My job is to solve problems.
The operative phrase here—besides you will pay me—is the best way I know how. If you're hiring any creative professional for any reason, it could be because you have better things to do with your time, but it's far more likely because they can do something you can't—at least not as well. If they're competent and acting in good faith—big ifs—you're implicitly trusting them to be doing the best they can, the fastest they can, because they invariably have more information about the problem than you do.
The compromises baked into the prescriptive budget-deadline calculus are the very same that blow said budgets and deadlines. They're land mines laid by your own hand. They're the dissonant mind-bug that tells you that a decision is wrong but you're doing it anyway, and will have to pay for it later. They're the disdain for a piece of work that makes you not want to attach your name to it. They're real-world failures that destroy value for your clients and damage your reputation. The only modus operandi that thrives in this environment is mediocrity. Everybody loses.
I searched for a real-world example, one that was outside my own tiny, insular community of practice, and I found one: the 2008 global financial crisis. This occurred precisely because the system, all the models, everything, was designed with the expectation that all, or at least most, events would go according to plan. Options trader cum professional flâneur Nassim Taleb made himself fabulously wealthy by betting that events would not go according to plan, and then he wrote four books about it.
The gist of the latest of the four is simple: you don't have to try to predict the future if you structure your environment so that any possible gain is much larger than any possible loss. That is what I'm trying to harness with this enterprise.
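To make the asymmetry concrete, here is a toy expected-value calculation (the probabilities and payoffs are numbers I invented for illustration; they are not Taleb's figures):

```python
# Toy illustration of an asymmetric payoff structure:
# frequent, capped losses against a rare, outsized gain.
p_loss = 0.95  # most attempts fail...
loss = -1.0    # ...but each failure costs a bounded amount
p_gain = 0.05  # occasionally an attempt pays off...
gain = 40.0    # ...disproportionately

expected_value = p_loss * loss + p_gain * gain
print(expected_value)  # positive (≈ 1.05) despite losing 95% of the time
```

The point is not the specific numbers but the shape: when the downside is capped and the upside is not, you can be wrong most of the time and still come out ahead, so prediction stops being the load-bearing element.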
Software is about taking costly human processes and imbuing them with enough detail that they can be executed by machine. Whether it makes an existing process more efficient, or generates an entirely new capability, its value is almost always a function of how much it is used. If you do software wrong—provided you can even complete it—it won't get used, and therefore will not have any value.
I already know how to do software right: if you want a particular outcome, it takes what it takes. Usually what it takes is time, because any problem, no matter how minuscule, hinges on one person getting the solution right. This is a well-documented fact in the industry: adding people to an unsolved problem does not get you a solution any faster; in fact, it makes it slower. Pressing people with made-up deadlines, for tasks beyond a certain complexity, just gives you, at best, perfunctory results that serve no purpose other than to cover their own asses.
To make software that people use—that is, to arrange the environment so that all possible gains far outweigh all possible losses—I am confident that the key is to make the prescriptive budget-deadline calculus a relic of the past. And I believe I've found a way to do this that is competitive on price, time, and value with incumbent methods: you can have it fast, cheap, and good, as long as you aren't too picky about what it is—or, more accurately, the strict sequence in which you receive it.