In 2000, I bought a book on the recommendation of a friend. It's a gigantic, floppy paperback, with an ethereal, nautically-themed cover, and the grave title of Cognition in the Wild. It's an ethnographic analysis of the navigation team of a US Navy amphibious helicopter transport, which dives deeply into the nuts and bolts of how people think and collaborate. This fearsome tome sat on my shelf for ten years. All the better, too, as I don't think I would have understood it if I had picked it up a day sooner. Perhaps because in it, the author, Edwin Hutchins, wrote passages like this:

A navigation chart is an analog computer.


If we built the right formal system, we could now describe states of affairs in the world that would have been impossible or impractical to observe directly. Such a state of affairs might be something in the future, which we cannot observe directly, but which can be predicted. I consider the mastery of formal systems to be the key to modern civilization. This is a very, very powerful idea.

I'll get back to these statements in a moment, as I'm still laying a foundation, but reading them I was floored. Around this time, I had acquired a copy of a lesser-known book by one of Hutchins's colleagues, Don Norman. In it, he describes a type of thing called a representational artifact, a human-made object, purpose-built to embody representational states and the transitions between them. In other words, a representational artifact has a meaning beyond, or even irrespective of, its utilitarian physical properties. Norman also wrote of cognitive artifacts, which are representational artifacts designed to support memory and reasoning—Things That Make Us Smart. This, I surmise, is what Hutchins meant by the navigation chart being a kind of computer.

I had just begun, at this time, to develop my own understanding of formal systems and how they relate to computation, largely due to another daunting volume, this time by Douglas Hofstadter. Gödel, Escher, Bach is not only about formal systems, but also about morphisms, which are structure-preserving mappings between arbitrary categories of things. Think of it like a mathematical function, but more abstract, and simultaneously more concrete. A morphism could, for instance, be a way to describe what goes on when we take a photograph, or transcribe a conversation, or construct a building. Hofstadter's argument was that meaningful form in our environment is transformed across media, through our senses, with its meaning intact, and represented analogically in the structures of our brains. It is then synthesized with other meaningful forms within, and turned outward through our bodies to act on the world. The ongoing interplay between these states of affairs is what we call consciousness.

Whether or not you buy that last bit is irrelevant, because in this context it's the morphisms that are important. A known initial state plus a known morphism equals a predictable result. This equates to the deliberate composition and manipulation of formal systems. This process is also known as computation. Specific types of morphisms may also be employed to translate a representational state from one physical medium to another, to yet another, without losing any information. This idea provides the foundation for data storage and networking. It's also what we're doing when we render an image, or navigate a ship, or prosecute a law, or turn a blueprint into a building.
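That equation can be made concrete with a toy sketch, entirely my own and not from any of the books above: a known state plus a known morphism (here, a dead-reckoning step, a stand-in for chart-work) yields a predictable result, and serializing the state to another medium and back is itself a pair of morphisms whose composition loses nothing.

```python
import json

def dead_reckon(state, hours):
    """A 'morphism' on navigational state: one-dimensional dead
    reckoning, a stand-in for the chart-work Hutchins describes."""
    return {**state, "position": state["position"] + state["speed"] * hours}

state = {"position": 0.0, "speed": 12.0}   # known initial state
predicted = dead_reckon(state, hours=2.0)  # known morphism, predictable result
assert predicted["position"] == 24.0

# Translating the representational state to another medium (a string
# of bytes) and back is another pair of morphisms; their composition
# loses no information, which is the foundation of storage and networking.
assert json.loads(json.dumps(predicted)) == predicted
```

The names and numbers are invented for illustration; the point is only that prediction and lossless translation are both morphism-application.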

This trio of books had such an impact because they furnished me with the notion that any artifact, and any non-artifact, and any system or mix thereof, can be a representational medium, and any representational medium is inherently a computational medium. The machines we recognize today as computers constitute an efficient, yet extremely narrow interpretation of this idea.

This brings me to the heterodox architect, Christopher Alexander. Around 2008 or so, I read his 1964 PhD dissertation, Notes on the Synthesis of Form. The thesis was about the use of a structure-preserving mathematical decomposition of complex design problems into hierarchies of simpler problems which could be readily solved, then recomposed into a complex solution. Seeing enormous value in this technique, and puzzling at why I couldn't find an implementation anywhere, I wrote the code myself. Aside from my math skills being insufficient at the time to implement the method properly, I couldn't understand how Alexander generated the initial structure, manually, to be processed by his computer program.

In order to use Alexander's technique, you had to gather hundreds, if not thousands, of what we would typically call requirements—orders of magnitude more than common practice. Then you'd have to work them to get them all into roughly the same conceptual scope—that is, such that they deal with the same size of concern—arguably a subjective criterion. Then you connect these objects together on the condition that a change to one affects the state of another. Such a task would be positively enormous, and must be completed before you can even begin to apply the technical innovation.

The hierarchical shape produced by breaking this hairball apart at its natural articulation points is called the decomposition pattern. Because the decomposition pattern depends on the structure of these connections, a latecomer requirement would almost certainly produce a wildly different result. This represents a huge bottleneck to progress, with the perennial one-more-thing threatening to torpedo any nascent plan. While theoretically sound, Alexander's method, as initially described, would be too fragile to put into practice. Nevertheless, as a prolific architect, he must have figured it out somehow. Luckily, he left a hint.
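The decomposition step itself can be sketched in a few lines. This is my own toy, far cruder than Alexander's actual programs: requirements are nodes, interactions are edges, and a brute-force bisection finds the split that severs the fewest edges, i.e. the natural articulation point. Recursing on each half would yield the hierarchical decomposition pattern; re-running with one extra requirement shows how easily the whole shape can change.

```python
# Toy decomposition sketch (mine, not Alexander's): split a set of
# interacting requirements in two, cutting as few interactions as possible.
from itertools import combinations

def min_cut_bisection(nodes, edges):
    """Brute force: try every half-sized subset, keep the split that
    severs the fewest interaction edges. Fine for toy sizes only."""
    best, best_cut = None, float("inf")
    for left in combinations(nodes, len(nodes) // 2):
        left = set(left)
        cut = sum(1 for a, b in edges if (a in left) != (b in left))
        if cut < best_cut:
            best, best_cut = (left, set(nodes) - left), cut
    return best, best_cut

# Six invented "requirements"; edges connect the ones that interact.
nodes = ["light", "glare", "view", "noise", "privacy", "access"]
edges = [("light", "glare"), ("light", "view"), ("glare", "view"),
         ("noise", "privacy"), ("privacy", "access"), ("noise", "access"),
         ("view", "privacy")]  # one weak link joining the two clusters

(left, right), cut = min_cut_bisection(nodes, edges)
# The natural articulation point is the single cross-link, so the
# best split severs exactly one edge.
assert cut == 1
```

All the requirement names are hypothetical; real problems involve hundreds of nodes, which is precisely why the brute-force version above does not scale and the manual version was so fragile.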

In the preface to the paperback edition of Notes, Alexander stressed the importance of the diagrams that riddled the text, and how they were actually more important than the math. I didn't understand what he meant until I got my hands on all 15 pounds of his magnum opus, The Nature of Order, which was after reading Hutchins, Norman, Simon, and Hofstadter. The key is repeated all over the text: it's the geometry, stupid. And then it hit me:

Alexander is using the building site itself
as a computational medium.

Consider this: a building, when complete, is every bit as much a representational artifact as it is a gadget for keeping off the rain. It encodes a set of constraints and affordances that literally program how human beings interact with one another. Architects have known about and harnessed this effect for various ends since the first buildings. Alexander's interest is in creating structure that supports the widest range of freedom in ordinary human life. To achieve this, he uses the current representational—that is to say, geometric—state of the system, on the ground, to compute the next step in the construction process, itself a recursive application of a carefully-selected set of fundamental morphisms. Christopher Alexander has amortized the computation of the requirements gathering, and their processing, and the decomposition pattern, and the construction program, through a set of morphisms which are acted out, by people, in real, physical space. The Nature of Order is a four-volume, 2165-page instruction manual.

If you can do this with a geometry problem, you can do it with a topological one. In fact, Alexander's original thesis is ultimately a topology problem. The morphisms he defines embed semiotic structure into the definite geometry of a building. When applied, the building process yields meaningful structure. Abstract relations between social concerns literally become concrete. This suggests the technique should be even easier to implement when you don't have to pour any.

I don't yet know how well Alexander's fundamental geometric properties translate directly to abstract topological structure, but they all have to do with contrast, symmetry, proportion, continuity, recursion, and self-similarity. Indeed, Alexander has demonstrated what Hutchins has called for. If philosophers, linguists, mathematicians, anthropologists and designers saw fit, we wouldn't have to wait another fifty years for a generalization of this process.

Interlude: The Ethics of Formal Systems

The bridge crew of Hutchins's USS Palau acted as a computational medium for carrying out the formal system called navigate the ship. If such a model works for the bridge crew, then it can be expanded out to all of society, barring any evidence that people interact in ways that do not process and exchange information. Before you balk: I'm not suggesting that we're all like computers—if anything, it's the other way around. No, I'm trying to say something considerably more subtle.

A formal system is a mathematical distillation of a process which operates within a certain envelope. This envelope is like the rules of a game: while the play itself may vary wildly, it never varies outside the rules. This gives formal systems the predictive capabilities Hutchins wrote about.

Another word for formal system, therefore, is game. Yet another word is policy. For our purposes, these concepts are equivalent.
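The envelope idea can be shown with a minimal sketch of my own devising: the rules of a toy counting game, written as code. From the rules alone we can enumerate every state any play could ever reach—the predictive capability Hutchins wrote about—without observing a single actual game.

```python
# A toy formal system (mine, not Hutchins's): rules as an envelope.
def legal_moves(state):
    """Rules of an invented counting game: add 1 or 2, never pass 10."""
    return [state + step for step in (1, 2) if state + step <= 10]

def reachable(start):
    """Compute, from the rules alone, every state any play can reach."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for nxt in legal_moves(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# However wildly individual plays vary, they never leave the envelope.
states = reachable(0)
assert states == set(range(0, 11))
```

The game itself is arbitrary; what matters is that the envelope is computable in advance of any play, which is exactly what makes formal systems predictive.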

A game with contradictions in its rules isn't fun, because it can't be trusted to be consistent. A game with no contradictions runs the risk of being too fun, such that we prefer it over reality. Games are preferable to reality—at least if you're winning—because the rules of the latter are not—and never will be—fully understood.

The problem with an inconsistent formal system is obvious: it's exactly equivalent to a program crashing on a computer—it kicks you out to the surrounding context and leaves you to fend for yourself. The problem with a consistent formal system is something I'm tempted to call drift. It hums away until some cumulative effect outside the system's description causes the surrounding context itself to crash, like a computer program that gobbles up all its host's memory, or runs the machine so hot that it catches fire. This scenario leads the formal system's inventor to say something like oh, it would have worked forever if reality hadn't gotten in the way.

Another word for this drift is epiphenomena: things that occur in the surrounding environment while a system is running. Yet another word is externality, like the classic example of a factory polluting its environment as it churns out widgets that its owner sells at a profit. We only notice or complain about externalities that harm us. We tend to be silent about the ones we benefit from, assuming we're even aware of them. See where I'm going with this?

Some games are won by the party with the greatest physical strength and dexterity, others are won by the sharpest intellect. There is also a category of game where the advantage goes to the person who knows the rules best. Nobody knows the rules better than the person who wrote them.

There is an idea prevalent in our culture that if we just find the right system, we can ride it out in perpetuity. We can gamify social interaction. This idea is manifested most prominently in the departments of psychology, sociology, law, political science, and economics. Moreover, there are now actual game designers who have expressed an interest in tinkering with public policy. I want to suggest that the very idea of trying to define an all-encompassing system, no matter how fair you try to make it, will always produce systems whose rules benefit the makers, and whose externalities they can at best benefit from, and at worst ignore.

I agree with Hutchins that the mastery of formal systems is a very, very powerful idea. I also submit that part of that mastery is understanding that possibly the most important criterion for assessing the potential damage a formal system can cause is its size. That is: scale, scope, complexity. As designers of systems, we should aim for compact, narrow, and understandable, and we should blow them up periodically and reform them. The closer a person's rhetoric, pertaining to a system, approaches everything and forever, the less that person should be trusted.

The problem with large systems is not that they're too complex, it's that they're too simple. Or rather, too simplistic. Their ability to be defined is the indication of their inadequacy. The simplicity and comprehensibility of a system are a natural handbrake on its domain of operation and range of applicability. If you can understand a system, it either describes states of affairs only in extremely general terms—like physics—or it simply doesn't describe a very large part of the world at all.

Rather than design master plans, we designers of systems should instead design federations of little systems linked together by extensible protocols and generic interfaces. This way we can create complex systems without having to explicitly define them. These protocols and interfaces themselves are systems which, in order to work, must constrain what can be said. This is why we still have to deliberately blow these systems up and reassemble them every once in a while. It's essential that we understand that.

The master plan is still an attractive model to people in power: Get the smartest eggheads together—pay them whatever you need to—and have them just magic up a comprehensive solution. You can hang them if they fail. It's arguably a byproduct of being in power that we come to believe that we can concoct a master plan that will work. People who are not in power, by and large, are still credulous about the notion that the way to solve problems is a master plan. I see it as our job, as designers of systems, to set this record straight. I'm fairly confident that the least frustrating way to get this message across will be by demonstration.

Alexander has found a way to create buildings—at a competitive cost—without a master plan. Rather, he uses a protocol that sequentially applies discrete, structure-preserving transformations to a region of space, all the while taking continual input from the users, the surroundings, and the partially-computed product. The result is a building which is considerably better adapted to the real environment in which it needs to function.
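The shape of that protocol can be sketched in a few lines. This is my own toy, not Alexander's actual procedure: each step applies whichever transformation most improves the fit of the current, partially-computed state, taking that state itself as input, and stops when nothing helps—no master plan required.

```python
# A toy sketch (mine) of stepwise, structure-preserving unfolding:
# the current state computes the next step.
def stepwise(state, transformations, fitness, steps=20):
    """Greedy unfolding: at every step, apply whichever transformation
    most improves the fit of the current state; stop when none do."""
    for _ in range(steps):
        best = max(transformations, key=lambda t: fitness(t(state)))
        if fitness(best(state)) <= fitness(state):
            break                      # no transformation improves things
        state = best(state)            # next step computed from current state
    return state

# Invented stand-ins: the 'fitness' plays the role of continual input
# from users and surroundings; the moves are the fundamental morphisms.
target = 7
moves = [lambda x: x + 1, lambda x: x - 1, lambda x: x * 2]
result = stepwise(0, moves, fitness=lambda x: -abs(x - target))
assert result == 7
```

Everything here is a placeholder for something far richer—real transformations act on geometry, and real fitness is judged by people on site—but the control flow is the point: no step is planned before the previous one has been carried out and observed.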

The buildings are better adapted because they shape and give definition to physical space without imposing upon it. The structure affords certain patterns of social interaction, rather than prescribing them. The people who use Alexander's buildings report an unprecedented sense of freedom and belonging. What's more, the process to create these buildings causes them to exhibit a conspicuous geometric pattern, which becomes a signal to people that the structure behaves this way.

I submit, once again, that if you can get these results with buildings, you can get them with systems that aren't buildings. Alexander's original doctoral thesis was about the abstract meta-problem of solving design problems, irrespective of specialist domain. His solution, matured over half a century, is as elegant as it is profound. The work of people like Hutchins, Norman, Hofstadter, and Simon not only corroborates just how sophisticated it is, but also indicates that the approach isn't limited to buildings. They provide the theoretical basis to show that the potentially alien-looking things we'll have to do to implement this generalized process are deliberate, and that we actually have some idea about what we're doing.

This is important, because as Alexander wrote in his latest book, which carries the unassuming title, The Battle for the Life and Beauty of the Earth, there will be pitched opposition to these new methods, if for no other reason than that they look unusual. Leave aside the fact that the incumbents will have a lot of retooling to do if these new methods catch on. To keep from being interfered with, we'd have to be able to show at every turn that the method is working, and the method should take no more time or money than the incumbent to execute. That part is going to be hard.

Any structure or process we can simulate on a computer, whether physical or virtual, whether made of bits or bricks or people, either is a formal system, or corresponds to an analogous one. Mastering formal systems means recognizing them as first-class entities, and even implementing them in non-traditional media, i.e., not a conventional computer. Declare up front to every stakeholder that you're using whatever your equivalent of the building site is, on purpose, to compute the next step in the process. And if people understand what a cognitive artifact is for, it doesn't look like a digression or waste of effort to make one.

This process, as I've argued elsewhere, is going to entail a new kind of service, by a new kind of professional, mediated by a new kind of contract—likely several of each. The common thread in these services is, roughly, to transition business and institutional entities from a paradigm of mechanistic master planning, to one of stepwise, structure-preserving transformations. The vehicle is whatever proximate thing you actually do. To find willing participants, I suggest looking in places where the incumbent method hasn't worked out very well, and everybody knows it. If you've put up with these 3000 words so far, I'd wager the incumbent method isn't working so hot for you either.

I embarked on this odyssey because I'm fed up with working in an ecosystem that selects for garbage. In the quaternary industrial sector, which is all about information and design, the risk of the process, and the agreements around it, still make it safer and more lucrative to look like you solved a problem than to actually solve it. In other words, the easiest way to win commercially as a designer is still to fail at design. I resolved that if I want to earn a living doing good design, I'm going to have to bootstrap what is ultimately a social system—a protocol—that rewards good design.

You can scarcely compress the time it takes to do good design. The best you can do is arrange the process so that progress is conspicuous and the partially-completed result has its own intrinsic value. Alexander has figured this process out for buildings; we can use the work of these other gentlemen to figure out how far it extends.