I am unapologetically harsh when it comes to the brain-dead casting of information management tools to pre-computer standards. I recently caught some heat about a comment I made regarding the transposition of grid calendars, a form adapted to paper, onto a computer screen that does not obey the same constraints. This document explains my position.

Skeuomorphs are important for conveying the purpose of a form when the underlying technology no longer demands that form for functional reasons. I'm not going to deny that. It's an important psychological and semiotic effect, one that conveys not only the purpose but the mood and tone of an object. What I question is whether it ought to be the principal force driving the form of certain interfaces.

Metaphors Near and Dear

Let's consider a design metaphor we're all familiar with: a button. Once upon a time a button would have been connected to a piston on a spring, which might have pushed a lever, which might have been connected to a cam or clutch or something. Either way there would be some deterministic mechanical machinery underlying the button and directly physically connected to it. Later on the button would have completed an electrical circuit of increasing complexity and distance from the determinism of clockwork.

It would be interesting to know whether the concept of a push-button was initially co-opted from the actual buttons used to fasten clothing.

The buttons we encounter nowadays, however, are just pictures of buttons which we activate with simulated index fingers (or our actual fingers, in the case of multi-touch). But they aren't really even that. When we activate a button we're injecting bits from hardware into one computational model, which is read by a second model, which renders a third model to the screen, which sets a fourth, and eventually an Nth, model into motion, ultimately reducing to a pattern of bits imprinted on and successively read back off of some kind of hardware.

So, why is it important to know this? Well, a button attached to deterministic machinery is pretty much guaranteed to do something every time you push it, unless it physically jams. And if it jams, its anatomy is big and simple enough that you can see the jam and clear it. Moreover, those interested can actually infer something about the structure of the system. Two things can be said of computers and computer-driven devices, by contrast: you can't tell how they work, and you can't tell why they fail.

For instance, neuroscientists and usability engineers tell us that the threshold for perceiving a response as instantaneous is somewhere around 100 milliseconds, which is a pretty hefty fudge factor, all things considered. We have computers that do billions of operations per second and pocketable devices that do hundreds of millions. Everything is really, really fast now, so we should be set, right?

Well, no, because we connect our virtual buttons not to pistons or circuits but to computations, which are completely different in character from anything that came before them. I'm sure everyone has experienced the button for the supposedly instantaneous operation that ticks (possibly way) over the 100ms mark. It goes something like this:

FFFFFFFFFFUUUUUUUUUU

Here, try it:
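What follows is a minimal sketch in TypeScript of what such a button amounts to; the element id and the size of the synchronous workload are invented for illustration, but the shape of the problem is the same: the handler is a computation, and it holds the interface hostage until it finishes.

    // A "button" whose handler is a computation rather than a piston.
    // (The element id '#instant-button' and the loop size are assumptions.)
    const button = document.querySelector<HTMLButtonElement>('#instant-button');

    button?.addEventListener('click', () => {
      const start = performance.now();

      // Some "supposedly instantaneous" work. Its cost depends on the data it
      // touches, not on the button, and it blocks the main thread while it runs.
      let acc = 0;
      for (let i = 0; i < 50_000_000; i++) {
        acc += Math.sqrt(i);
      }

      const elapsed = performance.now() - start;
      // Anything much past ~100ms reads as the interface hesitating.
      console.log(`handler took ${elapsed.toFixed(1)}ms (result ${acc})`);
    });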

When we proffer these controls, we are effectively making the same promise to people as we would be if they were mechanical systems or pre-transistor electronics. Anybody with a significant memory of these is going to wonder why the button, when offered as part of a linear task flow, is not sufficiently instantaneous.

Promises, Promises

And this is really the idea that I'm trying to convey. If we don't account for the machinery underlying the controls, we screw up the experience at an extremely fundamental level. And it's not something we can fix with better/faster hardware, because the software load changes at a rate which can be orders of magnitude higher than the rate of improvement to hardware. Simply put, adding one extra wafer-thin mint of data to your supposedly instantaneous push-button operation could explode past the magical 100ms mark and make a nasty stain on the carpet.
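To make the wafer-thin mint concrete, here is a rough sketch in TypeScript; the task (counting duplicate entries pairwise) and the input sizes are arbitrary, and the exact timings will vary by machine, but the shape of the growth is the point: the same operation is comfortably instantaneous at one size and conspicuously not at a slightly larger one.

    // A deliberately naive quadratic pass over the data: every item compared
    // against every later item. Fine for a small list, ruinous for a bigger one.
    function countDuplicates(items: number[]): number {
      let dupes = 0;
      for (let i = 0; i < items.length; i++) {
        for (let j = i + 1; j < items.length; j++) {
          if (items[i] === items[j]) dupes++;
        }
      }
      return dupes;
    }

    for (const n of [2_000, 8_000, 32_000]) {
      const data = Array.from({ length: n }, () => Math.floor(Math.random() * n));
      const start = performance.now();
      countDuplicates(data);
      // Quadrupling the input multiplies the work by roughly sixteen.
      console.log(`${n} items: ${(performance.now() - start).toFixed(1)} ms`);
    }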

And that's just accounting for computational complexity. There are other issues with fallible hardware peripherals, networks, administrative domains, politics, etc.

This is the part I find the most confusing. We're using computers by the freighter-load to simulate pre-computer technology, yet by comparison we hardly use them for what they actually do, which is manipulate information. The average person really doesn't do much actual computing with all the computers in his or her life. That is, the computation a person is exposed to is canned, made by somebody else to perform some subcutaneous action that our friend doesn't understand and likely doesn't care about. But occasionally people do want to manipulate information in a way that isn't preordained, at which point the tools available to them are clumsy (like Excel), way over their heads (like just about any programming language), or both (like Automator). And I think that is a lost opportunity of woeful proportion.

So on one hand we've got these degraded copies of pre-computer technology, and on the other we have this huge barrier to manipulating information in its most natural state.

What We're Actually Doing

The way you do anything on a computer, at least the ones we use, is to lay a bunch of symbols end to end on a long strip (implemented more or less in RAM), and then jump around from symbol to symbol, sometimes overwriting them with new symbols, sometimes using them to decide which symbol to jump to next. Each jump takes a unit of time, and the rate at which they happen roughly corresponds to the MHz/GHz rating of your CPU. With this you can construct a model of pretty much any other structure, physical or mathematical, and amortize its shape over time. This is the same whether you are simulating mechanical systems, electrical relays, or something that has no pre-computer equivalent.
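Here is a toy rendering of that picture in TypeScript; the instruction encoding is entirely made up, but it captures the essentials: one long strip of symbols, a cursor jumping from cell to cell, some symbols getting overwritten, and some symbols deciding where the cursor goes next.

    // A strip of symbols (numbers, for simplicity) and a cursor that walks it.
    // Invented encoding:
    //   0            halt
    //   positive n   jump forward n cells
    //   negative n   write |n| into the next cell, then jump forward 2 cells
    type Strip = number[];

    function run(strip: Strip): Strip {
      let cursor = 0;
      let steps = 0;
      while (cursor >= 0 && cursor < strip.length && steps < 10_000) {
        const symbol = strip[cursor];
        if (symbol === 0) break;            // halt
        if (symbol > 0) {
          cursor += symbol;                 // a symbol used to pick the next jump
        } else {
          strip[cursor + 1] = -symbol;      // a symbol overwritten with a new one
          cursor += 2;
        }
        steps++;
      }
      return strip;
    }

    console.log(run([-4, 0, 2, 0, 0]));     // [-4, 4, 2, 0, 0]: writes, jumps, halts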

But it's these structures that have no pre-computer technological equivalents that are the most interesting and powerful. For instance, the way we think is not in rigid, rational categories and opaque chunks but associatively, and with arbitrarily complex anatomy. We bounce from subject to subject along associative links. Some little piece of one thing might remind us of something completely different and ostensibly unrelated, certainly not related along any preset categories — faces in clouds and the like.

So when I complain about the lack of insight and imagination going into something like a calendar, the complaint is more structural than cosmetic. For instance, why should I have to manage the time it takes to commute to a particular meeting? Or how about a to-do item that is conditional on having money in my budget (like a large purchase)? Or, if I'm estimating a project, one that accounts for stat holidays or the vagaries of other businesses' schedules? These behaviours are all, in principle, no harder to realize than any other, but when we frame the problem in the context of a metaphor, we cut ourselves off from what we're actually trying to achieve for our users, which ultimately boils down to informing them about where they are in their lives and what they ought to do next.
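A sketch of what I mean, in TypeScript: calendar items as data carrying conditions and derived times, rather than cells in a printed grid. Every name and number here is invented for illustration.

    // Everything the calendar needs to know about the world right now.
    interface Context {
      budgetRemaining: number;                      // dollars left this month
      commuteMinutes: (destination: string) => number;
    }

    interface Item {
      title: string;
      startsAt?: Date;                              // fixed-time items, e.g. meetings
      destination?: string;
      ready?: (ctx: Context) => boolean;            // conditional items, e.g. big purchases
    }

    // When do I actually have to stop what I'm doing? For a meeting somewhere
    // else, the answer includes the commute, not just the printed start time.
    function leaveBy(item: Item, ctx: Context): Date | undefined {
      if (!item.startsAt) return undefined;
      const commute = item.destination ? ctx.commuteMinutes(item.destination) : 0;
      return new Date(item.startsAt.getTime() - commute * 60_000);
    }

    // Which to-dos are actionable at all, given the state of the world?
    function actionable(items: Item[], ctx: Context): Item[] {
      return items.filter(item => !item.ready || item.ready(ctx));
    }

    const ctx: Context = {
      budgetRemaining: 180,
      commuteMinutes: destination => (destination === 'client office' ? 40 : 10),
    };

    const items: Item[] = [
      { title: 'Planning meeting',
        startsAt: new Date('2011-06-01T14:00:00'),
        destination: 'client office' },
      { title: 'Buy a new monitor',
        ready: c => c.budgetRemaining >= 300 },
    ];

    console.log(leaveBy(items[0], ctx));            // 13:20, not 14:00
    console.log(actionable(items, ctx));            // the monitor waits for the budget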