Delivering a successful user experience ultimately reduces to whether or not we can deliver what we promise. For successful experiences around computers, it is important that we understand certain principles of computing as well as key elements of the physical, social and political issues that arise within information systems, so that we can make good on our often tacit guarantees. Indeed, the questions "can we do it?" and "should we do it?" are often deeply intertwined. Below I outline a few specific questions which you can ask throughout the user experience design process to illuminate issues of computational feasibility and its companions: information security, privacy, and even 21st-century business viability.

The first two questions are arguably the most important because they test the very goals you are trying to achieve for your users. I don't expect you to recognize on your own when they need asking; a competent programmer should be able to look at what you're trying to do and tell you. If you don't have access to a programmer who can answer these questions, find one immediately.

Is this interaction a decidable problem?

There are some questions in this world that you just can't compute the answer to. I mean aside from obvious qualitative, philosophical, navel-gazing questions. Surprisingly practical questions routinely crop up which are and will remain forever undecidable by computer.

A common form of undecidable problem boils down to this: you cannot tell how a computer program will behave without running it. Furthermore, you could run it forever and never witness some of its prescribed behaviour. Possibly the most critical everyday scenario where this is an issue, and one with which I have some professional experience, is in antivirus software.
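To make the impossibility concrete, here is a minimal sketch, mine rather than the original author's, of the classic argument. The halts() function is a hypothetical oracle that supposedly predicts whether a program finishes; the point is that any implementation of it can be defeated.

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) eventually finishes."""
    raise NotImplementedError("no correct implementation can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:     # predicted to halt? then loop forever
            pass
    return              # predicted to loop? then stop immediately

# Asking halts(paradox, paradox) corners the oracle: whichever answer it
# gives, paradox does the opposite, so the answer is wrong. No correct
# halts() can exist, and "what will this program do?" stays undecidable.
```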

If you sell antivirus software on the premise that it will make your customers safe from viruses, you aren't being entirely honest. This is because it is mathematically impossible to tell what is a virus — it is only possible to tell, through a great deal of ongoing effort, what might be a virus. The same, incidentally, goes for spam.

If you wish to cede that responsibility to the marketing department, that's fine. There is a far nastier side effect of this condition that is purely the province of user experience: the false positive. After all, it is this inevitability that causes antivirus and anti-spam software to eat important files, emails, blog comments, etc. It would thus be wise to avoid any guarantees of safety, tacit or otherwise, and to make helping your users out of these snarls a top priority.

Can we guarantee timely results?

Computers have the unique ability to emulate other machines inexpensively, and it is evident that we put considerable industry-wide effort into doing just that. The trade-off is that these emulations don't behave like deterministic mechanical or electronic devices do. As in life, pretending to be something else takes more effort and time than being the real thing, and the masquerade is only as convincing as its actor's skills will permit.

If you put a picture of a button in front of someone and beckon her to press it, she will expect some evidence of its effect within about a tenth of a second. If you can't guarantee at least the semblance of progress in that time, perhaps it would be better not to have that button present in the first place. If you must have a button, at least have its designated effect be something you can promise in a tenth of a second. For instance, you could start an asynchronous process and then promise to signal, politely, its completion.
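A minimal sketch of that pattern, with made-up stand-ins for the UI toolkit and the slow task: acknowledge the press immediately, do the work elsewhere, and signal completion when it arrives.

```python
import threading
import time

def show_spinner(message):
    print(message)      # stand-in for whatever immediate feedback the toolkit provides

def notify(message):
    print(message)      # stand-in for a polite, non-blocking completion signal

def slow_export():
    time.sleep(2)       # stand-in for work that cannot finish within a tenth of a second
    return "42 records"

def run_export():
    notify("Export finished: " + slow_export())

def on_button_press():
    show_spinner("Exporting...")                               # visible well within ~100 ms
    threading.Thread(target=run_export, daemon=True).start()   # hand the slow part off

if __name__ == "__main__":
    on_button_press()
    time.sleep(3)       # keep this demo process alive long enough to see the signal
```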

Computer scientists use a classification called the complexity class to gauge how expensive a particular computation is, and this is probably more important than any annoyance you could inflict upon someone through the abuse of buttons. Not accounting for this fundamental behaviour of computing could even cost you your business.

If you are old enough to remember Friendster, you will remember how intolerably slow it was. To probe your memory a little deeper, you might also recall how it enabled you, albeit sluggishly, to browse through people you were connected to two and three degrees away. That very action increased exponentially in cost every time someone joined the network. Friendster's growing popularity and tacit promise to their users in the form of that feature ate them away from the inside. By the time they realized this and yanked the feature, everyone had bolted to MySpace, who was considerably less ambitious in matters of social graph analysis.

Indeed, it is the amount of data given to a computationally intensive operation that destroys its feasibility, which is why it is such a pernicious problem when those data ultimately equate to your bread and butter. An operation that seems trivial today could be crippling tomorrow. If you are faced with one of these problems, you will invariably have to compromise, either by not providing your users with the intensive feature, or by using a cheaper but inexact heuristic.
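As a rough, back-of-the-envelope illustration (the numbers are assumptions, not Friendster's), here is how quickly "everyone within three degrees" grows with the average number of friends per person:

```python
def reachable(avg_friends, degrees):
    # Roughly f + f**2 + f**3 people are within `degrees` hops
    # when everyone has about `avg_friends` friends.
    return sum(avg_friends ** d for d in range(1, degrees + 1))

for f in (20, 50, 150):
    print(f"{f:>4} friends each -> {reachable(f, 2):>9,} within 2 degrees, "
          f"{reachable(f, 3):>12,} within 3")

# At 150 friends each, three degrees already covers about 3.4 million
# people per page view, before any filtering or ranking.
```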

The following questions don't need a programmer handy; they can be answered with a modicum of common sense, as they involve what is going on outside the computer.

Does this interaction access hardware?

This question initially might appear banal: of course you're accessing hardware all the time. But consider what happens when you accidentally unplug a network cable or external hard drive: anything from minor inconvenience to major panic. Computers tend to have much more going on than they disclose through their displays, and very little control over their physical environment. Similarly, very little software braces adequately for the eventuality that a certain physical resource might not be available when called upon. While information pertaining to the state of a computer's peripherals is usually available, it takes extra effort to be genuinely helpful in the event of a service interruption.

At a smaller scale the same thing happens. A computation isn't naturally sensitive to anything going on beyond its confines, because it exists in a bubble of abstraction. It needs a model of the outside world in order to interact properly with it. For instance, the material from which memory cards are made wears out after a few thousand writes, so the file systems designed for flash memory are typically made to write files in little pieces randomly all over the card to preserve it. A file system optimized instead for a spinning disc could numbly cause irreparable physical damage.

Being in its bubble, software doesn't especially care where it gets its input from, and this poses a practical problem which complements the complexity problem I mentioned above. The current way we engineer computing is by working on a tiny piece at a time in a tiny space, mere nanometres across. Since peripheral hardware is, relatively speaking, much bigger and farther away, it is also orders of magnitude slower. The typical strategy in this case is to take huge gulps of slow data and feed them through a hierarchy of incrementally smaller spaces and higher speeds, and to do the same thing in reverse on the other side. But if you don't get the balance right, you get a traffic jam on one end and a vacuum on the other.
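A small illustration of the gulp strategy (mine, not the author's): copying a file one byte at a time makes a round trip to slow storage for every byte, while a buffered copy fetches large blocks into fast memory and hands them out from there.

```python
def copy_one_byte_at_a_time(src, dst):
    # buffering=0 turns every read into a separate trip to the slow device.
    with open(src, "rb", buffering=0) as fin, open(dst, "wb", buffering=0) as fout:
        while byte := fin.read(1):
            fout.write(byte)

def copy_in_gulps(src, dst, chunk_size=1024 * 1024):
    # One big request per megabyte; the memory hierarchy below does the rest.
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)
```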

We ask the hardware question when we do things like open files or databases, go online or access peripherals. We discuss all the ways the hardware can fail, or what happens if it is not in the same state as when we left it. This is important for protecting our users from surprises and helping them troubleshoot as well as making good on our tacit guarantees of timeliness.
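When we do ask it, the answer often amounts to code like the following sketch, with an illustrative path and wording: anticipate that the resource may be missing or unresponsive, and say something a person can act on.

```python
import errno

def read_settings(path):
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        raise RuntimeError(
            f"Couldn't find {path}. If it lives on an external or network "
            "drive, check that the drive is connected and try again.")
    except OSError as e:
        if e.errno in (errno.EIO, errno.ETIMEDOUT):
            raise RuntimeError(
                f"The device holding {path} stopped responding; "
                "it may have been unplugged.")
        raise
```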

Are other people involved?

A great deal of the computing we do in the 21st century uses information that comes from other people or administrative entities, often accessed across a network. And anything that traverses a network is inherently mired in politics. Computers feed on information, and certain operations aren't very useful if somebody out there won't let you have it, or is otherwise fooling around. In the context of user experience, this means getting cooperation from third parties to deliver results before promising them to our users. And of course, we can no longer assume that everybody, including our users, is playing with the best intentions.

Network access, authentication and authorization

Consider everybody involved in the ostensibly simple act of visiting a web page. For the sake of completeness I assume each of these is a separate entity. Let's also assume that we have a working computer and Web browser:

First we have to have some sort of physical network connection.

Parties involved: Ourselves, possibly our corporate IT department.

Wireless still counts as physical, and possesses its own litany of failure modes.

Then we need to be able to talk to the Internet at large.

Parties involved: Possibly our corporate IT department, plus at least one (and likely several) ISPs.

Then we need to look up the domain name of the Web site.

Parties involved: The destination's Web host or DNS provider, their domain registrar, ICANN, at least one root name server.

The domain name must be registered and paid up, and its record must be present with the root name servers. The DNS server must be up, running, connected, properly configured and supplied with the correct information.

Then the web server has to be ready for our request.

Parties involved: The destination's Web host.

Like the DNS server, the Web server must be up, running, connected, properly configured and of course, not too busy.

If we're using SSL, the certificates have to be up to date.

Parties involved: The destination's Web host, our browser vendor, a certificate authority, possibly our corporate IT department.

The site's maintainer purchases an SSL certificate from a CA and the Web host installs it. Expiry dates are written into SSL certificates, putatively to make them less valuable if captured, so they must be renewed like domain names. CA certificates must also be present on the server and in our browsers for the authentication chain to work. If we're identifying ourselves with our own SSL certificate, that too must be installed in our browser.

Then there actually has to be something at the URL we visit.

Parties involved: The site's maintainer.

Incidentally, everything before this point has to work for you to even see a 404 error. If you say 404 without actually witnessing a 404, it confuses the people in charge of troubleshooting such things.

And finally, we have to be permitted to access it.

Parties involved: The site's maintainer, possibly acting on behalf of a user on the site.

This is an impressive tally of entities, business relationships and technological infrastructure that must all work in concert just to serve us a Web page. It is, of course, multiplied for every source of externally hosted images, videos, scripts, fonts, etc. and further complicated by proxies, content delivery networks, cloud servers and the like. Moreover, if anybody in this chain stops cooperating or decides to defect, or if uninvited guests appear, our users are likely going to know about it sooner than we do.
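To see how those links translate into distinct failure modes, here is a minimal diagnostic sketch using only the Python standard library (the hostname is a placeholder); each step that fails points to a different set of parties from the list above.

```python
import socket, ssl, http.client

def diagnose(host, path="/"):
    try:
        socket.getaddrinfo(host, 443)
    except socket.gaierror:
        return "DNS lookup failed: registrar, DNS provider or our own resolver"
    try:
        conn = http.client.HTTPSConnection(
            host, timeout=10, context=ssl.create_default_context())
        conn.request("GET", path)
        status = conn.getresponse().status
    except ssl.SSLCertVerificationError:
        return "certificate problem: expired, misconfigured or untrusted CA"
    except OSError:
        return "could not reach the web server: down, too busy or blocked"
    if status == 404:
        return "the server is fine, but nothing lives at that URL"
    if status in (401, 403):
        return "reachable, but we are not permitted to access it"
    return f"looks fine (HTTP {status})"

print(diagnose("example.com"))
```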

Privacy and security

On the other side, there is the case when our users possess information that is useful to third parties. A basic property of information is that once you tell somebody else, that's it, you've lost control of it for good. Given that, it should be our users' prerogative to disclose even a single bit of information beyond what is necessary to interact with third parties. If not, they may quickly wonder whose side we're on.

It's here that we start bumping yet again into business viability. Many new businesses are built around the accrual and strategic use of detailed data pertaining to user behaviour. But how aware are our users of that? Do they care? Is it discussed in the open, or is it buried in the EULA? If they are aware, are we going to permit them to use the product if they don't want us to collect their data? Will the product even work? This is both a question of how much users trust us with their data and how they feel about how closely we're watching.

Let us assume now that we aren't in the business of ceding our users' data to third parties. In fact, let us assume that we have a common interest in keeping certain information private. Really private. We still have to worry about leaking information to third parties, and we often also have to consider another problem-child of computer science: cryptography.

The context in which cryptography works as intended is perversely specific, and it takes a fiendishly paranoid and equally competent individual to deploy it correctly. Because many attacks on modern cryptosystems are based on circumvention, a single leak in the wrong place can instantly render carefully-constructed security worthless. To exacerbate the situation, the information security industry, legally-operating and otherwise, is continually shredding and rebuilding its own infrastructure. So simply telling our users that their information is safe and there's nothing to worry about because a cryptosystem is in place is, once again, patently inaccurate.

It is wise practice to demonstrate a commitment to protect the interests of our users. As such, we ought to consider the involvement of other entities when we contact them on behalf of our users, when data mining is part of our business, or in situations when our users' information could be lost or stolen, from them or from us.

Precisely what are we promising our users?

Computation ultimately reduces to the answering of questions. If for some reason, as we saw above, we can't answer the original question we're after, we need to reformulate the question. It is safe to say that many failed user experiences stem from a discrepancy between the questions designers, marketers and other stakeholders think are being answered, and those which the programmers are actually answering. The ways of answering break down into two types.

Algorithms

An algorithm can be understood as an effective method for computing the answer to a question. That is, an algorithm will exhaustively traverse a given input and produce an answer that is guaranteed to be correct, and do so in a period of time consistent with its complexity — as long as the underlying system is functioning. In addition to this guarantee of correctness, there is also a guarantee of consistency. That is, for the same input, an algorithm will always produce the same result. This means you can cache the result and use it later, which has tremendous potential benefits for user experience, and by my admittedly anecdotal assessment, could be put to more pervasive use.
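A small illustration of exploiting that consistency (mine, not the author's): because the same input always yields the same result, the answer can be cached and the cost paid only once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shipping_cost(weight_kg, destination):
    # Stand-in for a deterministic but expensive calculation.
    print(f"computing for {weight_kg} kg to {destination}...")
    return round(4.00 + 1.25 * weight_kg, 2)

print(shipping_cost(3, "Berlin"))   # computed once
print(shipping_cost(3, "Berlin"))   # served instantly from the cache
```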

Heuristics

As we observed above, not every question can be answered by an algorithm, and sometimes the only algorithms available are too costly in time and computing resources to be useful. In these cases, we must resort to heuristics. While technically speaking, a heuristic might often be an algorithm, or at least contain one, it answers a fundamentally different question. For instance, it may not use all the available data, and it will very likely produce different results each time it is run. A heuristic represents a best guess. It's like saying "I can't tell you that, but I can tell you this", and that fundamental difference bears incorporating into the product's ethos. Google's famous search algorithm is actually better understood as a heuristic, and they wisely promise us relevant results, not perfect ones.
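Here is a toy contrast (mine, not the author's) between the two: the exact count walks every record, while the heuristic samples a fraction and answers a different question, "roughly how many?", quickly and slightly differently on every run.

```python
import random

records = [random.random() < 0.3 for _ in range(1_000_000)]   # toy data set

def exact_count(data):
    return sum(data)                        # touches every record; always the same answer

def estimated_count(data, sample_size=1_000):
    sample = random.sample(data, sample_size)              # look at a tiny fraction
    return int(sum(sample) / sample_size * len(data))      # scale the guess back up

print(exact_count(records))        # the algorithm: exact, repeatable, costly
print(estimated_count(records))    # the heuristic: close, cheap, different each run
```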

Epilogue

The PC evolved in a stark and ultimately antisocial environment. It developed an anatomy consistent with a solipsistic tool of personal productivity. In some ways, it has not adapted very well to the inundation of distractions that come with being a live node on a global network.

The server is a misnomer because in many ways it is more like the master. Descended from the mainframe, it is attended to by a priesthood that speaks in tongues; it is never dealt with directly. It is always present, subcutaneously accumulating and facilitating the details of our lives.

The mobile device was once a humble telephone, but cheap computing hardware caused its capabilities to swell. Designed from the ground up as a means of communication, it must gracefully handle a continual onslaught of interruptions, as well as its own network etiquette. Its chief function dictates, however, that it forgo the requisite real estate and apparatus that enable a high volume of disparate forms of creative input.

The capabilities within this ecosystem are starting to blur together into a sort of vapourous construct, the exact name of which escapes me at the time of this writing. Each can do much of the work of the others because fundamentally they're the same thing: computers. The fluid between them is our information. It's only the packaging — the array of displays and controls for ingesting and manipulating this information — that varies.

Other electronic devices are joining the party in a frenzy because it's cheaper to program a computer than to design circuitry from scratch. We're starting to see them get chatty too, from cameras with GPS to bathroom scales that tweet your weight. Why? Because if you can buy a crate of systems-on-a-chip with networking hardware for around the same price as without, why not? The universality of computers lends itself to the creation of feature-packed products that put the Swiss Army knife to shame, and freaks of digital nature so twisted that they would unnerve even the Dr. Moreau of gadgetry.

Can we do it? Probably, for some value of it. The envelope of computer behaviour that is truly infeasible is actually quite slim and fairly well-defined, if we know what we're looking for.

Should we do it? This question provokes much deeper contemplation among all the stakeholders — not just designers. This is not just an issue of ethics, but also one of utility and desirability. As has been observed and remarked upon countless times over the centuries, people can only handle so much information at once. They tend to treat interactive devices like people. Much of the experience of using a computer has no pre-computer counterpart, and the rest is often a puppet theatre of metaphor and skeuomorph — and people may not react kindly when it fails. Finally, much of the information we help people work with as user experience designers belongs to them. Some of it may be of important strategic value, some of it may be deeply personal. It is up to us, as their advocates, to discern and protect.