"Security will always be exactly as bad as it can possibly be while allowing everything to still function."

Nat Howard, via Dan Geer

to which Geer added: "…but with each passing day, that 'and still function' clause requires a higher standard."

On October 21, 2016, an internet outage crippled several big services, including some that people use to do their jobs. It turned out to be an attack on a particular provider of domain name resolution, a core service that enables us to find other services on the internet. The attack was carried out by co-opting an army of so-called smart gadgets and instructing them to hurl deliberately broken internet traffic at its targets. Legitimate requests couldn't get through the torrent, leaving the name servers unable to do their job of telling people where on the internet to look for those other services. It is important to recognize that this damage could easily have been caused by one person: anybody, anywhere on the planet, who knows how and cares enough to bother.

For those who haven't gotten the memo, we are now living in Extremistan: a state of affairs in which governments, corporations, terrorists, criminals, and even a bored teenager halfway around the world can cause untold amounts of damage. We are lucky that this damage has so far only been financial. It will not stay this way forever.

When this kind of thing happens, and it will keep happening, somebody eventually has to be held responsible. Fingers are already pointing at the manufacturer of the DVRs and webcams used in the attack, devices that could have been protected with a modicum of design effort. Maybe the Chinese government will prosecute. Either way, it's doubtful that Dyn, the target, let alone collateral damage like Netflix, Twitter, and GitHub, will see any compensation.

October 24, 2016: It looks like Xiongmai is issuing a recall.

The other common practice is, of course, to blame the users. It turns out these devices were compromised because they were reachable over the internet with baked-in default passwords that nobody bothered to change. But to millions of people, these things are just gadgets. They have no idea of the extent to which these objects can be weaponized. Our marketing departments don't tell them, to say nothing of the people who actually make the devices. Molly Sauter has a point that the prevailing attitude of the so-called tech community is flagrantly anti-user.
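
To see how small the missing design effort is, consider a minimal sketch, in Python, of a first-boot flow. Everything here is hypothetical (the credential path, the function names); the point is only that the device refuses to accept connections until its owner replaces the factory default with a unique credential.

    import getpass
    import hashlib
    import os
    import secrets

    CRED_FILE = "device_credential.bin"  # hypothetical storage location

    def provision():
        """Force the owner to set a unique password before anything listens.

        Contrast with the compromised gadgets, which shipped with the same
        factory default on every unit, exposed to the open internet.
        """
        if os.path.exists(CRED_FILE):
            return  # already provisioned
        password = getpass.getpass("Choose a device password: ")
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        with open(CRED_FILE, "wb") as f:
            f.write(salt + digest)

    def start_network_services():
        # Refuse to listen at all until an owner-chosen credential exists.
        if not os.path.exists(CRED_FILE):
            raise RuntimeError("not provisioned; refusing to accept connections")
        print("listening, owner-set credential required")

    provision()
    start_network_services()

Per-unit random factory passwords printed on the device label would do the same job without the extra setup step; the point either way is that the secure path is the only path.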

This is a Job for User Experience Design

The stated purpose of user experience design is to empathize with and advocate for the user. It isn't clear to me, however, how well-versed UX designers are in information security. In my experience, most developers have a poor grasp of it, which is only half-excusable: short-term business interests are inherently hostile to security, which costs more time and more money, even if only incrementally. In today's climate of so-called Agile, fail-fast, MVP product development, it almost seems like it will take a real disaster or two, or perhaps stricter liability legislation, to spur an appropriate response.

I propose a new role in the business of developing software, information services, and software-driven objects: a designer whose job it is,

  1. to help companies design products that don't get hacked, and
  2. to help companies design products that are resilient to other products/services getting hacked.

Fallout takes all sorts of shapes: here is somebody reporting that his thermostat cranked up the heat in his house because it couldn't contact its manufacturer. It's also worth noting that this would have happened in any outage, irrespective of whether a cyberattack was behind it.
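
The design fix here is equally unglamorous. A sketch, again in Python with invented names: a control loop that degrades gracefully, holding its last known-good state when the manufacturer's service is unreachable rather than doing something dramatic.

    import time

    SAFE_SETPOINT_C = 20.0  # conservative local default, chosen here for illustration

    def fetch_cloud_setpoint():
        """Stand-in for the manufacturer's service; raises when unreachable."""
        raise ConnectionError("cloud unreachable")

    def apply_setpoint(celsius):
        print(f"holding {celsius:.1f} C")  # stand-in for the actual actuator

    def control_loop():
        setpoint = SAFE_SETPOINT_C
        while True:
            try:
                setpoint = fetch_cloud_setpoint()
            except ConnectionError:
                # Fail safe: keep the last known-good setpoint rather than
                # cranking the heat or shutting down. The cloud connection
                # is an enhancement, not a dependency.
                pass
            apply_setpoint(setpoint)
            time.sleep(60)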

This person is basically a cop who watches over the design process as the surrogate for a hypothetical adversary. This person (let's say it's me) will help product design teams make robust decisions that keep their users safe and their products effective. The role differs from that of a typical UX designer, whose goal is to design the interactions of legitimate users, and from that of a typical security consultant, whose goal is the technical security of the product. It sits somewhere in between.

As somebody with chops in both UX design and information security, I can say with confidence that many techniques of the latter are far from alien to the former. Oversimplified: you create a persona who is a bad guy, and you design your product to keep him out. Like UX itself, this is more of a mindset than a toolkit; the skills are ones anybody can learn. While it would help a less-experienced team to start with an outside person dedicated to owning the adversary, it's entirely reasonable that, over time, an in-house design team can absorb the role itself.
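
To make the persona idea concrete, here is a minimal sketch (Python again, with an invented persona and invented fields) of the kind of artifact a design team could review each feature against:

    from dataclasses import dataclass, field

    @dataclass
    class AdversaryPersona:
        """The same exercise as a user persona, inverted: whom are we keeping out?"""
        name: str
        motivation: str
        capabilities: list[str] = field(default_factory=list)
        opportunities: list[str] = field(default_factory=list)  # what the product hands them

    botnet_operator = AdversaryPersona(
        name="Bored botnet operator",
        motivation="rent out a denial-of-service cannon",
        capabilities=["internet-wide scanning", "a list of factory default passwords"],
        opportunities=["admin service open to the world", "one credential shared by every unit"],
    )

    # The design review then asks of each feature: which of these
    # opportunities does it create, and which does it remove?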

The remaining question is one of will on the part of business leaders. The business community seems only just to be warming up to the idea of paying for design at all, as a way to make better products that earn more money. What I'm proposing is that businesses that have already bought into design bump that budget up just a wee bit more, as an insurance policy against losing that precious money to an embarrassing PR scenario, a product recall, or something far more serious.

…which is just about anything that either is software or is driven by it, you should really be thinking about injecting a little security-think into your product development process. Cybercrime, cyberattacks, and cyberwar are only going to escalate. Penalties, both from the attacks themselves and from liability for permitting them, are only going to get stiffer. Get prepared now and internalize the discipline. This is something I can help with.