1 Good afternoon

  • As you probably saw on the conference program, my name is Dorian
  • For those of you who don't know me or what I do: I help companies by helping their teams develop formal models of processes and structures
    • by processes I mean at all scales:
      • macro processes (demographic, economic, even geopolitical) in which organizations are situated
      • business processes, which organizations themselves carry out
        • sub-processes within those processes, such as:
          • interactions between customers (users) and products/services
          • the (potentially fine-grained) behaviour of the products, services, and infrastructure itself
    • by structures I mean the things that are affected by these processes, whether actor, prop, or scenery.
      • since we're talking about models, these structures either represent something out in the world (people, companies, products, events…), or they don't: they're purely conceptual entities that exist inside information systems.
        • the entities themselves are significant but so are the rules for how they can connect and interact with one another.
    • finally, by formal, I mean models capable of being manipulated directly by computer.
    • So that'll give you a sense of my point of departure.

2 The Specificity Gradient

  • Every project (at least every project I've ever encountered) has a part of its anatomy that goes like this:
    • desired outcome
    • specific method
  • and the job of any project is to map the desired outcome to the specific method.
    • and in software, once you obtain that specific method, you're done.
    • It's not like a building where you get the blueprints drawn up and then you go build the building.
      • In software and digital media, the blueprint is the building, and the code is the blueprint.
    • so the question is how do we do the part in the middle, the mapping part.
      • because that part is by definition different for every project.
      • A project by definition has some uncertainty associated with it.
      • if you had perfect information about it already, it wouldn't be a project, it would just be something you do.

2.1 The answer

  • so the answer to this question for software I believe goes something like this:
    • At the outside of the process we have desired outcomes, which I'm going to refine a bit here by calling them business goals.
      • By "business goals" here I mean whatever the organization is trying to do relative to the outside world;
        • I also mean "business" in the broadest possible sense, so if it were a corporation this goal could be "maximize profit", but it doesn't have to be that
    • We then intersect the business goals with user goals
      • Of course, standard disclaimer: the user could also be the customer but isn't necessarily,
        • we can trot out the standard examples where the customer and user are different, cat food, diapers, whatever…
      • The salient bit is that the user is the one you have to satisfy in order to satisfy the customer, in order to satisfy the business goals.
    • So we proceed, onward and inward, to user tasks:
      • again, a task is not a goal,
      • a task is whatever you have to do in order to achieve your goal.
    • Then we go from user tasks to system tasks.
      • These are what the system has to do to support the user in the completion of their task.
        • …in order to achieve their goal, to the extent that coincides with the business goals.
    • And then we proceed along to system behaviours.
      • These are the fine-grained prescriptions and proscriptions, rules about what must happen or not happen while the system is carrying out its end of the task.
    • And then finally, at the very end of the gradient, we have code.

2.2 I submit

  • Now, I'm gonna submit that in the race to get to code, we forget about all the value generated from the information gathered and decisions made in these coarser-grained layers up here.
  • I'm gonna footnote as well that what I have just described is not a process.
    • This is not a step-by-step instruction manual.
    • Rather, it's a taxonomy; a categorization scheme.
  • This is the specificity gradient, and you can think of it like a coin sorter.
    • some of us might be old enough to remember these things, they have big as-seen-on-tv energy
    • there are bucket types and rail types, and they are variations on the same principle:
      • you set up an array of holes the same size as each of the different coins,
      • you put the smallest holes at one end and the biggest holes at the other end,
      • then you either shake or roll the coins until they come out sorted.
  • My remark here is that in any organization, there is work product that is going to settle on one of these layers or another.
    • Each layer represents a level or range of detail, which correlates with a zone of perishability.
    • There is also an increasing level of interiority as well, a side effect of the detail:
      • The concerns become more esoteric; farther away from the outside world and "the average person"; more the domain of specialists.

2.3 Temporal strata

  • So let's examine what I mean by looking at the layers:
    1. (Business Goals) The way an organization interfaces with the public is through products and services.
      • Unless you're Google, a product is an extremely long-lived thing.
        • Individual products typically stick around for decades.
        • When you get out of tech, you can even zoom out to product categories, which stick around even longer.
          • Car companies, for example, have been making cars for over a century now.
      • If they're sophisticated, the company is going to be tracking all sorts of things happening in the world:
        • demographic trends
        • geopolitical shifts
        • government activity (legislative, regulatory, judicial matters)
        • the market
        • competitors…
      • You could say that the events have their own temporal logic, but the model the organization creates to track them is persistent.
        • in fact the strategic situational awareness model is typically a bigger, ultimately slower thing than any product.
    2. (User Goals) One interface between business goals and user goals is something like a persona.
      • These are likely to be informed by the strategic model I just mentioned.
      • Analogously, there's no reason in principle why a persona couldn't last decades.
      • There's no reason why it couldn't transcend products.
      • There's no reason why an organization couldn't have an entire stable of personas that product teams could select from.
    3. (User Tasks) An example of how to model user tasks is something like a scenario.
      • Scenarios animate personas to model the coarse-grained process of getting something done.
      • You can write a scenario in a way that doesn't prescribe too precisely how a persona accomplishes their task.
        • Like if the goal is to watch a movie in a theatre, the task is to buy movie tickets.
          • You can explore precisely how a person buys movie tickets through different media and technologies.
            • how they buy the tickets in person
            • how they buy the tickets on a desktop
            • how they buy the tickets on a phone
            • how they buy the tickets with an apple watch
            • how they buy the tickets with alexa
            • how they buy the tickets with GPT-5, whatever
        • You have an abstract process of buying movie tickets to which you can add detail about the specific way you go about buying them.
          • When new technologies come along, you can just add them.
    4. (System Tasks) When we get to system tasks, we begin to get a sense of the contours of what "the system" even is.
      • Again, this stratifies.
        • Any system is going to need an abstract structure in order to support and respond to certain processes.
        • How, precisely, this gets implemented, may change over the years.
          • but the fact that "the system" must have this or that piece of gross anatomy, the fact that the proverbial hip bone is connected to the knee bone, will persist.
      • And again, the details of some specific vendor or technology, SQL, the cloud, AI, whatever, can just be updates to a more robust conceptual model.
    5. (System Behaviours) When we get to system behaviours, we are again going to see prescriptions and proscriptions that are more and less durable; more and less specific.
      • If you want an example of these kinds of objects, you typically see them in bug reports and unit tests:
        • They will be the bullet point of "expected behaviour"
        • In my experience, these rules are rarely so specific as to dictate what kind of programming language to use, or framework, or vendor, or platform.
          • That said, they may be only relevant to a particular one of these things.
    6. (Code) So, by the time you get to the code, you have a massive battery of precursor information.
      • The precursor information is going to be much more stable than the code!
      • What we can say about code is that it continually changes from one day to the next.
        • People are constantly moving and shuffling things around.
        • People are regularly swapping third-party tools and components in and out.
        • Every once in a while, you might rewrite in a different language or framework.
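  • The movie-ticket example from the user-tasks layer can be sketched in code. This is a minimal illustration of my own (all class and method names are hypothetical): the abstract task fixes the steps, each medium fills in its own specifics, and a new technology is just another refinement.

```python
from abc import ABC, abstractmethod

class BuyMovieTickets(ABC):
    """The abstract user task: these steps stay stable across media."""

    def run(self, show: str) -> str:
        # The coarse-grained process, invariant across channels
        seats = self.choose_seats(show)
        self.pay(show, seats)
        return self.confirm(show, seats)

    @abstractmethod
    def choose_seats(self, show): ...

    @abstractmethod
    def pay(self, show, seats): ...

    @abstractmethod
    def confirm(self, show, seats): ...

class BuyTicketsOnPhone(BuyMovieTickets):
    """One medium-specific refinement; an Alexa or watch version
    would subclass the same abstract task."""

    def choose_seats(self, show):
        return ["B4", "B5"]  # e.g. tapped on a seat map

    def pay(self, show, seats):
        pass  # e.g. a stored payment method

    def confirm(self, show, seats):
        return f"{show}: {', '.join(seats)}"
```

The point is the same as the scenario's: the abstract process persists while the channel-specific details churn underneath it.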

2.4 Is there precedent?

  • There sure is!
    • No doubt a lot of you in the audience are waiting for me to get here!
  • In his 1994 book How Buildings Learn and the subsequent documentary, Stewart Brand introduced the concept of Shearing Layers.
    • (He later called them Pace Layers)
    • The concept is actually attributable to the architect Frank Duffy, and he was initially talking about buildings.
      • The first of the layers in his formulation is Site
        • over millennia, there are potentially many buildings on a particular site
      • Next is Structure
        • the foundation, the supports
        • these can last centuries (or millennia too)
      • Next is Skin
        • so building envelope: roofs, curtain walls, windows
        • these last decades to centuries, depending on what they're made out of
      • Services
        • plumbing, electrical, HVAC, elevators, escalators, wifi
        • years to decades for these guys
      • Space Plan
        • non-load-bearing interior walls that partition the space and govern its usage
        • again, years to decades, but certain places like museums and art galleries put up walls and tear them down every few months
      • Stuff
        • furniture, merchandise
        • the things that actually happen inside the building
        • the people, the activities
  • Brand eventually went on to make his own generalized formulation; kinda blew it out to civilization scale:
    • Nature: thousands to millions of years
    • Culture: hundreds to thousands of years
    • Governance: tens to hundreds
    • Infrastructure: again tens to hundreds
    • Commerce: years to decades
    • Fashion: weeks, months, maybe years
  • The theme, at least originally, was that each one of these layers would operate in its own domain, and the edges would shear against each other, like the movement of a clock or astrolabe.
    • Brand later de-emphasized the shearing aspect but I frankly kinda like it.
    • It's like the concerns of each layer can be considered apart from all the others.
      • The dependencies only run in one direction.
      • That is, the structure depends on the site but not the other way around.
      • Fashion depends on commerce, but not the other way around.
  • But whether we're talking about shearing layers or pace layers, the aspect that is the same is that each layer kinda lives in its own range of durability or perishability with respect to changes in time.

2.5 My contribution

  • My contribution to this general conceptual meta-framework is the emphasis on detail.
    • I should add that the fact that I also came up with six elements is purely coincidental.
    • In fact, I am a lot more interested in the relationships between the layers.

3 How I got here

  • the penultimate layer: system behaviour
  • going to have to tell a story
  • as a junior/intermediate systems developer working on original infrastructure, I was trying to find a way to do accurate time estimates
  • this led me to invent a technique I called behaviour sheets
  • the way you make a behaviour sheet is like this:
    • you take an outliner, any outliner will do
    • you start putting down bullet points about the piece of software you intend to write
    • you make a declarative list of all the discrete things that the module must and must not do
    • (I can't show you a concrete example because these tend to be confidential)
    • you indent the outline every time there's a change in scope or condition
    • you continue until you are satisfied that you have exhausted all the things you can say about the module of code without actually writing any code
      • I also want to underscore this is not pseudocode; rather it is declarative.
        • each bullet should represent one rule about what the code must or must not do.
      • once you're finished, you eyeball the bullets into four-hour chunks
        • the four-hour chunk thing could be its own talk
          • it comes from an idea I had around the same time, to use a different base unit for time accounting that was more congruent to what you could get done in a day
          • I called it the cell
          • it was like a shipping container for time
        • anyway, you take the bullet points from the behaviour sheet and pack them into 4-hour cells
        • you add those up and project the result onto the calendar, and that's your estimate.
    • It turns out behaviour sheets are a reliable way to make consistently accurate time estimates.
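  • The packing step at the end can be sketched as a tiny greedy algorithm. This is my own illustrative reconstruction, not code from the original technique; it assumes each bullet has already been eyeballed as a number of hours, none bigger than one cell.

```python
CELL_HOURS = 4  # the base unit: a "shipping container" for time

def pack_into_cells(bullet_hours):
    """Greedily pack per-bullet hour estimates into 4-hour cells.

    A bullet is never split across cells, mirroring the idea that
    a cell is an indivisible unit of accounting.
    """
    cells = 0
    remaining = 0.0
    for hours in bullet_hours:
        assert hours <= CELL_HOURS, "split oversized bullets first"
        if hours > remaining:   # doesn't fit in the open cell: start a new one
            cells += 1
            remaining = CELL_HOURS
        remaining -= hours
    return cells

# six bullets eyeballed at various sizes -> cells, then onto the calendar
cells_needed = pack_into_cells([1, 3, 2, 2, 0.5, 3.5])
```

From there, projecting onto the calendar is just a matter of how many cells you schedule per day.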

3.1 the catch

  • So what's the catch? why isn't everybody using this method?
    • The catch is that if I started writing a behaviour sheet on Monday, I'd be able to tell you by lunchtime on Wednesday that the code would be done by end of day Friday.
      • That is to say, the cost of figuring out the cost of the job is roughly equal to the cost of the job.
      • In other words, the behaviour sheet technique is completely useless for time estimation.
        • At least, assuming a condition of time estimation is that you come up with the estimate quickly.
        • So you get into this absurd scenario where you would need an estimate to produce an estimate to produce an estimate, et cetera.

3.2 still worth doing

  • This, however, does not mean behaviour sheets are not worth doing!
    • The code I have written based on behaviour sheets was a lot more regimented and organized,
      • That is to say, the code itself was a lot more organized; the process of writing it was a lot more regimented.
      • Because a lot of the questions I would have had to answer, a lot of determinations I would have had to make at the code level, were already figured out one level chunkier than that.
      • Behaviour sheets were useless for predicting effort, because they were too expensive in terms of effort.
        • but they were useful for organizing effort.
        • The reason why the code estimates based on behaviour sheets were so accurate was because the act of creating the behaviour sheet wrung out all the surprises.
          • The behaviour sheet is a cheaper medium to do this kind of work in than code because we're only considering the qualitative content of the behaviour, not the actual details of how the behaviour is implemented in this language or that.
    • The other benefit of behaviour sheets is that little to none of their content deals with the details of any particular programming language.
      • They are slightly more abstract than that.
      • This means you could write one behaviour sheet that prescribes what to do that would be the same for python, javascript, swift, ruby, whatever.
        • There might be some minor language-specific differences, say up to five percent.
        • Also, the time estimates might be incrementally more or less depending on the programming language.
          • (at least, assuming all the languages in question had the same basic pieces you needed already, and you didn't have to go make them.)
    • Also, if you're familiar with programming at all, you know programmers write test suites, in code, to determine if the implementation code they wrote exhibits the correct behaviour.
      • Behaviour sheets effectively specify what these test suites should test.
      • Pretty much every bullet in a behaviour sheet is going to correspond to a unit test.
      • So one thing I could see — and just a caveat, I haven't gotten here yet — is being able to reference the behaviour sheet from the unit tests, or otherwise use the behaviour sheet to generate a skeleton for the unit tests.

3.3 This is something I use

  • The rest of it I wanna underscore is stuff I have done and still do.
    • Most of the code I write is libraries, and a lot of those libraries are spec implementations.
    • One remark about that: once you've made an implementation in language A, it's a heck of a lot less work to write the same thing in language B.
    • That's because a huge chunk of the task of programming is working out how you're going to organize and structure things.
      • The decisions you make about the structure are going to be affected by what you need the program to do, often invariant of whatever language you're writing it in.
      • As such, once you've done that work once, you just copy it.

3.4 This was the germ

  • The idea that there could be a medium or representation one step removed from code, that could be used to inform parallel implementations of code, is what got things moving in my head.
    • What happens is the code is no longer the authoritative source of information for anything but the details peculiar to it.
      • Because the description of how the code should behave is being driven from another place.
  • Moreover: If we can do one step removed from code, why not two steps? three steps? more steps?
  • More importantly: What if you started in the other direction?
    • What if you started from the outside and moved inward?
    • What if you could trace an unbroken line, from business goals,
      • to user goals,
      • to user tasks,
      • to system tasks,
      • to system behaviours,
      • and finally, to code?

4 What is in the way

  • A process that transforms (or otherwise relates) a behaviour sheet into a skeleton for automated code tests is going to need to be able to directly address every individual bullet point.
    • this is a doable thing, but there aren't a lot of things on the market that can do it.
  • This brings me to a beef I have with the artifacts that people tend to create farther up the specificity gradient.

4.1 Specifically

  • This has been bothering me professionally for years:
    1. We're into our third decade since the year 2000—and two generations into the era of personal computers—and we're still doing an insane amount of work, that computers both could and should be doing, by hand.
    2. This situation is not helped by the fact that organizations are highly forgetful entities:
      • Policies get executed and nobody remembers why (vestiges)
      • Policies that used to work no longer do (regressions)
      • Why? Because:
        • not only is the outside environment constantly changing, but
        • people are continually joining and leaving the company, and moving around within it.
  • The problem—or at least a significant part of the problem, as I see it—is that the state of the art for documentation is hot garbage.
    • still.
    • after all this time.

4.2 More specifically

  • Documentation is "extra" work that trails behind the "actual" work
  • Documentation governance, maintaining documentation and ensuring handoffs occur across personnel changes, is another layer of extra work on top of already-extra work.
  • Most documentation exists in documents, and documents make lousy documentation.

4.2.1 Documents make lousy documentation

  • imagine your kitchen, you have all kinds of ingredients
    • dry things which can last months to years (but contingent on keeping dry)
    • frozen things which can last months to years (but are contingent on keeping frozen)
    • fresh things
      • some things can last weeks at room temperature (e.g. butter, cured meats)
      • other things can last for weeks in the fridge
      • other things won't even last a week
  • In this metaphorical framework, a document is like a meal:
    • it mixes together all the ingredients
    • it plates and presents them
      • it optimizes them for consumption
    • however the result is something of the most perishable variety
      • maybe you can freeze it if it's the right kind of thing
      • most prepared meals have a pretty short shelf-life though
        • often much less than any one thing that went into them
      • usually what happens is a small subset of the ingredients that were delicious in the moment very quickly gets gross and unpalatable
      • plus, leave it any longer and it becomes actual poison

4.2.2 Documentation needs to be more fluid

  • documentation needs to drive development processes rather than trail behind them.
  • it needs to be passively collected and updated wherever possible
  • it needs to be amenable to repurposing through transformation and recombination
  • it needs to be dramatically less work to maintain
    • we need to be able to precisely target parts of the documentation and update just those parts
    • we need to be able to ensure those changes are propagated throughout the organization (and potentially beyond)
  • This is a much bigger scope than what I'm covering in this talk
    • What I'm presenting is just one facet of how to tackle this problem
    • It's about thinking about ways to create structured documentation one tiny piece at a time, so the more perishable parts can be replaced, and the more durable parts can stick around and be reused.
    • Some of you may have heard of the acronym FAIR:
      • Findable
      • Accessible
        • (I would be more inclined to call this addressable because accessibility is an important but nevertheless different concept)
      • Interoperable
      • Reusable
  • This is what needs to happen to decouple documentation from documents.
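  • What decoupling documentation from documents could mean in practice: a toy sketch (entirely hypothetical names, no real system implied) in which content lives as individually addressable fragments and a "document" is just one possible recombination of them.

```python
import uuid

# Documentation "pulverized" into addressable fragments: each fragment
# gets a durable identifier, and documents become views that recombine
# fragments rather than containers that trap them.
fragments = {}

def put(text):
    """Store one fragment under a durable, addressable ID."""
    frag_id = str(uuid.uuid4())
    fragments[frag_id] = text
    return frag_id

def update(frag_id, text):
    """Target exactly one fragment; every view picks up the change."""
    fragments[frag_id] = text

def render(frag_ids):
    """A 'document' is just a recombination of fragments."""
    return "\n".join(fragments[f] for f in frag_ids)

intro = put("The API accepts JSON.")
limit = put("Rate limit: 100 requests/minute.")
doc = [intro, limit]                              # a view, not a container
update(limit, "Rate limit: 500 requests/minute.")  # the view stays current
```

The perishable fragment got replaced in one targeted update; the durable one stuck around, and every document that references either is automatically up to date.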

5 An example

  • This is something I'm working on.
    • This is an old prototype of a tool, but more importantly it's a demonstration platform for FAIR principles.
    • I started working on it before I even knew what FAIR was
      • Indeed, before the FAIR people even coined the acronym FAIR.
      • (seriously, FAIR dates back to 2016; I wrote this code in 2013, based on some theory and other stuff I can probably date back to 2008 or 2009).
  • First let me remark about what you're seeing on the screen.
    • This is an implementation of Horst Rittel's Issue-Based Information System, or IBIS for short.
    • Rittel was the guy who coined the term wicked problem.
      • A wicked problem is one which has multiple stakeholders and no real clear solutions, only better and worse solutions.
      • Does this sound familiar to anybody?
      • IBIS is the collaborative process Rittel and his colleagues invented in the 1960s.
      • It's a form of structured argumentation.
      • Its purpose is to aid in the generation of design rationale.
      • They were implementing it on index cards!
  • There have been other attempts to digitize IBIS, dating all the way back to 1988.
    • There are plenty of extant versions actually; this one is kind of an experimental toy by comparison.

5.1 Describe IBIS

  • There are three basic kinds of entities in IBIS:
    • an issue is a state of affairs in the world that you either want to do something about or have to steer around. These are the red or pink entities.
    • a position is an explicit prescription for what you want to do about a particular issue. These are green.
    • an argument is why or why not. These are blue.
  • (Note this palette isn't final and I will be addressing our colour-blind brethren in a future system.)
  • Issues, positions, and arguments are connected together through a controlled set of semantic relations:
    • First, any individual entity can generalize or specialize any other entity of the same type.
      • These are the bright blue lines in the diagram, and the blue inset in the lozenge.
    • Any entity in the system can suggest an issue, and any issue can question any other entity. These are yellow and orange, respectively.
    • Any position can respond to any issue; this is fuchsia,
    • And finally, any argument can either support or oppose any position, bright green and red.
  • I wanna remark that none of what I've said so far is original; it's just me implementing what was in some academic papers.
    • (well, maybe the hyperbolic graph visualization is original)
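  • Since the entity kinds and relations come straight from the papers, they can be sketched directly. Here's a toy model with my own illustrative names, not the tool's actual schema: three kinds of node, a controlled set of (subject, relation, object) combinations, and a check that every link is a permitted one.

```python
KINDS = {"issue", "position", "argument"}

# The controlled vocabulary of semantic relations:
ALLOWED = {
    ("issue",    "suggests",    "issue"),   # any entity can suggest an issue
    ("position", "suggests",    "issue"),
    ("argument", "suggests",    "issue"),
    ("issue",    "questions",   "issue"),   # any issue can question any entity
    ("issue",    "questions",   "position"),
    ("issue",    "questions",   "argument"),
    ("position", "responds-to", "issue"),
    ("argument", "supports",    "position"),
    ("argument", "opposes",     "position"),
}
# plus generalizes/specializes between entities of the same kind
for kind in KINDS:
    ALLOWED.add((kind, "generalizes", kind))
    ALLOWED.add((kind, "specializes", kind))

def link(graph, subj, rel, obj):
    """Connect two entities, rejecting combinations outside the vocabulary."""
    s_kind, o_kind = graph[subj], graph[obj]
    if (s_kind, rel, o_kind) not in ALLOWED:
        raise ValueError(f"{s_kind} may not {rel} {o_kind}")
    return (subj, rel, obj)

# a tiny deliberation: an issue, a position, an argument
g = {"parking": "issue", "build-garage": "position", "too-costly": "argument"}
link(g, "build-garage", "responds-to", "parking")
link(g, "too-costly", "opposes", "build-garage")
```

Because every link is a typed triple, this is exactly the kind of structure a computer can traverse, filter, and aggregate, which is what the rest of the talk is banking on.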

5.2 My contribution

  • If I had to say what product this thing is closest to, it's an outliner, except that each bullet point gets its own webpage.
    • Furthermore, each bullet point has a durable permalink that you can point to directly, and it will never 404 unless you're dumb enough to go out of your way to nuke it.
    • Finally, each bullet point is connected to its neighbours using a controlled vocabulary of semantic relations, and we can use these to compute.
      • And I should also note that the tool is fully transparent with respect to both its instance data and the schema that powers it.
      • That is to say, if somebody else were to use the schema to make a completely different IBIS tool, it should be able to read the data this tool produces and vice versa.
  • This is the connection to the Specificity Gradient:
    • As you can tell, this structure has a directionality to it.
    • Some of the entities are more central, while some of them are more peripheral.
    • Again, this is an old prototype, but you can imagine some future version of this tool being able to set a horizon under which these elements could be aggregated,
      • (you could "weigh" them if you want; make the dots bigger, I dunno)
    • Or, you could otherwise narrow the scope so that this or that team only sees their own chunk of it and can ignore everything else.
  • In the works I also have extensions for general-purpose process modeling, which extends IBIS to distinguish between ideas and actual actions;
  • I also have verticals for specific design disciplines:
    • I have a content inventory vocabulary which is pretty robust;
    • I also have an interaction design vocabulary that further extends the process model vocabulary, though that's waiting on the tool rewrite to really get going.

5.3 conclusion lol

  • The idea though is that since the information is completely pulverized, it's possible to keep the durable things around and drop the more perishable stuff.
  • Since it's structured, it can be manipulated directly by computer, which means other tools can work with it.
  • Since it's FAIR and open, it can interoperate in a wider tooling ecosystem.
  • So what I ask of you as I draw this talk to a close, is to demand more from your tools.
    • I salute you all; have a great afternoon.

Author: dorian

Created: 2023-04-01 Sat 11:40