The mantra "protocols over platforms" has been in the tech industry zeitgeist for some time—much longer than the Masnick article (I myself uttered it in 2013, apparently, and I was not original at the time). The public's understanding, however, appears to stop at apps, or maybe, maybe platforms. Even practitioners seem to gravitate toward platforms and frameworks (recent examples being AWS and React, respectively). Apps, platforms, and frameworks, however, have owners, and every owner has an agenda. This is not to say that protocols may not have these things too, though Venkat Rao coined a phrase some time ago that protocols were appropriate for those situations that were "too big to nail"—that is, systems over which even ostensible owners could not exercise the kind of absolute control typically associated with ownership.

I strongly believe that protocols are essential to a more equitable civilization. Apps, platforms, and frameworks lend themselves to quasi-natural monopolies that levy a form of excise on the public through deliberate—or at best, neglectful—incompatibilities. Most people don't even have the vocabulary to express this situation; they just think that's how technology is.

It is a plausible (and more importantly, testable) hypothesis that people would demand more protocol-based solutions if they understood the role protocols play in facilitating interoperation between business entities (including themselves) while frustrating the formation of monopolies.

In information systems, content (data) is analogous to nouns, while operations over that content are akin to verbs. Tech companies are generally in the verb business. To ensure that only their verbs can verb the nouns, these entities have historically either obfuscated the nouns (e.g. through proprietary file formats) or retained physical custody of them. Assuming you can even get your hands on the nouns, you will need some way to tell what they mean. If you can manage this, then you can use somebody else's verbs, or even create new verbs—perhaps ones that no extant company considers a priority, or maybe that nobody else has even thought of. A protocol is what facilitates this outcome.
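The noun/verb distinction can be made concrete with a minimal sketch (every name below is illustrative, not part of any protocol or product mentioned here): when a "noun" lives in an open, documented format, anybody's "verb" can operate on it, including verbs its originator never anticipated.

```python
import json

# A "noun": a record in an open, publicly documented format.
# Because the semantics are public, no vendor controls what may read it.
record_json = '{"type": "Issue", "label": "Should links be bidirectional?"}'

# Two independently written "verbs" operating over the same noun.
def render_as_text(record: dict) -> str:
    """One tool's verb: display the record."""
    return f'{record["type"]}: {record["label"]}'

def count_words(record: dict) -> int:
    """Another tool's verb, written without the first tool's permission."""
    return len(record["label"].split())

record = json.loads(record_json)
print(render_as_text(record))  # Issue: Should links be bidirectional?
print(count_words(record))     # 4
```

Neither verb needed to ask the other's author for access; the shared, documented format is what makes that possible.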

The creator of the Gossamer series of human-powered aircraft once said that he didn't set out to build a plane, but a system for building planes, quickly and cheaply. That's how he won his prize. Analogously: subvert the need for "an app for that" by making it dirt cheap to make—or simply use—a protocol instead. Show the public that protocols are what make Alice's app able to talk to Bob's app, or Charlie's app, or anybody else's app, with no payola or even permission required. If people got a taste of how their information environment could function without the obstructive behaviour of platform monopolies—and this time around, ensure that they understand that protocols are responsible—they would be loath to give it up.

A proposed strategy, therefore, with this factory for protocols, is to aim at processes that are either siloed by vendors, or not worth it for (or in the interest of) any one vendor to productize. Target (initially) people who are technically sophisticated enough to understand what a protocol is, but lack the skills to hack their own private, non-protocol solutions. My vote goes for UX design, content strategy, and product management people in tech companies. As it stands, they are constantly hungry for tools, and the coverage of their tools is remarkably sparse.

The project proposed herein for the Summer of Protocols is already well under way. The resources from this fellowship would afford it a level of concentration it has so far never enjoyed. The working title, Tackling the Symbol Management Problem, refers to the overhead that ensues when trying to use the Web to create dense hypermedia—large numbers of small resources with a high density of links betwixt. When you jack up the number of URLs, however, you invariably need to mechanize the prevention of link rot. That said, this problem extends past URLs for the resources themselves, and into the various controlled vocabularies that express what structured data objects are on the page (or other representation), or provide styling or accessibility information, or whatever.

The goal of the project is, in the first place, to create an engine for representing structured data as dense hypermedia, as well as operating over it. The underpinning theory is that most verbs of interest are CRUD-related—that is, they are mundane data entry and representation operations, and a great deal of value can be created by simply providing users with useful representations of structured data you have enabled them to store. This engine is primarily intended to be a proof of concept and reference implementation, and it is anticipated that subsequent incarnations in other programming languages will be developed. In this sense it is a didactic artifact with the side effect that it performs its first-order function.
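The "mostly CRUD" claim above can be sketched in a few lines (the names here are mine, chosen for illustration; this is not the engine's actual API): a generic store plus a representation function already covers create, read, update, and delete for any structured record.

```python
# Hypothetical in-memory store keyed by identifier; a real engine
# would persist records and address them by URL.
store: dict[str, dict] = {}

def create(key: str, record: dict) -> None: store[key] = record
def read(key: str) -> dict: return store[key]
def update(key: str, **fields) -> None: store[key].update(fields)
def delete(key: str) -> None: del store[key]

def represent(key: str) -> str:
    """The value-add: a useful representation of the stored data."""
    record = read(key)
    return " | ".join(f"{k}: {v}" for k, v in record.items())

create("i1", {"type": "Issue", "label": "Which format?"})
update("i1", label="Which serialization format?")
print(represent("i1"))  # type: Issue | label: Which serialization format?
```

The point of the sketch is how little machinery the verbs themselves require; the interesting work is in the representations layered on top.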

In the second place, the goal is to make the engine do something meaningful. Here I propose to use it to create a tool for collaborative problem-solving. This would be a radical reincarnation of an existing tool I wrote a decade ago, for the humble purpose of testing a different protocol. The content the tool operates over is my implementation of Horst Rittel's Issue-Based Information System, a structured argumentation framework from the 1960s, intended to model the collaborative development of design rationale, and the solution of what he termed wicked problems.
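For readers unfamiliar with IBIS, its core structure is small enough to sketch (this rendering is mine, for illustration only, and is not the vocabulary's actual schema): Issues pose questions, Positions respond to Issues, and Arguments support or oppose Positions, forming a graph of design rationale.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A generic IBIS node; the relation names below are illustrative."""
    label: str
    links: list = field(default_factory=list)  # (relation, target) pairs

def link(source: Node, relation: str, target: Node) -> None:
    source.links.append((relation, target))

issue = Node("How should the tool persist its state?")
position = Node("Export everything as machine-readable structured data.")
argument = Node("Any other tool can then import the state losslessly.")

link(position, "responds-to", issue)
link(argument, "supports", position)

# Walking the graph recovers the rationale behind a decision.
for relation, target in argument.links:
    print(f'"{argument.label}" {relation} "{target.label}"')
```

Because the structure is just labeled nodes and typed links, it serializes naturally into the kind of dense hypermedia the engine is meant to represent.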

The tool operates as a thin skin around its instance data. One hundred percent of the application state is exportable into a standard format, and the semantics of that format are publicly available (indeed, the spec, just like all my specs, is directly machine-actionable). What this means is that anybody could make a second IBIS tool, import the data from the existing one, and the two tools of completely different origins could use each other's state, effectively conversing without any loss of information.
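What lossless interchange between two independently written tools looks like can be sketched as follows (the serialization and function names here are hypothetical stand-ins, not the actual spec): as long as both tools implement the same public format, state survives a round trip between them unchanged.

```python
import json

def tool_a_export(state: dict) -> str:
    """Tool A serializes its entire application state to the shared format."""
    return json.dumps(state, sort_keys=True)

def tool_b_import(payload: str) -> dict:
    """Tool B, of completely different origin, reads the same public spec."""
    return json.loads(payload)

state = {
    "issues": [{"id": "i1", "label": "Adopt a protocol?"}],
    "positions": [{"id": "p1", "responds-to": "i1", "label": "Yes"}],
}

round_tripped = tool_b_import(tool_a_export(state))
assert round_tripped == state  # no information lost in transit
```

The two tools never coordinate directly; the publicly specified format is the only contract between them.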

In addition to IBIS, I have created a fairly mature content inventory vocabulary, and have started on vocabularies for generic process modeling and interaction design, which extend IBIS (which itself extends SKOS, the ontology for concept schemes and thesauri). This is why I suggest focusing on the UX community: it is a target-rich environment for creating a number of tools that speak and understand open-spec data vocabularies and would make their jobs markedly easier. Moreover, through their existing expertise, they would be able to articulate why.

The reason the existing tool hasn't already been expanded to accommodate these other vocabularies is that it was never intended to do more than kick the tires on the protocol that summoned it into existence. A rewrite (to the extent that term is applicable) is necessary, and a rewrite is already under way. Resources from the Summer of Protocols fellowship would change that outcome from one that is perpetually a few weeks off, to something that could actually be packaged and shipped in the time allotted, with ample bonus content.

Again, at least as important as running code is the rationale for why it is the way it is. One goal of the original implementation was to demonstrate how a piece of hosted SaaS-ware could be completely transparent with respect to its contents. Its replacement has been designed to do the same. This brings me to a point about the fellowship itself: I can show the community concrete solutions I actively use to organize information systems in a protocol-centric manner, whether they use my infrastructure or not.

A coda to this proposal is that while I have focused mainly on networked information systems—specifically Web-based software systems—this is mainly because I view the Web as a local optimum (for the time being). As I wrote back in 2015, I believe that hypermedia is an essential affordance of computers as a medium (at least as of the 1960s, if you go back as far as Nelson and/or Engelbart), and will only increase in sophistication. Nelson's "two cheers for the Web" criticism (fragile, unidirectional links, embedded in mutable, contiguous, blob-like documents) is absolutely legitimate; Engelbart had capabilities in the sixties that we still lack today. What the Web has, though, that neither of these men had to deal with, is crossing administrative boundaries. This is precisely where cryptography—as well as blockchain-adjacent technology—becomes not only relevant but critical.

Tim Berners-Lee once said that the URI is the most important of his inventions. I happen to agree. The ability to point to arbitrarily fine-grained pieces of information (and in the case of the URI, across administrative boundaries) is arguably what defines computation. Future systems will no doubt have new ways of pointing at things. The Web, in my opinion, is as much of a springboard for future hypermedia systems as it is a serviceable bundle of protocols today.