The Web development industry is still awfully cavalier about redesigns. I understand that the redesign is a convenient project format that organizations often specifically request. Redesigns likewise evoke a mystique of glamour and novelty, as well as being full of opportunities to flex new technical muscle. However, as somebody who cares about the integrity of information on the Web, I am routinely disappointed by the kinds of results produced by this framing, and therefore have made it my mission to figure out how to do things differently. Over the years I have called aspects of this initiative by different names, for example:
My ultimate goal here is to figure out how to change the Web development paradigm from one of replacement to one of repair, and in the process, devise business engagements with lower risk—and thus an opportunity to earn more money—while at the same time delivering higher-quality, more durable material results to clients, stakeholders, and users.
I can think of a lot of reasons why you would want to change a website other than a complete redesign, but two in particular jump out at me:
A launch: you need to get some new asset into the hands of users—or your own. Or maybe you just believe that there's no compelling reason not to develop a website incrementally.
Since nobody is currently lining up to offer their websites as tribute for this excursion, and because I fit both of these criteria, I am using my own website as a proving ground. As such, I thought it would be helpful to start something like a change diary. That's change diary, not change log, because I want to do a little more than say what I changed; I also want to explain and/or discuss it a bit.
I quietly overwrote every page on my website with the output of something I am calling my Swiss Army Knife.
I began by laying the groundwork:
Back at home:
The site is superficially identical to the way it was prior to the change, but with the following invisible changes:
I added modification times!
This is something I had hitherto deliberately left off, for two reasons:
Anyway, people complained. They complained and complained, so I added modification times. Owing to the Swiss Army Knife, the process—which reduced to instructing the template to dig the timestamps out of the embedded metadata—took all of ten minutes.
I will inevitably change it later when I have a better idea for how it ought to look, at which point it will take another ten minutes. That's what this exercise is about: making websites so cheap to change that there is no deliberation; you just do it.
Today I made some very desperately-needed changes to the visual design:
The wordmark changes configuration when its bounding rectangle gets wider than a certain aspect ratio. As one might expect, a media query swaps one viewport out for another, enabling the path data to be reused. This is in keeping with a conceptualization of Web resources, such as vector graphics, as rudimentary programs that can respond to their surroundings, rather than mere inert files.
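One plausible shape for that mechanism, sketched here with placeholder path data and a made-up 3:1 breakpoint (the real wordmark is more involved), is a media query in the SVG's own stylesheet toggling between two arrangements of the same paths:

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     viewBox="0 0 400 100">
  <defs>
    <!-- placeholder path standing in for the actual wordmark glyphs -->
    <path id="glyphs" d="M10 10 h80 v80 h-80 z"/>
  </defs>
  <style>
    /* default: the wide, single-line configuration */
    #stacked { display: none; }
    /* when the viewport narrows past 3:1, swap configurations */
    @media (max-aspect-ratio: 3/1) {
      #wide    { display: none; }
      #stacked { display: inline; }
    }
  </style>
  <g id="wide"><use xlink:href="#glyphs"/></g>
  <g id="stacked" transform="translate(0 50) scale(0.5)">
    <use xlink:href="#glyphs"/>
  </g>
</svg>
```

Because the media query lives inside the SVG, it responds to the graphic's own viewport, so the behaviour travels with the file whether it is referenced from an img element or embedded directly.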
It is worth noting that I had an SVG wordmark in the very first design, which fell back to GIF for non-supporting browsers. Even the few that supported SVG didn't do so consistently. I remember being made fun of at the time for the effort. It was many years before SVG became a viable tool in the kit.
I have nothing more to add other than finally.
Okay, maybe I have a little bit more to add: my personal style when it comes to CSS is to avoid class selectors where I can. For one thing, they proliferate like crazy, and it is no small effort to tame them. For another, class names often reduplicate semantics that could be more usefully represented as standard metadata—particularly accessibility metadata—as well as general-purpose microdata or RDFa. Class names are also made redundant by node selectors when the topology of the document can be expected to remain relatively stable. As such, I only use CSS classes when I absolutely cannot hook into the content any other way.
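In practice that means leaning on attribute and node selectors instead. A sketch, assuming hypothetical markup (the attribute values here are illustrative, not lifted from my stylesheet):

```css
/* hook into standard metadata rather than minting classes */
a[rel~="license"]          { font-style: italic; }
[property~="dct:abstract"] { font-size: larger; }

/* node selectors, assuming the document topology stays stable */
body > main > article > h1 { margin-block-end: 0; }

/* accessibility metadata doubles as a styling hook */
nav[aria-label="Breadcrumb"] li + li::before { content: " / "; }
```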
Anyway, if you're gonna go that route, holy cow will you ever need help with the heavy lifting, and that's what Sass provides.
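Nesting alone pays for itself once your selectors encode topology rather than class names. A hypothetical fragment, again assuming illustrative markup:

```scss
// deep node selectors get unwieldy fast; Sass nesting keeps them legible
body > main > article {
  > header {
    h1 { margin-block: 0; }
    time[property~="dct:modified"] { font-size: smaller; color: #666; }
  }
  > figure > figcaption { font-style: italic; }
}
```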
The old float layout got swapped out for a more manageable flex one. These changes are perhaps not worth remarking on in their own right, but what is remarkable is the fact that I completed them in just a few hours in the afternoon and evening, and I did it without destroying anything. I have made several attempts over many years to redo the visual design on this site, each of which I had to abandon because it required too much contiguous effort. The only way it shipped was all or nothing: I couldn't do what I did here, which was to spend a few hours making some improvements that didn't leave the rest of the site any worse off. That is really the goal here: to create an environment where I can tinker productively whenever I have the time.
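For the record, the float-to-flex swap is nothing exotic; a generic sketch (not my actual stylesheet) of the shape of the change:

```css
/* before: floated columns, widths managed by hand, clearfix hacks
   .content { float: left;  width: 70%; }
   .sidebar { float: right; width: 28%; } */

/* after: the parent declares the layout; the children just flex */
body > main {
  display: flex;
  gap: 2rem;
}
body > main > article { flex: 7; }
body > main > aside   { flex: 3; }
```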
There is no reason why an organization couldn't adopt this strategy too.
I got my visual design template working with the RDFa and transclusion XSLT libraries I had actually written for this specific purpose. In fact, I had done RDFa already the last time around, and this sortie was to get the transclusion stuff in there. I am finally now displaying my most current technique on my own website.
To reiterate, the purpose of this excursion is to come up with a durable, declarative, standards-based way to separate presentation from content. Granted, the RDFa querying stuff is new, where "new" is defined as "didn't show up until a full decade after XSLT 1.0". The transclusion library, on the other hand, would have worked just fine back when we were all still worrying about Y2K. Think about that: I'm using a technique that I learned when the average Y Combinator cohort was in kindergarten, and I don't see why it wouldn't continue to work when they're as used up and salty as I am now.
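To give a flavour of what XSLT-driven transclusion looks like—this is a toy sketch, not the actual library, and it assumes XHTML input plus a made-up rel="transclude" convention—everything needed was already in XSLT 1.0:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:html="http://www.w3.org/1999/xhtml">
  <!-- identity transform: copy everything through untouched by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- replace a transclusion link with the body of its target -->
  <xsl:template match="html:a[@rel='transclude']">
    <xsl:apply-templates
        select="document(@href)/html:html/html:body/node()"/>
  </xsl:template>
</xsl:stylesheet>
```

The document() function, present since XSLT 1.0, does all the heavy lifting: it fetches the target resource and hands its nodes to the same template pipeline, so transcluded content gets the same treatment as native content.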
Most of the content on this site is completely static, or, put more accurately, consists of deterministic transformations of static files. Only a handful of resources involve more computation than that. One such is the index of cited books. I had generated this index initially by scraping my content for Amazon links, then collating by ISBN and collecting the metadata, then flipping the whole structure around. When I started using the Swiss Army Knife to manage this website, the book index was apparently buried by the system, since it was not under its management. I remedied this situation by reimplementing the one-off script I had previously used to generate the index as a lobe of the SAK.
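The core of that pipeline—scrape for Amazon links, collate by ISBN, flip the page-to-book mapping into a book-to-pages one—reduces to a few lines. This is an illustrative sketch, not the SAK's actual code; the function name and data shapes are invented for the example:

```python
import re
from collections import defaultdict

# Matches the ASIN/ISBN-10 segment of an Amazon product URL,
# with or without a title slug before /dp/.
ASIN = re.compile(r'amazon\.[a-z.]+/(?:[^/]+/)?dp/([0-9X]{10})')

def collate(pages):
    """Flip {page_url: [link, ...]} into {isbn: [page_url, ...]}."""
    index = defaultdict(list)
    for page, links in pages.items():
        for link in links:
            m = ASIN.search(link)
            if m and page not in index[m.group(1)]:
                index[m.group(1)].append(page)
    return dict(index)
```

From there, fetching the metadata for each ISBN and rendering the index is just another deterministic transformation.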
Because of the infrastructure I had to write in order to get the book index generated, I had all the precursor material I needed to do something I had been planning to do for a while, which is to add reverse links to the rest of the website. Now every page that isn't a complete orphan will have a What Links Here inset that, well, does what it says on the tin.
Note that this development has made real estate scarce in the marginalia, so I will inevitably have to reconsider how to lay it out.
One of the goals of this website rehabilitation project is to further refine the process of performing content inventories and subsequent content audits. Indeed, a live inventory forms the backbone of the breadboard content management meta-system I am currently designing. The inventory itself is glued together from a number of different data vocabularies, including one I designed with several useful properties to facilitate content audits.
An important aspect of a content audit is the reconciliation of a document with its audience. I have been brewing a solution to this problem for some time: Any information resource, to the extent that it contains any language at all, is bound to mention certain terms. These terms identify concepts, and the concepts can be put into relation to one another. Furthermore, even if a concept is not explicitly invoked within the resource, it may still be implied. It is a fairly straightforward process, then, to relate documents to concepts in this way, provided we have a representation of the concept scheme.
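The explicit-mention half of that process can be sketched naively: given a concept scheme mapping concept identifiers to their labels (the URIs and labels below are invented; a real scheme would come from something like a SKOS vocabulary), scan each document for those labels:

```python
def concepts_for(text, scheme):
    """Return the concept URIs whose labels appear in the text.

    scheme: {concept_uri: [label, ...]} -- a toy stand-in for a
    proper concept scheme; matching here is naive substring search.
    """
    haystack = text.lower()
    return {uri for uri, labels in scheme.items()
            if any(label.lower() in haystack for label in labels)}
```

Implied concepts are where it gets interesting: once documents are related to explicitly mentioned concepts, the relations within the scheme itself (broader, narrower, related) can pull in the rest.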
Obtaining the terms themselves is another story. My current strategy is just to throw the inventory into a big spreadsheet and read through it; however, with over 200 articles, some stretching to 9,000 words and beyond, I am looking for a more efficient method.