A year ago, I set out to change the economics of web development. Specifically, I wanted to provide agencies and other independent players with the same organizational latitude as big companies, which can dedicate an entire development team to some widget or other, but at a fraction of the overhead. I speak of the ability to put the solution to a single user goal seamlessly into production—without having to wait for the launch of an entire website—and subsequently get paid for it.
Most web design projects are actually redesign projects, because in 2013, unless a company is brand new, it already has a website. You could work incrementally if you had a blank slate, but not if there's an existing website in the way. And then there's wading through whatever hodgepodge of nonsense CMS or other platformware left behind by the last guy, and ugh, what a nightmare.
The solution I propose is to put up a scaffold, one which is invisible to the user. You use it to cover up the old site with your new work, piece by piece, until none of it is left showing through. Then you disconnect the old site, take down the scaffolding, and voilà.
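In Apache terms, the idea can be sketched as a reverse proxy: every request passes through to the old site by default, and the scaffold rewrites what comes back before any of it reaches the browser. This is a hypothetical sketch assuming mod_proxy and mod_proxy_http are loaded; the hostnames are placeholders, not anyone's actual configuration:

```apache
# Hypothetical sketch: the scaffold server fronts the old site.
# Every request is proxied to the original host; the scaffold's
# filters rewrite the responses on the way back out.
<VirtualHost *:80>
    ServerName www.example.com

    # Pass all traffic through to the old site by default.
    ProxyPass        / http://old.example.com/
    ProxyPassReverse / http://old.example.com/
</VirtualHost>
```

As you replace pieces of the site, the scaffold answers for those paths itself instead of proxying them, until nothing of the old site is left showing through.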
What I'm trying to change is the all-or-nothing nature of the typical website redesign contract. With this scaffolding, you can deliver incremental results into production for your clients, so their business goals—and your payday—aren't waiting on some far-off launch date. What's more, you don't have to know or care one jot about the morass that is whatever platform—or platforms—your client currently has in use.
Since the rest of this article is about how to achieve this effect, I won't make you read all the way to the bottom to see the thing in action. Here's one of the two I'm currently working on:
This is scaffolding running on top of this.
Interested yet? Good. What follows is roughly how you use it:
This server✱ should, as a rule, be considerably beefier than the one hosting the original site. You will need root. Shared hosting will not suffice, nor will some app-cloud doo-dad. Likewise, irrespective of whatever stack the original site runs on, this server should really be some kind of Unix variant.
So far I have installed this contraption onto two different Linux VPS systems. I prefer Debian and its derivatives because they're the easiest to deal with. Somewhere between 1 and 2 gigabytes of RAM, on top of whatever the original site needed, should probably suffice for a small-to-medium-sized client.
The scaffolding makes extensive use of Apache's filter mechanism, and is currently prototyped using mod_perl, it being one of the more mature and comprehensive embedded interpreters for Apache. I say prototyped because there's plenty of room for improvement, but as it stands it's good enough to handle the traffic seen by the average corporate website.
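As a rough sketch of the wiring, mod_perl 2 lets you hang a Perl output filter on the response like so; the module name My::Scaffold::Scrubber is a hypothetical stand-in, not the prototype's actual code:

```apache
# Hypothetical wiring: a mod_perl output filter that scrubs
# the proxied responses before they reach the browser.
PerlModule My::Scaffold::Scrubber
<Location "/">
    PerlOutputFilterHandler My::Scaffold::Scrubber
</Location>
```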
This is the only part of the prep that requires any real work. The scrubber isn't smart enough—yet—to isolate the main content of a page and do anything reasonable with it on its own, so you have to tell it. This entails going through the entire original site and collecting specimens of all the variants of the site's template and content. Then you examine them in a DOM inspector to derive a set of XPath statements that uniquely match the main content. This is also your opportunity, if you see fit, to remove any burrs of yesteryear: to eliminate <font> tags and <table>-based layouts, or to upgrade to HTML 5, each of which entails crafting a special XSLT transform to handle it. Expect this part of the process to take around a week or two to get right, depending on how gnarly the original site is.
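The scrubbing step can be illustrated with a short sketch—here in Python with the standard library rather than the mod_perl prototype. The specimen markup and the XPath expression are hypothetical; a real site needs one expression per template variant, derived in a DOM inspector:

```python
# Sketch of the content-scrubbing step: given a specimen page from the
# original site, isolate the main content with an XPath expression.
import xml.etree.ElementTree as ET

SPECIMEN = """<html>
  <body>
    <div id="nav"><a href="/">Home</a></div>
    <div id="content"><h1>About Us</h1><p>Founded in 1987.</p></div>
    <div id="footer">Copyright 1999</div>
  </body>
</html>"""

def main_content(page):
    # Parse the (assumed well-formed) specimen and pull out the one
    # element the XPath uniquely matches; everything else is template.
    root = ET.fromstring(page)
    return root.find(".//div[@id='content']")

content = main_content(SPECIMEN)
```

Everything the expression doesn't match—navigation, footer, whatever else—is treated as template and discarded.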
And now for the fun part: putting the lipstick on the pig. After all, it's the exact same site that you started with, only it's been treated and sanitized, so you can do pretty much whatever you want to it. Do you duplicate the way it looked before, or do you give it a completely new skin?
Having done a few of these, I can attest that making a perfect copy of the original layout takes much more effort than you'd expect, because you have to copy all the mistakes that were invariably left in there as well. At this point I would recommend a slight facelift to your client, as part of the redesign process. It turns out to actually be easier.
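To make the reskinning step concrete, here is a sketch, again in Python for illustration only—the real scaffolding does this with XSLT in the server's filter chain, and the template below is a hypothetical stand-in for whatever new design you deliver:

```python
# Sketch of the reskinning step: take the content isolated by the
# scrubber and drop it into a brand-new template.
NEW_TEMPLATE = """<html>
  <head><title>%(title)s</title></head>
  <body>
    <header>The new design</header>
    <main>%(content)s</main>
  </body>
</html>"""

def reskin(content_markup, title):
    # In the real scaffolding this is an XSLT transform running in the
    # filter chain; string substitution keeps the sketch short.
    return NEW_TEMPLATE % {"title": title, "content": content_markup}

page = reskin("<h1>About Us</h1><p>Founded in 1987.</p>", "About Us")
```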
This would be the real test, wouldn't it? Switching the tracks to pass all traffic through the scaffolding? Will it be fast enough?
I don't want to delude anybody: of course it's going to be slower. You're taking however long it took to produce a page on the original site and adding the cost of transporting that content to another server, then sanitizing and manipulating it, before sending it on to the browser. I'd love more than anything to shave this overhead down, but as it stands, using this contraption costs an additional 100 to 200 milliseconds per request.
The real bottleneck on the scaffolding for the moment is RAM. The busier the site, the more you'll need for the scaffolding. Needless to say, curbing memory usage is a top priority for this project. Caching will help, but it'll only be as good as the cache directives coming from the original site, which in my experience are practically nonexistent for anything but static content.
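To illustrate the point about cache directives, here is a sketch—in Python, for illustration; not part of the prototype—of deriving a cache TTL from an upstream Cache-Control header. When the origin says nothing, as is typical for dynamic pages, the scaffolding can't safely cache at all:

```python
# Sketch: decide how long the scaffolding may keep a sanitized page,
# based on the origin's Cache-Control header. Values are hypothetical.
def cache_ttl(cache_control):
    """Return a TTL in seconds; 0 when the origin forbids caching or,
    as is typical for dynamic pages, says nothing at all."""
    if not cache_control:
        return 0
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = value
    # Treat no-cache conservatively as uncacheable for this sketch.
    if "no-store" in directives or "no-cache" in directives:
        return 0
    try:
        return int(directives.get("max-age", 0))
    except ValueError:
        return 0

print(cache_ttl("public, max-age=600"))  # → 600: static asset, cacheable
print(cache_ttl("no-store"))             # → 0: forbidden
print(cache_ttl(None))                   # → 0: typical dynamic page
```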
The two projects I have going using this scaffolding at the time of this writing are small enough that I'm not worried about the memory overhead. That said, I'm still waiting for the green light on both to switch the DNS.
If it wasn't already clear, I specifically designed this scaffolding to work beneath the application layer, by altering the behaviour of the web server itself. That means you can use any combination of platforms and frameworks that will function on top of—or behind—Apache, which in the broadest sense means nearly all of them.
The overarching goal here, once again, is to open the door to other ways of arranging the business aspect of website redesign projects, by making it as easy as possible to deliver material value to the client, when said material value is ready to be used, and no later.