This is the bet:

In 100 years, we will still have the Web✱.

✱That is to say, in a hundred years, humanity will still possess—and regularly use—some sort of networked, resource-oriented hypermedia system. The Web is still it for the foreseeable future, and it is inconceivable that whatever replaces the Web will do any less than the Web does already. Systems of this kind are too flexible and too powerful for our civilization to willingly give them up.

If you're also inclined to bet this way, then it follows that you would be interested in an infrastructure that acted the part. Most existing infrastructure, however, does not.

Tech-Obsessed
Discourse around the Web is dominated by questions of technical capability. Can we? is a bogus question, though, because whatever predicate you can dream up, the answer is probably yes. There is comparatively scarce attention reserved for questions that start with should we?, or even questions about the detailed behaviour of the things we know we can do. There is also the harried narrative of keeping up with the fast pace of technology. This narrative ignores the fact that the pace of legitimate individual technologies is glacial, and you can see them coming a mile away. The churn happens in technique, packaging, and technology products.
Vendor Lock-In
Routinely mistaken for technology proper is the technology product. The natural state of technology is non-proprietary. Technology products, on the other hand, have an owner. That owner, however benign, invariably has an agenda, which may diverge from your own over time. The current paradigm in Web infrastructure, whether proprietary or open-source, is that you buy into a particular product, which assumes sovereign status over its domain of operation and expects you to use it for everything. Of course, new products come out all the time, with new capabilities, but you're stuck with the dilemma of either paying an enormous integration and/or migration bill, or sitting out the round. This situation is exacerbated by platform-as-a-service, in which you no longer even possess your own data, and couldn't extricate every last bit of it at any price. When businesses finally migrate away from the first-generation cloud platforms, the pain will be astronomical.
Design is Under-Valued
The dirty little secret of Web development is that programming is the easy part. The hard part is figuring out what to program. By this I mean an exhaustive, detailed design of what ought to go where, and how things ought to behave. These design decisions represent an irreducible and unpredictable cost—and an equally unpredictable return. Some decision-makers have come to understand that investing in design is the difference between an effective product and an ineffective—or anti-effective—one, but they still conceive of design only as a means to one specific implementation, scarcely to be revisited after the code is written. However, while a working implementation is valuable now, the aggregate design decisions are valuable for informing future implementations of the same abstract goals. If the original decisions are too hard to dredge up, because nobody bothered to curate them, they will have to be re-decided the hard way.
False Modularity
Modules are the infrastructural equivalent of there's an app for that. Indeed, modules are an extremely efficient way to bundle up and share code, thus abridging programming work. They are less effective at doing the same for user experience. Modules that define human interactions are optimized for the particular context of the module author, or otherwise for some imaginary generic context. The claim for modules is that they will save time and therefore money, but it's an even bet that the cost of adapting an existing module to a design specification will exceed the cost of just writing the code from scratch. Plus there's the cost of sourcing the competing candidates and auditing them for fitness. Furthermore, if the design specification contradicts the existing design of the module, the budget will declare the module the winner, meaning you lose both the work that went into that part of the design and whatever additional value you would have gleaned from having that design realized.
Link Rot
The basic problem of broken links, especially those broken by leaping every couple of years from platform to product to service, has never been satisfactorily solved. Links get built up one by one over years, and destroyed in an instant when a resource is moved, renamed, or deleted. This is a guarantee of frustrated and disappointed users, with a direct impact on the bottom line. It's amazing how cavalier so-called professionals are about letting links break this way.
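To make that concrete, here is a minimal sketch (mine, not part of the original argument, and with hypothetical paths) of the bookkeeping that prevents it: a persistent table mapping retired URIs to their successors, answered with permanent redirects instead of 404s. In Python's standard library:

    # A sketch of one way to keep old links resolving: a lookup table of
    # retired URIs mapped to their successors, served as HTTP 301 redirects
    # so inbound links never break. The paths and port are hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical mapping of old resource paths to their current locations.
    MOVED = {
        "/2009/old-essay.html": "/essays/hundred-year-web",
        "/products/widget.php": "/catalogue/widget",
    }

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            target = MOVED.get(self.path)
            if target:
                # 301 tells clients and crawlers the move is permanent.
                self.send_response(301)
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()

The point is not this particular server, but that the mapping itself is tiny to maintain compared to the goodwill that evaporates every time an inbound link dies.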
Security Failures
Many popular platforms are still, in 2015, routinely exposed as vulnerable to the most embarrassing and costly security compromises, the routes to which are variations on themes that have been known for decades. The prospect of hacking these systems is so boring to the human beings in that business that they make robots do it for them. These flaws are in the fundamental architecture of the platforms themselves, which means they can never be fixed short of scrapping the products, along with the paradigm that produced them, and replacing them entirely.
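To name one such theme for illustration (the essay itself doesn't single one out): SQL injection has been understood since the 1990s, and the difference between the vulnerable call and the safe one is whether user input travels as query text or as a bound parameter. A minimal sketch with Python's standard sqlite3 module, using hypothetical table and column names:

    # One decades-old theme: query strings assembled by concatenation,
    # versus parameterized queries. The data here is made up; the point
    # is the shape of the two calls.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable: the input is spliced directly into the SQL text,
    # so the payload rewrites the query and returns every row.
    vulnerable = conn.execute(
        "SELECT secret FROM users WHERE name = '" + user_input + "'"
    ).fetchall()

    # Safe: the input travels as a bound parameter, never as SQL,
    # so the same payload matches nothing.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print(vulnerable)  # [('hunter2',)]
    print(safe)        # []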
Prohibitively Expensive to Do Properly
Infrastructure platforms exist in a feedback loop with methods of financing, contracting, and project management. Monolithic capital gave us monolithic contracts to produce monolithic products on monolithic platforms. This means the default project is all-or-nothing, which jacks up the risk, which jacks up the cost, which jacks up the risk, which jacks up the cost, and so on. This situation creates something of an apartheid. A line often used is that it's like buying a car: it's just that some people can afford the Lamborghini, while others are stuck with the Geo Metro. Of course, the smart money develops its infrastructure incrementally, and it does that by hiring full-time staff. At current rates, that strategy, too, leaves out most players.

The infrastructure designed to last a hundred years would never break a single link. It would never permit itself to be tricked into handing over the keys to the kingdom, or disgorging confidential data. It would let completely heterogeneous subsystems coexist, even cohabitate and interact within its confines. It would square away the more pedestrian implementation tasks, freeing up time and money for more thoughtful design. It would furthermore enable contracting for design, implementation and deployment to be done incrementally, with the cost and risk of an individual endeavour so low that one may be encouraged to speculate. Most importantly, the hundred-year infrastructure would be cognizant of its own mortality: that no one product or component will last anywhere near a hundred years. This means every jot of accumulated content is exportable in an open format, and the infrastructure itself can be replaced completely, piece by piece, as new needs and capabilities arise.

Finally, the hundred-year infrastructure would embody an understanding that its own health relies on healthy business and professional relationships, which are, at root, human. This means not only that the cost of purging an unhealthy relationship must not exceed the cost of staying in it, but also that even the healthiest relationships don't last forever. Companies go in and out of business. Products and services are launched and scrapped. People change jobs, they retire, they even die. The continuity of the infrastructure depends on being able to weather these changes as they happen.

So the hundred-year infrastructure is not another product, but rather a pattern: an attitude toward a medium that shows no sign of going away, backed up by concrete budgetary, contractual, administrative, design, and technical strategies. This is a real thing that is coming together, piece by piece, in the form of a reference implementation. It would be nice to see some interest in it.