The web is slow, and you know it. Networks and transmission speeds keep getting faster, and yet your customers notice the lag, and tell you about it, more and more often: the content you serve is reaching them too slowly.

Can we trace the problem to your CMS? To your digital asset management system? To the ever-growing overhead that web browsers impose on the client side? Or to human psychology, and the speed with which people come to treat each faster “normal” as the new baseline?

It would be nice if a technology publication gave you some insight into this topic, wouldn’t it?

Actual Web Performance Insight

Many publications’ interest in technology stops cold the moment the subject becomes technical. Which is as much a problem as a medical journal whose editors faint at the sight of blood.

The lifeblood of technologists is performance. Earlier this month, a lot of press coverage was devoted to the efforts of MIT’s Computer Science and Artificial Intelligence Laboratory to devise a JavaScript-based technique that web browsers could implement to expedite the page load process.

Here is the basic theory behind their work: To make the composition of web pages simpler for CMSs, a page is divided into constituent parts, like patches in a quilt. Those parts, in turn, have parts of their own.

Once we’ve boiled down all the parts of a web page into their fundamental elements, we realize they all come from separate locations in the network — maybe subdomains of the central domain, but in the case of advertisements, from far distant locales.

As a result, when a smaller part from one network location belongs to a larger part from elsewhere, there’s a greater likelihood that latency will be added to the process of the browser downloading the entire page.

The basic way a browser composes all these parts together is by envisioning a kind of dependency graph, like an org chart. But because all those component parts arrive at different speeds, and it may take the arrival of one to learn the location of another, that chart is typically incomplete at any one point in time.
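
To make that picture concrete, here is a minimal sketch, written in TypeScript with invented names, of a dependency graph that can only be filled in as responses arrive and are parsed. It illustrates the problem; it is not code from any browser.

    // Toy model: a page's dependency graph, discovered incrementally.
    // All names are illustrative; this is not browser or CSAIL code.

    interface Resource {
      url: string;
      dependsOn: string[];   // URLs this resource needs before it can run or render
    }

    class DependencyGraph {
      private nodes = new Map<string, Resource>();

      // Called only when a response has arrived and been parsed; until then,
      // the browser does not even know these edges exist.
      discover(resource: Resource): void {
        this.nodes.set(resource.url, resource);
      }

      // At any instant, the graph is only as complete as what has been parsed so far.
      knownEdges(): Array<[string, string]> {
        const edges: Array<[string, string]> = [];
        for (const r of this.nodes.values()) {
          for (const dep of r.dependsOn) edges.push([r.url, dep]);
        }
        return edges;
      }
    }

    // index.html arrives first; only after parsing it do we learn about app.js,
    // and only after fetching app.js do we learn that data.json exists at all.
    const graph = new DependencyGraph();
    graph.discover({ url: "index.html", dependsOn: ["app.js", "style.css"] });
    // ...later, once app.js has been downloaded and parsed:
    graph.discover({ url: "app.js", dependsOn: ["data.json"] });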

The MIT CSAIL team sought to resolve this problem with new JavaScript code that rethinks the way dependency graphs are generated. It sounds strange, but the logic that decides how a page loads can itself be written in JavaScript and delivered with the page, with no changes to the browser.

This new code, which CSAIL dubbed Scout, would examine the relationships between the JavaScript inside these components at the level of variables (the symbols that represent values or the contents of memory). Rather than relying upon signals like the HTML <script> element to determine when and where the dependencies are, CSAIL’s Polaris would compute a much more fine-grained dependency graph, and begin fetching components from the network using this graph instead.
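
Here is a rough sketch of what variable-level dependency detection could look like, again in TypeScript with invented read and write sets. The real Scout derives this information by instrumenting actual page loads, which this toy does not attempt.

    // Sketch of variable-level dependency detection between scripts.
    // An edge (writer -> reader) means the reader must wait for the writer,
    // because the reader consumes global state the writer produces.

    interface ScriptProfile {
      url: string;
      writes: Set<string>;  // global variables the script assigns
      reads: Set<string>;   // global variables the script consumes
    }

    function variableLevelEdges(scripts: ScriptProfile[]): Array<[string, string]> {
      const edges: Array<[string, string]> = [];
      for (const writer of scripts) {
        for (const reader of scripts) {
          if (writer === reader) continue;
          for (const v of reader.reads) {
            if (writer.writes.has(v)) {
              edges.push([writer.url, reader.url]);
              break;
            }
          }
        }
      }
      return edges;
    }

    const edges = variableLevelEdges([
      { url: "config.js",  writes: new Set(["SITE_CONFIG"]), reads: new Set<string>() },
      { url: "tracker.js", writes: new Set(["TRACKER"]),     reads: new Set<string>() },
      { url: "app.js",     writes: new Set<string>(),        reads: new Set(["SITE_CONFIG"]) },
    ]);
    // edges -> [["config.js", "app.js"]]: app.js truly depends on config.js,
    // while tracker.js constrains nothing and can be fetched whenever convenient.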

Polaris would then allow components to be fetched out of order from this graph at opportune times, eliminating the latencies that browsers introduce into many highly structured web pages. Those opportunities are discovered through a technique that, if it were applied to finding the best move in a chess game, would be called artificial intelligence.

“Only Scout produces a dependency graph which captures the true constraints on the order in which objects can be evaluated,” reads the CSAIL team’s white paper [PDF]. “Polaris uses these fine-grained dependencies to schedule object downloads — by prioritizing objects that block the most downstream objects, Polaris reduces overall page load times.”
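
The scheduling idea in that quote can be sketched with toy data: count how many downstream objects each object blocks, then fetch the biggest blockers first. This illustrates the stated heuristic, not the paper’s implementation.

    // Toy scheduler: prioritize objects that block the most downstream objects.
    // `deps` maps each object to the objects it depends on (which must arrive first).

    type Deps = Map<string, string[]>;

    // Count the objects that transitively depend on `target`.
    function downstreamCount(deps: Deps, target: string): number {
      const blocked = new Set<string>();
      let changed = true;
      while (changed) {
        changed = false;
        for (const [obj, parents] of deps) {
          if (blocked.has(obj)) continue;
          if (parents.some(p => p === target || blocked.has(p))) {
            blocked.add(obj);
            changed = true;
          }
        }
      }
      return blocked.size;
    }

    // Fetch order: biggest blockers first.
    function fetchOrder(deps: Deps): string[] {
      return [...deps.keys()].sort(
        (a, b) => downstreamCount(deps, b) - downstreamCount(deps, a)
      );
    }

    const deps: Deps = new Map([
      ["index.html", []],
      ["config.js",  ["index.html"]],
      ["app.js",     ["config.js"]],
      ["data.json",  ["app.js"]],
      ["banner.png", ["index.html"]],
    ]);
    console.log(fetchOrder(deps));
    // index.html and config.js rank ahead of banner.png, because more of
    // the page is waiting on them.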

Mighty Mouse vs. Superman: Dawn of Redundancy

I love the strategic application of AI as much as anyone in this business. The greatest thing about technological barriers is that they give us something logical that we can overcome logically.

But the greatest barrier to our overcoming those barriers is communication. Too often, our best efforts are replicated, or even overshadowed, by something potentially greater.

In this case, what’s “potentially greater” is HTTP/2.

Last year, the Internet Engineering Task Force, the collaborative body responsible for the Internet’s core technical standards, officially completed its work on HTTP/2, the next-generation specification for the protocol that carries the web’s pages.

One of the key goals of HTTP/2 has been to expedite the transfer of web pages by changing how the relationships between their many components are established in the first place. 

Part of the reason that different components of the same web page exist in different domains is completely artificial: To get around the limitation of web browsers having only so many concurrent connections to a single domain at one time (a constraint the CSAIL team acknowledges), web engineers intentionally place dependent parts in separate domains.
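
A quick sketch of that workaround, using hypothetical shard hostnames, shows how artificial the split really is: the asset never changes, only the domain does, purely to dodge the per-domain connection cap.

    // Sketch of the domain sharding workaround described above.
    // The shard hostnames are hypothetical; the split exists only to dodge
    // the browser's per-domain limit on concurrent connections.

    const SHARDS = ["assets1.example.com", "assets2.example.com", "assets3.example.com"];

    function shardUrl(path: string): string {
      // Deterministic hash, so the same asset always maps to the same shard
      // and browser caches stay effective across page views.
      let hash = 0;
      for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return `https://${SHARDS[hash % SHARDS.length]}${path}`;
    }

    console.log(shardUrl("/img/hero.jpg"));  // same path, same shard, every time
    console.log(shardUrl("/js/widget.js"));  // other paths may land on other shards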

Sharding of this sort introduces latencies into the page loading process, latencies that HTTP/2 aims to eliminate by changing how components are requested and delivered in the first place. HTTP/2 introduces flow control and stream dependency weighting, among other techniques, which let browsers make just-in-time adjustments to how they gather components together.

One of these adjustments is out-of-order fetching, although under the new protocol the notion of a single, fixed order largely disappears anyway: responses can be multiplexed and interleaved over one connection. Theoretically, many of the roadblocks that Polaris seeks to eliminate would not exist in a network where HTTP/2 is deployed end-to-end.
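
As a rough illustration of what stream dependency weighting buys, here is a toy calculation, not a real HTTP/2 client, of how sibling streams might split a connection’s bandwidth in proportion to their weights, in the spirit of the RFC 7540 priority scheme.

    // Toy illustration of HTTP/2-style stream weighting (not a real client).
    // Sibling streams share the connection in proportion to their weights,
    // so a render-blocking script can get more of the pipe than a banner
    // without scattering assets across extra domains.

    interface Stream {
      id: number;
      path: string;
      weight: number;   // 1..256 in the RFC 7540 priority scheme
    }

    function bandwidthShares(streams: Stream[], totalKbps: number): Map<number, number> {
      const totalWeight = streams.reduce((sum, s) => sum + s.weight, 0);
      const shares = new Map<number, number>();
      for (const s of streams) {
        shares.set(s.id, (s.weight / totalWeight) * totalKbps);
      }
      return shares;
    }

    const shares = bandwidthShares(
      [
        { id: 1, path: "/app.js",     weight: 192 },  // blocks rendering: heavy weight
        { id: 3, path: "/banner.png", weight: 32 },   // decorative: light weight
        { id: 5, path: "/ad.js",      weight: 32 },
      ],
      10_000  // pretend the connection has 10 Mbps to divide
    );
    // app.js gets 7,500 kbps; the banner and the ad get 1,250 kbps each,
    // and weights can be re-declared mid-flight as the page's needs change.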

In fairness, the CSAIL team did provide experimental results for its Scout algorithm on browsers using SPDY, the Google protocol originally submitted to the IETF, a variation of which became HTTP/2. Those results did show some marginal performance gains: measurable, but not dramatic.

In the performance business, marginal gains are still good gains. But these tests were conducted using SPDY-endowed browsers, over a network where the current HTTP/1.1 was still in effect.

So the facts with which we are faced today are these: The IETF has presented us with a long-term solution to the page load latency problem. Yet we know from experience with things like IPv6 how long it takes the world to implement these big solutions.

MIT’s Polaris is not so big a solution; it can be implemented entirely on the client side, as JavaScript delivered along with the page, without modifying the browser. But too many times in history, our tendency to rely upon short-term, patchwork solutions to big problems merely postpones our efforts, and even drains the energy we will need when it comes time to make the big, single-bound leaps.

History will snapshot how well we fared at this stage by whether we ended up calling for Mighty Mouse to save the day, or taking a risk on Superman.
