If you classify NGINX as a “web server” and compare market share among everything else bearing that name, NGINX commands between 15 percent (Netcraft survey) and 25 percent (W3Techs survey) of the market.
That is, if you continue to think of NGINX as sharing the spotlight with Apache HTTP and Microsoft IIS.
Today NGINX (pronounced “engine-X”) is stewarded by a commercial entity that serves paying customers under the brand NGINX Plus. With today’s release of NGINX Plus r7, NGINX is evolving into a class unto itself, and I don’t mean that as a vendor’s self-serving pitch.
Too many of us are accustomed to thinking of the web as a delivery conduit for pages, documents and the “C” in CMSWire: content. The rise of RESTful APIs has vaulted the web into the role of a communications mechanism for distributed applications.
What’s more, the components of these applications may be arranged differently, depending upon the user’s class of device.
Last February, the Internet Engineering Task Force officially declared its work on HTTP/2 “done” — the most significant revision to date of the web’s principal application protocol. NGINX Plus r7 is the first version to support HTTP/2.
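In NGINX Plus r7, turning on HTTP/2 amounts to one added parameter on an existing TLS listener. A minimal sketch of such a server block follows; the hostname and certificate paths are placeholders, not taken from any real deployment:

```nginx
server {
    # The "http2" parameter on the listen directive enables HTTP/2
    # for this TLS-terminated virtual server (NGINX Plus r7 syntax).
    listen 443 ssl http2;
    server_name example.com;

    # Placeholder certificate paths; substitute your own.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    root /var/www/example.com;
}
```

Browsers that speak HTTP/2 negotiate it during the TLS handshake; older clients on the same port simply fall back to HTTP/1.1.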
And if the objective of all new releases was to support the latest technology, I could just end this article right here.
HTTP/1.1, it would appear, will not go gently into that good night. The reason isn’t simply an abundance of procrastination, as has been the case with IPv6. It’s partly a lingering skepticism over whether HTTP/2 can actually live up to its lofty goals: faster delivery, more reliable service, improved privacy.
The biggest change in HTTP/2 is the move to a “binary wire,” where content delivered by servers is sent as binary frames rather than plain text. This is somewhat more efficient, and it makes TLS/SSL encryption of the wire content easier to apply.
It also enables clients, such as Web browsers, to request multiple components of Web pages and microservices in parallel, dramatically improving perceived performance. Customers should notice a much faster Web.
Should. Or perhaps I should say, “might.” Or “might possibly.”
With HTTP/1.1, multiple components of a web page (such as the one you’re reading now) were handled serially, one after the other. Mobile users, especially, noticed that complex Web pages loaded very slowly.
The first “dynamic” pages (content whose arrangement adapts on-demand to the framework of whatever device happens to be rendering it) were dog slow. Molasses slow. Or, to use a 20th century synonym for these phrases, CompuServe slow.
This problem was mitigated, at the time, using a technique called “domain sharding.” Web server packages would actually tout this technique as a virtue.
The idea of domain sharding was to divide multiple content components among separate, discrete Web domains, enabling browsers to request these components in parallel. If they were all being served through a single domain, requests would have to wait their respective turns.
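A typical sharding setup can be sketched in NGINX configuration: the same static assets are exposed under several hostnames so that browsers, which capped parallel connections per host, would open a separate connection pool for each. The hostnames and paths here are illustrative only:

```nginx
# HTTP/1.1-era domain sharding: one asset directory, several
# hostnames. Pages then reference assets alternately as
# http://static1.example.com/... and http://static2.example.com/...
# so the browser fetches them over parallel connection pools.
server {
    listen 80;
    server_name static1.example.com static2.example.com;
    root /var/www/assets;
}
```

Each shard costs an extra DNS lookup and connection setup, which is exactly the overhead HTTP/2’s multiplexing was designed to avoid.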
HTTP/2 should render the entire practice of domain sharding obsolete. The problem is that a web site crafted for HTTP/1.1 may actually run more slowly under HTTP/2, precisely because it uses domain sharding: requests scattered across shards cannot be multiplexed over a single connection.
Conservative estimates say that only about half the web browsers in current use today support HTTP/2. On one side of the argument, that’s a high percentage, a testament to how quickly browser makers have been replacing older clients in the field.
On the other side, it’s not a high enough percentage for content producers to safely make the leap to HTTP/2 and parallel requests, which are incompatible with HTTP/1.1 altogether.
And here is where politics re-enters the picture: HTTP/2’s parallel handling capacity was made possible through the adoption of most of Google’s SPDY protocol proposals. There’s a sizable minority, in the IETF and elsewhere, who feel a single vendor’s proposal should not have been absorbed in such a rushed fashion.
What’s more, because some countries in the world have already banned a fully-encrypted Internet, the SPDY requirement as originally proposed for end-to-end TLS/SSL encryption was made optional instead. For some, this appears to have eliminated the mandate for moving to a binary wire protocol in the first place.
So the move to HTTP/2 will not be an exodus on any grand scale, and that might appear on the surface to be bad news for NGINX Plus, whose main value proposition is its HTTP/2 support.
It isn’t, for a nuanced reason.
Because it will need to support a mixed-protocol Web for the indefinite future, NGINX has concentrated on improving its services as what Web developers call a “Layer 7 load balancer” or a “reverse proxy server.” Many believe that, if we were to judge NGINX in this class rather than as a “web server,” it would be the market share leader already.
A Layer 7 load balancer routes requests for content based on the nature of the requests. With this in place, web architects can abandon domain sharding today, whether they’re using HTTP/1.1 or HTTP/2.
All components of a web page can be collected together under a single domain. That means the DNS server only has to be invoked once to resolve all the page components’ requests, dramatically improving responses, including on mobile devices — again, using HTTP/2 or HTTP/1.1.
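Content-based (Layer 7) routing under a single domain can be sketched in NGINX configuration like this; the upstream names, addresses and paths are invented for illustration:

```nginx
# Everything is served from one domain; requests are routed to
# different backend pools based on the request path, so no
# sharding is needed under either HTTP/1.1 or HTTP/2.
upstream app_servers   { server 10.0.0.10:8080; server 10.0.0.11:8080; }
upstream media_servers { server 10.0.1.10:8080; }

server {
    listen 80;
    server_name example.com;

    # API calls go to the application pool...
    location /api/ {
        proxy_pass http://app_servers;
    }

    # ...while images and video go to a pool tuned for large files.
    location /media/ {
        proxy_pass http://media_servers;
    }
}
```

The client resolves one hostname and opens connections to one endpoint; the proxy, not the browser, decides where each request actually lands.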
With NGINX Plus r7, the distribution of resources takes place at the application layer (Layer 7) rather than at the transport layer (Layer 4) of the network.
Microservices and the new realm of cloud-native, distributed, mobile applications benefit from this because it enables developers to partition resources throughout data centers and throughout clouds by class name, without incurring the penalties that come from resolving multiple, disparate DNS addresses in the API calls.
If you think about it, this takes one of the main improvements of HTTP/2 and extends it to the realm of the pre-existing protocol. This assumes, of course, that Web architects are willing to meet NGINX halfway, ceasing the practice of domain sharding right now.