Microservices and containers are quickly becoming part of the mainstream IT landscape.
Not all that long ago, only digital-native companies such as Netflix, and emerging technology-driven ones like Uber and Lyft, were building their systems from microservices and containers. Now banks, manufacturers, and other mainstream organizations are exploring systems built on this architecture.
As is the case with any emerging technology, a number of debates have broken out about design practices and patterns.
One such debate is whether it is better to have larger but fewer clusters of containers or smaller and more numerous ones. In a nutshell, does it make sense to build systems and platforms from many small, self-contained, container clusters? Or is it better to have a few clusters that are comprised of all the components and services needed for enterprise applications?
Big Clusters, Centralized Control. Small Clusters, Greater Flexibility
While the needs of organizations will differ and, hence, so too will their container cluster architectures, there are pros and cons of each approach to consider. Big clusters provide for centralized control and management. While each cluster may have a large number of services, there are fewer clusters to manage. This means more complexity inside the cluster but less outside or between clusters.
This is a typical platform approach. Every system, regardless of its function, sits within a comprehensive platform that provides the same set of services to all system components. Larger clusters can be relied upon to contain all the services most applications will need.
A larger number of smaller clusters, on the other hand, has some distinct advantages. They are more flexible, allowing for more specialized environments for services. Smaller clusters, with fewer dependencies, may be easier to evolve as well.
The Catch (and It's a Big One)
Larger clusters, however, defy the purpose of distributed systems. They risk becoming the monolithic platforms they are meant to replace, just with more and smaller components. The biggest problem with large clusters, one they share with monolithic platforms, is that a failure in one component can render the entire cluster inoperable. For example, if Kubernetes were to suffer a catastrophic failure, the entire cluster would be affected and, in the case of mega clusters, potentially the entire system.
The whole purpose of distributed systems is to reduce the blast radius. A small independent unit of computing limits the effects of failure to only that small unit. The bigger the cluster, the larger the potential area of effect.
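The blast-radius argument above can be made concrete with some simple arithmetic. The sketch below is illustrative only: the numbers are hypothetical, and it assumes services are spread evenly across clusters and that a cluster failure takes down every service it hosts.

```python
def blast_radius(total_services: int, cluster_size: int) -> float:
    """Fraction of all services lost when a single cluster fails,
    assuming services are distributed evenly across clusters."""
    return cluster_size / total_services

# One mega cluster hosting all 100 services: a single failure
# takes out 100% of the system.
print(blast_radius(total_services=100, cluster_size=100))  # 1.0

# The same 100 services split into 20 clusters of 5: one cluster
# failure affects only 5% of the system.
print(blast_radius(total_services=100, cluster_size=5))    # 0.05
```

The trade-off discussed earlier shows up here too: the smaller-cluster layout shrinks the area of effect by a factor of 20, at the cost of having 20 clusters to operate instead of one.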
Ultimately, large clusters run counter to the reason microservices and container clusters are deployed in the first place. The idea of a distributed system is to keep all parts of the system as isolated as possible to ensure a high degree of resiliency. This was the brilliance of the internet's design — no one computer could take down the whole network if it failed. The same applies to microservices architectures based on container clusters.
Container clusters are an excellent example of how small can be safer. Small is the very reason for microservices. Clumping them in giant clusters defeats the purpose.