It came as a bit of a surprise to me that CMSWire, a publication ostensibly about improving digital customer experience (DX), has published so little about quality control (QC) and quality assurance (QA).
There is a logical reason for this, but I’m not sure I like it.
“We speak to CIOs, CMOs and marketing and information managers who are concerned with the optimal use of their organization’s digital information,” reads CMSWire’s Media Kit, produced by its parent company, Simpler Media.
“Optimal use,” at least on the surface, would appear to imply the invocation of some kind of quality.
CMSWire has talked about quality, and certainly its people are devoted (at least in my opinion) to trying to provide it. It’s why I work with them.
Here’s a piece from last year, where Tamara Franklin cites the chairperson of the American Society for Quality, Stephen Hacker, who said, “To be effective, a culture of quality must permeate an entire organization.”
CIOs, CMOs and marketing and information managers do constitute a good chunk of an organization, but arguably not the entirety of one.
Companies that certify other companies in the business of software quality will tell you there are three separate job functions to be considered:
QC is the function of implementing the standards and practices necessary to evaluate the quality of software production. Many suggest this is a function that can be effectively outsourced.
QA is the function of assuring that those standards are being implemented. Many suggest this is a function that can be effectively outsourced.
And then there is the actual role of software testing, which in some organizations is divided between defect testing and method testing. See above.
You may ask yourself why there needs to be several layers of oversight of assurance of control of management of implementation.
Software developers ask the same thing every day. In organizations where so-called Agile principles are in place (and actually being adhered to), developers who believe in reducing bureaucracy and improving workflow have advocated for the removal of layers, especially QA.
In a company blog post that is already eight years old, a pair of Google engineers touted how their organization created yet another role to add to the mix, which they called “Test Engineering.”
“We look at this as a bridge between the meta world of QA and the concrete world of QC,” wrote Google’s Allen Hutchison and Jay Han. “Our approach allows us to ensure that we get the opportunity to think about customers and their needs, while we still provide results that are needed on day to day engineering projects.”
Now, that sounds more like CMSWire’s bailiwick: customer focus.
Google, quite demonstrably, operates some of the largest swaths of data centers on the planet. Since that March 2007 blog post, Google has helped create the concept of containerization: a radically new method for managing workloads distributed across very broad infrastructures.
The open source container orchestration system that Google has stewarded is called Kubernetes; Google also sells a hosted, managed version of it called Google Container Engine.
Containerization has been said to be best suited for software environments that practice continuous integration (CI) and continuous delivery (CD). The basic principle behind both is that software need no longer be doled out to customers in huge, process-changing, major releases.
Put another way, software can be perfected along the way in small steps, and things can gradually keep getting better.
When we put it that way, it all sounds very nice.
Kubernetes enables a new, and heretofore largely untried, way of managing which versions of software are running in production. It works like this:
With containerization, services achieve high availability (HA) in data centers through replication. If more clients are requesting a service than the running containers can fulfill, Kubernetes responds by scheduling additional container replicas.
(Mesosphere, a data center orchestration system built on the Apache Mesos platform, also manages workloads in this fashion, and has pioneered several best practices in CI/CD to that end.)
This is what folks mean when they use the phrase “scale out.” It’s also more efficient, in several respects, than the load balancing systems for a typical CMS, which anticipate the possibility of more requests by apportioning entire servers in advance.
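The scale-out behavior described above can be sketched in a few lines. This is an illustrative toy, not Kubernetes code; the class name, the per-container capacity figure, and the scheduling policy are all assumptions made for the sake of the example:

```python
# Toy model of "scale out": when demand exceeds current capacity,
# the scheduler adds container replicas rather than provisioning
# whole servers in advance. (Illustrative sketch only -- the names
# and the capacity figure below are assumptions, not Kubernetes APIs.)

REQUESTS_PER_CONTAINER = 100  # assumed capacity of a single replica

class Scheduler:
    def __init__(self, replicas=1):
        self.replicas = replicas

    def handle_demand(self, pending_requests):
        """Schedule additional replicas whenever demand outstrips capacity."""
        needed = -(-pending_requests // REQUESTS_PER_CONTAINER)  # ceiling division
        if needed > self.replicas:
            self.replicas = needed  # scale out by adding replicas
        return self.replicas

sched = Scheduler(replicas=2)
print(sched.handle_demand(450))  # 450 pending requests -> 5 replicas
print(sched.handle_demand(150))  # capacity already sufficient -> stays at 5
```

The point of the sketch is the contrast with conventional load balancing: capacity is added reactively and in small increments, instead of being apportioned server-by-server ahead of time.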
Kubernetes and Mesosphere enable a genuinely new concept in software deployment: running multiple versions of the same containerized component simultaneously.
Thus, rather than deploying a new and largely untested software component in some safe, secluded, virtual staging environment, Google is capable of deploying freshly compiled components in limited numbers, alongside more proven and stable components.
When defects are found or overall system behavior is degraded, the orchestrator is capable of recalling the suspect versions and immediately replacing them with stable counterparts.
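The deploy-alongside-and-recall cycle just described can be sketched as a few lines of pseudologic. Everything here is a hypothetical illustration of the pattern, not an actual orchestrator API; the version labels, the error budget, and the function names are assumptions:

```python
# Hypothetical sketch of the rollout pattern described above: a small
# fraction of containers runs a freshly built version alongside proven,
# stable ones; if the new version degrades system behavior, the
# orchestrator recalls it and restores stable replicas in its place.
# (Names and thresholds are illustrative assumptions, not real APIs.)

STABLE, CANARY = "v1.0", "v1.1"
ERROR_BUDGET = 0.05  # assumed tolerable error rate for the new version

def reconcile(fleet, canary_error_rate):
    """Recall suspect versions when defects degrade the system."""
    if canary_error_rate > ERROR_BUDGET:
        return [STABLE for _ in fleet]  # roll every replica back to stable
    return fleet                        # healthy: keep the mixed fleet running

fleet = [CANARY, STABLE, STABLE, STABLE]  # 25% of replicas run the new build
print(reconcile(fleet, canary_error_rate=0.02))  # healthy: mixed fleet survives
print(reconcile(fleet, canary_error_rate=0.20))  # degraded: all back on v1.0
```

The design choice worth noticing is that recovery is cheap: because stable containers are still running, "rolling back" is just rescheduling replicas, not restoring a staging snapshot.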
At software development conferences where Kubernetes and Mesosphere are featured on stage, there’s an exciting new message that’s given attendees reason to celebrate: The production environment is the development environment.
Or, as I’ve heard said aloud on more than one occasion, “Your users are now your testers.”
At one level, it’s a very scary thought. My editor on this publication, Noreen Seebacher, has written about the risks taken by early adopters — especially of consumer products — and the growing attitude among their manufacturers that disappointed customers in the early going represent a manageable minority.
This week, I published an article on the problems I encountered with the one-click setup routine for Microsoft Office 2016. A few of the comments I received in response projected the notion that I’m certainly in the minority of users here, and the problems I uncovered would never rise to the level of an epidemic.
We hear from vendors every day — especially Microsoft — the phrase, “We listen to our customers.” On the day Windows 8 was first revealed to the press at an event in Anaheim, I’d heard that phrase recited as though customers had actually built the operating system themselves.
Since then, I’ve said on numerous occasions that, if Microsoft actually had listened to its customers — had it paid serious attention to what everyday, rational, sensible people would say about the Start Screen — Windows 8 would never have happened.
So there is something to be said about a company that actually does listen to its customers.
By extension, you could conclude that an organization that builds into its software delivery processes the capability to detect, mitigate and eliminate bad experiences may truly be taking that mantra seriously.
If a company is willing to outsource QC and/or QA to an outside firm in the first place, why not outsource that task to the people it already says it trusts most: its customers? Where’s the harm in that?
Based on that logic, maybe the Kubernetes/Mesosphere orchestration ethos is a truly good idea. If a software or service provider is to take customer feedback seriously, then hard-wiring it into their orchestration should not be a bad thing.
Yet there is a danger here that cannot be overlooked.
Judging from the questions prospective customers raised after the first wave of consumer sentiment analytics services was unveiled, companies were clearly interested in determining whether the negative feedback they received every day from their customers was tolerably low.
You can foresee a data center applying that same rationale to software quality — for instance, setting up their orchestration system so that new and untested components are fielded by a tolerable percentage of the customer base. By “tolerable,” I mean a minority small enough (and, quite possibly, undesirable enough anyway) that if they were all pissed off by the poor quality of the product, it wouldn’t make that much of a dent in the bottom line.
Imagine if Microsoft’s CI/CD system were already in place four years ago. It could have tried the Windows 8 Start Screen on an otherwise unprepared five percent of its customer base. About half would have complained, representing less than three percent of the total base.
Three percent’s reasonable enough, isn’t it? Better than frustrating half the entire base, which is close to what actually happened.
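The arithmetic in that hypothetical scenario is worth making explicit, if only to show how small the number looks on paper:

```python
# Checking the arithmetic of the hypothetical Windows 8 scenario above:
# expose the new Start Screen to 5 percent of the customer base, and
# suppose half of those customers complain.

exposed = 0.05        # fraction of customers who receive the untested feature
complain_rate = 0.5   # fraction of those exposed who are unhappy with it
unhappy_total = exposed * complain_rate
print(unhappy_total)  # 0.025 -- i.e., 2.5%, "less than three percent"
```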
When we talk about, write about, produce conferences about, “making the customer happy,” we speak about “the customer” in the singular. When we study “the customer” with analytics tools, we draw conclusions about “the customer’s” preferences based on what the majority of the feedback indicates.
It’s not that we ignore the minority, but at best, perhaps we console them for being in the losers’ bracket.
We often conclude that it’s impossible to make everyone happy, but we do our best. We calculate that, so long as we’ve made reasonable efforts to achieve positive responses, we’re living up to our credo.
But when we translate that credo into software, and put a percentage figure on the amount of bad news we’re willing to tolerate, we start to sound like George C. Scott’s character in “Dr. Strangelove.”
Specifically, we begin actively deciding — whether through demographics, behavioral trends, semantic analysis, geography, or frequency of appearances on Twitter — who among our customers don’t matter.
“A culture of quality must permeate an entire organization,” stated the American Society for Quality.
The fact that we don’t talk much here about the proper testing of software, and they don’t talk much there about lines of communication with customers, is just another indicator that not much at all — sometimes, not even the air conditioning — permeates an entire organization.
There is a burgeoning science in determining how much negative feedback an organization can safely ignore — how much bad CX or DX can be tossed aside as a small minority. We’ll know this science of quality tolerance has reached critical mass when it, too, is awarded a two-letter abbreviation (QT).
Title image by Pablo GarciaSaldaña.