To keep up with the speed of business, IT departments need to stop pinching pennies and stop treating automation and outsourcing as nothing more than ways to make things cheaper and easier.
What many don’t realize is that these practices are stressing organizational infrastructure and making operations mediocre at best.
These archaic tactics have been commonplace for more than two decades, and they have done little more than keep your organization's head above water. Now is the time to change course and chart a path toward genuine success.
How Did We Get Here?
To be frank, this issue of mediocrity has been self-inflicted. The IT operations space has tried to pare costs by labor arbitraging its way out of the problem for years.
This was done — to a great extent — in parallel to the business process outsourcing (BPO) boom.
For many years, the BPO story was simple: “It costs us $500 and three weeks to do something here, but we can do it for $50 and in three days with our BPO partner.” The BPO firm would sign up for some quality assurance metrics and they were off and running.
The IT operations side didn't follow that path quite as far. Instead, it focused on finding cheaper labor somewhere else and then applied MttX metrics (mean time to notify, repair, resolve, and so on) as a form of quality assurance that works at scale and is easy to report.
As always, "management by objective" worked, and those MttX metrics were met. What didn't happen was actually resolving tickets and problems and making the back-end systems more functional; instead, outsourcing was used to apply band-aids to bullet holes.
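To make concrete how MttX metrics can look healthy while the underlying problem persists, here is a minimal sketch. The ticket signatures, timestamps, and SLA figure are all hypothetical, chosen only to illustrate the pattern:

```python
from datetime import datetime, timedelta
from collections import Counter

# Hypothetical ticket records: (issue signature, opened, resolved).
tickets = [
    ("db-conn-pool-exhausted", datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 2, 9, 40)),
    ("db-conn-pool-exhausted", datetime(2023, 2, 6, 14, 0), datetime(2023, 2, 6, 14, 45)),
    ("db-conn-pool-exhausted", datetime(2023, 3, 13, 8, 30), datetime(2023, 3, 13, 9, 5)),
    ("disk-full-var-log", datetime(2023, 2, 20, 11, 0), datetime(2023, 2, 20, 11, 50)),
]

# Mean time to resolve (MttR): comfortably under a one-hour SLA,
# so the scorecard looks great...
mttr = sum((resolved - opened for _, opened, resolved in tickets), timedelta()) / len(tickets)
print("MttR:", mttr)

# ...but grouping by signature shows the same problem returning.
recurrences = Counter(sig for sig, _, _ in tickets)
for sig, count in recurrences.most_common():
    print(sig, count)
```

The metric rewards closing each ticket quickly; nothing in it rewards making the third `db-conn-pool-exhausted` ticket the last one.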
The Curse of Automation
The problem didn’t stop with outsourcing. The next wave that washed upon IT operations was the mantra of “when in doubt, automate!”
Before the current advances in machine learning and the like, the use of automation was (simply put) an effort to “script” a response to each issue.
This effort resulted in armies of programmers trying to create an automated response to every known problem. The process, of course, misses "unknown errors" and, more importantly, doesn't eliminate the issues; it only hides them for the time being.
The flaw in this logic is that these tickets will show up time and time again.
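The "script a response to each known issue" pattern might be sketched like this. The alert names and commands are hypothetical; the point is that the canned remediation restarts a service without touching the root cause, and anything outside the runbook falls through:

```python
# Canned remediations keyed by known alert signature (illustrative only).
RUNBOOK = {
    "broadcast-encoder-down": "systemctl restart encoder.service",
    "cache-node-unresponsive": "systemctl restart cache.service",
}

def auto_remediate(alert: str) -> str:
    """Return the canned action for a known alert, or escalate to a human."""
    cmd = RUNBOOK.get(alert)
    if cmd is None:
        # The scripts only cover known problems; "unknown errors" fall through.
        return "unknown error: page a human"
    # In production this would execute cmd. The service comes back,
    # but nothing about the underlying fault has changed.
    return f"ran: {cmd}"
```

Every known alert gets a fast, repeatable response, which is exactly why the same alerts keep firing.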
When talking about automation, I often use this analogy. Imagine that you like watching Monday Night Football.
Every football season, on five random Monday nights, the telecast goes out and you can't see or hear the game. Instead of fixing the problem the first time it happens, your cable company automatically sends you an email acknowledging it while automatically rebooting your cable box, and the broadcast is back up and running 45 minutes later, per an agreed-upon SLA.
But it still isn’t fixed — the resolution has just been automated.
Any consumer who gets this kind of service would run, not walk, to a competitor when given the chance.
So why do we tolerate it in IT operations? This is the model that automation has encouraged within the enterprise: no actual solutions and remedies, just automated recovery.
How Did We Stoop So Low?
Both of these so-called "solutions" took hold during a period when little was done to fundamentally improve the underlying software that runs the IT operations backbone.
By “better” I don’t mean incremental code improvements. I mean becoming fundamentally better at delivering actionable information that enables an IT operations team to get better every day at improving business outcomes for the enterprise.
Much of the IT operations software development process is driven off of a combination of product management perspective on the market, as well as feedback from customers — these are both valuable sources of qualitative information.
That's the issue, though: it's qualitative. If ITOM software manufacturers hope to truly enable their customers, they need to source quantitative data that helps them understand, at scale, how real-world deployments behave.
This doesn't mean tens, hundreds, or even thousands of data points, but millions or even billions.
IT operations has a wealth of natively digital data! Why isn't it being used to drive better solutions, instead of relying on coders in an ivory tower who have never actually done your job?
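As one sketch of what putting that quantitative data to work could look like, assume a hypothetical event log of issue signatures with the operator toil each occurrence consumed. Ranking total toil per signature points at where a permanent fix buys back the most time; the same aggregation works whether the log holds seven events or seven billion:

```python
from collections import defaultdict

# Hypothetical event log: (issue signature, minutes of operator toil).
events = [
    ("cert-expired", 90), ("cert-expired", 75), ("cert-expired", 80),
    ("queue-backlog", 120),
    ("cert-expired", 95),
    ("dns-flap", 30), ("dns-flap", 25),
]

# Aggregate toil per signature; the ranking is the fix-first list.
toil = defaultdict(int)
for sig, minutes in events:
    toil[sig] += minutes

for sig, total in sorted(toil.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{sig}: {total} min")
```

This is deliberately simplistic, but it shows the shift in question: from "did we close each ticket within SLA?" to "which recurring problem is costing the business the most?"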
Ditch Meh, Aim for Superiority
Having laid out the challenges that pulled IT operations into its current conundrum, I'd offer a road map forward: stop focusing solely on cost and instead look to the business organizations you're partnering with to determine what outcomes they are looking to deliver.
This will be a challenge as the IT operations teams will need to learn to speak a new language, and it will be outside their comfort zone.
Here's the problem, though: unless this transition is made, the slow growth of shadow IT will pick up pace. I'd challenge you to find an organization that would rather go to someone like Salesforce than work with internal resources that could provide the same solution in price, quality, and availability.
This abundance of digital data must be harvested, and if done so, the reward will be enabling your team to move beyond MttX and deliver quantifiable business outcomes to enterprise organizations.
This journey is going to be more like an Indiana Jones adventure than a simple and sterile process.
You need the data, which must be relevant, and it needs to be managed, manipulated and updated.
The people working with it need the space to create a hypothesis, test that hypothesis, fail a few times, learn and test again.
That takes time and will likely require your most expensive and highly qualified resources, the very people who are already taxed to their limits every day. Take heart, though: this is an investment that *will* pay dividends.