
When the excitement of a big project is over, intranet teams often fail to make a successful transition to business as usual. Change management after a big bang launch becomes patchy, lacking meaning and purpose. People get bored and things fall apart. Creating effective improvement cycles can address these issues.

Improvement cycles are methodical plans that help you: 


  1. Track how you are moving towards your goal with quantitative or qualitative data.
  2. Review the data consistently and objectively.
  3. Decide what to do about it.
  4. Intervene in a way that will improve performance.

It is most definitely not rocket science, but it's striking how many teams only nibble at the potential of this technique, or dabble with it instinctively. At Spark Trajectory, we believe this is mostly down to a misunderstanding of the role of measurement.

Digital Workplace Measurement Is Hard

Measurement is seen as the very essence of professionalism. Managers have been taught to value it above all else, and the “If you can’t measure it, you can’t manage it” mindset persists.

While this might be true in fields where output is clearly linked to input, in our world — internal communications, knowledge management, findability and collaboration — it is rare to find metrics that so cleanly track the benefits of our work. Practitioners are under pressure to look for data that easily describes their activities. The place they look is the reports in their web analytics package or the admin console of their collaboration platform. At this point they either fail to find anything useful and stop, considering the problem too hard, or unwisely push onward, creating a fiction they believe can prove their worth, mistaking the content of those analytics reports for what is truly important. For example:

  • Confusing page views for people understanding the information they read and changing their behavior.
  • Confusing likes and comments for employee engagement.
  • Confusing adoption of collaboration tools for improved collaboration.

Once this confusion is established, teams pursue odd tactics to increase these metrics in the absence of any true benefit, because bigger numbers must mean better numbers. They'll chase page views with aggressive internal marketing, try to boost online discussion with internal clickbait, or push collaboration tools whether or not teams would truly benefit.

However, the fundamental confusion is to mistake measurement for management: a trap laid for the lazy manager. A wise manager doesn’t really want the numbers; they want the objective control that measurement can bring, applied to their complex world.

Related Article: 7 Ways to Measure Workplace Collaboration and Productivity Tool Efficacy

Creating an Improvement Cycle

Improvement cycles can help. We have created a process to help teams build their plan. This can get into quite conceptual territory, so let's break it down and take it slow. The purpose of the activity is to create a plan that drives improvement in the real world. It isn’t a PhD thesis, and no one is going to criticize the scientific validity of what you are doing. You are trying to create a management tool that will allow you to change what you do, and how you do it, in response to data over time.


Related Article: Start Digital Workplace Change Management on Day One

Understanding the Purpose Behind the Activity

After you identify an activity that would benefit from an improvement cycle, the first step is knowing what you really want. This is where most people get stuck straight away. You are publishing news stories and want people to comment on them, but why? You want people to adopt a social collaboration tool, but why? We need to identify the beneficial outcome being sought so we can track progress towards that goal. This should be in your strategy, but it is only occasionally documented.

Everything you do should contribute to some “good thing”:

Action > Good thing

And in turn we hope that the good thing will contribute to something bigger:

Action > Good thing > Bigger better thing

So let’s assume you are an internal communicator publishing news stories:

Publish news stories > Informed employees > Better decision making > Increased profit

The question is how far up this chain you can a) measure or track progress, and b) claim success or (importantly) take responsibility for failure. Here’s another one:

Provide project collaboration tools > More effective projects > Reduced product time to market

As a digital workplace manager, would you dare claim to have single-handedly reduced time to market? Clearly this would be unachievable and you would be ridiculed. There are thousands of other factors involved, and when viewed from the other direction it is a web of benefit, not a simple chain. Here’s another example:

Provide remote working tools > Increased employee satisfaction > Reduced employee churn

Employee churn is not hard to measure. It is a simple spreadsheet calculation that someone in HR already has in a report. The adoption of various remote working tools should not be hard to measure either: we should be able to get sessions and users from the VPN system or the mobile device management tool. The hard thing is linking the action to the intended benefit. Can we convincingly demonstrate that the complex benefits of working from home make people less likely to quit their role and move elsewhere?
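To show just how simple the churn side of this is, here is one common way to compute it: leavers in the period divided by average headcount. This is a generic sketch — the function name and the figures are illustrative, not drawn from any real HR report:

```python
# Hypothetical sketch: a common employee churn calculation.
def churn_rate(leavers: int, headcount_start: int, headcount_end: int) -> float:
    """Leavers in the period divided by average headcount, as a percentage."""
    average_headcount = (headcount_start + headcount_end) / 2
    return 100 * leavers / average_headcount

# e.g. 45 leavers in a year, against a workforce that went from 520 to 480
rate = churn_rate(45, 520, 480)
print(f"{rate:.1f}%")  # prints: 9.0%
```

The calculation itself is trivial, which is exactly the point: the measurement is easy, and the hard part is the attribution question that follows in the text.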

But is it important? We need to be clear about our aim in taking measurements here. Is your activity strongly linked with the benefit you are trying to achieve, or (more likely) are you one small part of a system pushing in a positive direction? It’s not possible to tell which drop of honey came from which bee, nor is it important.

Intranet and digital workplace practitioners get stuck here, when they and their managers can’t prove causality. But we don’t need it to create an improvement cycle that can manage our process. We just need to be clear about what we are measuring.

Related Article: Your Digital Workplace Is a Wicked Problem That Can Be Solved

Clarifying Intent

After we have analysed the benefits we are seeking, we can clarify our intent:

  • If our activity has a strong causal effect on the outcome, we can measure the outcome directly.
  • If our activity has a weak but positive effect on the outcome, we will measure the effectiveness of the activity itself.

So for example:

  • We create a communications plan to ensure that all managers and employees have signed off on their performance plans in the performance management system by December 1. This has a strong causal effect on the outcome so we will measure the outcome directly and track the percentage of completed performance reviews. However …
  • We create a communications plan to promote trust in the CEO and leadership team after a rocky few months in the market. There is no way to really measure trust but we can ask some questions in the overall climate survey. We can’t claim full responsibility for it (for success or failure) on the basis of some communications activity. Therefore we will assume as communications professionals that this will contribute positively, and we measure how well the communications are read (page views) and reactions to it (such as likes and comments).

Finding Measures and Indicators

After we have identified the purpose of our improvement cycle and clarified what effects it will claim, we are ready to think about the focus of the activity. In the next post we’ll talk about what we can measure and crucially about tracking things that might be considered non-measurable by using non-quantifiable indicators. Again, improvement cycles are about taking action and changing things to try to improve them.

Editor's Note: This is the second in a three-part series on creating intranet improvement cycles. Read the first article which discusses the dangers of ignoring your intranet post-launch. The third and final installment will examine how to track improvements in areas which aren't readily quantifiable.