# Statistical Management Control?


I was looking for a quick source to give a client an overview of what statistical process control (SPC) is. I found an article on Wikipedia, but, as is usual with SPC articles, it made a common (and, to my mind, crucial) omission.

Wikipedia is a great place to start looking into a topic (that is, unless you keep clicking for hours like I do and inexplicably end up reading about the Sampshire slender bluetongue skink at 2 a.m.). But as a crowd-sourced reference, it merits some caution.

The article (accessed 2020/07/24) about SPC is pretty good as far as it goes. It accurately describes using SPC in producing a product, and most of the discussion is about manufacturing. That is still a frequent use, so it is important to recognize it.

The problem is that it makes a common omission about the benefits of using SPC: it doesn't explain how SPC can be a critical management tool.

More fundamentally, why do we need SPC at all?

For any continuous data set, there are four important aspects you need to understand to describe what is going on. There are a variety of sample statistics that are used to measure the first three:

• Location – commonly measured by an average or median
• Spread – commonly measured by a range or standard deviation
• Shape – commonly measured by skewness and kurtosis

These three characteristics can be shown graphically on a histogram.

The fourth aspect is how the data changes through time, and that is what is analyzed by an SPC chart.

Why is it important? Imagine you are trying to choose between two players to join your golf team. They both have identical scores:

78 81 83 83 83 84 86 86 87 89

They both have an average of 84 and a standard deviation of 3.16. The shape is the same too, with a skewness of -0.343 and a kurtosis of 0.283, suggesting the scores might follow a Gaussian (normal) distribution. Here is a histogram of both players' scores:

The first three statistics are identical, so are they identical players? Well, let’s take a look at the golf scores through time:

Now, looking at the scores through time, do you have a preference? You probably do.
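As an aside, the summary statistics quoted above can be reproduced in a few lines. The article doesn't state which formulas it used, so this sketch assumes the bias-adjusted sample versions of skewness and kurtosis (the convention spreadsheet SKEW()/KURT() functions follow):

```python
from math import sqrt

scores = [78, 81, 83, 83, 83, 84, 86, 86, 87, 89]
n = len(scores)
mean = sum(scores) / n

# Sample standard deviation (n - 1 in the denominator)
std = sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))

# Central moments (population form), building blocks for the adjusted stats
m2 = sum((x - mean) ** 2 for x in scores) / n
m3 = sum((x - mean) ** 3 for x in scores) / n
m4 = sum((x - mean) ** 4 for x in scores) / n

# Bias-adjusted sample skewness (G1) and excess kurtosis (G2)
g1 = m3 / m2 ** 1.5
G1 = g1 * sqrt(n * (n - 1)) / (n - 2)
g2 = m4 / m2 ** 2 - 3
G2 = ((n + 1) * g2 + 6) * (n - 1) / ((n - 2) * (n - 3))

print(round(mean, 2), round(std, 2), round(G1, 3), round(G2, 3))
# → 84.0 3.16 -0.343 0.283
```

Those adjusted formulas do reproduce the figures quoted above, which suggests the article's numbers came from a spreadsheet-style calculation.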

So, how the process is changing through time can be absolutely critical to understand. What is it that SPC helps us to do that is more than a chart with the data in time order (a run chart)?

SPC uses rules based on inferential statistics to examine data from a process's past performance in order to predict its current and future performance. It is pretty clever, actually, since it doesn't just treat all the data as one aggregate. Instead, SPC techniques use process knowledge to create meaningful subsets of the data, or exploit characteristics of the type of data itself, to propose limits that are less sensitive to short-term perturbations than, say, an average and standard deviation taken over the entire data set.

These techniques result in a band within which you would expect future observations to fall if everything tomorrow is the same as today.

This is immensely helpful in manufacturing, of course, since you are usually trying to make a product, or provide a service, that is consistent today and tomorrow. Here is what an SPC chart looks like for our golf scores:

The top chart is the golf scores themselves, whereas the bottom chart shows the spread, in this case the difference between the current score and the previous score. Green lines are the average and red lines are the expected variability if the golf scores are random through time and follow a normal distribution.
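The limits on a chart like this (an individuals and moving-range, or XmR, chart) are conventionally built from the average moving range using the constants 2.66 and 3.267; that convention is an assumption here, since the article doesn't spell out its formulas. The article also doesn't give the scores in time order, so this sketch reuses the sorted list from earlier as a stand-in series:

```python
def xmr_limits(series):
    """Individuals & moving-range (XmR) chart limits.

    Uses the standard constants 2.66 (individuals chart) and
    3.267 (moving-range upper limit) for moving ranges of size 2.
    """
    mean = sum(series) / len(series)
    mrs = [abs(b - a) for a, b in zip(series, series[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return {
        "center": mean,
        "lcl": mean - 2.66 * mr_bar,
        "ucl": mean + 2.66 * mr_bar,
        "mr_center": mr_bar,
        "mr_ucl": 3.267 * mr_bar,  # the moving-range lower limit is 0
    }

# Stand-in series: the golf scores in the (sorted) order given earlier;
# the real charts would use each player's actual game-by-game order.
limits = xmr_limits([78, 81, 83, 83, 83, 84, 86, 86, 87, 89])
print(limits["center"], round(limits["lcl"], 2), round(limits["ucl"], 2))
```

Because the limits scale with the average moving range, the same ten numbers in a jumpier time order give much wider limits, which is how a player with erratic game-to-game swings ends up with limits as wide as 73 to 95 around the same average of 84.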

These charts make it even more obvious what is going on with our two golfers – the first one has a lot of variability around an average of 84. So, for Player 1’s next game we would expect something between a 73 and 95. Good, to be sure. But looking at Player 2’s data, something has happened to greatly improve their game over time while maintaining low game-to-game variability. For Player 2’s next game, we might project a score of 78 or lower, with little variability compared to Player 1.

So here is how that type of information is useful to a manager.

As a manager, your job is to support all the people who work for you and give them what they need to:

• Keep the process at least as good as it was yesterday, and
• Help them make the process better through time

I had an ex-nun teacher once (a whole other story) who had a saying about a lot of my schoolwork, that it was “good but not delicious.” So, a good manager maintains a process so it performs consistently, but a “delicious” one makes it better through time.

Using SPC on management metrics suggests an approach to fixing your management processes and making them better.

The example below shows data from a process where you manage how long it takes to produce a design. Each point on the top chart is the average for a day while each point on the lower chart is the range (max-min) of the designs produced that day. The green lines are the averages of these two across all the days, the red lines show the expected variation of the average (top) and range (bottom) based on the average within-day range.
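For a chart like this (an averages and ranges, or X̄-R, chart), the limits come from the grand average, the average range, and tabulated constants that depend on the subgroup size. The article's underlying data isn't given, so the daily design times below are invented for illustration, and the subgroup size of five designs per day is an assumption:

```python
# Standard control-chart constants, indexed by subgroup size n
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}
D4 = {2: 3.267, 3: 2.574, 4: 2.282, 5: 2.114}  # D3 is 0 for n <= 6

def xbar_r_limits(subgroups):
    """X-bar and R chart limits from equal-size subgroups."""
    n = len(subgroups[0])                      # subgroup size
    xbars = [sum(g) / n for g in subgroups]    # daily averages (top chart)
    ranges = [max(g) - min(g) for g in subgroups]  # daily ranges (bottom chart)
    grand = sum(xbars) / len(xbars)
    r_bar = sum(ranges) / len(ranges)
    return {
        "x_center": grand,
        "x_lcl": grand - A2[n] * r_bar,
        "x_ucl": grand + A2[n] * r_bar,
        "r_center": r_bar,
        "r_lcl": 0.0,
        "r_ucl": D4[n] * r_bar,
    }

# Hypothetical design times (hours), five designs per day
days = [
    [5.2, 6.1, 5.8, 5.0, 5.6],
    [5.9, 5.3, 5.7, 6.2, 5.1],
    [5.4, 5.8, 5.5, 6.0, 5.3],
]
limits = xbar_r_limits(days)
print(round(limits["x_center"], 2), round(limits["x_lcl"], 2), round(limits["x_ucl"], 2))
```

The key design point is that the limits on the averages chart are derived from the *within-day* range, not from the day-to-day scatter of the averages themselves, which is what makes the chart sensitive to day-to-day shifts.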

What does it tell you?

You can see that, while the daily average time to produce a design goes up and down (the top chart), it stays within the bounds predicted by the data, so it is stable through time. You don't expect to see a daily average of 7 hours or of 4 hours; either would be really unusual. And without changes, you wouldn't expect the overall average to stray much from 5.55 hours.

If this is good enough, you leave the process alone. A stable process is kind of like a spinning top. If you randomly push at it, it tends to go out of control and get worse. For example, if you started yelling at your designers every time the average went above 6 hours (easily expected on this process), you might get unintended results, like designers not reporting actual times, or jobs declared finished on time but with poor quality, just to meet the number.

But what if an average of 5.55 hours was not good enough? Perhaps you needed a mean of 4.6 hours to produce a design for very good business reasons. This process as it stands is simply incapable of that. The process is stable at an average of 5.55 hours. Looking at this chart, if we need to get the average (the top green line) down to 4.6 hours, we are talking about making a systemic change, something that will affect every design every day. It sounds hard (harder than yelling at people anyway), but it is a major clue.

You would approach this problem by trying to find how something you do all the time can be changed to lower that average. You would also know that telling everyone to do their designs like that one day that did get down to 4.6 is fruitless. One day having an average of 4.6 is well within the expected variation, so it is just another day like any other. If you were to investigate it, you would find all sorts of things that happened that day, but you probably wouldn’t find something that is actually different enough to change the average design time.

So, in this case, you would create a team to investigate possible reasons for why the designs take, on average, 5.55 hours and to seek ways of changing the whole process to reduce that average.

Or consider another scenario: let's say the same process looked like this:

Here we have events (red dots above and below the red lines) that are clearly different from the rest of the data. This process is subject to strange spikes every once in a while. In this case it is worthwhile to investigate those points as something unusual to the process. It should be easy to identify why these three days were so different from the others, since something obviously unusual happened. As you learn the reasons, you can prevent them from ever happening again.
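Spotting points like these programmatically is the most basic SPC detection rule: flag any point beyond a control limit. A minimal sketch, with invented daily averages and limits:

```python
def beyond_limits(series, lcl, ucl):
    """Return (index, value) for each point outside the control limits,
    the most basic out-of-control test (Western Electric rule 1)."""
    return [(i, x) for i, x in enumerate(series) if x < lcl or x > ucl]

# Hypothetical daily average design times, with limits from a stable period
daily_avg = [5.5, 5.6, 5.4, 7.2, 5.5, 5.3, 5.7, 4.1, 5.6]
signals = beyond_limits(daily_avg, lcl=4.6, ucl=6.5)
print(signals)  # → [(3, 7.2), (7, 4.1)]
```

Fuller rule sets (runs above the center line, trends, and so on) exist, but for investigating obvious spikes this one test is usually where you start.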

Or, let’s say you are managing a team who are working to improve market share. How do you know if recent changes in market share were just random noise or the result of improvement activities that you supported?

Here you can see very strong evidence that your effort in supporting workers in never-ending improvement is paying off. In the early days your market share was averaging around 5.2%, then increased to around 5.7% and just recently shifted up to about 6.5%. We can easily see the effect that these process changes had, so we can easily demonstrate the effectiveness of the team working to improve it (as well as our own genius in supporting them!). We can make it easier to see by recalculating limits each time we made a process change:

This shows that the market share increased with each improvement implemented by the team – and these changes are real, not just due to chance.
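Recalculating limits after each known process change just means splitting the series at the change dates and computing limits within each phase. The sketch below uses invented monthly market-share figures; only the approximate phase levels (about 5.2%, 5.7%, and 6.5%) come from the narrative above:

```python
def xmr_limits(series):
    """Individuals-chart limits from the average moving range (constant 2.66)."""
    mean = sum(series) / len(series)
    mrs = [abs(b - a) for a, b in zip(series, series[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Invented monthly market share (%), with process changes after months 6 and 12
share = [5.1, 5.3, 5.2, 5.4, 5.1, 5.2,   # phase 1
         5.6, 5.8, 5.7, 5.6, 5.8, 5.7,   # phase 2, after the first change
         6.4, 6.6, 6.5, 6.5]             # phase 3, after the second change
change_points = [0, 6, 12, len(share)]

# Slice the series at the change dates and compute limits per phase
phases = [share[a:b] for a, b in zip(change_points, change_points[1:])]
phase_limits = [xmr_limits(p) for p in phases]
for i, (lcl, center, ucl) in enumerate(phase_limits, start=1):
    print(f"phase {i}: center={center:.2f}, limits=({lcl:.2f}, {ucl:.2f})")
```

Note that the change points here come from knowing when the team implemented each improvement; recalculating limits at arbitrary dates would just chop up ordinary noise.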

In summary, using SPC on management metrics tells you if the processes you are managing are stable, it gives you clues to make them better, and shows evidence of process improvements. It makes the job of data-driven decision-making much easier, which every manager should appreciate.