Stacey Barr
Performance Measure Specialist, Stacey Barr Pty Ltd

Why Measures Don’t Have to be Precise

Posted over 7 years ago

With the help of Doug Hubbard, author of “How to Measure Anything”, we’ll explore why you can abandon the pursuit of perfectly accurate data and measures.

STACEY: Doug, for years now you’ve been coaxing people in the IT field to change their views from “IT is too intangible to measure” to “everything is measurable.” And I can attest that it’s not just in IT that this shift needs to happen. Why has this transformation been hard for them to make?

DOUG: IT often sees measurement as a choice between perfect 100% certain precision and nothing. Since they see perfect certainty as unachievable, they opt for no measurement at all. They’ve overlooked the usefulness of a third option: the “good enough” measurement.

STACEY: What have you found are the reasons this third option – and in my view often the only option – has been overlooked?

DOUG: Well, measurement doesn’t mean what they think it means. When I ask CIOs and other IT managers at my seminars what measurement means, I often get an answer like “assigning a specific value”. This is wrong in at least two ways.

First, in science values aren’t simply “assigned”; they are based on observation. Second, a measurement is a reduction in uncertainty, almost never the elimination of uncertainty. In effect, science uses the term measurement to mean “observations that reduce uncertainty about a quantity”.

IT, on the other hand, thinks of measurement more like accountants think of it: absolutely precise, but often arbitrary and not always based on an observation.
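
To make “reduction in uncertainty” concrete, here’s a minimal Python sketch. The quantity being measured and the noise level are invented purely for illustration; the point is that the confidence interval around the estimate narrows, but never closes, as observations accumulate:

```python
import math
import random

random.seed(42)

# Pretend the quantity we care about (say, average hours of system
# downtime per month) has a true but unknown value.
TRUE_VALUE = 12.0
NOISE = 4.0  # spread of individual observations

def observe(n):
    """Take n noisy observations of the unknown quantity."""
    return [random.gauss(TRUE_VALUE, NOISE) for _ in range(n)]

for n in (5, 20, 100):
    sample = observe(n)
    mean = sum(sample) / n
    # Sample standard deviation
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    # Rough 90% confidence interval for the mean (z ~ 1.645)
    half_width = 1.645 * sd / math.sqrt(n)
    print(f"n={n:3d}: estimate {mean:5.1f} +/- {half_width:.1f}")
```

Each extra batch of observations shrinks the interval, which is exactly the scientist’s sense of “measurement”; no sample size ever drives it to zero, which is the accountant’s sense of it.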

STACEY: So that’s one thing you can do – reframe measurement as a process that helps to reduce uncertainty, not eliminate it. Are there any more hurdles to jump over before they get comfortable enough to start measuring imperfectly?

DOUG: Yes, and that’s accepting the idea that the presence of noise does not mean a lack of signal. Many managers, IT and otherwise, start anticipating potential errors in any measurement and assume that the existence of any kind of error undermines the value of the measurement.

The fact is that they are making assumptions about how common these errors are, and about the effect they would have on outcomes, without any idea of the relative frequencies of these problems. To put it another way, there is more error in their “error identification” method than the measurement itself is likely to have.

STACEY: Ha! I like that! It would be a really interesting experiment to find out how big the error in the measurement could get before it drowned out signals in the measure. Wouldn’t that help convince the sceptics?

DOUG: In my consulting practice I compute something the decision sciences call “Expected Value of Perfect Information” (EVPI) and “Expected Value of Sample (Partial) Information” (EVSI). Of course, the cost of perfect information would almost always be greater than the value of perfect information. In fact, it is almost always the case that the biggest “bang for the buck” comes from the initial, small amount of uncertainty reduction in a measurement.
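
For readers who haven’t met EVPI before, here’s a toy calculation in Python. The probabilities and payoffs are invented purely to illustrate the arithmetic; a real EVPI calculation would be built on your own decision model:

```python
# Toy EVPI calculation: choose between two actions whose payoff
# depends on an uncertain state of the world (high or low demand).
P_HIGH = 0.6          # current belief that demand will be high
payoffs = {
    # action: (payoff if demand high, payoff if demand low)
    "build_big":   (500, -100),
    "build_small": (150,   80),
}

def ev(action):
    """Expected value of an action under current uncertainty."""
    high, low = payoffs[action]
    return P_HIGH * high + (1 - P_HIGH) * low

best_now = max(ev(a) for a in payoffs)  # best we can do today

# With perfect information we'd learn the state first,
# then pick the best action for that state.
best_if_high = max(h for h, _ in payoffs.values())
best_if_low = max(l for _, l in payoffs.values())
ev_perfect = P_HIGH * best_if_high + (1 - P_HIGH) * best_if_low

evpi = ev_perfect - best_now
print(f"Best expected value now: {best_now:.0f}")
print(f"Expected value with perfect information: {ev_perfect:.0f}")
print(f"EVPI (the most information could be worth): {evpi:.0f}")
```

In this made-up example the best unaided decision is worth 260 and perfect foresight is worth 332, so no amount of measurement could ever be worth more than 72 – which is why a cheap, partial reduction in uncertainty so often beats an expensive pursuit of certainty.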

STACEY: Doug, what’s the take-home point you’d like to leave readers with, regarding the imperfect nature of measurement?

DOUG: These three concepts should help IT – and anyone hung up on precision – start to make usefully imperfect measurements. Of course, they require thinking about measurement more like a statistician, scientist, or actuary would and less like an accountant normally would. You should always choose imperfect but relevant measurements over arbitrarily precise and irrelevant measurements.

STACEY: Measures just need to have enough precision to give you the signals you need to make meaningful improvements to the business.

Get the full interview here: www.measureupblog.com/doug-hubbard-interview

TAKE ACTION: How many good measures have you ignored or abandoned because the data wasn’t perfect, or because you only suspected it might not be good enough? Even when the data is a bit misleading, the overall trend through time can still be very informative and valid. It might be worth asking yourself: is knowing something about how this measure is trending better than knowing nothing at all?
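
As a small illustration of that last point, here’s a sketch (with invented monthly figures) that fits a simple least-squares trend line to noisy data; the estimated slope still recovers the direction of change even though the individual points wobble badly:

```python
import random

random.seed(7)

# Invented monthly measure: a genuine upward drift of +2 per month,
# buried under noise several times larger than the drift itself.
months = list(range(24))
values = [100 + 2 * m + random.gauss(0, 10) for m in months]

# Ordinary least-squares slope: cov(x, y) / var(x)
n = len(months)
mx = sum(months) / n
my = sum(values) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, values)) / \
        sum((x - mx) ** 2 for x in months)

print(f"Estimated trend: {slope:+.1f} per month (true drift is +2.0)")
```

No single month’s figure is trustworthy on its own, yet the trend across two years of imperfect data still tells you whether the business is improving.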

Comments (6)

I agree with this point of view; it reminds me of Christian Morel’s book on absurd decisions.

Morel, following Nobel laureate Herbert Simon, distances himself from “substantive rationality”, which holds that indeterminacy can be eradicated with the help of science and analytical, deductive knowledge. “Procedural rationality”, by contrast, uses simplified methods to reach good-enough solutions.

One of the main problems that IT Operations face is a paradox: how to start managing by performance with hundreds of indicators available and little experience in measuring efficiency. Many dashboards have been built that deliver perfect information but low expected value. The solution might be to use good-enough, simplified KPIs!

Posted over 7 years ago | permalink
Stacey Barr
Performance Measure Specialist

I doubt we can eradicate anything with science and analytical, deductive knowledge. We keep forgetting that humans aren’t machines. The beliefs, attitudes and feelings we have can’t be put on ice. So yes, I agree that IT needs to get more in touch with the purpose of its dashboards, and not worry so much about their technical perfection.

Posted over 7 years ago | permalink
Erik Hoffmann
VP Products at Mirror42

I do agree that IT needs to look at outcomes and value, and to use simplified methods. Unfortunately, the nature of most people in IT runs contrary to that belief. IT people have strong analytical skills, and analytical people focus on the right way to get to an outcome, presuming that if the way to an outcome is right, the outcome is also right and therefore valuable – and, vice versa, that if the way is flawed, the outcome is also flawed and not valuable. To most analytical people, simplified methods do not match their (complicated) view of the world; I know because I have to fight this myself. It is “over-analysis” that prohibits a pragmatic approach.

Another issue in IT management itself is that there is an abundance of measurements. It is basically a characteristic of an IT system to have measurements. So how do you choose the right signals in all that noise?

Posted over 7 years ago | permalink
Stacey Barr
Performance Measure Specialist

Erik, I wonder if that’s the wrong question? We can’t choose the right signals in all that noise until everyone has the same understanding of which signals are important and why. The culture that IT has – of over-analysis and a quest for perfection and complexity – has to shift before anything else can truly change. Play the anthropologist (of sorts) and help them enter the world of the decision-makers, to understand the risks of waiting for perfect analysis versus making fast, well-informed (not perfectly or even excellently informed) decisions. I always found that the IT people and the managers/decision-makers really did need to get to know each other better; I was the meat in this sandwich when I worked as the Measurement Consultant at Queensland Rail. This is a good example of the “softer” side of performance measurement, and we need to be deliberate in how we deal with it. There is no real technical solution: it’s about people and expanding their worldviews.

Posted over 7 years ago | permalink
Erik Hoffmann
VP Products at Mirror42

Stacey, yes, it is all about having the same understanding. That is why I think IT should work top-down, instead of bottom-up from whatever measurements happen to be available now (and building a dashboard for those). To answer my own question: you don’t so much choose the signals from the noise as determine the signals first. But already having a lot of measurements makes it difficult to take a step back and work from a clean sheet.

Posted over 7 years ago | permalink
radhika
polaris financial tech services ltd

We are an IT services company, and performance measurement has a two-level impact for us: one, for managing our projects, and two, for demonstrating value to our customers. So we have built a three-tier measurement framework that aims to satisfy the different consumers of the metrics: Progress Metrics aimed at the project management layer; Performance or Trend Metrics aimed at middle management and program sponsors; and Continuous Improvement / Business Outcome metrics aimed at senior management. Ultimately the base measures are a handful; it’s the point of analysis and the trending that drive the different meanings, hence this model. We face the same challenges of correctness and completeness, as there are various combinations of projects and life cycles in use in a services organisation, and this is an interesting and reassuring topic for us to follow…

Posted over 7 years ago | permalink