Performance Measurement vs. Analysis


I’ve picked up some new terminology over the course of the past few weeks thanks to an intermediate statistics class I’m taking. Specifically, what inspired this post is the distinction between two types of statistical studies, as defined by one of the fathers of statistical process control, W. Edwards Deming. There’s a Wikipedia entry that actually defines them, and the point of making the distinction, quite well:

  • Enumerative study: A statistical study in which action will be taken on the material in the frame being studied.
  • Analytic study: A statistical study in which action will be taken on the process or cause-system that produced the frame being studied, the aim being to improve practice in the future.

…In other words, an enumerative study is a statistical study in which the focus is on judgment of results, and an analytic study is one in which the focus is on improvement of the process or system which created the results being evaluated and which will continue creating results in the future. A statistical study can be enumerative or analytic, but it cannot be both.

I’ve now been at three different schools in three different states where one of the favorite examples used for processes and process control is a process for producing plastic yogurt cups. I don’t know if Yoplait just pumps an insane amount of funding into academia-based research, or if there is some other reason, but I’ll go ahead and perpetuate it by using the same example here:

  • Enumerative study — imagine that the yogurt cup manufacturer is contractually bound to provide shipments where less than 0.1% of the cups are defective. Imagine, also, that to fully test a cup requires destroying it in the process of the test. Using statistics, the manufacturer can pull a sample from each shipment, test those cups, and, if the sampling is set up properly, be able to predict with reasonable confidence the proportion of defective cups in the entire shipment. If the prediction exceeds 0.1%, then the entire shipment can be scrapped rather than risking a contract breach. The same test would be conducted on each shipment.
  • Analytic study — now, suppose the yogurt cup manufacturer finds that he is scrapping one shipment in five based on the process described in the enumerative study. This isn’t a financially viable way to continue. So, he decides to conduct a study to try to determine what factors in his process are causing cups to come out defective. In this case, he may set up a very different study — isolating as many factors in the process as he can to see if he can identify where the trouble spots in the process itself are and fix them.
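The enumerative side of this example can be sketched in a few lines. This is a minimal illustration, not the manufacturer’s actual procedure: it assumes a simple “zero-acceptance” sampling plan (accept the shipment only if no defectives turn up in the sample) and uses the so-called rule of three to size the sample. The function names and the 0.1% / 95% figures are my own choices for illustration.

```python
import math

def min_sample_size(p_max: float, confidence: float) -> int:
    """Smallest n such that a defect-free sample of n cups gives
    `confidence` that the true defect rate is below p_max."""
    # If the true rate were p_max, the chance of seeing zero defects
    # in n destructive tests is (1 - p_max)**n; we want that chance
    # to drop below 1 - confidence before we are willing to accept.
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_max))

def accept_shipment(defects_in_sample: int, c: int = 0) -> bool:
    """Zero-acceptance plan: scrap the shipment if more than c
    defectives are found in the sample."""
    return defects_in_sample <= c

n = min_sample_size(p_max=0.001, confidence=0.95)
print(n)  # 2995 -- cups that must test defect-free per shipment
```

The striking part is how large the sample has to be when the tolerated defect rate is as low as 0.1%, which is exactly why scrapping one shipment in five pushes the manufacturer toward the analytic study.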

It’s not an either/or scenario. Even if an analytic study (or series of studies) enables him to improve the process, he will likely still need to continue the enumerative studies to identify bad batches when they do occur.

In the class, we have talked about how, in marketing, we are much more often faced with analytic situations than enumerative ones. I don’t think this is the case. As I’ve mulled it over, it seems like enumerative studies are typically about performance measurement, while analytic studies are about diagnostics and continuous improvement. See if the following table makes sense:

Enumerative                  | Analytic
Performance measurement      | Analysis for continuous improvement
How did we do in the past?   | How can we do better in the future?
Report                       | Analysis

Achievement tests administered to schoolchildren are more enumerative than analytic — they are not geared towards determining which teaching techniques work better or worse, or even to provide the student with information about what to focus on and how going forward. They are merely an assessment of the student’s knowledge. In aggregate, they can be used as an assessment of a teacher’s effectiveness, or a school’s, or a school district’s, or even a state’s.

“But…wait!” you cry! “If an achievement test can be used to identify which teachers are performing better than others, then your so-called ‘process’ can be improved by simply getting rid of the lowest performing teachers, and that’s inherently an analytic outcome!” Maybe so…but I don’t think so. It simply assumes that each teacher is either good, bad, or somewhere in between. Achievement tests do nothing to indicate why a bad teacher is a bad teacher and a good teacher is a good teacher. Now, if the results of the achievement tests are used to identify a sample of good and bad teachers, and then they are observed and studied, then we’re back to an analytic scenario.

Let’s look at a marketing campaign. All too often, we throw out that we want to “measure the results of the campaign.” My claim is that there are two very distinct purposes for doing so…and both the measurement methods and the type of action to be taken are very different:

  • Enumerative/performance measurement — Did the campaign perform as it was planned? Did we achieve the results we expected? Did the people who planned and executed the campaign deliver on what was expected of them?
  • Analytic/analysis — What aspects of the campaign were the most/least effective? What learnings can we take forward to the next campaign so that we will achieve better results the next time?
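The two purposes above imply two different computations over the same campaign data. Here is a minimal sketch with hypothetical numbers: the enumerative view collapses everything into one figure to be judged against the plan, while the analytic view breaks the same data down by a process factor to guide the next campaign. The field names and data are invented for illustration.

```python
def overall_rate(results):
    """Enumerative view: one number judging the campaign as a whole."""
    sent = sum(r["sent"] for r in results)
    conv = sum(r["converted"] for r in results)
    return conv / sent

def rate_by_factor(results, factor):
    """Analytic view: break the rate down by a process factor
    (channel, creative, send time, ...) to see what to change."""
    groups = {}
    for r in results:
        key = r[factor]
        sent, conv = groups.get(key, (0, 0))
        groups[key] = (sent + r["sent"], conv + r["converted"])
    return {k: c / s for k, (s, c) in groups.items()}

# Hypothetical campaign results
data = [
    {"channel": "email",  "sent": 5000, "converted": 100},
    {"channel": "social", "sent": 5000, "converted": 250},
]
print(overall_rate(data))               # 0.035 -- did we hit the plan?
print(rate_by_factor(data, "channel"))  # {'email': 0.02, 'social': 0.05}
```

The overall 3.5% may pass or fail the plan (enumerative judgment), while the 2% vs. 5% split is what actually tells you where to spend next time (analytic improvement).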

In practice, you will want to do both. And, you will have to do both at the same time. I would argue that you need to think about the two different types and purposes as separate animals, though, rather than expecting to “measure the results” and muddle them together.

  1. OK Tim, throw a kink in the theory. How does your achievement test define a bad teacher in the first place? I had numerous instructors in Calc and Stat classes who may have been good teachers and could probably do well on the achievement test. Problem is, I could not understand a word they said because they were not of American descent. Wouldn’t lack of communication classify them as bad? But that is not a testable quality. Or maybe it just made me a bad student, which is highly likely at that point in time.

  2. I meant they’d test *you* on the achievement test, and how good of a teacher they are would be assessed on your performance. But…not just your performance — the performance of all of their students. The assumption (which is highly problematic in its own right, but I’m pretty sure my stats professor isn’t going to ever read this post, so I’ll run with it) is that calculus teachers all get a roughly similar distribution of talent across their student pool. An inherently low-performing student should still perform better with a great teacher than with a lousy teacher.

    Actually, The New Yorker had a great article by Malcolm Gladwell a few months ago. A salient quote from the article: “Eric Hanushek, an economist at Stanford, estimates that the students of a very bad teacher will learn, on average, half a year’s worth of material in one school year. The students in the class of a very good teacher will learn a year and a half’s worth of material.”

    But…this post wasn’t really meant to be about my take on the woes of the American public education system!
