
Using Measurement to Promote Learning Effectiveness

What’s the relationship between measurement and learning effectiveness? When most L&D practitioners think about this question, they think about outcome measures. Examples include:

  • Course enrollment and completion – although, as we all recognize, this metric says nothing about learning effectiveness, only that the training was completed (or not).

  • Level 1 (smile sheets) – although research demonstrates that traditional Level 1 evaluation scores do not correlate with learning effectiveness. (There are better ways to create and report on Level 1 evaluations that do correlate with learning effectiveness.)

  • Measurement to diagnose individual learning gaps and provide targeted remediation – the remediation can be rules-based or driven by machine learning techniques. This is a promising field, but evidence on the effectiveness of machine-based personalization is mixed at this time.

  • Exam scores – with caveats (discussed below), an objective measure of learning effectiveness, either in the form of post-tests alone or pre/post-tests that measure gain scores.
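The gain-score arithmetic in the last bullet is simple subtraction; the sketch below also computes a normalized gain (raw gain as a fraction of the maximum possible improvement), which is a common convention we add for illustration, not something prescribed above:

```python
def gain_scores(pre: float, post: float) -> tuple[float, float]:
    """Raw and normalized gain for exam scores on a 0-100 scale.

    Normalized gain expresses improvement as a fraction of the
    maximum possible improvement from the pre-test score. This is
    one common convention, shown here as an illustrative option.
    """
    raw = post - pre
    normalized = raw / (100 - pre) if pre < 100 else 0.0
    return raw, normalized

raw, norm = gain_scores(pre=60, post=85)
print(raw, round(norm, 2))  # 25 0.62
```

A learner who starts higher has less room to improve, which is exactly the distortion the normalized variant corrects for.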

There is a fifth category that has emerged over the last two decades for which there is convincing research evidence: Measurement, in the form of assessments, improves learning.


Let’s look in a little more depth at categories four and five.


Exam Scores as Measures of Learning Effectiveness

This is generally the gold standard: Can learners prove that they have mastered the material through an exam? While we agree with this, in our experience it comes with two caveats:

  • Caveat 1: Who created the exam? Is the test fair, valid and reliable? There is a science to testing, called psychometrics, and the exam development and analysis must follow well-established processes to be considered valid.

  • Caveat 2: When do you test? Most tests are given immediately post-training. That’s fine, but we know that learners rapidly forget after a learning event. That’s why we also encourage delayed testing – a week, a month, three months later. This not only measures retention; it actually helps the learning process. Which leads us to category five.

Using Measurement (Assessments) to Improve Learning Effectiveness

There is now more than two decades of solid research demonstrating that assessments before, during and after a learning event improve learning effectiveness. In the research literature this is sometimes called the “testing effect” and sometimes “retrieval practice.” When the testing is done in the days, weeks and months after the learning event, it takes advantage of the well-known “spacing effect.” Whatever it is called, research demonstrates that the cognitive effort required to answer questions strengthens the neural connections that encode the material in long-term memory. Let’s look at three ways to use testing to enhance learning:


Priming Exams

It might seem counterintuitive, but testing learners on a subject before the learning event (before they have seen the material) enhances learning during the training. This is called “priming” (it primes the students to learn). Here are two studies that demonstrate the efficacy of priming:

Cumulative Exams

Most module-level exams take the form of:

  Module 1 → exam on Module 1
  Module 2 → exam on Module 2
  Module 3 → exam on Module 3

But cumulative exams accumulate material from all prior modules, like this:

  Module 1 → exam on Module 1
  Module 2 → exam on Modules 1–2
  Module 3 → exam on Modules 1–3
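The contrast can also be sketched in code. This is a hypothetical illustration – the question pools and IDs are invented – in which a module-level exam draws only from the current module, while a cumulative exam draws from every module so far:

```python
# Question pools per module (illustrative IDs).
pools = {
    1: ["m1-q1", "m1-q2", "m1-q3"],
    2: ["m2-q1", "m2-q2", "m2-q3"],
    3: ["m3-q1", "m3-q2", "m3-q3"],
}

def module_exam(module: int) -> list[str]:
    """A module-level exam draws only from the current module."""
    return list(pools[module])

def cumulative_exam(module: int) -> list[str]:
    """A cumulative exam draws from the current and all prior modules."""
    return [q for m in range(1, module + 1) for q in pools[m]]

print(module_exam(3))      # questions from module 3 only
print(cumulative_exam(3))  # questions from modules 1-3
```

In a real exam engine you would sample a subset from the cumulative pool rather than use every question, but the accumulation principle is the same.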
Here’s a study showing the benefit of cumulative exams:

Spaced Testing

What’s more effective for learning: four study sessions or one study session followed by three practice tests? A lot of research has shown the latter. Here’s one such study:
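As a small illustration of spacing in practice, here is a sketch that turns the delayed-testing intervals mentioned earlier (a week, a month, three months) into concrete review dates; the function name and the 30/90-day month approximations are ours:

```python
from datetime import date, timedelta

# Delayed-testing offsets suggested above: a week, a month, three months.
# (30- and 90-day offsets approximate a month and three months.)
OFFSETS = [timedelta(days=7), timedelta(days=30), timedelta(days=90)]

def retrieval_schedule(training_end: date) -> list[date]:
    """Dates for spaced practice tests after a learning event."""
    return [training_end + offset for offset in OFFSETS]

for d in retrieval_schedule(date(2024, 1, 15)):
    print(d.isoformat())
```

Scheduling the tests at the time the course ends, rather than leaving follow-up to chance, is what converts the spacing effect from a research finding into an operational practice.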

Bottom Line: Assessment and measurement are not just about reporting results; they are highly effective in enhancing learning.