
Your Learning Results are Significant but Are They Important?

  • 6 hours ago
  • 2 min read

I studied a lot of statistics in graduate school. I learned how to do multiple regression analysis, factor analysis, chi-square tests, analysis of variance, non-parametric data analysis, and more. I also studied modern psychometric techniques using item response theory to create valid and reliable exams. But when I look at the analytics I produce for my clients, it's usually pretty basic stuff: medians, means, ranges, correlations, histograms, pie charts, and the occasional ROI calculation. Just about everyone is familiar with and can understand these metrics.


But there is one statistic that is very important and not known to everyone: effect size. Most of my clients understand the concept of statistical significance. They are aware that a result might not be meaningful if it is not statistically significant. So, for example, if you are comparing the final exam results of learners who have taken classes using two different teaching methodologies, say eLearning and instructor-led training (ILT), you can't really claim the superiority of one over the other unless the difference in exam results is statistically significant.


But statistical significance isn’t enough. Two results can have a statistically significant difference but not actually be meaningfully different. That’s where effect size comes in. It tells you how different the results are. It’s calculated as the difference between the group means divided by the pooled standard deviation of the groups. So, an effect size of one would mean the superior group outperformed the lower group by one full standard deviation.
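The calculation above can be sketched in a few lines of Python. The exam scores below are hypothetical, purely for illustration:

```python
import math

def cohens_d(group_a, group_b):
    """Effect size (Cohen's d): difference between the group means
    divided by the pooled standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (with Bessel's correction, n - 1)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical final exam scores for two teaching methods
elearning = [82, 85, 78, 90, 88, 76, 84, 91]
ilt = [75, 80, 72, 78, 83, 70, 77, 79]
d = cohens_d(elearning, ilt)
```

With these made-up scores the function returns an effect size of roughly 1.5, meaning the eLearning group outperformed the ILT group by about one and a half pooled standard deviations.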


Illustration of Effect Size of 0.8


In practice, in learning measurement, an effect size of one or more is unusual. As a rough guide:


• 0.2 = Small effect

• 0.5 = Medium effect

• 0.8 = Large effect


So, significance and effect size go hand in hand:

• Statistical significance = “Is the difference real, or just due to chance?”

• Effect size = “How big is the difference in practical terms?”


The combination has real-world implications:


• Statistically significant but tiny effect → likely not worth the investment

• Moderate to large effect → strong evidence of real behavioral impact
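That decision logic can be captured in a small helper. The function name and the 0.5 cutoff are my own illustrative choices, following the rough guide above, not a standard API:

```python
def interpret(p_value, effect_size, alpha=0.05):
    """Combine statistical significance with practical effect size.

    Below 0.5 is treated here as a small effect; 0.5 and up as
    moderate to large. These cutoffs are illustrative only.
    """
    if p_value >= alpha:
        # Fails the significance test: the difference may just be chance
        return "not significant: the difference may just be chance"
    if abs(effect_size) < 0.5:
        # Real but tiny difference
        return "significant but small effect: likely not worth the investment"
    return "significant with a moderate-to-large effect: evidence of real impact"
```

For example, `interpret(0.01, 0.15)` flags a result that passes the significance test but is too small to act on.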


Contributing Thought Leaders

Steven Just

Jim Delaney


