It's simply no longer acceptable to toss a random quiz at the end of a course and claim an arbitrary "passing score" is evidence of learning. Nor can we afford to hope that our unvalidated learning efforts will correlate with business impact (which is the equivalent of a chef determining if her recipes are good by tallying the sales total at the end of the night).
But L&D does not have a strong history of measuring the space between "did they complete training?" and "did training have a measurable impact on the business?" It's that space that most directly represents our mission and purpose as an industry. It's the space that answers the question, "Did they learn?"
Did they learn?
Older models for learning measurement all advocated methods for measuring "did they learn?", but those methods weren't detailed or well defined. Over the years, shortcuts and bad habits have made it hard for L&D professionals to truly know how to measure this. Or when. Or what the test should tell them. Or why. (Fortunately, newer models have helped to clarify many of these questions.)
Recently, we've spent a great deal of time speaking with L&D departments about this topic, so we decided to share one of these conversations via YouTube.
If your organization is having difficulty understanding how to implement a measurement or testing strategy, let us know. We'll be happy to discuss options with you!
A.D. Detrick is a strategy and measurement consultant, human capital analytics expert, project manager, instructional designer, and trainer. He's also a self-confessed comic book geek and a believer in using humor and humanity to teach complex concepts.
Request a Consultation
Contact us to discuss how MetriVerse can wield the power of data science to help your organization.