The purpose of training is to provide learners with the necessary skills to do their jobs. We assess our learners so we can validate that a passing score reflects learning that occurred as a result of training - not prior knowledge, guessing, or poor question construction.
To achieve that validation, we need to construct effective assessments and test questions. Writing test questions that are optimized for statistical analysis is more complex than most instructional designers realize. It requires an understanding of how people learn, and of how to write questions rigorous enough to accurately test the learning objective being trained.
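To make "optimized for statistical analysis" a little more concrete, here is a minimal sketch in Python. The function name, thresholds, and data are illustrative assumptions on our part - not part of the toolkit itself. It computes two classic item statistics: difficulty (the proportion of learners answering an item correctly) and discrimination (the point-biserial correlation between an item and the total score on the remaining items).

```python
# Minimal item-analysis sketch (illustrative assumptions, not the toolkit's method).
# responses: matrix of scored answers, 1 = correct, 0 = incorrect.
import numpy as np

def item_analysis(responses: np.ndarray) -> list[dict]:
    """responses has shape (learners, items), values 0 or 1."""
    stats = []
    for i in range(responses.shape[1]):
        item = responses[:, i]
        # Total score on all items EXCEPT this one, to avoid inflating the correlation.
        rest = np.delete(responses, i, axis=1).sum(axis=1)
        difficulty = float(item.mean())  # proportion answering correctly
        # Point-biserial correlation; undefined when the item or totals have no variance.
        if item.std() == 0 or rest.std() == 0:
            discrimination = float("nan")
        else:
            discrimination = float(np.corrcoef(item, rest)[0, 1])
        stats.append({"item": i, "difficulty": difficulty,
                      "discrimination": discrimination})
    return stats

# Example: 5 learners, 3 items
scores = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
])
for row in item_analysis(scores):
    print(row)
```

Items that nearly everyone answers correctly (difficulty near 1.0) or that correlate weakly with the rest of the test (discrimination near 0) are the usual candidates for rewriting.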
The document below is the first section of MetriVerse's "Writing Effective Assessments" toolkit. This section helps designers understand how to write tests that assess the right objectives at the right level. Click the link below to download the PDF of this section.
Nothing excites the MetriVerse team more than seeing the recent - and long overdue - industry discussions about behavioral engineering. Understanding how behaviors change is critical to accurately and effectively measure, define, and optimize the work we do in L&D.
This week, the eLearning Guild released a video report in which Jane Bozarth and Julie Dirksen explain the Behavior Change Wheel built around Susan Michie's recently-published COM-B system. COM-B is one of the newer behavioral engineering models; it treats ability (or "capability," as COM-B defines it) as just one component of behavior and works to identify and optimize all of the components. Like most behavioral engineering models, COM-B identifies capability, motivation, and opportunity as the primary components of behavioral change.
It's simply no longer acceptable to toss a random quiz at the end of a course and claim an arbitrary "passing score" is evidence of learning. Nor can we afford to hope that our unvalidated learning efforts will correlate with business impact (which is the equivalent of a chef determining if her recipes are good by tallying the sales total at the end of the night).
But L&D does not have a strong history of measuring the space between "did they complete training?" and "did training have a measurable impact on the business?" Yet that space is what most directly represents our mission and purpose as an industry. It's the space that answers the question, "Did they learn?"
Did they learn?
Older models for learning measurement all advocated methods for measuring "did they learn?", but those methods were never detailed or well-defined. Over the years, shortcuts and bad habits have made it hard for L&D professionals to truly know how to measure this. Or when. Or what the test should tell them. Or why. (Fortunately, newer models have helped to clarify many of these questions.)
We've spent so much time recently speaking with L&D departments about this topic that we decided to share one of those conversations on YouTube.
If your organization is having difficulty understanding how to implement a measurement or testing strategy, let us know. We'll be happy to discuss options with you!
In recent weeks, the volume of requests to measure the effectiveness of learning has increased dramatically. The global economy is tightening, and training is shifting overnight to virtual and distance formats.
And businesses want to make sure every dollar is well spent and every minute of learning is "effective".
Unfortunately, too often in L&D we limit our perception of "effectiveness" to the correlation between learning and business measures. That connection is difficult enough to prove under optimal conditions, but something even more alarming has become apparent in recent weeks: far too few organizations are adequately measuring their own performance.
Instead, they rely on what we refer to as "spontaneous metrics", where each manager or leader uses an impromptu, personal gauge to measure something.
MetriVerse's President and founder A.D. Detrick has written a new article on TrainingIndustry.com, "Understanding Efficacy: The Key to Measuring Leadership Development".
"In the last few years, I've had the good fortune to build measurement strategies for leadership development initiatives in a number of large organizations," A.D. explained. "Any measurement strategy requires us to identify the measurable behaviors that will change as a result of our learning efforts. Unfortunately, 'leadership' is very difficult to define as a metric (or as a set of metrics). Leadership is a broad set of attributes and behaviors that shift in importance, based on the situation. It's one of those classic 'I don't know what it is, but I know it when I see it' conundrums. Despite this, we still see regular investment in leadership development. There is an implied belief i their importance; it as our job to help identify their effectiveness."
One way to measure such a broadly-defined term is to look back at the research of pioneering social psychologist Albert Bandura: his work on Social Learning Theory and how observation leads to imitation and modeling. Bandura developed the theory while working with parents and children, but more recent studies have found that it applies to leadership as well. Leadership Self-Efficacy now serves as a potent metric in MetriVerse's leadership development measurement strategies.
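Bandura-style self-efficacy is typically captured with Likert-scale questionnaires. As a purely illustrative sketch - the items, scale, and scoring below are our assumptions, not MetriVerse's actual instrument - here is how a composite Leadership Self-Efficacy score might be computed:

```python
# Hedged sketch: a hypothetical Leadership Self-Efficacy composite score.
# Items and scoring are illustrative, not MetriVerse's actual instrument.
from statistics import mean

# Hypothetical items rated 1 (strongly disagree) to 5 (strongly agree).
ITEMS = [
    "I can set a clear direction for my team.",
    "I can motivate others during setbacks.",
    "I can delegate work effectively.",
]

def self_efficacy_score(ratings: list[int], scale_max: int = 5) -> float:
    """Average the item ratings and normalize to a 0-100 scale."""
    if len(ratings) != len(ITEMS):
        raise ValueError("one rating per item is required")
    return 100 * (mean(ratings) - 1) / (scale_max - 1)

# Example: one leader's responses before and after a development program.
before = [2, 3, 2]
after = [4, 4, 3]
print(f"pre:  {self_efficacy_score(before):.1f}")   # 33.3
print(f"post: {self_efficacy_score(after):.1f}")    # 66.7
```

Averaging items into a normalized composite is what makes a fuzzy construct like self-efficacy usable as a metric: pre/post comparisons become simple, repeatable arithmetic.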
Read the article and let us know what you think!
Want to find out more about measuring leadership effectiveness? Contact us today.
MetriVerse is going to be out and about in September, bringing the word to the masses about fear-free measurement and the power of business impact. We'd love to see you!
Association for Talent Development, Central Ohio Chapter
When I launched MetriVerse Analytics in 2016, I had three primary goals:
In recent months, I have noticed a spate of articles (including a few I’ve written) advising L&D and HR departments that the first step in developing a measurement strategy is to define a goal. But as I navigate the measurement world, I see, with increasing regularity, that asking “what do I want to measure?” and “why do I want to measure it?” is not enough. These recommendations come with a faulty assumption: they presume that we know how to define the thing we’re trying to measure.
If you only read one (other) thing today, go read Andrew Oliver’s article on InfoWorld, “With big data, CEOs find garbage in is still garbage out”.
In short, the article reports on a recent KPMG survey in which most CEOs said they don’t trust their organization’s analytics capabilities. It’s not that they don’t see analytics as important or don’t recognize the value of the practice; they simply don’t believe their organization is doing it well.
A.D. Detrick is a strategy and measurement consultant, human capital analytics expert, project manager, instructional designer, and trainer. He's also a self-confessed comic book geek and a believer in using humor and humanity to teach complex concepts.