Brainformative Blog

Hey, what are your thoughts on the Kirkpatrick model?

I was recently asked to evaluate Kirkpatrick's model. The following was my answer:

*     *      *    *     *

Hey Paul,

Wow, this is a heady question.  How much of an answer do you really want?

>snicker<

High level . . .  I’m familiar with the Kirkpatrick model, and while I understand many e-learning designers rely on it, I am, however . . .  how to say . . . ambivalent.

All training needs to be evaluated for effectiveness. There is nothing worse than rolling out training “Just Because,” with no evidence-based criteria for measuring whether it was any good. (Unfortunately, businesses do this with stunning consistency.) So inasmuch as the Kirkpatrick Model inspires people to verify that their training works . . . great.

But here are my expanded thoughts. Level 1: Reaction . . . really isn’t a criterion for measuring training effectiveness. Learners in the heat of the moment are notoriously fickle. Simply put, if something is hard, learners tend to dislike the training, even though that very rigor is what inspires real subject mastery. So in-the-moment feedback often runs counter to actual effectiveness. The fact that I rate a class with 5 stars on a Likert scale only measures that I found something that I “liked.” If designers develop learning events just to get a positive reaction . . . well, they are pandering . . . not teaching.

Level 2: Learning is a given in my mind. And if the curriculum development cycle is even remotely effective, the learning event creators will know exactly how well it performs before they roll it out to the masses. They will know if they have created something that produces insight--those wonderful Aha! moments that signal the brain has successfully integrated a crucial concept.

I’m inclined to think that Level 3: Behavior is misplaced, or, better said, an incomplete measurement, because so much else goes into reinforcing or undermining newly learned behaviors after training.

For example, consider implicit, unstated managerial expectations: Corporate says do this, but I’m the manager and I really mean do that, even though I won’t say so out loud.

Also, every organization has tribal knowledge: siloed expertise that leverages its isolation, creates resistance to change, and thereby nullifies the training initiative.

Then there are handicapped workflows: those endless technological workarounds that front-line workers create because the in-house IT resources are misaligned with the mission. The organization can roll out training after training on the “proper” workflow, but if the physical environment resists what is official . . . forget about it.

Or maybe you can identify with this: ACME business sends their people for a week of “Boot Camp” on the new Super Duper software being rolled out in six months.  

Sounds great, right? Get it over and done with. Limit the downtime and immerse the learners in the content. Surely that will make it stick.

Put aside for a moment that the brain cannot develop any level of mastery in a cram session (aka massed practice). No matter how good the training, no one is going to remember enough about the boot camp to be effective 180 days in the future, or even two weeks in the future. Ebbinghaus’s forgetting curve always wins. For there to be a fighting chance that learning sticks, the learners must leave the class and go straight to an environment where they can put what they learned into action.
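If you want to see just how brutal that curve is, here is a toy sketch of its classic exponential form, R = e^(-t/S), where R is retention, t is days since training, and S is a memory-strength constant. The S value below is purely illustrative, not an empirical estimate:

```python
import math

def retention(days_elapsed, strength=5.0):
    """Exponential forgetting curve: R = e^(-t/S).

    days_elapsed: time since training, in days.
    strength: illustrative memory-strength constant (S); higher
              means slower forgetting. The value 5.0 is a
              placeholder, not an empirical estimate.
    """
    return math.exp(-days_elapsed / strength)

# How much of the boot camp survives with no reinforcement?
for days in (1, 14, 180):
    print(f"Day {days:3d}: {retention(days):.1%} retained")
```

Run it and two weeks out you are already in the single digits; at 180 days there is effectively nothing left to measure.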

Management never considered that they undermined the training with fantastically poor planning, based on a flawed understanding of how the brain retains new content and the time required to develop expert habits of mind. As a result, the learners return to a performance-outcome vacuum, and the organizational leadership wonders why the Super Duper software rollout was an epic fail. Unfortunately, organizations almost always lay the failure at the trainers’ feet.

The moral of the story: Unless an organization identifies the behaviors the work environment and culture foster before the training, there is no way to measure effectiveness after the fact.

Level 4: Results . . . mmm . . . well, every organization wants ROI. So it seems obvious that they would judge training effectiveness based on whether they got the performance outcome they were looking for. And if they did due diligence early in the curriculum development cycle, they will probably get what they wanted. But this stage can’t be a vague, nebulous cloud of “something.”

I contend (and would bet Kirkpatrick would agree) that the organization must spend the money on measuring behavioral outcomes pre- and post-training. So as a matter of strategy, the ROI statement must also include a baseline (pre), then allow for a messy transition period, and then (after a successful change-management process) measure how well they moved the Learner-Focused/Performance-Based needle.
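For illustration only, here is that pre/post strategy reduced to arithmetic. Every name and number below is hypothetical; the point is the shape of the calculation, not the figures:

```python
def training_roi(baseline, post, value_per_unit, program_cost):
    """Toy pre/post ROI calculation.

    baseline:       pre-training performance measure (e.g., tasks/week)
    post:           the same measure after the transition period settles
    value_per_unit: dollar value assigned to one unit of improvement
    program_cost:   total cost of the training initiative

    All inputs are hypothetical placeholders.
    """
    benefit = (post - baseline) * value_per_unit
    return (benefit - program_cost) / program_cost

# Illustrative numbers only: 40 -> 55 tasks/week, $500 per task of
# weekly throughput, $5,000 program cost.
print(f"ROI: {training_roi(40, 55, 500, 5_000):.0%}")
```

The arithmetic is the easy part. The hard part is defining the metric, holding the baseline window steady, and waiting out the messy middle before you call the “post” measurement.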

These are my thoughts. What do you think about the Kirkpatrick model?

John