Incremental Innovations: Evaluation Strategy

How to know you're moving the needle


Welcome back! We’re buttoning up the Incremental Innovations series – seven posts featuring high-yield steps lifted from my Education Success Scorecard’s 25 drivers to advance your programs. So far we’ve covered:

  • Mapping your content terrain and identifying your content priorities (activities included),
  • Profiling your learners so you can target programs more effectively,
  • Elevating your LX factor to transform and delight,
  • Designing learning pathways to optimize your education portfolio,
  • A quick start guide featuring the 3 M’s of learning design that will distinguish your programs from the competition,
  • And proven strategies for developing effective subject matter experts (SMEs).

Let’s say we’ve taken strides with the previous incremental innovations — how do we know this effort moves the needle?

We know because we have prepared an evaluation strategy.

Not just that we have a program and session evaluation, but that we have a strategy for evaluating whether our objectives have been met. Which naturally means we must first define those objectives. I recommend grouping your measures into three tiers: Strategy, Portfolio Management, and Learning Design.

Strategy 

Select metrics to track across all learning programs so you can assess progress toward your organization’s strategic objectives. “More” is a common but generic goal; dig deeper to reveal the specific targets you want to hit. Here are a few examples.

If your strategic objective is…

  • revenue, then track sales and profit/loss
  • member participation, then track registrants, unique participants against the population of your membership, and return registrants
  • volunteer engagement, then track who is contributing to what, when, and in what capacity – across products – and note patterns
  • business efficiency, then track staff hours against programs so you can calculate how much time and money is devoted to each, noting areas that run smoothly and areas that warrant closer examination (see the sketch after this list).
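That last calculation is simpler than it sounds once staff hours are logged somewhere exportable. Here is a minimal sketch in Python; the program names, hours, and blended hourly rate are hypothetical placeholders, not figures from any real portfolio:

```python
# A minimal sketch of the business-efficiency measure: allocate staff
# hours to programs and estimate the cost of each. All values below
# are hypothetical placeholders.

BLENDED_HOURLY_RATE = 55.00  # assumed fully loaded cost per staff hour

# Staff hours logged against each learning program this quarter (hypothetical).
staff_hours = {
    "Annual Conference": 420,
    "Webinar Series": 160,
    "eLearning Library": 95,
}

total_hours = sum(staff_hours.values())

for program, hours in staff_hours.items():
    cost = hours * BLENDED_HOURLY_RATE
    share = hours / total_hours
    print(f"{program}: {hours} hrs | ${cost:,.0f} | {share:.0%} of staff time")
```

Even a rough share-of-staff-time breakdown like this makes it obvious which programs consume resources out of proportion to the value they return.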

Key: Define your strategic objectives and then measure your progress meeting them.

Portfolio Management

Select metrics that help you monitor whether each learning program is on track or whether tweaks are required. No need to wait until the end of a product cycle to determine whether it was a success or failure – monitor whether your expectations are being met so you can respond to issues as they arise. Many associations employ robust pre-conference tracking that yields valuable year-to-year comparison data, but don’t apply the same rigor to other learning programs. What could we achieve if we established program-level objectives for each of the learning products in our portfolio?

If your program-level objective is…

  • webinars will target young professionals, then track registrants by segment so you can see whether you’re hitting the desired mix (if not, switch up your marketing and ask questions about your program design)
  • eLearning utilization, then track registration, downloads, and completion rates (if a program isn’t performing as expected, find out why now so you can be responsive); also review open-ended evaluation questions for issues you can address to improve performance
  • growing referral registrations, then track the program’s net promoter score (if referrals aren’t materializing, follow up to learn what needs to change – see the sketch after this list)
  • understanding when to introduce new programs, then track when members register and claim credits throughout the year to test your assumptions about buying cycles.
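Net promoter score is just arithmetic on a standard 0–10 survey question. Here is a minimal sketch in Python; the responses are hypothetical answers to “How likely are you to recommend this program to a colleague?”:

```python
# A minimal sketch of the net promoter score (NPS) calculation.
# Responses are hypothetical 0-10 survey answers.

responses = [10, 9, 9, 8, 7, 10, 6, 9, 3, 8, 10, 9]

promoters = sum(1 for r in responses if r >= 9)   # scores of 9-10
detractors = sum(1 for r in responses if r <= 6)  # scores of 0-6

# NPS = percentage of promoters minus percentage of detractors.
nps = 100 * (promoters - detractors) / len(responses)
print(f"NPS: {nps:.0f}")
```

A score above zero means promoters outnumber detractors; the trend across program cycles is more informative than any single number.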

Key: Define what a high functioning learning portfolio looks like and measure your progress toward that target.

Learning Design

The vast majority of course evaluations focus on satisfaction – Kirkpatrick’s Level 1 (Reaction). This measure tells us nothing about the effectiveness of the learning itself. Consider ways to elevate your evaluations to measure at least Levels 2 and 3.

  • Level 2: Learning. Measure whether the intended knowledge and skills have been achieved – a simple pre/post assessment comparison works well here (see the sketch after this list). Ask whether learners intend to apply what they learned.
  • Level 3: Behavior. Measure whether learners actually applied what they learned. What might they need to support continued improvement?
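For Level 2, the pre/post comparison reduces to a few lines of code once scores are collected. Here is a minimal sketch in Python; the learner IDs and scores are hypothetical placeholders:

```python
# A minimal sketch of a Level 2 (Learning) measure: compare pre- and
# post-course assessment scores per learner. All values are hypothetical.

pre_scores = {"learner_01": 55, "learner_02": 70, "learner_03": 62}
post_scores = {"learner_01": 80, "learner_02": 85, "learner_03": 78}

# Knowledge gain per learner, then averaged across the cohort.
gains = [post_scores[learner] - pre_scores[learner] for learner in pre_scores]
avg_gain = sum(gains) / len(gains)
print(f"Average knowledge gain: {avg_gain:.1f} points")
```

Pairing this with a follow-up survey weeks later – did learners actually apply the material? – moves you toward Level 3.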

Kirkpatrick Level 4 is the ultimate measure of training effectiveness: Results. This measure moves well beyond “the event happened and people seemed to like it” to “this event made a positive impact in members’ context of practice.” How could your association partner with employers within your industry to measure the results of your programs against their intended outcomes? Imagine the powerful feedback you could glean and the incredible value you could articulate with these insights in hand.

Key: Define the expected learning outcome and then design measures that will evaluate success – learning, behavior change, and results.

Measure your success by clarifying your vision for your learning programs. Then assess your opportunities to collect data before, during, and after each program to map your efforts against your targets.

Ready to bump your evaluation strategy to the next level? Get in touch.
