Implementation Evaluation
What is Implementation Evaluation?
Generally speaking, program evaluations fall into two complementary buckets: formative and summative. Summative evaluations look at the impact and effectiveness of a program (think testing, validating, etc.), while formative evaluations focus on improving the program’s design and performance. It’s considered good evaluation practice to use both, though in my opinion, formative should come before summative.
In this context, an implementation evaluation (sometimes called a process evaluation) falls into the formative category and is essential for understanding your program’s model and quality. An implementation evaluation examines what is or isn’t working and why. Without an implementation evaluation, it is hard to attribute any outcomes to the program model, and skipping one also reduces your chances of achieving the desired outcomes.
Think of it like this: if you aren’t sure your program is the same at every site or is being implemented the way it was designed, how can you be sure your program is the reason for any change in your participants, good or bad? And even if you really believe your model will make a positive change, how can you expect to see that change in your participants if you aren’t sure it’s being properly implemented?
Which should come first, implementation evaluation or outcome evaluation?
It’s often helpful (and some would argue required) to conduct an implementation evaluation and a summative outcome or impact evaluation simultaneously. However, I believe you can conduct an implementation evaluation without an impact evaluation, but you can’t conduct an impact evaluation without an implementation evaluation.
Examining the following simplified scenarios (with overly simplified good/bad language) helps illustrate the power of impact and implementation evaluations together:
If your outcomes are good and the implementation of your program model is good, you have a great case to replicate and expand your program.
If your outcomes are good but the implementation of your model is bad, you can’t attribute those good outcomes to your program.
If your outcomes are bad and the implementation of your model is also bad, you are protected in a way, because you can’t attribute the bad outcomes to your program.
If your outcomes are bad but the implementation of your model is good, you know you need to make some program improvements or changes, because you are either making no difference or causing harm.
When is the best time to engage in implementation evaluation?
Implementation evaluations are perfect for new programs that are still in the process of defining and/or refining their model. The definition of “model” is actually quite broad here: it’s not just the delivery of the program at the point of service, it’s also the training provided to staff and participants, the tools, and so on. If any of those are still being developed or improved, the program should conduct an implementation evaluation. It’s also a great time to conduct an implementation evaluation any time you’re replicating your model or expanding to a new context, a new site, or a different population.
OK, this seems pretty straightforward and great, so why don’t you hear about implementation evaluation more often?!
A main challenge with implementation evaluation is that it requires a high level of buy-in from program staff and partners in order to create trust, openness, and honesty. Without this buy-in, it’s unlikely staff will identify weaknesses or areas for improvement. Implementation evaluations will not be successful without a shared commitment between staff and evaluators to learning and improving. That said, all evaluations (including implementation evaluations) need a thoughtful and robust evaluation plan to succeed; promoting and building buy-in should always be an essential piece of that plan, especially when considering use and equity.
Check in with yourself and your stakeholders and partners: are you really ready for an outcome evaluation?
Too often, when an organization begins to rigorously examine impact and outcomes, they expect the best and do not prepare for the worst. In practice, most impact evaluations do not show strong results… you just don’t hear about those in the news. We all believe our work is making a big difference, but proving that statistically is challenging (and expensive). So before going after that federal grant or hiring that big quantitative evaluation firm, consider building a strong foundation with a highly skilled implementation evaluator who will use a flexible, creative approach and guide you down the path to proving your impact.
My advice? Until you can clearly define all aspects of your model and verify that the model is being implemented with fidelity across all your sites and contexts, you shouldn’t expect to achieve your hypothesized outcomes, nor can you attribute any results to your program. A case can be made for doing implementation and impact together. However, think of it like the old “a square is a rectangle, but a rectangle is not a square” adage: you can do implementation without impact, but you can’t do impact without implementation.