Evaluation Criteria along the Project Cycle
Now that the DAC criteria are being revisited, there seems to be good momentum for thinking about evaluation criteria from different perspectives.
So taking the DAC criteria as a starting point (and building on the list that I included in the Periodic Table of Evaluation), here is a pool of potential criteria that in my opinion could (should?) enlarge the catalogue:
Defining the programme cycle phases as Design, Implementation, Monitoring and Evaluation (the last two grouped here as Effects), let’s see what happens when we play with the criteria, rearranging them according to the phase of the cycle to which they apply:
It can be observed that some of the evaluation criteria are predominantly present in the Design phase (Robustness of the design, Relevance). Other criteria are more relevant during implementation (Efficacy, Efficiency, Coverage, Coordination), and others after the programme ends (Impact, Sustainability, Unexpected outcomes). Finally, some criteria, such as Ethics and Equity, are relevant throughout the life cycle of the intervention.
Ordering them by phase:
Keeping in mind that many evaluations already ask too many questions, the idea is not to encourage adding more criteria (and more questions) to the typical ones, but to consider, and eventually include, new criteria in place of criteria that may be less relevant in that particular evaluation context. As a final thought, whatever the criteria end up being, the key evaluation questions shouldn’t number more than 5 or 6 (as I’ve heard Jane Davidson suggest). Thanks!
New posts coming up:
(published every two weeks-ish 🙂 )
- ToCs series
- Visual summary of impact designs
- Visual summaries of other criteria designs
- Ideas to make Bibliographies more informative
- Ways of mapping beneficiaries
- My favorite pre-attentive features
- Ideas for reports (series)
- Some day: iterations with the Periodic Table of Evaluation
Stay tuned! 🙂
Want to see more Visuals?