The Evaluation Periodic Table
(+ Negotiating Evaluation Designs visually)
Not long ago, I published a visual repository of (most of) the alternatives available for shaping an evaluation design: The Catalogue of Evaluation Choices, covering everything from Paradigms to Methods, including Purposes, Objectives, Criteria, Approaches and Designs. Today I'm going to share an iteration of the Catalogue.
The original version first evolved in two ways: 1) A minor change, but significant for me: I added Big Data as one of the Methods. During the South African M&E Association (SAMEA) conference, I realized that Big Data may be a type of Desk Review, but it is so new and unexplored that I thought it deserved to be included as another very important source of information that evaluations could benefit from. It is challenging, and most of us are not yet used (or equipped) to using it, but I think it deserves to be considered as a method.
The second iteration: 2) I transposed the display from portrait into landscape format. By placing the commissioners' paradigms (and Purposes and Objectives) at the bottom of the original version, I realized I had clearly elaborated the catalogue from my evaluator's perspective! But placing the Commissioners (and their Paradigms, Purposes, Objectives and Questions = Criteria) at the bottom doesn't really make sense. In reality, evaluators are not the originators of the evaluation demand, so many of these options are already defined in the Terms of Reference before we appear.
So I thought this display was more representative:
Then I also realized that the views and decisions of the people involved in the evaluation design (the evaluation team and the commissioners, at both the individual and the organizational level) are guided by their own paradigm(s), which usually remain underlying and do not surface during the process, though perhaps they should. These stances do not change easily, however, so they usually stay out of the negotiation, but they could at least be acknowledged.
The purpose and objectives of the evaluation are normally also determined by the program's and organization's circumstances, where the evaluator usually has little or no room for influence.
So I placed these at a different level and ended up with something similar to a Periodic Table of the Elements of Evaluation (giving myself permission to play with the metaphor):
Criteria are also usually defined in the ToRs, but I think evaluators could (and sometimes should) make clearer suggestions to drop criteria that are less relevant or less evaluable, or to include others that may have been left out.
On some occasions, other options such as the Approaches may also be pre-defined by the organization's Evaluation Policy or the nature of the intervention. In that case, the range of choices would be more limited, but numerous alternatives would still be available:
So here are my conclusions:
The ToRs often suggest a proposed design, too often reducing the Methodology to the Methods, and often even prescribing the methods themselves. This tool (the Periodic Table of Evaluation) aims to broaden awareness of how many potential options have been described and developed that can be considered in an evaluation design.
In fact, the tool can also be useful for commissioners' teams to discuss and align their stances when drafting the ToRs, taking a more open design point of view instead of just replicating the typical ToRs.
And I'm delighted that I have just been given an assignment to improve the QA of UNICEF Latin America and Caribbean Regional Office's Knowledge Products, ToRs among them, so over the next couple of months we will have the chance to explore these questions further!
New posts coming up:
(published every two weeks-ish 🙂 )
- ToCs series
- Visual summary of impact designs
- Visual summaries of other criteria designs
- Ideas to make Bibliographies more informative
- Ways of mapping beneficiaries
- My favorite pre-attentive features
- Ideas for reports (series)
- Some day: iterations with the Periodic Table of Evaluation
Stay tuned! 🙂
Want to see more Visuals?