The Meta-Evaluation Dashboard

Nov 12, 2014

To summarise the most relevant aspects of an evaluation, we have created the “Meta-evaluative Dashboard”, which reveals what happened during the evaluation, using the evaluation report itself as the source. The Dashboard visually presents the information given in the report about the methodological choices taken. After a thorough analysis and a process of deep synthesis, the most important elements of an evaluation methodology were selected. The Dashboard was conceived to be as generic as possible, but it can be customised to your needs.

Complexity Assessment

According to Patricia Rogers, the complexity (or simplicity) of an intervention and its context can be defined by seven issues: Focus – Are there multiple objectives (complicated) or emergent objectives (complex)? Involvement – Are there multiple decision makers (complicated), or is involvement flexible and shifting (complex)? Consistency – Is there a standard delivery (simple), does delivery need to be adapted to context in ways that can be identified in advance (complicated), or can the adaptation only be known through implementation (complex)? Necessariness – Is the intervention the only way to achieve the intended impacts?…
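
As an illustration, such an assessment could be recorded in a structured form before it is visualised. This is a minimal Python sketch: the three-level scale follows the description above, while the ratings themselves are hypothetical and not part of the Dashboard itself.

```python
# Hypothetical complexity assessment for one intervention, rated on the
# three-level scale described above (simple / complicated / complex).
complexity_assessment = {
    "Focus": "complicated",        # multiple objectives
    "Involvement": "complex",      # flexible, shifting decision makers
    "Consistency": "complicated",  # adaptation identifiable in advance
    "Necessariness": "simple",     # assumed to be the only way to the impacts
}

# A crude overall reading: the intervention is as complex as its most
# complex dimension.
SCALE = ["simple", "complicated", "complex"]
overall = max(complexity_assessment.values(), key=SCALE.index)
print(f"Overall reading: {overall}")  # -> complex
```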

Participation scan

Evaluation reports usually declare with pride that they have followed a participatory approach. However, participation has many levels and many layers. Having defined the main stakeholders who might take part in the evaluation exercise, we map their involvement in each of the evaluation phases; darker shades indicate higher levels of responsibility. With such a participation scan, the Dashboard can show which stakeholders were involved in each evaluation phase and to what extent (leading, or simply acting as informants) they actually participated.
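
One way of holding such a scan in data, before rendering it as a shaded matrix, is a simple stakeholder-by-phase table. The phase names, stakeholder groups and 0–3 responsibility scale in this Python sketch are illustrative assumptions, not part of the Dashboard specification.

```python
# Hypothetical participation scan: rows are stakeholder groups, columns are
# evaluation phases, and values encode responsibility
# (0 = not involved, 1 = informant, 2 = consulted, 3 = leading).
# Darker shades on the Dashboard would correspond to higher values.
PHASES = ["Design", "Data collection", "Analysis", "Reporting"]

participation = {
    "Commissioner":    [3, 1, 1, 2],
    "Evaluation team": [3, 3, 3, 3],
    "Beneficiaries":   [0, 1, 0, 0],
    "Field staff":     [1, 2, 1, 1],
}

for stakeholder, levels in participation.items():
    row = "  ".join(f"{phase}: {level}" for phase, level in zip(PHASES, levels))
    print(f"{stakeholder:16s} {row}")
```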

Mixed-methods Scan

To map the methods and techniques actually used, the Dashboard displays the number of techniques used in each phase. These are displayed against the entire timeframe of the evaluation, making it easy to assess how the techniques complement each other and whether mixed methods (multi-method designs) or a single method were used. This example only includes icons for some of the most commonly used methods, but other methods could be used and would be indicated by their icons accordingly.
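
A minimal sketch of the tally behind such a scan, assuming hypothetical phase and technique names:

```python
# Hypothetical record of which techniques were used in each evaluation phase.
techniques_by_phase = {
    "Design":          ["document review"],
    "Data collection": ["interviews", "survey", "focus groups"],
    "Analysis":        ["survey", "document review"],
    "Reporting":       [],
}

# The scan counts techniques per phase and flags whether the evaluation
# as a whole relied on more than one method.
counts = {phase: len(t) for phase, t in techniques_by_phase.items()}
all_methods = {t for methods in techniques_by_phase.values() for t in methods}
design_type = "mixed methods" if len(all_methods) > 1 else "mono-method"

print(counts)
print(f"Overall design: {design_type}")
```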

Sampling decisions

Almost every evaluation study has to make some decisions in terms of sampling, as it would rarely be cost-effective to reach the whole population of potential informants. To give a quantitative picture of these decisions, the Dashboard gathers the estimated number of potential sources of each type, then reflects the number of each type of source finally consulted by the evaluators, and finally represents the percentage consulted. It also shows whether sampling was purposive or random, as always according to the evaluation report.
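
As a simple arithmetic illustration, the coverage figure is just the share of potential sources actually consulted; the source types and numbers in this sketch are invented for the example.

```python
# Hypothetical sampling summary:
# source type -> (estimated potential sources, sources actually consulted).
sampling = {
    "Project staff":   (40, 25),
    "Beneficiaries":   (1200, 90),
    "Local officials": (15, 10),
}

for source, (potential, consulted) in sampling.items():
    coverage = 100 * consulted / potential
    print(f"{source:15s} {consulted}/{potential} consulted ({coverage:.0f}%)")
```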

Credible evidence

To assess how credible the evidence found is, a mix of alternative strategies should be adopted so as to be reasonably sure that the findings reflect reality. Not all of them are needed, but the mix should be complementary and convincing in terms of causality or causal inference. Based on: Davidson, J. & Rogers, P. (2010) Causal inference for program theory evaluation, http://genuineevaluation.com/causal-inference-for-program-theory-evaluation/

Evaluative synthesis

Inspired by Jane Davidson, this is one of the features that most clearly differentiates an evaluation from a research study. Having an evaluative synthesis, understood as a systematic judgement, is the essence of evaluation. Many evaluations do this to some extent, defining the evaluation criteria and questions, but higher levels of evaluative synthesis would require defining what is “good” in each particular context and the evidence that would demonstrate each element, which is not so common. Source: Davidson, J. (2014) It’s the very core of evaluation and makes or breaks

Evaluation Standards

A mix of different standards and codes of conduct has been compiled, defining key aspects that should be taken into account and that, at the same time, can easily be checked as specific behaviours.

Evaluation’s Purpose

Evaluations are carried out for four main purposes: accountability, improvement of an intervention, enlightenment of future interventions (Stufflebeam and Shinkfield, 1984), or social justice (Mertens, 2007). This graph ranks the ultimate motivations for carrying out the evaluation, always according to the evaluation report (or the Terms of Reference, if attached). More than one purpose can be intended, although the focus will then be dispersed. Methodological strategies and designs should be driven by this first decision. This makes it possible to assess whether the entire methodological strategy is consistent with the declared purpose.

Core elements

The Dashboard also highlights whether a logic model was developed, assessed or improved, whether a complete theory of change was included, and whether unintended outcomes were explored. Other tools, such as systems thinking, can also be mapped.

Evaluation Outputs

Finally, most evaluation reports include recommendations. However, these vary enormously in number and quality. The Dashboard attempts to summarise how many recommendations (if any) were included, and to assess whether they were elaborated ones, which tend to be more useful, insightful and inspirational, or simplistic ones, which merely highlight an area that should be improved in a generic way.
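
A minimal sketch of that summary, assuming a hypothetical two-way classification of each recommendation as elaborated or simplistic:

```python
from collections import Counter

# Hypothetical classification of a report's recommendations:
# "elaborated" = specific and actionable, "simplistic" = generic area to improve.
recommendations = ["elaborated", "simplistic", "elaborated", "simplistic", "simplistic"]

summary = Counter(recommendations)
print(f"Total recommendations: {len(recommendations)}")
print(f"Elaborated: {summary['elaborated']}, Simplistic: {summary['simplistic']}")
```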

1. Benefits of the Dashboard

Created to be an extensive reflection exercise for deepening one’s knowledge of the evaluation process, the Dashboard has other expected –and desired– consequences.

Incorporated into your reports or ToR, it would:

  • Help you define what is important –key– in your evaluation design
  • Help you widen and upgrade your practice as evaluator
  • Help you widen and upgrade your knowledge as evaluation commissioner
  • Save time for readers seeking information on evaluation methodology
  • Encourage specific evaluation aspects not common in reports (e.g. evaluative synthesis)
  • Provide a common interface for interacting with and analysing evaluation practice
  • Help in the comparison of activities and outputs of different evaluations.
  • Contribute to educating outsiders about what really defines an evaluation
  • Generate dialogue around evaluation methodology and meta-evaluation, and broaden evaluation conception.

2. Uses & applications

You may be interested in using this meta-evaluation dashboard if you are…

– an evaluator,

– an evaluation commissioner,

– an evaluation participant or

– anyone generally interested in evaluation and its many variants and possibilities.

 

The Dashboard has multiple applications and many potential uses:

– Initially, it can be used to visualize the evaluation methodology of an evaluation report after an evaluation has been carried out.

– It can be used by evaluators when explaining the methodology they have followed.

– It is a tool for meta-evaluating an evaluation after its completion.

– But it can also be used to visualize an evaluation design prior to its realization.

– It can be useful in discussing evaluation design with evaluation commissioners, so as to explore various options.

– And it could be used to reach agreement on methodology regarding an evaluation’s Terms of Reference.

3. Why this idea

Evaluation is a complex (trans)discipline for which there are “no (fixed) rules”.

Divided between being an art and a science, it is based on common sense but, as in other types of social research, its systematisation poses a challenge.

Evaluation designs vary depending on the object (the evaluand), purpose, commissioners, evaluators, participants and context.

 

However, certain aspects remain the same, and form the common ground of most evaluation studies. For example, methods and techniques such as interviews, surveys or group discussions are present in most evaluators’ toolkits, and used in every evaluation.

Intrigued by these similarities and wanting to better understand evaluation versatility, we created a system for presenting the main features of an evaluation in a transparent, visual way.

4. Limitations

The Meta-evaluative Dashboard is a powerful tool with multiple applications. However, it must be pointed out that:

 

Simplicity

Obviously, this attempt to capture the essence of the entire evaluation catalogue may in some ways seem too simplistic. It has been designed with a generic approach, that is, to be representative of most evaluations, and should therefore be customised for specific uses or evaluative contexts.

 

Objectivity

Some of its sections imply a judgment –an evaluation judgment– which has been standardised by rubrics of observable behaviours and facts, even though the topics may be difficult to objectivise.

 

The report as main source

In its meta-evaluative use, the main and perhaps only source of information is the evaluation report itself. Thus, the Dashboard will gather only the information reflected in that report. Much of this data may not be clearly stated in the report, and so will have to be searched for throughout the entire document, analysing the evaluators’ intentions and actual performance in the process. The meta-evaluator may overlook or misunderstand certain parts of this information.

 

Always some bias

Finally, there is the question of the author’s own values about what is important in evaluation, how to measure it and even how to highlight it; the Dashboard is inevitably biased by my own particular vision of the evaluation process.

 
