Visualizing Realist Evaluation Findings

Jan 24, 2018

I usually use data visualization to understand complex concepts that are not intuitive, or simply to explore things I don’t know.

Not long ago, I saw a tweet from Julian King (citing Ray Pawson) asking for new, effective ways of visualizing Realist (Realistic?) Evaluation findings (“we need better visualisation of Realist theories”):

I’ve been meaning to learn more about this evaluation approach, so I volunteered to give it a try, and he sent me this evaluation report as an example (EVALUATING “REALISTIC EVALUATION”: EVIDENCE FROM A STUDY OF CCTV by Martin Gill and Vicky Turbin, Scarman Centre, University of Leicester, UK). The report is very clear and easy to read, and the authors present their findings as 10 CMO (Context-Mechanism-Outcome) configurations, just as the approach suggests:

Extract from EVALUATING “REALISTIC EVALUATION”: EVIDENCE FROM A STUDY OF CCTV (Martin Gill and Vicky Turbin, Scarman Centre, University of Leicester, UK)

So I tried to create something connecting it all (the 10 CMOs the report includes in its findings and conclusions) into one page, and this is what I got:
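If you want to experiment with this kind of connected diagram yourself, one simple starting point is to generate Graphviz DOT text from a list of C-M-O triples. This is only a minimal sketch of the idea: the `cmo_to_dot` helper and the two example configurations below are my own hypothetical placeholders, not the ten CMOs from the Gill and Turbin report.

```python
# Minimal sketch: turn CMO (Context-Mechanism-Outcome) triples into
# Graphviz DOT text that can be rendered with the `dot` tool.
# The example CMOs are hypothetical placeholders for illustration only.

def cmo_to_dot(cmos):
    """Build DOT source for a left-to-right Context -> Mechanism -> Outcome graph."""
    lines = ["digraph CMO {", '  rankdir="LR";', "  node [shape=box];"]
    for i, (context, mechanism, outcome) in enumerate(cmos, start=1):
        c, m, o = f"C{i}", f"M{i}", f"O{i}"
        lines.append(f'  {c} [label="{context}"];')
        lines.append(f'  {m} [label="{mechanism}"];')
        lines.append(f'  {o} [label="{outcome}"];')
        # Directed edges give each link an explicit arrowhead and direction.
        lines.append(f"  {c} -> {m} -> {o};")
    lines.append("}")
    return "\n".join(lines)

example = [
    ("Staff watch monitors", "Thieves feel observed", "Theft deterred"),
    ("Cameras cover exits", "Offenders recorded", "Prosecutions"),
]
print(cmo_to_dot(example))
```

A real diagram would also need multiple contexts per mechanism and multiple outcome chains per action, which is exactly the kind of complexity discussed in the feedback below.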

To get some feedback, Julian circulated it among some Realist Evaluation experts, and we had the chance to receive comments from Dr. Gill Westhorp (thanks, Gill! :-)). These were her insightful comments:

  • It only seems to identify one Contextual factor per Mechanism (never true!) and only one Outcome chain from any action (almost never true)
  • There are no outcomes for some lines (prosecutions) other than deterrence; and no outcomes for customers or shop owners etc.
  • There are some things which suggest this is initial theory not findings – “staff can monitor trouble spots” – sure, but do they?
  • There’s only one possible outcome from actions in the third column.  Staff approaching thieves can result in staff being shot, as well as thieves being deterred…
  • The absence of arrow heads means you can’t tell which way some links are supposed to work (though arrow heads would also imply causation which might not be the intent).
  • Some connections seem dodgy (for example the triangle of lines between contexts 6 and 7).

Most of these (except the last two) come from the report itself (the diagram tried to represent the findings in the report as literally as possible), but they also demonstrate the issues with trying to capture the true complexity of realist findings in a simple diagram.

However, for now, we were reasonably happy with the result, and we will try to replicate the model to present future examples.

Thanks for stopping by!

New posts coming up:

(published every two weeks-ish 🙂 )

  • ToCs series
  • Visual summary of impact designs
  • Visual summaries of other criteria designs
  • Ideas to make Bibliographies more informative
  • Ways of mapping beneficiaries
  • My favorite pre-attentive features
  • Ideas for reports (series)
  • Some day: iterations with the Periodic Table of Evaluation

And more!

Stay tuned! 🙂

Do you want to see more visuals?