D.R.E.E.M. - Derive Rigorous Evaluations through Empowered Monitoring

The Challenge

Imagine: You are an evaluation consultant in the final stages of preparing a field mission to assess the achievements of a program on the ground. You have balanced budget with rigor, booked flight tickets that will bring you to 50 locations in five days, and successfully scheduled 32 hours of interviews per day. 😊 You are feeling extremely lucky that, after a million phone calls, you have finally found the local experts and translators who will enable you to visit and engage with the remote communities the program is working with, hoping to gain people's views that will make your results truly meaningful. And then – a pandemic confines you to the isolation of your home office.

How will you ever get the credible evidence you need for a rigorous evaluation? How will you feel, see, and assess what is happening on the ground with no clear prospect of travelling to the field in the near future?

At a time when COVID-19 largely limits our means of primary data collection, we – the evaluation community – our clients, and their funders nevertheless need good-quality evaluations that support learning and contribute to transformative change where it is most needed – despite travel restrictions now adding to the usual time and budget limitations.


What can we do now?

There are various approaches, methods and tools that can help us collect robust data during the current COVID-19 situation. For example, we were amazed to see the wealth of work produced over the last months providing guidance on ethical and safe remote data collection (for a compilation see, e.g., here). We are also eagerly awaiting the innovative approaches that our creative and energetic co-hackers will produce during this Eval-Hackathon.

Yet we acknowledge that some issues and challenges simply cannot be addressed at this point in time, as COVID-19 has hit us unexpectedly and in an unprecedented way. For example, government representatives dealing with the immediate COVID-19 response are shifting their attention from evaluations to more urgent matters, and a shift in aid spending is foreseeable.

What can we do for future MEL?

COVID-19 highlights problems that already existed. Our team therefore took a forward-looking design thinking approach, exploring from a holistic perspective how we can ADJUST NOW to ensure that our MEL systems will DELIVER IN FUTURE to the standards expected by the various users of evaluations, even when disrupted by unforeseen events such as a pandemic.

The central thought behind our D.R.E.E.M. approach is that we need to shift the emphasis away from unbiased, reliable data collected by external consultants only at specific points in time (e.g. for mid-term or end-term evaluations), and towards consistent, rigorous monitoring and self-assessments throughout the project cycle, guided and quality-assured by external MEL experts. These data need to be stored both accessibly and securely so that they are available whenever needed for (re-)assessments of the program or project.

This concept is inspired to some extent by Empowerment Evaluation and its principles, e.g. capacity building, organizational learning, and accountability. Yet, we believe that for Empowerment Evaluation to be successful, we need an enabling environment, and this is what our D.R.E.E.M. is about:

Not only will we have to create the practical infrastructure and capacity to strengthen MEL and "empower" programming organizations and partners, we also need a shift in roles and thinking among funders, recognizing the advantages of – and building trust in the credibility of – continuous, rigorous monitoring mentored and assessed by external experts. Evaluators become MEL process consultants, negotiating the right balance between internal and external evaluation in any given circumstances. And we, the evaluation community and associations, will have to invest and collaborate to facilitate this shift in emphasis.

A four-day Hackathon does not allow sufficient time to comprehensively identify and test all the components needed to build such a complex MEL environment. The video below is only a start, framing some of the key elements we identified – we are not even 80% there (Pareto). Instead, we call on you to help us develop this idea further.

We have a D.R.E.E.M.! Let us all work together to make it come true!


The D.R.E.E.M. video presentations

Here is the three-minute video requested for the pitch ....

.... and here is a slightly longer, five-minute version!


About the D.R.E.E.M. Team

An energetic group of five from Brazil (Pietra Raissa Silva), Germany (Kornelia Rassmann), Lesotho (Thato Tramoeletsi), South Sudan (Naveed Anjum), and the United Kingdom (Christiane Kerlen)...

... greatly enjoying a wild design thinking process on how to address the challenges COVID-19 has brought to the field of evaluation.


Acknowledgements and resources

A BIIIIG THANK YOU and APPLAUSE to the Design Thinking and technical facilitators and hosts of this Eval-Hackathon, as well as to the enthusiastic participants, especially those who responded to our small survey and took time for an interview!


Should you be interested in the resources, please check out our repository (click the "Source" button below).

Launched at Evaluation Hackathon by

jeremiah_ipdet Kornelia Rassmann christiane.kerlen pietra_raissa_silva naveed_anjum

Maintainer jeremiah_ipdet

Updated 13.07.2020 12:41


Quality of Evaluation

Maintaining the quality of evaluations

Despite the many hardships and limits placed on evaluation in times of COVID-19, we are obliged to maintain good evaluation quality, and so we need to find ways to do so. We as evaluators have a role to play in providing evidence to our leaders – how can we ensure that it is given with good-enough rigour?


Creative Commons Licence: The contents of this website, unless otherwise stated, are licensed under a Creative Commons Attribution 4.0 International License.