
EVALUAPP: Evaluation Design Platform for Community Pioneers
COVID-19 has posed an unprecedented challenge to evaluators. With movement restrictions imposed all over the world, face-to-face data collection has had to be replaced with alternative remote data collection methods. But how do we ensure that no one is left behind?
EVALUAPP can be part of the solution. This innovative learning platform builds on the current shift towards local data collection and online education to ensure that communities, and those closest to the change, participate not only in data collection but also in the evaluation design process. This matters because time and resource constraints mean that communities (or their representatives) are often treated as a source of data, while their contributions are rarely taken into account during the evaluation design phase. We believe that involving communities in the design is not only a way to empower them and obtain better evaluation results, but an essential step if evaluations are to contribute to the sustainable development of these communities. Moreover, the shift to a more participatory and empowering assessment approach has the potential to rebalance power relations, build capacity, and give voice to the most excluded throughout the whole evaluation process.
Access the full project document here: https://docs.google.com/document/d/1A-FtFisu9E5dcKdGgTWO8DzuVeelvhe-Diat0tnc6Og/edit?usp=sharing
App demonstration:
Adaptive Evaluation
Inclusive and adaptive evaluations in times of Covid-19
During crises such as COVID-19, evaluation teams need to rely on remote data collection methods. These methods carry intrinsic potential biases against hard-to-reach populations, biases that must be mitigated through innovative methods and tools. What methods and tools can evaluators use to ensure that hard-to-reach populations are not left behind in evaluations undertaken during crises? COVID-19 unveiled an invisible thread linking methodological challenges, the need for equity, and "do no harm". To be effective, evaluations have to adapt and respond to each of these challenges. What concrete and practical tools can we develop by looking at the intersection of methodological challenges, "do no harm", equity, and innovation? What can we learn and immediately apply from evaluation methods designed to operate in fluid and uncertain conditions and with imperfect information?