Are we learning yet?

Why impact measurement and learning are not always friends

Countless strategic plans and impact measurement documents dutifully call for the virtuous partnership of impact measurement and learning. CARE’s are no exception. In fact, like many other organizations, we’ve tethered these concepts together in the serviceable acronym MEL – monitoring, evaluation (of impact and other results) and learning. It makes sense: when we measure, we get data. When we get data, the logical thing to do seems to be to learn from it. But what if this pairing is not as natural and mutually enriching as, say, chips and dip or afternoon tea and sweet treats?

There is of course a range of views on this. The conventional wisdom in development seems to be that the two are compatible, but that it’s a delicate (though doable) balancing act between measurement in the service of proving – showing that we got the results we intended – and improving – using the results to change course. However, a small but seemingly growing minority is taking a more provocative tone, arguing that data for proving does not necessarily help the improving agenda and in some cases can hijack it. At last week’s American Evaluation Association conference, presenters at a session unambiguously titled “Who sets the non-profit evaluation agenda?” argued that funder objectives can hinder evaluation design, because what funders want to learn, and the methods they use to learn it, are not necessarily what their grantees want to learn. Consider this too: a blog post from ICT Works titled “What is the true cost of collecting performance indicator data?” posits that “a growing amount of the qualitative evidence indicates that costs of collecting and reporting on the data that inform high-level performance indicators (for various agencies) can be quite high – perhaps higher than the M&E community typically realizes.” The author illustrates the attendant opportunity cost with the example of an antenatal clinic, where every minute spent on reporting requirements was a minute not spent caring for an expectant mother.

In the case of CARE, I would hazard a guess that one of the greatest opportunity costs of impact measurement is learning. That’s right: time spent measuring, collecting, aggregating and reporting on data is time not spent learning. If this statement seems akin to blasphemy, that brings us back to the original assumption that impact measurement and learning work hand in glove. And yet we hear consistently of mountains of data that are collected and submitted yet do little to puncture our mental models, reframe our programming approaches or fundamentally change our discourses. Could it be that the hard work of measurement leaves us little time for the hard work of learning? And what is the work of learning? It is pondering data that seem ambiguous or meaningless. It is convening the experts, the partners, the people experiencing change, to piece together a single thread or a ping-ponging matrix of cause and effect in a way that the data alone cannot. It is allowing our theories of change to themselves change as we re-examine them in light of new experience and evidence. It is looking into the cosmos of questions whose answers could change our world and taking aim at the ones we must seek to understand. And not least of all, it is time to breathe – to step off the treadmill, to give the mind the time and air to germinate new meaning from the soil of knowledge and experience.

Please don’t misunderstand me: measurement is vital to the work that we do and, in my view, vital to learning – as everyone has been saying all along. But can we do it smarter and better, in a way that leaves more time for learning AND with findings geared to feed the learning mill, not just the demonstration-of-results mill? Probably.

Here are some suggestions of present and future efforts that might help us do just that:

#1 Outsource. There’s nothing particularly new here; after all, there have been external evaluations conducted by consultants and third parties for as long as there have been evaluations. But how can we (a) outsource more regularly and strategically, say by maintaining better consultant rosters and keeping evaluation outfits on contract, and (b) influence others’ measurement activities in ways that fit our learning needs? For example, we can be more proactive in shaping donor-driven evaluations by identifying our learning questions and suggesting participatory methodologies that would be empowering to those we serve. CARE’s West Africa region had the brilliant idea of exploring whether our global indicators could be incorporated into national DHS surveys – voilà! – others do the work of collecting and we can focus on digesting the results.

#2 Embrace the technology and data revolution. In a few years the approaches and technologies with which we so painstakingly calculate and measure today may be obsolete. Acumen’s Lean Data approach builds on the fact that project participants can quickly and directly report on impact metrics via cell phones (incidentally, Acumen offers a free course to those wishing to try out the approach). Working with Keystone Accountability, CARE UK is piloting something similar, deliberately empowering the project participants who are the respondents and the best gauges of impact. Advances in machine learning may soon take the human element out of analyzing data from combined sources, freeing us non-droids to focus on what to do with the findings.

#3 Let learning lead measurement. Learning is too important to be left to chance. If we cultivate our curiosity, we will choose the right things to measure and truly “pioneer the metrics of progress and accountability,” as envisioned in CARE USA’s strategy. Measurement to prove results, divorced from learning, can produce perverse incentives, such as favoring initiatives whose results are easy to measure and diverting energy away from innovation, which is often failure-prone and whose intermediate or long-term outcomes may be unclear. To do learning properly, we need metrics for learning itself – e.g. the number of improvements we make, the extent to which we examine our guiding frameworks, the extent to which we share ideas and systematize the use of best practices.

#4 Make reporting requirements less onerous. This one is tricky because it’s less within our control. But surely there is space for more dialogue with donors: being more proactive in sharing what we think is important to measure and questioning reporting requirements or indicators that seem unduly onerous or unnecessary.

 

When the dust of the budding prove vs. improve debate settles, perhaps the logical conclusion will be that there can be both a symbiotic and a predatory relationship between measurement and learning. Let’s try to make it more of the former.
