In the last few years, ‘learning’ has made a resurgence in international development. The traditional monitoring and evaluation (M&E) teams have been replaced by monitoring, evaluation and learning (MEL) teams. And new development approaches, like adaptive management, put learning at their core.
We all agree that learning is important and that we should do more of it. But the reality is that we often don’t have the time or resources to do it properly – other, more pressing responsibilities take over.
So what to do?
I think that if learning is so important, we should be accountable for it.
I know, I know, another thing we are held accountable for. But hear me out.
If we are accountable, we have to prioritise it.
Take M&E as an example. We do it to varying degrees of robustness and for various reasons, but we always get it done because it is expected of us. As the common management saying goes, ‘what gets measured gets done’.
For better or worse, I think it applies to learning too. By monitoring learning, we create incentives for it to take place.
How one programme began embedding learning
The Climate and Development Knowledge Network (CDKN)’s approach to the issue is a good example of learning being embedded into a programme from the start.
At first, learning for this seven-year programme on climate-smart development took place in an informal way, driven by enthusiastic individuals. However, it soon became clear that the network was not systematically feeding lessons learnt back into organisational planning and decision-making. So, mid-way through programme implementation, they decided to embed learning properly.
To do this, they added learning questions and areas of focus to the CDKN business plan and provided additional budget for learning to take place. They also formalised the learning process itself, identifying key learning areas, deciding specific learning questions and choosing methods. They set up a dedicated learning team to implement the plan, deliver learning outputs and support the uptake of learning across the programme.
Of course, there were still challenges, especially when it came to implementing the learning plans. The reality, as CDKN shared, is that ‘reflection and learning rarely get prioritised over the day job’. But they also learned that having senior management support, sufficient resources and strategically relevant questions yields the biggest outcomes.
Setting learning objectives and indicators
While CDKN’s approach to learning is a great start, I would take it even further by adding learning-specific objectives, indicators and milestones into programme logframes or workplans.
Deciding which learning objectives to include – and the indicators you use to measure them – will depend on your project’s overall aims and objectives.
In a programme targeting youth unemployment in Africa, for example, a learning objective could be programme members’ increased knowledge of issues and contexts affecting youth employment in target countries. In a climate change and resilience programme, by contrast, a learning objective could be partner organisations’ increased capacities and skills to assess and manage the risks of climate extremes and disasters.
When it comes to indicators, my colleague Blane Harvey, who specialises in organisational learning, suggested to me that it’s helpful to include indicators for both learning outcomes and learning processes.
Tracking learning processes and not just outcomes makes it possible to look at the enabling environment for learning, which is extremely important. In this case, indicators could be about setting up and using the systems and processes to support learning.
Outcome indicators, meanwhile, could include cognitive changes (such as indicators about the level of knowledge or capacity among participants or changes in attitudes), normative changes (such as new rules or practices), or relational changes (for example, new ways of understanding and relating to one another).
Monitoring and measuring these changes may not be easy, but you could use incremental progress markers that break them down into smaller steps, as is done in outcome mapping. This IIED publication provides additional ideas on developing and categorising learning indicators.
Of course, it often comes down to budget and resources. Learning can’t happen on top of everything else if there is no time or money for it. There’s also the risk that learning becomes another tick-box exercise. And monitoring learning and adding more indicators to a logframe might not be everyone’s cup of tea, in which case we need to find other ways to foster curiosity and learning within programme teams.
But, if we are struggling to prioritise it as much as we seem to be, I think making ourselves accountable for learning could be a good start.