The Learning Alliances approach was used as a way of generating knowledge and fostering innovation processes. The authors indicated that it can be used to “strengthen capacities, generate and document development outcomes, identify future research needs or areas for collaboration, and inform public and private sector policy decisions.”
Key principles for successful learning alliances
- Clear objectives (what does each organization bring to the alliance?)
- Shared responsibilities, costs and credit (since it seeks to benefit all, responsibilities should be shared)
- Outputs as inputs (outputs are used as inputs in the process of rural innovation)
- Differentiated learning mechanisms (more than one learning mechanism is needed, as participants have different needs; e.g. participatory monitoring and evaluation, innovation histories, conventional impact assessment)
- Long-term, trust-based relationships (it takes time to influence and understand change)
Monitoring and Evaluation (M&E) is an integral part of good project management. At the most basic level it may simply focus on outputs, i.e. were the intended deliverables of a project completed on time and to a sufficient standard? In the context of complex, multi-stakeholder, multi-disciplinary, learning-orientated and innovation-focused projects, M&E needs to be much more elaborate.
Although it may be possible to define the goal of the project, it is almost impossible, at the outset, to design the right set of actions to achieve this goal in different cities. And even if it were, the ground in cities is constantly shifting.
People change positions, institutions evolve, politics follows its course and innovation (hopefully) happens. For these reasons, project design needs to be constantly revisited. Projects need to continually learn and re-orientate themselves in order to be successful.
Monitoring impacts is possibly a step too far. Indicators of most of these changes could realistically only be expected to show significant change over a timescale of decades. Monitoring impacts will probably be too late and too distant to influence project implementation. So what is the alternative?
This briefing note argues that the SWITCH project and its learning alliances should focus on the outcome level. Outcomes fall between outputs and impacts: they are more than the production of deliverables, but they reflect more immediate changes than the ultimate impacts sought. Most of the indicators and targets proposed in this note relate to these kinds of intermediate changes. To be successful, the monitoring must form the basis for evaluation and for changes in project implementation. It is assumed that a mix of both quantitative and qualitative indicators and methods will be needed for monitoring change. Some things may be relatively easy to monitor, especially the hardware (e.g. the number of people served, use of different services, etc.), but others are much more difficult to track, especially the software (e.g. perceptions, behaviour change, collaboration, etc.).
Since learning alliances place emphasis on these software issues, specific tailored methods are required. One tool that is suitable for monitoring these ‘software’ outcomes is known as descriptive ordinal scoring or ‘micro-scenarios’. The micro-scenario method, if used, should complement other approaches to monitoring change, such as process documentation methods. Where significant levels of resources are available for M&E, these other methods may be appropriate, and resources linked to them will always be useful for inspiration. However, with relatively limited resources for monitoring, most learning alliances will only be able to implement a simplified alternative like micro-scenarios.
Micro-scenarios as a framework for M&E are intended to:
- break down barriers to both horizontal and vertical information sharing and learning
- speed up processes of identification, development and uptake of solutions
The approach draws on the Methodology for Participatory Assessment (MPA) and on Qualitative Information Appraisal (QIA). Both use participatory methods to record people’s perceptions; QIA then translates these descriptions into scores and numbers.
The micro-scenarios scoring method provides a starting point for reflection on these types of objectives. It potentially allows for some comparison across cities where the indicators and scenarios are common.
- Stakeholders choose the micro-scenario that most adequately reflects the situation.
- Ordinal scoring options are benchmarked and peer-reviewed.
- The reason for a specific score is recorded.
- Identify key change objectives together with stakeholders. It is important to ensure unambiguous wording so that all involved understand the indicators.
- Identify the different levels: ‘micro-scenarios’.
- Identify a ‘benchmark’: what is the minimum acceptable level we would like to achieve?
- Identify a ‘baseline’: what is the current level?
- Monitor at regular intervals: record, reflect and discuss why change has taken place (or not), and what actions are required.
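The steps above can be sketched in code. This is a minimal illustration of one way to structure micro-scenario scoring, assuming hypothetical names (`Indicator`, `record`, the scenario texts); it is not part of any SWITCH toolkit:

```python
# Sketch of the micro-scenario ordinal scoring workflow.
# All names and scenario descriptions are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class Indicator:
    """A change objective monitored with descriptive ordinal micro-scenarios."""
    name: str
    scenarios: list       # ordered worst-to-best descriptions; index = ordinal score
    benchmark: int        # minimum acceptable level to achieve
    baseline: int         # level at the start of monitoring
    records: list = field(default_factory=list)  # (period, score, reason)

    def record(self, period, score, reason):
        """Record a score at a monitoring interval, with the reason for it."""
        if not 0 <= score < len(self.scenarios):
            raise ValueError("score outside the defined scenario scale")
        self.records.append((period, score, reason))

    def latest(self):
        """Most recently recorded score, or the baseline if none yet."""
        return self.records[-1][1] if self.records else self.baseline

    def meets_benchmark(self):
        return self.latest() >= self.benchmark


# A hypothetical indicator for collaboration among alliance members:
collab = Indicator(
    name="Collaboration among alliance members",
    scenarios=[
        "No contact between organisations",
        "Occasional information exchange",
        "Joint planning of some activities",
        "Shared responsibilities, costs and credit",
    ],
    benchmark=2,
    baseline=1,
)

collab.record("Q1", 1, "Meetings held, but no joint activities yet")
collab.record("Q3", 2, "Two organisations co-designed a pilot demonstration")

print(collab.latest())           # 2
print(collab.meets_benchmark())  # True
```

Keeping the reason alongside each score preserves the qualitative description that gives the ordinal number its meaning, which is what allows the regular record-reflect-discuss step and some comparison across cities where indicators are common.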