Growing pressure for reform suggests that development needs to be done differently. Members of the Doing Development Differently (DDD) movement have come up with some key common-sense principles in this regard: starting with problems, not solutions; taking account of politics; taking risks; being 'entrepreneurial' and learning from mistakes; and supporting locally-led changes that are appropriate to context. In light of this agenda, ODI recently organised a workshop attended by a broad mix of academics, researchers and practitioners, to figure out how DDD might look in practice and how it can be managed.
Many participants readily admitted that elements of the DDD agenda are not particularly new, and also highlighted several past projects that had been carried out along similar lines, and within existing systems. Indeed, I couldn’t help but think that successful projects are usually characterised by negotiation, coalition-building, problem-solving of politically relevant issues, and lesson-learning from setbacks along the way. Isn’t this just being good at your job? But if that’s the case, what’s new or different about DDD?
In fact, while these principles are familiar and not very controversial, they have not been systematically applied by development agencies, despite growing evidence of their impact. So the key benefit of putting these ideas into a carefully constructed framework is that it could help us to 'do DDD differently': that is, to reflect on why flexible, feedback-heavy and politically smart approaches have not been systematically implemented by organisations, but have instead been the preserve of 'lone rangers'. Given growing external pressures for reform of development practice, this reflection is particularly timely.
During our discussions, participants identified two key issues where management changes could help with Doing Development Differently: monitoring systems; and understanding the role of 'policy entrepreneurs'. Using DDD principles, it is worth reflecting on why these changes might be difficult to implement and which strategies might help to do so.
Participants identified monitoring systems that focus on outputs, as opposed to outcomes, as a hurdle to a DDD approach. If you specify outputs in advance and insist on sticking to them, you shut down 'purposive muddling' and the potential to use feedback to make helpful adjustments to a planned reform. Outcome-based indicators could advance DDD because they focus on what a reform has achieved – the problem it has solved and the function it delivers – not just the form it has taken. This argument is familiar, so the obvious question is why development projects still rely so heavily on output measures for monitoring.
One key reason is attribution: it is hard to spend someone else’s money without being able to tell them what effect that money had. So for funders it is much easier to take credit for outputs like a new tax office building, or the passage of a new law on fiscal rules, than to attribute outcomes like a rise in tax revenue to their support. Agencies interested in DDD might therefore find it helpful to develop and experiment with innovative approaches to attributing outcomes.
For example, if many different organisations have supported key sectors, such as maternal health, each one could compare its contributions to those of other organisations and 'share' attribution of any fall in maternal mortality rates accordingly. Or different outcome indicators could be used: in the previous example of a tax office, an outcome indicator on the 'tax gap' may be more attributable than revenue growth. Indeed, indicators on the 'implementation gap' between laws/policies and real-life practice may be a useful tool for DDD-safe monitoring, because such indicators are agnostic about which laws or policies are chosen, focussing only on whether they were implemented.
The second reason that output indicators end up carrying much of the monitoring load is that many people continue to believe that there are 'right ways' and 'wrong ways' to do reform. In some areas, like accounting standards or governments' charts of accounts, they are probably right. But in others, like budgeting, there are many options that could work well or badly in different contexts.
So applying DDD principles to a public finance or governance reform programme might change the way it is monitored in two ways: firstly, an increased emphasis on outcome indicators; and secondly, a more flexible approach to output indicators. The latter could be adjusted as the programme progresses, in light of evidence on which outputs are most efficient at delivering the outcomes we're after. Such changes may be difficult for many headquarters-based wonks, as they would lose some of their grip on the technical details of reform, instead investing their trust in those closer to the action. And it would be quite a change compared to some reform targets that are simply copied and pasted from PEFA output indicators. But shifts in control are not a new idea, and evidence from performance-based budgeting may be helpful in considering innovation in this direction.
Participants engaged in extensive discussions on 'policy entrepreneurs' – the key local players who actually drive change – and on the effect that a clearer understanding of their role could have on development practices. In this, the most heartening point for me was the dawning realisation of DDD's radical proposition: donors could no longer see themselves at centre stage in reform. Instead, the key words were broker, facilitator, and convenor; working in support of local policy entrepreneurs who drive reforms with their local networks and coalitions.
However, there appeared to be an unspoken assumption in this discussion: that donors should identify and support policy entrepreneurs directly, an approach reminiscent of 'picking winners' in private sector development. This is hard – you probably only know who the policy entrepreneur is after they have been successful, and, even then, it may not be clear, since coalition builders do not work alone. Participants also raised serious doubts about whether it was appropriate to provide policy entrepreneurs with external funds, remarking that a brokering/facilitating approach was both cheap and difficult – and thereby the opposite of the easy and expensive approaches that donors tend to prefer.
Given the structure of donor organisations and the incentives they face, it seems unlikely that a direct approach to supporting policy entrepreneurs is scalable. Again using the analogy of private sector development, donors might be better suited to programmes that create an 'enabling environment' for policy entrepreneurs: creating the conditions that would help any of these critical individuals who may be at work. Developing such programmes is a rich research agenda. And there is a double whammy if these measures are easy and expensive: any brokering and facilitating work that is going on can then be relieved of the large disbursements that can lead to suffocating monitoring requirements.
My suggestion for this programme is a simple one: make sure you have the baseline outcome data. We already know that data, as a public good with spillover effects, is just the sort of thing donors should be delivering. But it is even more important when we think about DDD. Data help policy entrepreneurs to identify and highlight problems that could be targeted for reform, and to raise them up the political agenda with stakeholders. The availability of outcome data enables policy entrepreneurs to track a project's impacts, regardless of whether these were initially planned or moved into the frame as a result of feedback and learning. This is so important that DDD should really add a fourth D – for data. And conveniently, the collection, production and publication of data are cases where output and outcome indicators are aligned, so innovative instruments such as the Data Compact mean that supporting data could be both easy and expensive, as required.
Most elements of Doing Development Differently may be familiar, but two things are different. One is the timing: a new framework has appeared just as demands for change to aid practices reach new heights. The other is the opportunity to 'do DDD differently': to learn from past mistakes about why long-overdue reforms to development practice have been so difficult, helping us figure out new ways to finally implement them.