November 2016

As governments in developing countries around the world prioritise spending on the delivery of services, it becomes increasingly important to verify that these services are being delivered successfully – that schools are conducting classes, clinics are treating the sick and infrastructure is being constructed. To do so, governments undertake monitoring, inspection and evaluation (M&E) activities, conducted by teams that travel to the service site and report back on their findings.

Through my work as an ODI Fellow in the Ministry of Finance, Planning and Economic Development in Uganda, I have been looking at the ways in which M&E and inspections instruments can be used to improve the delivery of essential services to communities. When the system is working well, assessments of public services should yield recommendations on how services can be improved, which would then be included in planning and budgeting for future years – an important feedback mechanism between the budget process and the community’s needs. If monitoring, inspections and evaluations are not carefully managed, however, they can quickly become an expensive and ineffective drain on public resources.

So what might explain why M&E and inspections functions are not as efficient and effective as they could be? Here in Uganda, I can point to three main reasons.

Firstly, evaluations, inspections and monitoring are spread across a range of government agencies. Various government bodies have overlapping mandates to conduct M&E and inspections, including central agencies, line agencies, local governments and donors – with the result that services in a particular location can be visited by assessment teams more than once within a short time. This is a wasteful duplication of effort and places a larger-than-necessary burden on service-delivery staff. On occasion, donors exacerbate this problem by setting up separate M&E activities for their projects, rather than working with government agencies to conduct joint, sector-level monitoring of service delivery.

Secondly, there is no clear framework for how information collected through these monitoring activities should be shared across government. With so many teams collecting so much information, there needs to be a better way of storing and sharing these results. At the moment, teams are travelling to the field to collect information that previous teams have possibly already collected; making information easier to access and share would reduce this repetitive and inefficient collection of data.

Thirdly, in many instances the recommendations that result from monitoring, inspections and evaluations are not implemented or followed up by policymakers. This lack of follow-up includes a failure to act on identified problems (such as poor-quality construction of facilities), and a lack of accountability for the public officers responsible for poor performance (or of reward for those who have performed well). This is compounded by the absence of a formal process for recording, tracking and implementing recommendations made by M&E and inspections teams.

In light of this, I would argue that a single government agency should be made responsible for overseeing the whole of the government M&E process: from deciding which agency should be assessing which service, to coordinating field teams, to collecting and disseminating field reports, and to tracking when and how government follows up on recommendations. This would have three main benefits. The first would be better coordination across government of M&E and inspections activities, so that these activities are not being conducted by multiple teams at the same time. The second would be improved sharing of the data and information collected through inspections and M&E, so that the government can make informed policy decisions based on strong evidence. The third would be a more consistent process for following up on and implementing the recommendations made by field teams, so that M&E and inspections become a tool for improving the quality and efficiency of service delivery.

And beyond Uganda? As a general rule, M&E activities that create a large amount of poor-quality data scattered across government are of little use; far better to have fewer monitoring teams and to make the best possible use of the information they collect. To achieve this, several lessons can be drawn from Uganda's experience and applied more broadly.

Firstly, as other authors have argued, there are significant benefits to be gained from having a single, capable central agency in charge of the monitoring and evaluation of government services – even if service delivery itself is decentralised. Secondly, making sure that the legal framework is clear on which institution is responsible for overseeing which service is important in ensuring that there is no overlap in M&E mandates. Thirdly, storing the results of M&E activities in a single, easily accessible location can improve access to evidence for policy-making.

Fourthly, better discipline and accountability among the public servants responsible for service delivery can be achieved through proper inspections and monitoring, even where the overall quality of governance is weak. Finally, donors can support better M&E in partner governments' systems by using the M&E systems already in place locally, rather than setting up their own M&E teams for each project.

Drawing the control of monitoring, inspections and evaluation activities into a single government agency will contribute to making this aspect of governance more effective and efficient, and will ultimately lead to improvements in the quality and reach of the services governments provide to communities.

Christine van Hooft is an ODI Fellow in the Ministry of Finance, Planning and Economic Development in Kampala, Uganda. Opinions expressed in this post are those of the author and do not represent the policies of the Government of Uganda.
