On 7 August, consultations opened for the revised Public Expenditure and Financial Accountability (PEFA) framework. Launched in 2005, PEFA has become the ‘go-to’ measure of quality in public finance systems. Now, nearly ten years on, sweeping changes are being proposed. But PEFA’s members should tread carefully: it will take a lot more effort to make sure the new indicators are used appropriately.
Firstly, what can we expect from this makeover? Overall, PEFA is getting bigger and denser, but not necessarily much better. Nearly all the indicators have changed and the number of ‘dimensions’ that need to be measured has increased. Still, the overall framework is the same, and we’ve heard from early ‘testing’ that the new PEFA gives pretty similar scores to the old one. That’s good for continuity, even if individual indicators can’t be directly compared. But many factors still muddy the waters – scope for interpretation, bad fiscal statistics, different modes of governance and the focus on processes rather than outcomes. PEFA cannot be expected to resolve these fully, but they do limit how accurate an assessment can be.
Significantly, the removal of the three donor indicators sends a poor message about accountability for public finance outcomes. Generally, the donor indicators have not been scored well, or even at all. That alone should be an embarrassing finding. Donors can be significant players in the finance systems of many countries, and not always for the better. Aid processes can make budget coordination more difficult, resources more volatile and bank accounts difficult to reconcile. The 2010 Public Expenditure Review for Sierra Leone offers a good example of how budgeting is complicated by aid volatility and the IMF. Ironically, the three new indicators (for setting fiscal policy, public investment management, and asset management) can all be heavily influenced by donor activities. A good fiscal strategy depends on knowing how much money you can expect to have in the coming years, which is not easy if you are a fragile state that depends on budget support. Public investment management is often outsourced to donors, usually off the budget, which in turn makes it difficult to identify and value ‘public’ assets.
These issues are largely within the control of the Secretariat and its members. Clearer guidance on terminology will make assessments more precise. Assessors could also be required to note clearly when data do not add up, and where they have faced technical challenges in preparing the PEFA report. The donor indicators should be maintained – or, at the very least, specific explanations and tables should be required where donors have the greatest impact on country systems, such as fiscal forecasts and banking arrangements.
Ultimately, the main concerns with the PEFA framework lie not in its content but in the way the framework is used. This will be much more difficult to address, because it lies in the personal and institutional incentives that underlie the aid architecture. The PEFA review is an opportunity to remind donors of three clear messages.
1) PEFA indicators make good measures, but poor targets. The worst abuses occur when a PEFA assessment is used as a blueprint for reforms, without regard to country context – a kind of ‘off-the-shelf’ reform plan with actions to improve the scores of all 28 (now 30) indicators. PEFA does not tell you why public finance systems are weak, any more than the Corruption Perceptions Index can tell you why corruption happens. Such score-driven reform plans are therefore rarely an appropriate intervention.
2) There are no straight-A students. While it seems intuitive that ‘A’ is better than ‘B’, no country will score ‘A’s in all indicators. Nor is that necessary. Public finances are set within different accountability arrangements: Anglophone countries score better on external oversight, while Francophone countries get better results for internal controls. In 2008, Norway scored several ‘C’s and a couple of ‘D’s, with particular ‘weaknesses’ in internal audit and procurement. Its response was very rational: the procurement systems needed improving, but internal audit reforms were not necessary given the strong internal controls already in place. So a system is not ‘good’ (or ‘bad’) just because a PEFA score says so.
3) Beware of gaming, on all sides. When PEFA indicators are linked to resources, gaming is inevitable. During assessments themselves, there may be incentives for governments to massage evidence, and for donors to encourage a higher score. Similarly, people will target reforms that improve PEFA scores with the least amount of effort. With the new indicators proposed, we should be ready for a surge in ‘cheap’ fiscal strategies that are not adhered to (the new indicator PI-CFS); semi-independent project evaluation units (the new PI-PIM) that simply sign off politically motivated projects without question; and performance budgets (the new PI-23) that report using bad data.
The PEFA Secretariat can play its part in combating bad behaviours. The most adroit way would be to enforce requirements for the Summary Assessment – something of an executive summary. In it, assessors can be required to draw linkages between areas like budgeting and cash management, and to suggest where the challenges are most acute or more analysis is needed. It could also be useful to note where government officials believe the greatest challenges lie. This should be combined with guidance targeting donors and assessors directly, explaining the limitations of PEFA, how it can be used more effectively and how to manage expectations.
In the end, it is up to the donors to play along. They will need to stop linking aid disbursements or allocations to changes in PEFA scores, and ensure assessors have the right incentives to deliver helpful and independent reviews. Donors also need to be prepared to move together when the PEFA framework changes, to avoid confusion – just imagine the government official trying to keep track of both the old and new frameworks at the same time. But if they want somewhere to start right away, they should remember those three messages: ‘PEFA is not a target’, ‘there are no straight-A students’, and ‘beware of gaming’.
Changes to the framework are now inevitable, even if the benefits may be relatively limited. PEFA will remain a good (but crude) indicator of public finance performance that can point to where there may be problems. But the Secretariat and its members should reconsider how the impact of donors on government systems will be assessed in the absence of the three donor performance indicators. The Summary Assessment should also be strengthened to guide further analysis and interventions. If donors finance poorly contextualised reforms on the basis of a crude indicator, it is unrealistic to expect improvements in public finance and development outcomes. For that we need new behaviours, not new indicators.