Reforms could hit target, miss point
IF there is one law of economics, it is that if you reward people for doing more of something, they will do more of it. Assuming that is what you want, so far, so good. But there is a less pleasant corollary: if people do more of what you reward them for, they will do less of those activities that are not rewarded, or are rewarded less, with results that can eliminate the benefits you were seeking.
These effects are at the centre of the debate raging about performance indicators and performance pay in many areas of public-service provision, including education and health.
Outcomes in these areas are not readily defined and many important aspects of those outcomes are inherently difficult to measure. And it is more difficult again to evaluate the effectiveness and efficiency of individual efforts, as the outcomes of (say) a hospital do not depend solely on its own efforts but also on the initial health status and risk attributes of its patients, just as the outcomes of a school are affected by its students' socio-demographic characteristics.
That each school or hospital will have a relatively small intake in each period further complicates matters, as it may pick up a particularly good or bad lot, in ways that cannot be fully captured in standardised measures and that distort apparent performance.
The result is that even the best crafted performance indicators will be very partial. This is all the more the case as the indicators, if they are to be of real use, must be relatively few in number. But once you focus on only a few key measures, the incentives to game the system become very strong. Suppliers aim at the target, not at the outcome, and performance suffers.
The British experience with performance indicators shows how serious these problems can be. Under the Blair government, schools were set targets defined by performance on tests. Predictably, that is what they focused on, especially as the intention was to penalise low-performing schools. As even the chief inspector of schools recognised, teaching became concentrated on those skills most important in the tests, with less attention being paid to all the other aspects of student development. But even that was only part of the problem, as the emphasis on testing created incentives for schools to select pupils, including by trying to get rid of those who were likely to be the worst performers. The result was to distort the allocation of students across schools and the education students received.
While causing those distortions, the system did little to improve performance, even on the tests. The best evidence available suggests that outcomes improved more rapidly in the final years of the Conservative government, when no targets were set, than they did in the Blair years. The failure to improve performance was compounded by deficiencies in performance evaluation, with numbers fudged and assessments massaged so as to avoid political embarrassment.
Even when it became clear that some schools performed very poorly, cutting resources or sacking staff proved too hard politically, all the more so as the statistical reliability of many of the indicators was poor.
The British experience with targets for hospitals proved even more distressing, with three features dominating the picture.
First, the headline targets came to determine the allocation of resources, regardless of how poorly those targets were related to the objectives being pursued. As a result, things were done that otherwise would have been regarded as completely unacceptable.
For example, to ensure patients presenting at emergency would be seen within the four-hour target, some hospitals required patients to wait in ambulances outside until the queues in emergency had reduced to the point where they could be admitted without putting the target (rather than themselves) at risk. Equally, the emphasis on reducing waiting times for elective surgery meant that resources were shifted from other, more important, uses, causing a statistically significant increase in death rates. A substantial spending increase therefore resulted in no improvement in health outcomes.
Second, the focus on targets induced evasion, including through manipulation of data. Just as Soviet managers routinely lied about whether they had achieved production quotas, so creative accounting became widespread, including by altering measurement rules to exclude cases that would have led to the targets not being met.
Third, as embarrassments mounted, the system was corrupted to make it less politically troublesome. Targets were made softer; the number of indicators was multiplied to the point at which it was impossible to understand what was being measured or achieved; and little effort was put into auditing glowing reports from the field.
Reviewing these outcomes, the Royal Statistical Society concluded that while performance monitoring "is broadly productive" when done well, "done badly, it can be very costly and not merely ineffective but harmful and indeed destructive".
This is not to say that performance in public services should go unmeasured or that the measurements made should not be disclosed. As Julia Gillard has rightly emphasised, performance measurement is an essential part of accountability. However, what needs to be understood is that there are inherent limits on performance measurement as a way of improving outcomes in the public sector.
These limits arise for a simple reason: when you measure public-sector performance, you are not measuring outputs but merely surrogates for outputs, and often very crude ones at that. Doubtless, exam results and test scores are important, but an education is much more than that. Equally, it is all very well to measure service availability and process quality in health care, but it is the health outcomes patients experience that ultimately matter.
It is precisely these inherent limitations of performance indicators that doom central planning to failure. The central planners, looking at the indicators, can never really gauge performance in the round: as a result, they cannot provide sensible incentives for good performance to be secured.
In contrast, consumers of the service, in most instances, can weigh up the different elements that comprise performance and can evaluate, on the basis of their experience, the quality of the schools their children attend or the outcomes of the health care they obtain.
As a result, performance measurement is never enough: rather, to be effective, it needs to be accompanied by policies that promote choice. The performance indicators government secures should help inform that choice, but they will never be able to replace the disciplines that choice brings. Indeed, without choice, the indicators can readily make matters worse, as consumers find themselves trapped in a world in which the indicators merely distort the decisions taken by service providers.
This then is the acid test for the Rudd Government: whether it will simply try to make central planning more effective or whether it will move to expose public-sector service providers to the pressures of competition.
If it stops at performance indicators, it may create a system that hits the target, but it will certainly miss the point.
Henry Ergas is chairman of Concept Economics.