Evaluation vital for higher aim
US and British programs for broadening university participation offer useful lessons for Australia.
It costs $US7 billion a year, has been publicly funded since Lyndon Johnson held office, and is close to useless. What is it? Head Start, one of the most expensive public programs to use education as an instrument of social mobility.
Like many similar educational programs introduced across the US, Europe and Australia, Head Start thrived under the illusion of success for many years because unscientific evaluation methods prevented public scrutiny of its performance.
The federal government is seeking greater transparency in its $500 million Higher Education Participation and Partnerships Program, which is based on the premise that increasing university participation raises social mobility.
The draft reporting guidelines for the HEPPP released last week illustrate a government keen to monitor responsible use of public funds. There appears to be less understanding, however, that a scientific approach to national policy evaluation is a prerequisite for answering the question of whether the program has achieved its objectives.
The prelude to Australia's HEPPP was Aimhigher, instituted by Britain's New Labour government in 2001. Like the HEPPP, Aimhigher funded university partnerships with schools on the assumption that such partnerships would increase disadvantaged students' participation in higher education. After 10 years, no one could say whether the program had succeeded, because its evaluation methods were primarily qualitative. These methods, such as asking whether a student had higher aspirations after an outreach session, revealed nothing about the program's impact on students' academic achievement and subsequent ability to compete for a place in higher education. As a result, evaluations could not establish a causal relationship between program activities and the principal policy objective of raising university participation.
Palpable frustration with Aimhigher was expressed in the two years before its discontinuation. In 2009, the National Foundation for Educational Research fired a series of questions at Aimhigher's government administrators that resembled a cross-examination. What proportion of the various Aimhigher cohorts actually went on to higher education? Were there any differences in the proportion of young people from Aimhigher schools who entered university compared with young people from comparison schools with no such exposure? Were there any differences in the type of universities the young people went to? What proportion went to universities that traditionally have higher entry requirements?
Aimhigher evaluations showed only a slight increase in the proportion of the targeted cohorts progressing to university compared with controls, but the quantitative data came too late to improve the program. Last year the Conservative-led coalition government abolished the previous government's widening participation policy, including Aimhigher.
The reporting guidelines indicate a national evaluation of the HEPPP will take place in 2014. But the guidelines do not provide for a randomised trial, the principal enabling mechanism of scientifically valid evaluations based on cause-and-effect analysis.
The use of randomised trials in social program evaluation has a political history that arouses suspicion among some policymakers and researchers. Richard Nixon institutionalised the practice of measuring social programs by quantitative output measures. Later, George W. Bush established the US Department of Education's What Works Clearinghouse to conduct and publicise evaluations of educational policy. Political scientist Frank Farrelly contends that in the US, such rigorous evaluation requirements have been introduced selectively by conservative administrations to justify the elimination of social programs in Democratic constituencies.
The politics of evaluation have carried over into academe, where the paradigm wars erupted in the 1980s and 90s. Blaine R. Worthen, former editor of the American Journal of Evaluation, has openly criticised the paradigm wars, particularly the attack on quantitative evaluative methods, an attack he contends became a respected speciality in many humanities departments during the 90s. Melvin Mark of the Institute for Policy Research and Evaluation has observed that the warring camps were philosophically divided into postmodernists and constructivists, who favoured qualitative evaluation methodology, and positivists, who favoured quantitative methodology.
Worthen has proposed that, instead of avoiding quantitative methodology for fear it may reveal the problems of social programs and give elitist policymakers an excuse to terminate them, evaluators should embrace quantitative methods to advance social justice.
The Gillard government has enforced strong accountability measures for educational policy in schools. The HEPPP reporting guidelines indicate a commitment to extending this accountability to universities. But in the absence of randomised trials that test educational program outcomes against national policy objectives, it will be impossible to determine if the HEPPP has achieved its aims.
State governments are drawing up memorandums of understanding with universities for the purpose of regulating the HEPPP. The focus of the MOUs is on delineating territorial boundaries for university outreach programs, ostensibly to avoid duplication of program provision and institutional competition for low-socioeconomic student loading.
A more innovative and ultimately productive focus may be to encourage collaboration between universities for the purpose of evaluation research. This would be best administered by a relatively impartial third party, such as the soon-to-be-established Tertiary Education Quality and Standards Agency.
Adopting a scientific approach to evaluation of the Higher Education Participation and Partnerships Program may prove to be the single most important determinant of its long-term success.
Jennifer Oriel is a higher education policy analyst.