CHRIS CLARE

2. DO INDUSTRIAL APPROACHES TO QUALITY
MANAGEMENT AND PERFORMANCE INDICATORS
WORK FOR HIGHER EDUCATION?

INTRODUCTION

Considerable energy and resources in UK higher education are devoted to quality
assurance, review and audit. This has resulted from a
variety of initiatives, many of which started to emerge some thirty years ago. At
the start of the 1980s a belief in the need to increase efficiency in higher education
emerged, as a consequence of general Government policies to increase public
accountability and performance through a more market-oriented approach to all
public services. In this chapter we explore some of the issues and debates that this
has raised within higher education as they have emerged in the literature over this
period.
Attempts at more direct involvement in higher education by various government
bodies including the Department of Education and Science, the Department of
Trade and Industry and the Department of Employment (through the Manpower
Services Commission) (Maclure, 1989) were one aspect of the pressures to bring
greater Government control to the sector. Both the Green Paper in 1985, and
the 1987 White Paper emphasised that higher education should be geared towards
the needs of business and industry and that there should be greater scrutiny of the
performance of universities (Department of Education and Science, 1985; 1987).
Much of the initial criticism was directed towards the universities as opposed to
the polytechnics, which were under the jurisdiction of the Council for National
Academic Awards (CNAA). The polytechnics were also scrutinised by the Department of
Education and Science and were subject to formal inspection of teaching and other
operations by Her Majesty’s Inspectors (HMI). However, the White Paper and the
subsequent Education Reform Act (1988), followed by the Further and Higher
Education Act (1992), effectively led to the removal of the CNAA as a national
quality assurance body through the granting of autonomy to the ex-polytechnics
(Department of Education and Science, 1988; Department of Education and
Science, 1992).
The 1992 Act unified the higher education sector by removing polytechnics
from the direct control of Government or local authorities. The Act also enabled
the polytechnics to adopt university titles, and have the full degree awarding
powers of the traditional universities. Funding for the new unified sector was
channelled through the Higher Education Funding Council for England (HEFCE)

G. Bell, J. Warwick and P. Galbraith (Eds.), Higher Education Management
and Operational Research: Demonstrating New Practices and Metaphors, 31–48.
© 2012 Sense Publishers. All rights reserved.


which was also charged with ensuring value for the money that was allocated to
the universities.
During the debates of the 1980s, there appeared to be a need to develop clear
systems of quality assurance, in part to justify Government expenditure on higher
education (Kells, 1999). This resulted in pressure for the development of metrics
and performance indicators as part of the system of monitoring universities
(Harvey & Knight, 1996). As a consequence, the use of performance indicators for
higher education was one of the thrusts of Government plans to emphasise
efficiency and effectiveness in the management of universities.
Various groups and committees had made proposals for the development of
metrics and indicators (Cave et al., 1997). These included the Jarratt report (CVCP,
1985) that proposed the introduction of a set of performance and other indicators
for use by institutional managers. The National Advisory Body for Public Sector
Higher Education (NAB) also published a report, which recommended a series of
performance indicators for use in the polytechnics (NAB, 1987). The Warnock
report proposed the development of metrics to be used in assessing teaching
quality (PCFC, 1990a). In the same year, another group, initiated by the
Polytechnics and Colleges Funding Council (PCFC), undertook a study into the
potential use of performance indicators for institutional management (PCFC,
1990b).
These moves towards increased measurement of the activities of institutions
were seen as a symptom of “managerialism”, through attempts to increase
efficiency and reduce costs using methods to assess institutional performance
(Trow, 1994). This trend towards managerialism was also felt to be symptomatic
of a lack of trust by Government in the academic community to maintain
appropriate levels of quality control at a reasonable cost (Harvey & Knight,
1996).

QUALITY ASSESSMENT AND AUDIT IN HIGHER EDUCATION

In order to discharge its duty of ensuring value for money, the newly formed
Higher Education Funding Council for England set up mechanisms for quality
assessment on a subject basis, focussing on qualitative self-assessment, coupled
with inspection visits along the lines of the former CNAA/HMI (HEFCE, 1993).
However, approaches to quality management and enhancement also included the
measurement and testing of an organisation’s own systems of quality assurance
and, to achieve this, a separate organisation for higher education was set up. This
organisation was the Higher Education Quality Council (HEQC) and it was owned
and part-funded by the universities themselves. The responsibility for “quality” in
English universities was therefore vested in two essentially separate organisations,
each of which adopted a different approach to its work and placed different
demands on the universities to prove compliance with defined quality standards
(HEQC, 1996).
Both organisations developed systems that required institutions to produce
written self-assessments, backed by substantial amounts of evidence in the form of
