Project: Collaborative benchmarking in public services: Lessons from the UK for the Brazilian public sector
Authors: Salvador Parrado and Elke Loeffler
Product III: Final version of the report, with adjustments requested by the MPOG
Contact: salvador.parrado@govint.org
Elke.loeffler@govint.org
(This report is for internal use only. If the report is to be published online or in print, written
permission for the reproduction of tables and graphs is required).
Contents

Executive summary
1) The benchmarking initiatives in the UK public sector
   a) Compulsory Competitive Tendering and (Total) Quality Management as starting points
   b) The Best Value Programme as a central government benchmarking programme for local government
   c) League tables for performance comparison and greater accountability
2) Voluntary benchmarking clubs or networks
   a) Different types of benchmarking networks
   b) Methodology of the APSE network
   c) Peer review as a complement to benchmarking processes
3) Advantages and limitations of different approaches
4) Making benchmarking work: critical success factors from UK benchmarking projects
   d) Contextual factors
   e) Organisational issues
   f) Organising and planning the benchmarking exercise
5) Making benchmarking sustainable: integrating benchmarking in wider public service and governance reforms
   g) Principles to apply in the implementation of benchmarking
   h) Phases for individual agencies that use benchmarking
6) Resources
   i) Benchmarking literature on UK experiences
   j) International research on public sector benchmarking
Executive summary
1. This report was commissioned by the Brazilian Ministry of Planning, Budget and Management through the Institute for the Strengthening of Institutional Capacities (IFCI) and financed by the British Council in Brazil. The objective of the Technical Cooperation Project "Supporting Public Administration in Brazil" is to assist the Ministry of Planning, Budget and Management in identifying experiences with benchmarking in the United Kingdom.
2. The study is based on a review of official reports and academic research on benchmarking in the British public sector. It covers benchmarking experiences from central and local levels of government in the United Kingdom. In practice, most benchmarking exercises have been conducted at the local level, but lessons from these levels of government can be applied in the Brazilian federal government.
3. The report is structured in five chapters covering the British experiences, success factors and sustainability of benchmarking. The first part deals with the main experiences of benchmarking in the United Kingdom. These experiences cover the whole spectrum of benchmarking, including benchmarking against performance targets, league tables and voluntary exercises that focus on gathering information for performance improvement. Chapter 2 offers a more specific description of the methodology of benchmarking clubs or networks. Chapter 3 outlines the different advantages and disadvantages of the experiences based on a classical typology of results and process benchmarking. Chapter 4 deals with the success factors of benchmarking. Chapter 5 focuses on the benchmarking process from the perspective of individual agencies and ways of making this process sustainable.
4. The history of benchmarking in the public sector has its roots in the mid-1980s (for local authorities) and early 1990s (for national agencies), when the Conservative government of the time wanted to bring public management closer to private enterprises. Tendering was made compulsory for local authorities in order to identify whether in-house staff would be able to deliver services more efficiently than private enterprises. At national level, some pilots were established in order to benchmark the performance of agencies against private sector organisations.
5. In the 1990s, performance indicators and comparisons were introduced for local councils and police forces. A considerable number of performance indicators were established for these authorities to collect data. The Audit Commission used to publish these data in league tables without further comment. However, the mere publication of the information triggered some interest among local managers in improving performance.
6. During the 2000s, a Best Value regime was established for local authorities, and the government made it compulsory for them to review their services against a set of criteria. One of the criteria required local authorities to carry out performance comparisons. A considerable number of performance indicators was used for this comparison.
7. After the end of compulsory benchmarking, many public organisations decided to join benchmarking clubs voluntarily. These clubs cover a wide range of services: housing, hospitals, primary health care and regional partnerships. They provide a softer approach than audit regimes, in which auditors check whether particular authorities have achieved specific performance targets.
8. Benchmarking clubs, or networks as they have been labelled more recently, are the closest experience to the Brazilian benchmarking colaborativo. These benchmarking networks exist in sectors such as housing, health and regional development, as well as in local authorities. In most cases, joining a network is based on the payment of an annual fee by the authority, which may range between $4,000 and $10,000 a year.
9. This report provides a detailed explanation of the methodology used by APSE (Association for Public Service Excellence), a benchmarking network for local authorities with more than 300 current members. The benchmarking exercise is based on quantitative performance indicators that are defined and developed by the members. The data collection is subject to a validation process and strict confidentiality rules: data is not to be disclosed to third parties by any of the network members. Based on this information, local authorities can receive a variety of reports for up to 14 service areas. These reports compare the scores of the authorities with their family groups (for instance, authorities of similar size) and with the whole service set. In addition, there is an award for the best and most improved performers, which are showcased in case studies and benchmarking meetings.
10. The report also suggests complementing benchmarking networks with a scheme of peer reviews, as conducted by the former Improvement and Development Agency (IDeA). Although peer review does not necessarily form part of a benchmarking network, it can provide good support for a more qualitative benchmarking approach. The basic idea of this type of review is that peers from other authorities assess the performance of one organisation against an agreed benchmark, following a structured process. This process includes self-assessment against the benchmark and an on-site visit by the peers of up to three days to assess the organisation. The exercise finishes with a final visit by the peers to see how the organisation has developed after implementing the improvement plan that was produced as a result of the peer review exercise.
11. In sum, the history of benchmarking in the UK has had different milestones. It started as an exercise to make public sector organizations more similar to private sector businesses. It then developed to impose national standards on local authorities. More recently, it has been used on a voluntary basis to spread good performance among public authorities in different sectors.
12. Benchmarking can be performed in three different ways: against results, against processes, and through quality award schemes. Regarding results benchmarking, the most controversial issue is whether indicators should be published in order to enhance accountability to service users. Research by the Audit Commission has shown that the publication of performance information on local public services makes citizens aware that such services exist, even though most citizens will not be very interested in general performance information. Performance information needs to be targeted at specific target groups so that it reaches people who are interested in that specific service. Regarding process benchmarking, the challenge is to have a best performer within the club; otherwise organisations would be copying mediocre public sector organisations. Finally, award schemes may be problematic, as they may only offer a platform to recognise and celebrate well-performing organisations. However, the Beacon schemes in the UK for different sectors (schools, local authorities and health) could be a relevant experience for Brazil, because award winners have to share their good practice through a very structured follow-up scheme which involves more than just publishing information about the award winners.
13. There are several success factors for benchmarking, clustered in three categories: political factors, organisational features and benchmarking factors. Some conclusions have been drawn from this analysis. Benchmarking among public sector organisations governed by leaders from different political parties is jeopardized by conflicts over the interpretation of the data. Benchmarking against targets (where gaming with data is a risk) or league tables (where competition does not allow for true interorganisational learning) can be risky. Although the external validation of the data through external audits can be helpful, it also raises gaming issues. However, organizing benchmarking clubs with certain golden rules may be of good value for the benchmarking organizations.
14. In order to make benchmarking sustainable, individual agencies need to consider some principles: 1) start with outcomes and outputs, and compare and improve processes and inputs afterwards; 2) try to understand the results of the agency (outputs and outcomes) from the point of view of users and stakeholders; 3) engage not only top managers, but also middle managers and front-line staff in the benchmarking exercise.
15. Finally, benchmarking, like any other public management improvement tool, needs to be integrated into the policy and public management cycle by defining the services/policies that will benefit most from benchmarking; measuring and comparing performance; managing change; improving services; and evaluating the improvements.
1) The benchmarking initiatives in the UK public sector

a) Compulsory Competitive Tendering and (Total) Quality Management as starting points
16. This report uses a broad definition of benchmarking in order to capture different
conceptions and uses of benchmarking in the UK public sector. The definition used in this
report considers benchmarking as a tool for performance improvement. It goes beyond
measurement to tell a manager HOW to achieve better performance. By analysing and sharing
information about your own performance with partners who are achieving better performance, you
will learn where improvement is needed, how to achieve it and what impact it might have on your
overall success rate.1
17. The context is highly relevant to the success of benchmarking. Obviously, there have always
been many voluntary efforts by public organisations in the UK and other countries to improve
performance through structured performance comparisons and collaborative learning. However,
contextual factors such as government policies can strongly influence the scale and effectiveness of
benchmarking of public organisations.
18. The development of benchmarking policies in the UK dates back to the Margaret Thatcher era (1979-1990), with its focus on emulating the private sector. One key mechanism used by central government to improve the performance of the public sector was the creation of alternative means of replicating the pressure to improve that exists in the private sector. This included requirements set by central government for local authorities to put certain activities out to competitive tender, and for all public services to consider contracting out functions to the private sector.
19. Compulsory Competitive Tendering (CCT) was introduced by the Conservative
Government throughout the 1980s in an attempt to bring greater efficiency to local
government and health services through the use of competition. While it is generally recognised
that strong incentives were needed to stimulate reform, compulsion resulted in resistance by local
authorities and health trusts, an immature market and poorly-conducted procurements which
focused on price at the expense of quality and employment conditions. In particular, research
revealed three negative impacts of CCT2:
- a systematic worsening of the pay, terms, and conditions of employment of workers providing local public services;
- a reduction in the scope and power of local government; and
- the weakening of the stabilising tendencies of public services within local and regional economies.

2 Foot, J. (1998).
20. At central government level, there was a programme to compare public agencies. In 1995, the Deputy Prime Minister announced his intention to benchmark the performance of central government Next Steps Executive Agencies against the private sector. At this time 375,000 civil servants - 71% of the total - worked in the 125 Next Steps agencies, or organisations operating fully on Next Steps lines. Benchmarking performance on such a scale had never been done before. It was therefore decided to launch a benchmarking pilot3 first.
21. Before rolling out benchmarking to a wide range of agencies, it was decided that benchmarking should focus on comparing specific functions, such as human resource management, which are common to all organisations. Since a key objective of benchmarking was to facilitate comparisons with the private sector, the Business Excellence Model of the European Foundation for Quality Management was selected as the methodology. A total of 30 agencies, supported by external consultants, undertook a self-assessment process on the basis of the 91 questions related to the nine themes of the Business Excellence Model. When the pilots had completed their self-assessment, their scores were compared against the standard set by the UK Quality Award winners.
22. Comparative data seemed to be helpful for those agencies. Although the British Quality
Foundation could not release the results from individual companies for reasons of commercial
confidentiality, it was possible to compare the average scores under each criterion held on the
database for the private sector against those for the agencies4. These data made it possible to
identify those areas where public sector organisations appear to be performing particularly well, as
well as areas where further improvement appears necessary.
23. Following self-assessment, the public sector organisations identified key areas for
improvement and developed appropriate action plans. Many of the improvement actions reflect
the areas where the self-assessment scores identified weakness in performance, in particular
communication. A number of public sector organisations used the results of benchmarking as a
catalyst to drive forward initiatives to improve links both between staff and management, and
between the agency and its key customers.
24. As experience with these initiatives grew, flexibility in benchmarking was introduced. The UK central government changed from specifying the use of particular tools towards allowing organisations to select the techniques most appropriate to their circumstances, though they may be challenged to justify their choices. This freedom, however, exists within the context of moving towards measuring and publishing organisations' performance in league tables. Through this approach, the UK government sought to achieve continuous improvement of public services while retaining public accountability for service delivery.
Cowper, J. and Samuels, M. (1997), Performance benchmarking in the public sector: The United Kingdom experience, paper prepared for an OECD meeting on public sector benchmarking.
Best Value programme5. The new policy agenda was summarised in a White Paper with the title "Modernising Government".
26. The introduction of Modernising Government represented a full commitment by the Labour Government to public services: first, the public wants improvement; second, quality public services are a core value for the Labour Party, which is historically associated with the development of the welfare state; and third, quality public services are a key component of economic success (Benington, 2000). The objective of modernisation was encapsulated in an initiative labelled Best Value. As Bowerman et al. (2001, p. 321) stated: "Best value seeks, in sum, to promote quality services, but at a price the local community is prepared to pay."
27. In practice, the Best Value programme meant that all government services needed to be reviewed against the criteria of challenge, comparison, consultation, competition and collaboration (the so-called 5Cs). Furthermore, central government determined that local government must carry out fundamental service reviews for all services at least once every five years.
28. One of the 5Cs is compare. This was, not so long ago, a very controversial area in public
services management in the UK. The conventional wisdom was that all public agencies are unique,
with unique user profiles, unique policies, unique histories, unique constraints (such as funding
opportunities, asset backing, geography, etc.). Consequently, anyone who attempted to benchmark
an agency against other agencies was regarded as ignorant of the basic rules of the game or, quite
likely, a traitor who was prepared to allow inappropriate comparisons to be made for nefarious
reasons.
29. However, the taboo on performance comparison has now been well and truly broken. Under the Best Value regime, all service reviews in the pilot authorities had to undertake relevant comparisons. Other non-pilot authorities started preparing for Best Value by joining benchmarking clubs and undertaking preliminary comparison exercises. Nor was benchmarking confined to local government: all government departments and their executive agencies (the Next Steps agencies) were told that they had to undertake performance comparisons in their future reviews of activities (Cabinet Office, 1999).
30. Each best value authority had the duty to publish an annual Best Value Performance Plan
(BVPP), which must, by law, include:
31. In early 2000, the first set of BVPPs included mainly data on service performance.
However, as all authorities carried out cross-cutting and thematic reviews, BVPPs contained a
much greater level of detail on performance indicators and targets of this type. Clearly, this made it
much easier to undertake benchmarking exercises on governance issues.
32. The performance indicators and targets which have been designed centrally for the Best
Value regime emphasised service quality, efficiency and cost. All best value authorities are
required by law to include within their BVPPs:
- Quality targets that are, as a minimum, consistent with the performance of the top 25% of all authorities; and
- Cost and efficiency targets over 5 years that, as a minimum, are consistent with the performance of the top 25% of authorities, and consistent with the overall target of 2% p.a. efficiency improvements set for local government as a whole (see the sketch after this list).
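To make these requirements concrete, the following minimal sketch (with invented unit-cost figures, not data from any BVPP) computes the upper-quartile benchmark for a cost indicator, where lower is better, and derives an indicative five-year target that is also consistent with the 2% p.a. efficiency path:

    # Minimal sketch with invented data: Best Value style cost/efficiency targets.
    # "Top 25%" for a cost indicator corresponds to the 25th percentile of
    # reported unit costs across authorities (lower cost is better).
    import statistics

    unit_costs = [41.0, 37.5, 52.3, 44.8, 39.9, 61.2, 35.4, 48.0]  # one value per authority

    top_quartile_cost = statistics.quantiles(unit_costs, n=4)[0]   # boundary of the cheapest 25%

    own_cost = 44.8                                  # this authority's current unit cost
    efficiency_path = own_cost * (1 - 0.02) ** 5     # 2% p.a. improvement over 5 years

    # The five-year target must, as a minimum, be consistent with both constraints,
    # so it is the lower (more demanding) of the two figures.
    five_year_target = min(top_quartile_cost, efficiency_path)
    print(f"Upper-quartile benchmark: {top_quartile_cost:.2f}")
    print(f"2% p.a. path after 5 years: {efficiency_path:.2f}")
    print(f"Indicative 5-year cost target: {five_year_target:.2f}")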
35. The Audit Commission's approach to the data has largely been to let the figures speak for themselves, although it supplies a commentary seeking to bring out key issues. The aim of the programme is to inform the public debate about the performance of public services. In publishing the information, the Commission has not, in most cases, attempted to define what constitutes a good or bad service. In some cases this will be obvious but, in others, views will justifiably differ about whether or not a given level of performance is good. In addition, the Audit Commission has been at pains to ensure that the data are interpreted in a way that takes local circumstances into account, such as rural or urban communities.
- Scrutiny of service delivery: data is an information source for boards and tenants to monitor the performance of members compared to peer organisations within the social housing sector.
- Strategic overview: data shows the relationship between the costs, resources and performance across all major business areas. This enables informed assessments of how well an organisation is performing and helps in prioritising areas that need service reviews.
- Service reviews: organisations can concentrate on specific service areas to compare performance against other organisations in the sector; carry out assessments of strengths and weaknesses and identify improvements; and find out who is performing best and learn from them.

6 http://www.housemark.co.uk/hm.nsf/Home?ReadForm
40. Another powerful national benchmarking club is the NHS Benchmarking Network7, which was set up in 1996 in response to a need for a structure that would enable NHS organisations to share best practice and learn from each other. Overall membership of the NHS Benchmarking Network includes over 230 subscribing NHS member organisations, with on-going growth in membership numbers. The NHS Benchmarking Network is now amongst the largest healthcare benchmarking groups in the world.
41. Its benchmarking comparisons span the NHS in the four home countries of the UK and therefore offer value-added products for Network members that are not available from any other source in the NHS. Membership costs £3,000 annually, which provides unlimited involvement in project strands, copies of network reports, presentations and data analyses, and access to its good practice information exchange.
42. At any time the NHS Benchmarking Network is running a number of specific projects on
topics suggested by members. Member organisations have the opportunity to collect and
contribute data to these projects. The Network will then publish data comparisons and share
internally any good practice. Contributors have access to the detailed data from other contributors,
so that they can see how they are performing in comparison to others.
43. Other interesting benchmarking clubs which may be instructive for the Brazilian public sector are the Regional Improvement and Efficiency Partnerships (RIEPs)8. The nine RIEPs were created in April 2008 with a three-year funding package of £185 million from the Department for Communities and Local Government. They provided an integrated and sector-led approach to improvement and efficiency at regional, sub-regional and local levels. RIEPs are led by local government, and are a partnership of local government collaborating on shared improvement and efficiency priorities. The nine RIEPs represent the nine English regions.
44. RIEPs have made significant progress in helping councils to benchmark their
performance. The activity of the RIEPs has enabled:
7 http://www.nhsbenchmarking.nhs.uk/
8 http://www.local.gov.uk/c/document_library/get_file?uuid=6a146fef-3f24-46db-af06-e0b7b7e220f2&groupId=10171
45. RIEPs are working with their local authorities to benchmark efficiency. This involves analysing performance on services relative to the expenditure on them, and how this relates to other authorities.
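As an illustration only (the figures and authority names below are invented, and the RIEPs' actual analytical method is not specified in this report), this kind of performance-versus-expenditure comparison can be reduced to relating cost per unit of service to a performance score and seeing where each authority sits against the group:

    # Illustrative sketch with invented data: relate spending to performance and
    # flag authorities that cost more than average while performing below average.
    authorities = {
        # name: (annual expenditure in thousands, output units delivered, quality score 0-100)
        "Authority A": (1200, 30000, 72),
        "Authority B": (950, 28000, 80),
        "Authority C": (1600, 31000, 65),
        "Authority D": (1100, 29500, 77),
    }

    rows = []
    for name, (spend, units, quality) in authorities.items():
        cost_per_unit = spend * 1000 / units
        rows.append((name, cost_per_unit, quality))

    avg_cost = sum(r[1] for r in rows) / len(rows)
    avg_quality = sum(r[2] for r in rows) / len(rows)

    for name, cost_per_unit, quality in sorted(rows, key=lambda r: r[1]):
        flag = "candidate for review" if cost_per_unit > avg_cost and quality < avg_quality else "ok"
        print(f"{name}: {cost_per_unit:.2f} per unit, quality {quality} -> {flag}")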
- Building cleaning
- Building maintenance
- Civic, cultural and community venues
- Culture, leisure and sport
- Education catering
- Highways and winter maintenance
- Other (civic and commercial) catering
- Parks, open spaces and horticultural services
- Refuse collection
- Sports and leisure facility management
- Street cleansing
- Street lighting
- Transport operations
- Welfare catering
47. Like the NHS network, this benchmarking network is based upon the subscriptions paid by the members. The fee structure depends on the size of the municipality and whether the municipality benchmarks one service area or all 14 service areas (see Table 1). There are different fees for non-members of APSE.
Table 1: Fee structure of benchmarking for APSE members

Fees structure          APSE member     USD
                        6,799           $10,792.06
                        1,999           $3,173.02
                        3,450           $5,476.19
                        1,069           $1,696.83

Source: http://www.apse.org.uk/performance-networks/fees.html
48. Performance networks can be used for the following purposes: to set targets both over time and in comparison to others; to assess performance across a range of input, process and output measures; to identify trends and inefficiencies arising from system failures; to review and challenge; and to highlight areas for improvement and re-evaluate needs and priorities.
49. The benchmarking exercise of the network is quantitative and based on performance
indicators. Performance indicators are developed and continually reviewed by a working group of
practitioners from the network. Therefore, these performance indicators are owned by the
authorities that apply them, although mandatory performance indicators from national bodies are
also included.
50. The nature of the indicators is twofold. Some performance indicators are compulsory for all members who carry out the performance exercise; among these, it is customary to include the measures suggested by the four main national audit bodies. Other indicators are voluntary and are requested by particular groups of practitioners (see the Table with examples of both types of indicators). The most typical performance indicators relate to cost, productivity, quality, customer satisfaction and outcomes. APSE claims to have a robust system of performance indicators because they have met all criteria in an assessment of consistency, reliability and comparability of data required by the Audit Commission.
Location characteristics; competition; transport; car parking; social pricing; market pricing; peak programming; off-peak programming; investment.
Brazilian context. The data is collected annually through a process that includes validation. The adapted process is summarised below (APSE 2012).
Step 1 - Preparation for annual data collection:
  - Profile information about the service. This is information that is unlikely to change year on year (for instance, the number of pools a facility has).
  - The use of service profile data only applies to building cleaning; civic, cultural and community venues; parks, open spaces and horticultural services; and sports and leisure facility management.
  - Templates are available in electronic and printed formats and can be downloaded to a computer.
Step 4 - Data submission (through different means, but internet upload is encouraged).
Step 5 - Data validation (by the central office, through the use of different reports and sources).
Step 6 - Undertake customer satisfaction analysis.
56. All the performance information is presented in different outputs. The most relevant types of output are performance reports, performance indicator (PI) standing reports, summary reports, direction of travel reports, best and most improved performer case studies, and additional analysis. These reports are described below.
57. A performance report (indexed by family group) shows the highest, lowest and average score for each indicator, along with the range of data supplied by other family group members. In addition, the report offers profile information for each local authority. With these reports, members are able to assess their own performance relative to the overall family group.
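By way of illustration (the indicator names and scores below are invented, not APSE data), the core of such a report amounts to computing the highest, lowest and average value per indicator within the family group and placing the authority's own score against that range:

    # Sketch of a family-group performance report: highest, lowest and average
    # per indicator, plus one authority's own position. All figures invented.
    family_group = {
        # indicator: {authority: score}
        "cost_per_household": {"A": 42.1, "B": 39.5, "C": 47.8, "D": 44.0},
        "customer_satisfaction": {"A": 71, "B": 78, "C": 66, "D": 74},
    }
    own = "A"

    for indicator, scores in family_group.items():
        values = list(scores.values())
        high, low = max(values), min(values)
        avg = sum(values) / len(values)
        print(f"{indicator}: highest {high}, lowest {low}, average {avg:.1f}; "
              f"authority {own} scored {scores[own]}")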
58. The performance indicator (PI) standing report is a personalised report for each authority detailing its performance scores. This information is presented relative to the highest, lowest and average data drawn from both the family group and the service-wide data set.
Source: APSE, 2013.
59. The summary report (for each benchmarking service) contains all data submitted throughout
the year across the service and includes data ranges (highest, lowest and average), analysis by
country, trend analysis and participation information (see Table 5 for an example of how the
information is disclosed in a table and in writing).
Source: APSE, 2013
60. The direction of travel report provides an overview of each authority's performance over the last five years for all of the benchmarking services for which the authority is registered.
61. The best and most improved performer case studies are another type of output produced from performance indicators and additional qualitative information. The case studies on how the winners achieved their successes are written up into a publication and emailed out to all members. The peculiarity of these awards is that they consider the most improved performers, implying that winners do not need to be first in class. It is enough that a particular organization has made a considerable improvement, regardless of its position in the absolute or family ranking.
62. The identification of the best and most improved performers follows several steps. After data is collected, the best and most improved performers are identified in September. They are selected on the basis of a statistical methodology, in addition to inspection reports and scores that confirm the accuracy of the data. Members are consulted on this process and may give input during the process. Once top data submissions are identified, on-site validations are carried out. This is performed by a trained performance networks validator or an APSE principal advisor. After validation, top data submissions may become finalists and are eligible for a best or most improved performer award. The list of finalists is decided in December. The best and most improved performer scores are assessed again in March. A revised list of best performers is published in the summary report.
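The precise statistical methodology is not described in this report, but the underlying distinction can be sketched with invented figures: best performers are ranked on the latest validated score, while most improved performers are ranked on the change against the previous year, so an authority can win the latter without being first in class:

    # Invented data: one indicator, two consecutive years (higher is better).
    scores = {
        "Authority A": {"2011": 61.0, "2012": 72.0},
        "Authority B": {"2011": 80.0, "2012": 82.0},
        "Authority C": {"2011": 55.0, "2012": 58.0},
    }

    best = max(scores, key=lambda a: scores[a]["2012"])
    most_improved = max(scores, key=lambda a: scores[a]["2012"] - scores[a]["2011"])

    print("Best performer:", best)          # Authority B: highest absolute score
    print("Most improved:", most_improved)  # Authority A: largest year-on-year gain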
63. APSE also offers benchmarking meetings. These meetings showcase best performers, share
information and encourage networking with peers. In these meetings, process benchmarking is
featured and trend analysis is performed.
- Gain first-hand and in-depth insight into how another organisation works
- Develop new skills in assessing current practice, identifying problems and promoting solutions
- Generate innovative and practicable solutions that will have a positive impact on how an organisation is run
- Learn new ideas from team members with different backgrounds and perspectives from your own
- A chance to work with new people
- Disseminate what you learn from the review within your own organisation.
67. A structured approach to a peer review process may include the following aspects.
ISO 9000. (The Beacon label had already been used since 1994 in the Lotus (subsequently IBM
Lotus) Beacon Awards for excellence in technological solutions).
77. The distinctive feature of this family of Beacon models is that award winners have a formal responsibility to disseminate their practices. Moreover, the various Beacon schemes have gone further than the simple publication of inspiring case studies and have adopted the "open day" model, whereby one-day visits or open days are held during which organisations can share their knowledge and experience with others, through a two-way exchange of information and hands-on experience with new techniques and innovations. Beacon awards are conditional upon winners being prepared to do the same.
78. The Beacon Council scheme has lasted longest. It was one of the longest-standing policy
instruments of the Labour government, and in 2004 was even expanded from local authorities only
to include also national parks, police and fire services, passenger transport and waste management
authorities, and town and parish councils. By 2005, the scheme had attracted over 1200 applications
and nearly 250 beacon awards had been made. It is intended to recognize excellence or
innovation, and to disseminate good practice. Award winners must provide an excellent service in
the nominated theme; have evidence of a good overall corporate performance, including favourable
recent inspections (the so-called corporate hurdle), and must suggest interesting opportunities for
others to learn from their practice.
79. The Beacon Schools programme identified high-performing schools across England. It was designed to build partnerships between these schools and to provide examples of successful practice, with a view to sharing and spreading that effective practice to other schools in order to raise standards in pupil attainment. Each Beacon School (whether nursery, primary, secondary or special school) received around £38,000 of extra funding a year, usually for a minimum of three years, in exchange for a programme of activities enabling the school to collaborate with others to disseminate good practice and raise school standards. The Beacon Schools programme was phased out in 2005 and replaced by the Leading Edge Partnership programme for secondary schools.
80. The National Health Service Beacon programme in the UK gave awards to both hospitals
and general practitioners that exemplified local best practice and supported them to develop
and disseminate learning across the wider NHS. Research into the scheme suggested that, whilst
Beacon hospital projects had some potential in developing relatively innovative activity, they were
not perceived to be stepping-stones to wider public health action (Whitelaw et al, 2004). After the
scheme was disbanded in 2003, the NHS Modernisation Agency has held annual health and social
care awards, while pursuing a range of approaches to improvement and dissemination, particularly
'collaboratives', based on an idea from the US Institute for Healthcare Improvement, which bring
together experts and practitioners to pursue dramatic organisational improvements, which have a
very different flavour from the competitive Beacon approach (Bate and Robert, 2002, p.644).
81. The Beacon Awards in the further education sector were launched in 1994 by the Association of Colleges, but have been sponsored by the government's Learning and Skills Council (LSC) since 1999. In 2002, the award name was changed to Learning and Skills Beacon, open to all post-16 providers funded by the Learning and Skills Council and inspected by post-16 inspectorates (DfES, 2002). The programme aims to recognize imaginative and innovative teaching and learning practice in colleges and work-based learning programmes, and to encourage the sharing of good practice through dissemination activities and collaborative work (LSC, 2004). There are two closely associated awards run by the LSC: the LSC Award for College Engagement with Employers and the LSC Award for Equality and Diversity.
82. The Beacon Scheme is now spreading from the UK into another European country. In
2005-06 the associations of municipalities of Bosnia and Herzegovina and the Republic of Serbia,
in partnership with OECD and the Council of Europe, developed a Beacon Councils scheme based
on the UK Beacon Scheme, funded by the UK and Swiss Governments.
83. The processes which lie behind these benchmarking concepts are now widely understood:
- Carry out in-depth analysis, function by function, service by service, of all areas of the agency's work
- Agree clear protocols on measurement and ensure that all measurement processes adhere to these protocols
- Take into account the different contexts of each agency
- Find the best-in-class performer
- Search for this best-in-class performance in all agencies and all sectors, particularly where an activity or function is being benchmarked, rather than a whole service
- Check on the transferability of the lessons from the best-in-class provider before attempting to introduce them in other agencies.
84. Furthermore, it has become widely appreciated that the focus of benchmarking can range
well beyond the unit costs and productivity levels which were common in the early days of
performance comparison. In the UK public organisations now realise that benchmarking can allow
comparisons of (Bovaird and Halachmi, 2001):
- Unit costs - although this depends on some artificial distinctions being made with respect to the allocation of joint costs
- Processes - which require detailed process mapping
- Outputs - although this often depends on an agreement to aggregate service activity levels which exhibit a certain degree of heterogeneity
- Outcomes - although this may partly rely on subjective assessment of the quality of outcomes by different stakeholders
- Policies and strategies - these are likely to differ greatly between agencies (e.g. for ideological and political reasons), even in similar contexts. Consequently, benchmarking of these variables may well lead to pressures of divergence (towards relevant practice), compared to the pressures of convergence (towards best practice) which may well result from benchmarking in the other four categories (unit costs, processes, outputs, outcomes).
d) Contextual factors
86. Performance comparison may be used by politicians to gain political advantage or to defend their position. The existence of political competition may hinder learning and improvement because the results of the benchmarking exercise may be interpreted differently by political parties (see Askim et al. 2008 for evidence of the influence of political competition in local government in Norway). Such political competition is particularly strong where there is an adversarial two-party system. For example, in the UK league tables are used by political parties to make political capital out of their opponents' failures. Furthermore, the establishment of benchmarking clubs between local authorities with a conservative or leftist political majority is politically difficult in the UK, even if there is a desire at officer level to learn from a local authority with a different political majority. Consequently, politics gets in the way of collaborative learning from partners, however interested they are in each other's approaches.
87. Similarly, the German experience with benchmarking shows that there is only very limited appetite for performance comparisons between states (Länder) if they are governed by different political party majorities. Indeed, state governments have been highly reluctant to undertake any benchmarking between their different education systems. It was only when the OECD carried out the PISA test, which involved benchmarking the learning outcomes of schools, that there was transparency about which states were doing better in terms of school performance, and this then started an intense political debate on education systems in Germany. The relevance of this critical success factor is particularly high for those public sector organisations in which politicians are elected by the population (for instance, local or state authorities).
e) Organisational issues
88. These factors have to do with the way in which the learning is organized by the agency and
how the data is used to drive change. According to Christopher Hood, there are different
strategies for using performance information by public sector organizations: intelligence gathering,
ranking performers and target setting. This classification can be adapted to benchmarking. The
results of benchmarking will depend on the purposes of the higher level government that fosters it.
The abovementioned strategies are linked to this issue.
Target setting
89. Target setting is a strategy that refers to the setting of quantitative standards to be achieved, such as maximum waiting times in hospitals or exam results for schools. In some public services, it is possible to introduce some sort of competition against a benchmark or yardstick model (based on the setting of targets), as opposed to individual negotiations with service providers. For instance, Norway has in the past used two different regulatory schemes to award contracts to bus companies. Dalen & Gómez-Lobo (2003) found that contracts negotiated against a performance benchmark were more efficient than individual contracts with each bus company.
90. In the target setting strategy, achievement is normally coupled with rewards or sanctions (for instance, by increasing or decreasing budgetary allocations to agencies that meet or do not meet the target). Although this strategy has the advantages of focusing on priorities and motivating organizations to achieve results, it also has disadvantages.
91. When performance management fundamentally has a control purpose, a series of responses from the staff concerned can be expected. These involve changes in behaviour, but not only changes in delivering the service; they also include changes in data recording and data reporting behaviours. When this information is used as a control device, there are incentives for organizations to game with the data. For instance, Bovaird and Halachmi (2001) refer to performance misreporting as a form of gaming. Many agencies have a tendency to make their performance reports look more attractive, particularly if they believe that their rivals (or partners) are doing something similar. This may be because the managers wish to preserve their managerial empires, or because they wish to protect their staff from external competition, or because they wish to preserve the service (and therefore the service users) against budgetary cuts.
92. Furthermore, since rewards are tied to the achievement of targets, non-quantified targets might be neglected when implementing the strategy. For instance, if targets are set for schools in line with PISA results, the teaching of other subjects that do not count for PISA might be neglected by the schools, or teachers may discourage poorly-performing students from entering for public examinations, so that their results do not depress the average achieved by the class as a whole.
93. However, the more mechanistic the performance management system, the more weight is
placed on it by senior managers, and the more serious the sanctions for poor performance, the
more likely is it that all the behaviour patterns mentioned above will occur. Changes in
behaviour which essentially mean revised approaches to data collection and reporting that will
make final results appear more favourable are, at best, misleading (because they suggest service
improvements which have not, in fact, taken place) and, at worst, damaging to the service (because
they undermine the integrity of the performance data in the system, so that no valid performance
assessments can be made for the service any longer). Indeed, the clearer are the control mechanisms
in this situation and the more rigorously they are pursued by senior managers, the more likely is it
that staff will seek to control senior management by giving them signals which please them,
although these signals may not correspond to any underlying improvement in the service system; this is known as the "perverse control" syndrome.
Intelligence gathering
94. Intelligence gathering refers to the collection and evaluation of performance figures for information purposes. In this case, it is not coupled with incentives or sanctions for well and badly performing organisations, as in the previous strategy. The intelligence gathering strategy rests on the assumption that organisations with lower performance will take the initiative to self-steer towards organisational improvement.
95. There are two problems associated with intelligence gathering. On the one hand, nobody is
practising systematic control of the improvement of performance. On the other hand, the data
agreed upon in the benchmarking exercise might be very thin. The data compiled in benchmarking
exercises is often very much at the minimalist end of the spectrum, in terms of what is needed to
really judge the success of a service, particularly in qualitative terms (see Bovaird and Halachmi,
2001). In principle, many advocates of benchmarking focus rather on this strategy with the hope
that acquiring information from best performers would produce enough incentives for low
performing agencies to change their behaviour. If performance measurement systems are
introduced primarily for the purposes of giving strategic direction to staff (by making it clearer what
performance results are most valued in the organisation), or to enable organisational learning (by
finding out what works and what does not work), then it is less likely that the perverse aspects of
performance measurement mentioned above will occur.
96. Concerns are raised by the purposes of performance reporting. These normally include:
accountability to internal and external stakeholders and application for funding from financial
stakeholders. These purposes are likely to be seen by staff as potentially negative, which will tempt
them to report information in as favourable a light as possible, putting a gloss on their results by
whichever mechanisms they can find.
Ranking performers
97. Ranking performers is another typical benchmarking strategy. It is based on league tables and fosters competition between diverse organisations (schools, hospitals, local authorities) in order to increase user choice or to foster better management. The latter is done through the idea of naming and shaming the worst performers for their inability to reach the performance levels of other public sector organisations. This system has the advantage of constituting an incentive for permanent performance steering. However, the disadvantages may outweigh the positive aspects, as losers from adverse contexts (for instance, a school that cannot improve because it only receives pupils from deprived areas where learning is difficult) get frustrated.
98. Further, there is a natural tendency to form league tables from comparative performance data, collecting only enough information to make it possible to compile these league tables, which is not necessarily helpful for improving performance. League tables alone can show only that there are differences in performance, not why they have arisen or whether lessons could be transferred from the context of the most successful to the less successful agencies.
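As a toy illustration of this point (all scores below are invented), compiling a league table requires nothing more than sorting on a single comparative measure, which is exactly why it can reveal that performance differs but not why:

    # Invented scores: a league table is just a sort on one comparative measure.
    results = {"School A": 58.2, "School B": 71.4, "School C": 64.9, "School D": 49.5}

    league_table = sorted(results.items(), key=lambda item: item[1], reverse=True)
    for rank, (school, score) in enumerate(league_table, start=1):
        print(f"{rank}. {school}: {score}")
    # The table shows who is ahead and by how much, but carries no information
    # about context (e.g. intake or funding) or about how to close the gap.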
99. In sum, the likelihood that benchmarking will lead to positive organisational changes depends
to a major extent on the purposes of the performance management system in place in the
organisation.
103. Furthermore, even managers who are keen to promote change may be tempted to
misreport performance in a self-assessment framework, because they believe that colleagues in
other parts of the organisation are doing so, and they do not wish to be outshone by cheats (which
would mean that their budgets, their clients and their staff would be adversely affected if they do
not join in the cheating).
104. This clearly provides a strong rationale for independent audit of all data used in
benchmarking exercises. Of course, independent audit also has potential limitations and
disadvantages:
However, when done well, independent audit has the offsetting advantages of being probing,
rigorous and revealing.
Organisation of benchmarking clubs
105. Benchmarking clubs may be seen as a middle way regarding the degree of
independence of the benchmarking process. Membership of benchmarking clubs is voluntary, so
there is less threat than with audit. Other members are respected, so that there may be less attempt
to misreport performance. Everyone has a vested interest in learning, so that there may be more
open exchange of uncontaminated information. This is reinforced if none of the club members is
particularly concerned about aiming for a high league table position. In a nutshell, clubs are a way
to bring together the social and technical organization of the benchmarking exercise. The use of
benchmarking clubs should not exclude the use of independent audits. A combination of both
strategies could be of help in benchmarking. Approaches to performance comparison which do not
involve at least one of these checks on self-assessment are clearly open to question.
106. The concept of benchmarking clubs in the UK public sector has more than a decade of
experience and the number of examples is relatively high. The basic idea of forming those clubs
relates to one success factor mentioned by Bulkley et al. (2010), the need to transfer knowledge out
of experience, especially the implicit knowledge that is so difficult to find elsewhere. From previous
work done by Bovaird and Halachmi (2001) on the Best Value pilot initiative, some lessons can be
drawn.
107. For the organisation of benchmarking clubs the following issues should be taken into
account:
- They must contain requisite difference between their members, so that the initial focus does not need to be on comparability, as was originally thought in many cases. The difference can relate to the type of business or to the actual performance of otherwise very similar organizations. However, in spite of these differences, lessons must be transferable, so some element of comparability between club members remains important.
- Clubs should ideally contain members who represent the best-in-class for the service, issue or activity being studied, or at the very least have members who have close links with best-in-class organisations. Otherwise, the process of learning becomes very difficult.
- Discussion and data sharing in the club must be open and frank, which means it will only occur where there is trust. Benchmarking clubs are a social activity as well as a technical exercise.
- A club needs analytical expertise, but even where this is in place, it probably also needs considerable time before it can be brought to bear on the core problems and on reliable data.
Quantity and quality of resources
108. The use of benchmarking as a performance improvement tool cannot be clearly linked to the budgetary situation of the public sector as an external contextual factor, i.e. there is no clear indication that a budgetary surplus or deficit will catalyse the use of benchmarking in the public sector. If benchmarking is only oriented towards making pure savings, the behaviour of the staff involved is mostly defensive and will not lead to improvements (see Bowerman & Ball, 2001).
109. There is a consensus in many studies on the need to deploy financial, personnel and time resources for benchmarking (see Holloway et al, 1999; Bulkley et al, 2010; Blanc et al, 2010). Financial and human resources are needed to develop joint performance information for comparison, as well as to set the basis for interpreting the data and for looking for the causes of low or high performance (Askim et al, 2008). In the examples of benchmarking networks in the UK, subscription fees vary from approximately $1,600 to $10,000.
110. The actors involved in the benchmarking process may require some training but the quality of
the project management and the facilitation of the benchmarking exercise are key.
Engagement of the managers
111. The engagement of top civil servants (or of political appointees and politicians, depending on the type of organisation) is closely connected to the agency's disposition to learn. Once the top managers have shown interest in the exercise, there are several factors influencing the outcomes of benchmarking. As several authors have confirmed, it is key that middle managers are willing to benchmark with other organizations (Ammons & Rivenbark, 2008) and that all members of the organization are willing to reflect upon and, whenever necessary, change their work routines (Bulkley et al. 2010).
The use of performance indicators
112. When a performance measurement system is being designed for the first time, there is a tendency to define too many performance indicators. This experience has been shared by many benchmarking clubs (for example, Catalonian local authorities) and central bodies (the Audit Commission). For instance, in the early stages of developing the Best Value performance regime, the Audit Commission used around 300 performance indicators, which were later reduced to 70. It may be difficult to resist the temptation to add performance indicators to the basket when the information is easy to obtain, even if it is only relevant to one of the partners. However, Ammons & Rivenbark (2007) recommend focusing on strategic performance indicators that refer to efficiency and results.
eventually taking part in some of the preparatory meetings with the partners, are likely to be
involved in information gathering as well as offering their view on how the data should be
interpreted.
119. First, one should define the services/policies that are likely to benefit most from benchmarking. In this phase, consultation with users and other stakeholders may help to understand what needs to be benchmarked. Clearly, it is impossible to benchmark everything. The idea is simple: service users, businesses and other stakeholders are best placed to define the challenges they face and the support they need from public services. There are several instruments that can help in this task, depending on the type of product (services or policies) for which the benchmarking is designed. Consultation should be agreed upon with the partners of the benchmarking exercise, so that different organizational dimensions are taken into account when defining the services to be improved.
120. Secondly, performance should be measured and compared. Once the service/policy to be benchmarked has been identified, performance should be measured in order to assess the quality of public services and policies. Since not all service/policy dimensions are measurable, some qualitative information must be added in a way that supports the benchmarking exercise.
121. Thirdly, change should be carefully managed. Once areas which need improvement have
been identified, it is necessary to identify how to improve. The exercise here consists of identifying
and interpreting the reasons behind this specific performance. Benchmarking might be followed by
peer review workshops (or any other tool) in order to improve processes that enhance the
organizational performance.
122. Fourthly, improve by learning from benchmarking. As this is a process that involves any
staff member linked to the affected processes, a team could be formed in order to design an
improvement plan in more detail covering the specific problems as well as cross-cutting issues of
the organization.
123. Finally, evaluate the service/policy improvements. This part of the cycle involves reviewing whether performance has really improved, which can be done by many different means, including successive rounds of benchmarking as well as peer reviews or other instruments.
6) Resources
i) Benchmarking literature on UK experiences
APSE (Association for Public Service Excellence) (2013) Benchmarking for success. Manchester.
APSE (Association for Public Service Excellence) (2013a) Terms and conditions of performance networks membership. Manchester.
APSE (Association for Public Service Excellence) (2012) The essential guide to performance networks, Year 14 (2011/12). Manchester.
APSE (Association for Public Service Excellence) (2012a) 2010-2011 Performance indicators. Manchester.
Benington, J. (2000), 'The modernisation and improvement of government and public services', Public Money and Management, April/June, pp. 3-8.
Bovaird, T. and Halachmi, A. (2001), 'Learning from international approaches to best value', Policy and Politics, Vol. 29 No. 4, pp. 451-63.
Bowerman, M., Ball, A. and Francies, G. (2001), 'Benchmarking as a tool for the modernisation of local government', Financial Accountability and Management, Vol. 17 No. 4, pp. 321-9.
Kouzmin, A., Loeffler, E., Klages, H. and Korac-Kakabadse, N. (1999), 'Benchmarking and performance measurement in public sectors: towards learning for agency effectiveness', The International Journal of Public Sector Management, Vol. 12 No. 2, pp. 121-44.
Parrado, Salvador (1996) 'Una visión crítica de la implantación del benchmarking en el sector público', Revista Vasca de Administración Pública, Núm. 45, pp. 37-62.