
Collaborative benchmarking in public services:

Lessons from the UK for the Brazilian public sector


Product 3


Project: Collaborative benchmarking in public services: Lessons from the UK for the Brazilian public sector
Authors: Salvador Parrado and Elke Loeffler
Product III: Final version of the report, with adjustments requested by MPOG

Contact: salvador.parrado@govint.org
Elke.loeffler@govint.org
(This report is for internal use only. If the report is to be published online or in print, written
permission for the reproduction of tables and graphs is required).


Executive summary
1) The benchmarking initiatives in the UK public sector
   a) Compulsory Competitive Tendering and (Total) Quality Management as starting points
   b) The Best Value Programme as a central government benchmarking programme for local government
   c) League tables for performance comparison and greater accountability
2) Voluntary benchmarking clubs or networks
   a) Different types of benchmarking networks
   b) Methodology of the APSE network
   c) Peer review as a complement to benchmarking processes
3) Advantages and limitations of different approaches
4) Making benchmarking work: critical success factors from UK benchmarking projects
   d) Contextual factors
   e) Organisational issues
   f) Organising and planning the benchmarking exercise
5) Making benchmarking sustainable: integrating benchmarking in wider public service and governance reforms
   g) Principles to apply in the implementation of benchmarking
   h) Phases for individual agencies that use benchmarking
6) Resources
   i) Benchmarking literature on UK experiences
   j) International research on public sector benchmarking

Executive summary
1. This report was commissioned by the Brazilian Ministry of Planning, Budget and Management through
the Institute for the Strengthening of Institutional Capacities (IFCI) and financed by the British Council in
Brazil. The objective of the Technical Cooperation Project 'Supporting Public Administration in Brazil' is to
assist the Ministry of Planning, Budget and Management in identifying experiences with benchmarking in
the United Kingdom.

2. The study is based on a review of official reports and academic research on benchmarking in the
British public sector. It covers benchmarking experiences from central and local levels of government in the
United Kingdom. In practice, most benchmarking exercises have been conducted at the local level, but
lessons from this level of government can also be applied by the Brazilian federal government.

3. The report is structured in five chapters covering the British experiences, the success factors and
the sustainability of benchmarking. The first chapter deals with the main experiences of benchmarking in
the United Kingdom. These experiences cover the whole spectrum of benchmarking, including benchmarking
against performance targets, league tables and voluntary exercises that focus on gathering information
for performance improvement. Chapter 2 offers a more specific description of the methodology of
benchmarking clubs or networks. Chapter 3 outlines the advantages and disadvantages of these experiences
based on a classical typology of results and process benchmarking. Chapter 4 deals with the success
factors of benchmarking. Chapter 5 focuses on the benchmarking process from the perspective of
individual agencies and on ways of making this process sustainable.

4. The history of benchmarking in the public sector has its seeds in the mid-1980s (for local
authorities) and early 1990s (for national agencies), when the Conservative government of the
time wanted to bring public management closer to private enterprise. Tendering was made
compulsory for local authorities in order to establish whether in-house staff could deliver
services more efficiently than private enterprises. At national level, some pilots were established
to benchmark the performance of agencies against private sector organisations.
5. In the 1990s, performance indicators and comparisons were introduced for local councils
and police forces. A considerable number of performance indicators were established for these
authorities to collect data against. The Audit Commission published these data in league tables
without further comment. However, the mere publication of the information triggered
interest among local managers in improving performance.
6. During the 2000s, a Best Value regime was established for local authorities, which were
required to review their services against a set of criteria. One of the criteria required local
authorities to compare their performance with that of others. A considerable number of
performance indicators were used for this comparison.
7. After the end of compulsory benchmarking, many public organisations decided to join
benchmarking clubs voluntarily. These clubs cover a wide range of services: housing, hospitals,
primary health care and regional partnerships. They provide a softer approach than audit
regimes, in which auditors check whether particular authorities have achieved specific
performance targets.
8. Benchmarking clubs or networks, as they have recently been labelled, are the closest
experience to the Brazilian benchmarking colaborativo. These benchmarking networks exist
in sectors such as housing, health and regional development, as well as in local government. In most
cases, joining a network involves the payment of an annual fee by the authority, which may range
between $4,000 and $10,000 a year.

9. This report provides a detailed explanation of the methodology used by APSE (Association
for Public Service Excellence), a benchmarking network for local authorities with more
than 300 current members. The benchmarking exercise is based on quantitative performance
indicators that are defined and developed by the members. Data collection is subject to a
validation process and strict confidentiality rules: data is not to be disclosed to third parties by any
of the network members. Based on this information, local authorities can receive a variety of reports
covering up to 14 service areas. These reports compare the scores of each authority with its
family group (authorities of similar size, for instance) and with the whole service data set. In addition,
there are awards for the best and most improved performers, which are showcased in case studies and
benchmarking meetings.
10. The report also suggests complementing benchmarking networks with a scheme of peer
reviews, as conducted by the former Improvement and Development Agency (IDeA). Although
peer review does not necessarily form part of a benchmarking network, it can be a good support
for a more qualitative benchmarking approach. The basic idea of this type of review is that
peers from other authorities assess the performance of an organisation against an agreed
benchmark, following a structured process. This process includes a self-assessment against the
benchmark and an on-site visit of up to three days in which the peers assess the organisation.
The exercise finishes with a final visit by the peers to see how the organisation has developed
after implementing the improvement plan that was produced as a result of the peer review
exercise.
11. In sum, the history of benchmarking in the UK has had different milestones. It started as an
exercise to make public sector organisations more similar to private sector businesses. It then
developed into the imposition of national standards on local authorities. More recently, it has been
used on a voluntary basis to spread good performance among public authorities in different sectors.
12. Benchmarking can be performed in three different ways: against results, against processes, and
through quality award schemes. Regarding results benchmarking, the most controversial issue is whether
the indicators should be published in order to enhance accountability to service users. Research
by the Audit Commission has shown that the publication of performance information on local public
services makes citizens aware that such services exist, even though most citizens will not be very
interested in general performance information. Performance information needs to be aimed at
specific target groups so that it reaches people who are interested in that specific service. Regarding
process benchmarking, the challenge is to have a best performer within the club; otherwise
organisations would be copying mediocre public sector organisations. Finally, award schemes may
be problematic, as they may only offer a platform to recognise and celebrate well-performing
organisations. However, the Beacon schemes in the UK in different sectors (schools, local
authorities and health) could be a relevant experience for Brazil, because award winners have to
share their good practice through a very structured follow-up scheme which involves more than just
publishing information about the award winners.
13. There are several success factors for benchmarking, clustered in three categories: political
factors, organisational features and benchmarking factors. Some conclusions have been drawn
from this analysis. Benchmarking among public sector organisations governed by leaders from
different political parties is jeopardised by conflicts over the interpretation of the data.
Benchmarking against targets (where gaming with data is a risk) or league tables (where
competition does not allow for true interorganisational learning) can be risky. Although the
external validation of the data through external audits can be helpful, it also raises gaming issues.
However, organising benchmarking clubs under certain golden rules can be of great value to the
participating organisations.

14. In order to make benchmarking sustainable, individual agencies need to consider some
principles: 1) start with outcomes and outputs, and compare and improve processes and inputs
afterwards; 2) try to understand the results of the agency (outputs and outcomes) from the point of
view of users and stakeholders; 3) engage not only top managers but also middle managers and
front-line staff in the benchmarking exercise.
15. Finally, benchmarking, like any other public management improvement tool, needs to be
integrated into the policy and public management cycle by defining the services/policies that
will benefit most from benchmarking, measuring and comparing performance, managing change,
improving services and evaluating the improvements.

1) The benchmarking initiatives in the UK public sector


This section introduces the Brazilian reader to the development of public sector benchmarking
in the UK and explains the different rationales for launching benchmarking projects, the policy
framework and support put in place by central government, and the evolution of public sector
benchmarking from its beginnings in the 1990s to the present.


a) Compulsory Competitive Tendering and (Total) Quality Management as starting points
16. This report uses a broad definition of benchmarking in order to capture different
conceptions and uses of benchmarking in the UK public sector. The definition used in this
report considers benchmarking as a tool for performance improvement which goes beyond
measurement to tell a manager HOW to achieve better performance: by analysing and sharing
information about your own performance with partners who are achieving better performance, you
will learn where improvement is needed, how to achieve it and what impact it might have on your
overall success rate [1].
17. The context is highly relevant to the success of benchmarking. Obviously, there have always
been many voluntary efforts by public organisations in the UK and other countries to improve
performance through structured performance comparisons and collaborative learning. However,
contextual factors such as government policies can strongly influence the scale and effectiveness of
benchmarking of public organisations.
18. The development of benchmarking policies in the UK dates back to the Margaret
Thatcher era (1979-1990), with its focus on emulating the private sector. One key mechanism
used by central government to improve the performance of the public sector was the creation of
alternative means of replicating the pressure to improve that exists in the private sector. This
included requirements set by central government for local authorities to put certain activities out to
competitive tender, and for all public services to consider which functions could be contracted out to
the private sector.
19. Compulsory Competitive Tendering (CCT) was introduced by the Conservative
Government throughout the 1980s in an attempt to bring greater efficiency to local
government and health services through the use of competition. While it is generally recognised
that strong incentives were needed to stimulate reform, compulsion resulted in resistance by local
authorities and health trusts, an immature market and poorly-conducted procurements which
focused on price at the expense of quality and employment conditions. In particular, research
revealed three negative impacts of CCT [2]:

- a systematic worsening of the pay, terms and conditions of employment of workers providing local public services;
- a reduction in the scope and power of local government; and
- a weakening of the stabilising tendencies of public services within local and regional economies.

[1] Foot, J. (1998).
[2] Patterson, A. and Pinch, P.L. (2000), p. 1.

20. At central government level, there was a programme to compare public agencies. In 1995,
the Deputy Prime Minister announced his intention to benchmark the performance of central
government Next Steps Executive Agencies against the private sector. At this time 375,000
civil servants, 71% of the total, worked in the 125 Next Steps agencies or in organisations operating
fully on Next Steps lines. Benchmarking performance on such a scale had never been done before.
It was therefore decided to launch a benchmarking pilot first [3].
21. Before rolling out benchmarking to a wide range of agencies, it was decided that
benchmarking should focus on comparing specific functions, such as human resource
management, which are common to all organisations. Since a key objective of
benchmarking was to facilitate comparisons with the private sector, the Business Excellence Model
of the European Foundation for Quality Management was selected as the methodology. A total of 30
agencies, supported by external consultants, undertook a self-assessment process on the basis of the 91
questions related to the nine themes of the Business Excellence Model. When the pilots had completed
their self-assessment, their scores were compared against the standard set by the UK
Quality Award winners.
22. The comparative data proved helpful for those agencies. Although the British Quality
Foundation could not release the results of individual companies for reasons of commercial
confidentiality, it was possible to compare the average scores under each criterion held on the
database for the private sector against those for the agencies [4]. These data made it possible to
identify the areas where public sector organisations appeared to be performing particularly well, as
well as areas where further improvement appeared necessary.
23. Following self-assessment, the public sector organisations identified key areas for
improvement and developed appropriate action plans. Many of the improvement actions reflected
the areas where the self-assessment scores had identified weak performance, in particular
communication. A number of public sector organisations used the results of benchmarking as a
catalyst to drive forward initiatives to improve links both between staff and management and
between the agency and its key customers.
24. As experience with these initiatives grew, flexibility in benchmarking was introduced. The
UK central government moved from specifying the use of particular tools towards allowing
organisations to select the techniques most appropriate to their circumstances, though they could be
challenged to justify their choices. This freedom, however, operated within the context of a move
towards measuring and publishing organisations' performance in league tables. Through this approach,
the UK government sought to achieve continuous improvement of public services while retaining
public accountability for service delivery.

b) The Best Value Programme as a central government benchmarking programme for local government
25. The new Labour government took a different approach to benchmarking. Under the
provisions of the Local Government Act 1999, the requirement to submit defined activities to
compulsory competitive tendering (CCT) was abolished in January 2000 and replaced by the new
Best Value programme [5]. The new policy agenda was summarised in a White Paper entitled
Modernising Government.

[3] Cowper, J. and Samuels, M. (1997), Performance benchmarking in the public sector: The United Kingdom experience, paper prepared for an OECD meeting on public sector benchmarking.
[4] Cowper and Samuels (1997).
26. The introduction of Modernising Government represented a full commitment by the
Labour government to public services: first, the public wants improvement; second, quality
public services are a core value for the Labour Party, which is historically associated with the
development of the welfare state; and third, quality public services are a key component of
economic success (Benington, 2000). The objective of modernisation was encapsulated
in an initiative labelled 'Best Value'. As Bowerman et al. (2001, p. 321) stated: 'Best value
seeks, in sum, to promote quality services, but at a price the local community is prepared to pay.'
27. In practice, the Best Value programme meant that all government services needed to be
reviewed against the criteria of challenge, comparison, consultation, competition and
collaboration (the so-called 5Cs). Furthermore, central government determined that local
government must carry out fundamental service reviews of all services at least once every five
years, and in doing so must:

- Challenge why and how a service is being provided.
- Secure comparison with the performance of others across a range of relevant indicators, taking into account the views of both service users and potential suppliers.
- Consult local taxpayers, service users, partners and the wider business community in the setting of new performance targets.
- Consider fair competition as a means of securing efficient and effective services.
- Consider collaboration in commissioning and providing seamless services through joined-up working.

28. One of the 5Cs is comparison. This was, until not so long ago, a very controversial area in public
services management in the UK. The conventional wisdom was that all public agencies are unique,
with unique user profiles, unique policies, unique histories and unique constraints (such as funding
opportunities, asset backing, geography, etc.). Consequently, anyone who attempted to benchmark
an agency against other agencies was regarded as ignorant of the basic rules of the game or, quite
likely, as a traitor prepared to allow inappropriate comparisons to be made for nefarious
reasons.
29. However, the taboo on performance comparison has now been well and truly broken.
Under the Best Value regime, all service reviews in the pilot authorities had to undertake relevant
comparisons. Other, non-pilot authorities started preparing for Best Value by joining benchmarking
clubs and undertaking preliminary comparison exercises. Nor was benchmarking confined to local
government: all government departments and their executive agencies (the Next Steps agencies)
were told that they had to undertake performance comparisons in their future reviews of activities
(Cabinet Office, 1999).
30. Each Best Value authority had the duty to publish an annual Best Value Performance Plan
(BVPP), which had to include, by law:

- a summary of the local authority's (LA's) objectives in respect of its functions;
- a summary of current performance;
- a comparison with previous performance;
- a summary of the LA's approach to efficiency improvements;
- a statement describing the review programme;
- the key results of the completed reviews;
- performance targets for future years;
- a plan of action to implement these targets;
- a response to audit and inspection reports;
- a consultation statement; and
- a financial statement.

[5] For more details, see Bovaird and Halachmi (2001).

31. In early 2000, the first set of BVPPs included mainly data on service performance.
However, as all authorities carried out cross-cutting and thematic reviews, BVPPs came to contain a
much greater level of detail on performance indicators and targets of this type. Clearly, this made it
much easier to undertake benchmarking exercises on governance issues.
32. The performance indicators and targets designed centrally for the Best Value regime
emphasised service quality, efficiency and cost. All Best Value authorities were required by law
to include within their BVPPs:

- quality targets that are, as a minimum, consistent with the performance of the top 25% of all authorities; and
- cost and efficiency targets over five years that are, as a minimum, consistent with the performance of the top 25% of authorities, and consistent with the overall target of 2% p.a. efficiency improvements set for local government as a whole (a minimal sketch of this arithmetic follows the list).

c) League tables for performance comparison and greater accountability


33. The Local Government Act 1992 required the Audit Commission, for the first time, to
define indicators for comparing local authority performance, including that of police and fire
services. The resulting data were published annually. The first year following the legislation was
taken up with consultation between the Audit Commission and the bodies whose performance was
to be covered. The process was complex and required sensitive handling, since local authorities are
accountable to their own elected bodies rather than to the Audit Commission or to Ministers.
The agreed approach was for performance indicators to be defined for each area of activity. Each
indicator was designed with the bodies whose performance it would measure, to ensure that the
activity measured was appropriate and that the resources required for collecting the data were not
excessive. The detailed methods by which performance was to be measured were published in 1993.
Given the very wide range of activities performed and the number of areas selected for comparison,
over 200 performance indicators were initially set. This was reduced to 134 indicators covering 40
services. Finally, the report of December 1994 offered 77 indicators covering a dozen services.
34. Local councils and police forces had to publish the details of their performance against the
indicators in local newspapers. This information, together with an explanation of the system
used for its measurement, was also supplied to the Audit Commission at the end of the year. The
Audit Commission then collated the data and produced a commentary on the key activities to
accompany its publication. The first set of data, covering the operational year 1993/94, was
published in March 1995. The second set, covering 1994/95, was published in March 1996, thus
starting the reporting of trends.

35. The Audit Commission's approach to the data has largely been to let the figures speak for
themselves, although it supplies a commentary seeking to bring out key issues. The aim of the
programme is to inform public debate about the performance of public services. In publishing
the information, the Commission has not, in most cases, attempted to define what constitutes a good
or bad service. In some cases this will be obvious but, in others, views will justifiably differ about
whether or not a given level of performance is good. In addition, the Audit Commission has been at
pains to ensure that the data are interpreted in a way that takes local circumstances, such as rural or
urban communities, into account.

2) Voluntary benchmarking clubs or networks


Since the Brazilian government has focused on benchmarking colaborativo, this section deals
in more detail with benchmarking clubs or networks, as they have recently been labelled. The
chapter presents several examples of benchmarking networks and provides a more detailed account
of the methodology of the Association for Public Service Excellence (APSE) network, which is formed by
local authorities. Benchmarking is used in a wide range of sectors, such as housing, health and
regional cooperation, as well as in local government.

a) Different types of benchmarking networks


36. One prominent UK benchmarking network focuses on social housing. The duty to
benchmark under the Best Value regime, and the subsequent duty to publish performance data,
gave rise to commercial benchmarking activities by professional associations. HouseMark [6] was set
up in 1999 and is jointly owned by the Chartered Institute of Housing (CIH) and the National
Housing Federation (NHF), two not-for-profit organisations dedicated to improving housing
standards. The organisation has nearly one thousand members. HouseMark is one of the few large
benchmarking services that promises data validation. HouseMark works closely with tenants
through its partnership with the Tenants and Residents Association of England to ensure that
benchmarking products are relevant to the needs of tenants. All benchmarking products are
reviewed and endorsed by a tenants' panel.
37. HouseMark's core benchmarking service collates the costs of service delivery, the resources
used and key performance indicators across all areas of a landlord's business to an industry
standard. Furthermore, a more detailed breakdown is provided for the central landlord services of
housing management and maintenance. Taking part in the benchmarking enables members to
understand their cost base better and, by systematically using the results, they can see where
they are not providing value for money and plan how to improve cost-effectiveness. Members can
make comparisons across their main business areas or against national standards.
38. HouseMark provides a number of specialist products that supply a greater depth of data
for particular service areas. These include:

- Anti-social behaviour
- Contact centres
- Estate agents
- Resident involvement
- Satisfaction
- Complaints
- Gas safety
39. To make benchmarking data accessible, HouseMark has developed a benchmarking
dashboard. The dashboard can be used for:

- Scrutiny of service delivery: the data are an information source for boards and tenants to monitor the performance of members compared to peer organisations within the social housing sector.
- Strategic overview: the data show the relationship between the costs, resources and performance across all major business areas. This enables informed assessments of how well an organisation is performing, which helps to prioritise areas that need service reviews.
- Service reviews: organisations can concentrate on specific service areas to compare performance against other organisations in the sector; carry out assessments of strengths and weaknesses and identify improvements; and find out who is performing best and learn from them.

[6] http://www.housemark.co.uk/hm.nsf/Home?ReadForm

40. Another powerful national benchmarking club is the NHS Benchmarking Network [7], which
was set up in 1996 in response to the need for a structure that would enable NHS organisations to
share best practice and learn from each other. The NHS Benchmarking Network has over 230
subscribing NHS member organisations, with on-going growth in membership numbers, and is now
amongst the largest healthcare benchmarking groups in the world.
41. Its benchmarking comparisons span the four home countries of the UK NHS and
therefore offer value-added products for Network members that are not available from any other
source in the NHS. Membership costs £3,000 annually, which provides unlimited involvement in
project strands; copies of network reports, presentations and data analyses; and access to its good
practice information exchange.
42. At any time the NHS Benchmarking Network is running a number of specific projects on
topics suggested by members. Member organisations have the opportunity to collect and
contribute data to these projects. The Network then publishes data comparisons and shares any
good practice internally. Contributors have access to the detailed data from other contributors,
so that they can see how they are performing in comparison to others. Recent and proposed project
strands include:

- Shared and corporate support services
- Radiology services
- Inpatient mental health services
- Emergency care benchmarking
- Community/Prov services (proposed updated project)
- Community mental health
- Community hospitals
- Child Adolescent MH (proposed updated project)
- Acute therapies and diagnostics

43. Another interesting benchmarking club which may be instructive for the Brazilian public
sector is the Regional Improvement and Efficiency Partnerships (RIEPs) [8]. The nine RIEPs were
created in April 2008 with a three-year funding package of £185 million from the Department for
Communities and Local Government. They provided an integrated and sector-led approach to
improvement and efficiency at regional, sub-regional and local levels. RIEPs are led by local
government and are partnerships of local authorities collaborating on shared improvement and
efficiency priorities. The nine RIEPs represent the nine English regions.

[7] http://www.nhsbenchmarking.nhs.uk/
[8] http://www.local.gov.uk/c/document_library/get_file?uuid=6a146fef-3f24-46db-af06-e0b7b7e220f2&groupId=10171

44. RIEPs have made significant progress in helping councils to benchmark their
performance. The activity of the RIEPs has enabled:

- Benchmarking to improve procurement: procurement hubs have been created to enable a region's local authorities to benchmark on the basis of online access to framework contracts and research on the best deals. For example, the London Energy Project was established to reduce the costs of energy procurement and promote best practice. The project provides access to approved framework contracts, which have reduced costs by 3.5%, and provides a self-assessment service against best-practice benchmarks.
- Using benchmarking when working with businesses: benchmarking and transformation programmes offered by RIEPs have allowed authorities to reconsider the systems they use in dealing with businesses. On adults' and children's social care, the RIEPs are working through the Joint Improvement Partnerships (adult social care) and the relevant children's forums to support local authorities to communicate with, understand and begin to manage the local social care markets.
- Shaping the market for looked-after children was an East Midlands RIEP project. The project, led by Leicester City Council on behalf of the region's nine care authorities, analyses the complex market for services supporting looked-after children. The project started with the collation of benchmarking data across the region to assist in the analysis of current positions, trends and individual local authorities' current operating models. The project's working group considered 'quick win' options with the aim of realising efficiency savings within their 2009/10 budgets by negotiating low or no annual fee increases with key providers.


45. RIEPs are working with their local authorities to benchmark efficiency. This involves
analysing performance on services relative to the expenditure on them, and comparing this with
other authorities.

b) Methodology of the APSE network


46. The Association for Public Service Excellence (APSE) is a not-for-profit organisation that
works with 300 local authorities in the UK. Its benchmarking service covers the following 14
service areas:

- Building cleaning
- Building maintenance
- Civic, cultural and community venues
- Culture, leisure and sport
- Education catering
- Highways and winter maintenance
- Other (civic and commercial) catering
- Parks, open spaces and horticultural services
- Refuse collection
- Sports and leisure facility management
- Street cleansing
- Street lighting
- Transport operations
- Welfare catering
47. Like the NHS network, this benchmarking network is funded by the subscriptions paid by
the members. The fee structure depends on the size of the authority and on whether the
authority registers one service area or all 14 service areas for benchmarking (see Table 1).
Different fees apply to non-members of APSE.
Table 1 Fee structure of benchmarking for APSE members

| Fee structure                     | APSE member (£) | USD       |
|-----------------------------------|-----------------|-----------|
| Large authority* (all services)   | 6,799           | 10,792.06 |
| Large authority* (single service) | 1,999           | 3,173.02  |
| Small authority (all services)    | 3,450           | 5,476.19  |
| Small authority (single service)  | 1,069           | 1,696.83  |

Source: http://www.apse.org.uk/performance-networks/fees.html
48. Performance networks can be used for the following purposes: setting targets both over time
and in comparison to others; assessing performance across a range of input, process and output
measures; identifying trends and inefficiencies arising from system failures; reviewing and
challenging services; and highlighting areas for improvement and re-evaluating needs and priorities.
49. The benchmarking exercise of the network is quantitative and based on performance
indicators, which are developed and continually reviewed by a working group of
practitioners from the network. These performance indicators are therefore owned by the
authorities that apply them, although mandatory performance indicators from national bodies are
also included.
50. The indicators are of two kinds. Some performance indicators are compulsory for all
members who take part in the performance exercise; these customarily include the measures
suggested by the four main national audit bodies. Other indicators are voluntary and are requested
by particular groups of practitioners (see Table 2 for examples of both types). The most typical
performance indicators relate to cost, productivity, quality, customer satisfaction and outcomes.
APSE claims to have a robust system of performance indicators because they have met all the
criteria in an assessment of consistency, reliability and comparability of data required by the
Audit Commission.

Table 2 Examples of performance indicators in Highways and Winter Maintenance


Highways key performance indicators
PI 03 Damaged roads and pavements made safe within target time
PI 201a Percentage staff absence front line manual operatives
PI 202a Percentage staff absence - all staff
PI 203a Community consultation and quality assurance
PI 204a Human resources and people management
PI 207a Number of days lost through reportable accidents per FTE employee
PI 208a Customer satisfaction surveys
Highways secondary performance indicators
PI 02b Condition of principal roads (TRACS type surveys - England and Wales only)
PI 02c Condition of all non principal roads (England and Wales only)
PI 02d Condition of principal roads (SRMCS type surveys - Scotland only)
PI 02e Condition of all non principal roads (Scotland only)
PI 15 Percentage of total highways function cost (revenue and capital) spent directly on highways repairs

Source: APSE, 2012a.


51. Family groups of local authorities are created for comparison purposes. The main idea is to
use a like-for-like system to group authorities, so that a fair indication of performance
can be ensured. The like-for-like system draws on several factors or 'drivers'. Family groups
are formed from participating authorities whose overall key driver scores fall within the same
range. Drivers are the factors that affect the circumstances in which each front-line service
operates (see Table 3; a sketch of this grouping logic follows the table).
Table 3 Key driver scores for sports and leisure facility management

- Facility type/size
- Location characteristics
- Competition
- Transport
- Car parking
- Social pricing
- Market pricing
- Peak programming
- Off-peak programming
- Investment

Source: APSE, 2013.
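As an illustration of the like-for-like logic in paragraph 51, the sketch below (with hypothetical driver weights and band width; APSE's actual scoring methodology is not published in this report) assigns each authority a weighted overall key driver score and groups authorities whose scores fall within the same band.

```python
# Hypothetical sketch of like-for-like family grouping.
# Driver scores, weights and the band width are invented for illustration.
from collections import defaultdict

def overall_score(driver_scores, weights):
    """Weighted sum of one authority's key driver scores."""
    return sum(driver_scores[driver] * weight for driver, weight in weights.items())

def family_groups(authorities, weights, band_width=10):
    """Group authorities whose overall key driver scores fall in the same band."""
    groups = defaultdict(list)
    for name, drivers in authorities.items():
        band = int(overall_score(drivers, weights) // band_width)
        groups[band].append(name)
    return dict(groups)

if __name__ == "__main__":
    weights = {"facility_size": 0.4, "competition": 0.3, "transport": 0.3}
    authorities = {
        "Authority A": {"facility_size": 80, "competition": 40, "transport": 60},
        "Authority B": {"facility_size": 75, "competition": 45, "transport": 55},
        "Authority C": {"facility_size": 30, "competition": 20, "transport": 25},
    }
    # Authorities A and B land in the same band and form a family group.
    print(family_groups(authorities, weights))
```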


52. The benchmarking information can only be accessed by network members. The repository
of all data generated by the local authorities is a web-based system which, as a rule, is
accessed by members through a unique PIN for the whole authority.
53. The PIN code is revealed to other network members by default, unless the local authority
chooses to opt out. Opting out means that the authority is not identified in the performance
reports received by other members.
54. There are strict rules of confidentiality regarding the use of performance information by
the members. Members cannot disclose other members' PIN codes or confidential information,
which includes the following: a) performance network data that include other members of
the network; b) the performance network database; c) the authority references and PIN codes of
other members; and d) information disclosed by other members in performance network meetings
which ought to be considered confidential.
55. There are several steps for collecting and validating the information from the members.
Some of the steps have not been included in this report, because they are not appropriate for the
Brazilian context. The data is collected annually through a process that includes validation. The
adapted process is summarised below (APSE, 2012); a sketch of the kind of automated check that
the validation step implies follows the list of steps.
Step 1: Preparation for annual data collection
- Training session for data collection through an IT-enabled system
- Peer support for new members

Step 2: Review of the service profile data
- Profile information about the service: information that is unlikely to change year on year (for instance, the number of pools a facility has).
- Service profile data are used only for building cleaning; civic, cultural and community venues; parks, open spaces and horticultural services; and sports and leisure facility management.

Step 3: Annual data template(s)
- Templates are available in electronic and printed format and can be downloaded.

Step 4: Data submission (through different means, but internet upload is encouraged)

Step 5: Data validation (by the central office, using different reports and sources)

Step 6: Customer satisfaction analysis
- APSE has developed a common survey format
- Every local service may add particular questions
- The local service distributes the survey to its sample
- Questionnaires are returned to APSE
- APSE analyses the data
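Step 5 is described only at a high level. Purely as one plausible illustration (the actual APSE validation rules are not set out in this report), the sketch below flags submissions whose year-on-year movement exceeds a tolerance, the kind of screen a central office might run before querying a figure with the submitting authority.

```python
# Hypothetical year-on-year validation screen; thresholds and figures are invented.
def flag_suspect_submissions(previous, current, tolerance=0.30):
    """Return indicators whose value moved more than `tolerance` year on year."""
    flags = []
    for indicator, new_value in current.items():
        old_value = previous.get(indicator)
        if not old_value:
            continue  # no baseline to compare against
        change = abs(new_value - old_value) / abs(old_value)
        if change > tolerance:
            flags.append((indicator, old_value, new_value, round(change * 100, 1)))
    return flags

if __name__ == "__main__":
    previous = {"PI 03": 92.0, "PI 202a": 4.1, "PI 208a": 71.0}  # last year's returns
    current = {"PI 03": 60.0, "PI 202a": 4.3, "PI 208a": 73.5}   # this year's returns
    for indicator, old, new, pct in flag_suspect_submissions(previous, current):
        print(f"{indicator}: {old} -> {new} ({pct}% change) - query with the authority")
```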

56. All the performance information is produced in different outputs. The most relevant types
of output are performance reports, performance indicator (PI) standing reports, summary reports,
direction of travel reports, best and most improved performer case studies, and additional
analysis. These outputs are described below.
57. A performance report (indexed by family group) shows the highest, lowest and average score
for each indicator, along with the range of data supplied by the other family group members. The
report also offers profile information for each local authority. With these reports, members are able
to assess their own performance relative to the family group as a whole.
58. The performance indicator (PI) standing report is a personalised report for each authority
detailing its performance scores relative to the highest, lowest and average data drawn from both
the family group and the service-wide data set. (A minimal sketch of the underlying aggregation
follows.)
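To illustrate the aggregation these reports perform, the following sketch (invented figures; the real reports are generated from the performance network database) computes the highest, lowest and average family group scores for one indicator and positions a single authority against them.

```python
# Hypothetical sketch of the highest/lowest/average aggregation in a performance report.
def indicator_summary(family_scores):
    """Highest, lowest and average score for one indicator across a family group."""
    values = list(family_scores.values())
    return {
        "highest": max(values),
        "lowest": min(values),
        "average": round(sum(values) / len(values), 1),
    }

if __name__ == "__main__":
    # PI 03: damaged roads made safe within target time (% of cases), invented data
    family_scores = {"Authority A": 92.0, "Authority B": 86.0, "Authority C": 77.0}
    summary = indicator_summary(family_scores)
    own = family_scores["Authority B"]
    position = "above" if own > summary["average"] else "at or below"
    print(f"PI 03 family group summary: {summary}")
    print(f"Authority B scores {own}, {position} the family average")
```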

Table 4 Example of data from a performance (PI) standing report


Source: APSE, 2013.
59. The summary report (for each benchmarking service) contains all data submitted throughout
the year across the service and includes data ranges (highest, lowest and average), analysis by
country, trend analysis and participation information (see Table 5 for an example of how the
information is presented in tabular and written form).

Table 5 Example of summary report


Source: APSE, 2013
60. The direction of travel report provides an overview of each authority's performance over the
last five years across all of the benchmarking services for which the authority is registered.

61. The best and most improved performer case studies are another type of output,
produced from performance indicators and additional qualitative information. Case
studies of how the winners achieved their successes are written up into a publication and emailed
to all members. The peculiarity of these awards is that they consider the most improved
performers, who do not need to be first in class: it is enough that a particular
organisation has made a considerable improvement, regardless of its position in the absolute or
family ranking.
62. The identification of the best and most improved performers follows several steps. After
the data is collected, candidate best and most improved performers are identified in September. They
are selected on the basis of a statistical methodology, supplemented by inspection reports and scores
to confirm the accuracy of the data. Members are consulted on this process and may give input
during it. Once the top data submissions have been identified, on-site validations are carried out by
a trained performance network validator or an APSE principal advisor. After validation, top data
submissions may become finalists, eligible for a best or most improved performer award. The list
of finalists is decided in December. The best and most improved performer scores are assessed
again in March, and a revised list of best performers is published in the summary report. (A sketch
of a possible 'most improved' selection rule follows.)
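The statistical methodology itself is not described in the report. Purely to illustrate the 'most improved regardless of ranking' principle, the sketch below ranks authorities by relative improvement between two years rather than by absolute score.

```python
# Hypothetical 'most improved' selection: rank by relative year-on-year improvement,
# not by absolute score. The rule and the figures are invented for illustration.
def most_improved(last_year, this_year):
    """Rank authorities by percentage improvement between two years of scores."""
    improvements = []
    for authority, new_score in this_year.items():
        old_score = last_year.get(authority)
        if old_score:
            gain = (new_score - old_score) / old_score * 100
            improvements.append((round(gain, 1), authority))
    return sorted(improvements, reverse=True)

if __name__ == "__main__":
    last_year = {"Authority A": 90.0, "Authority B": 55.0, "Authority C": 70.0}
    this_year = {"Authority A": 91.0, "Authority B": 68.0, "Authority C": 72.0}
    ranking = most_improved(last_year, this_year)
    # Authority B wins 'most improved' despite having the lowest absolute score.
    print("Most improved:", ranking[0])
    print("Full ranking:", ranking)
```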
63. APSE also offers benchmarking meetings. These meetings showcase best performers, share
information and encourage networking with peers. In these meetings, process benchmarking is
featured and trend analysis is performed.

c) Peer review as a complement to benchmarking processes


64. Peer review is a more qualitative process which helps organisations to improve. It can
be defined as a review of an organisation or a particular service area by colleagues
from outside that organisation. The basic idea behind a peer review is the use of peers as 'critical
friends', not inspectors. The subject of the review may vary from case to case: in some cases the
peer review focuses on the effectiveness of a whole local authority or government department; in
other cases, it deals with a particular area such as customer services, financial
planning, human resources or road safety.
65. The peer review process was introduced in the UK by the Improvement and Development
Agency (IDeA), which was set up in the 1990s to develop and promote peer review in
UK local authorities. The agency was funded by central and local government. Over the lifetime
of the scheme, several hundred peers have been trained and many hundreds of reviews have
been carried out, covering a variety of areas. Normally, every review included a mixture of peers who
were elected representatives and unelected officers, working together with a full-time IDeA officer in
support.
66. The peer review process offers advantages both to the organisation under review and to the
peers who carry out the review. The organisation under review receives honest but helpful
and supportive insights and recommendations from the peer reviewers. For the peer reviewers, there
are many advantages that foster their personal development:

- gaining first-hand, in-depth insight into how another organisation works;
- developing new skills in assessing current practice, identifying problems and promoting solutions;
- generating innovative and practicable solutions that will have a positive impact on how an organisation is run;
- learning new ideas from team members with backgrounds and perspectives different from their own;
- a chance to work with new people;
- disseminating what they learn from the review within their own organisation.

67. A structured approach to a peer review process may include the following elements:

- an agreed benchmark (see Table 6 for an example);
- a self-assessment against the benchmark;
- training for peers in skills and expectations;
- a team leader experienced in peer review;
- clear ground rules on how the peer review process is to be carried out.
Table 6 Corporate Peer Review Benchmark


1. Leadership and Governance
Ambition
Prioritisation
Decision making and scrutiny
2. Customer Focus and Community Engagement
Customer focus
Communication and community empowerment
Delivering through partnerships
3. Resource and Performance Management
Performance Management
Resource management
Change and project management
4. Organisation and People
Organisational design and development
Managing people
The typical review process has several phases: a request by the local authority for a peer review; an
off-site exercise in which the local authority self-assesses; an on-site review by the peer
review team; and a follow-up visit to review the improvements carried out by the local authority after
the review (see Table 7).

Table 7 Example of review structure


Pre-review
- Authority requests a review
- Meeting to discuss content and timing
- Team members identified

Review off-site
- Authority invited to carry out a self-assessment
- Documents requested and reviewed

Review on-site (2-4 days)
- Interviews and meetings with groups of staff, residents, elected officials, etc.
- Findings presented

Follow-up
- Authority prepares an action plan
- Brief follow-up visit after 6 months to review progress

3) Advantages and limitations of different approaches


This chapter draws on the previous experiences and approaches, offering a more analytical view of
the way benchmarking is practised.
Different types of benchmarking
68. Since the start of formal benchmarking by Xerox Corporation in 1979, many concepts
and types of benchmarking have emerged in the UK and other countries. Originally,
benchmarking was defined as 'the continuous process of measuring products,
services, and practices against the toughest competitors or those companies recognized as industry
leaders' (David T. Kearns, chief executive officer, Xerox Corporation). This definition focuses on
achieving better performance by learning from best practices wherever they exist: in one's
own organisation, in one's sector or even outside it.
69. An alternative definition of benchmarking is provided in an OECD publication:
'Benchmarking as an efficiency tool is based on the principle of measuring the performance of one
organisation against a standard, whether absolute or relative to other organisations' (Cowper and
Samuels, 1997). Cowper and Samuels distinguish the following types of benchmarking:

- Benchmarking of self-assessment scores against a checklist of questions, such as those provided by the European Excellence Model or the Malcolm Baldrige Award.
- Results benchmarking: comparing the performance of a number of organisations providing a similar service. In the public sector, this technique can allow the public to judge whether their local provider makes effective use of its resources compared to other, similar providers. In the absence of the competitive pressures which operate in the private sector, this can provide a significant incentive to improve efficiency.
- Process benchmarking: undertaking a detailed examination, within a group of organisations, of the processes which produce a particular output, with a view to understanding the reasons for variations in performance and incorporating best practice.
70. Self-assessments are very useful, as they help staff and managers to reflect on daily practice
and have a dialogue about perceived strengths and weaknesses. Typically, this raises awareness
of the need to improve communication and co-operation across departments. It is also useful to
compare specific organisational areas (e.g. HR) against other organisations which score highly
in these areas. Of course, such comparisons are more productive if the scores resulting from
self-assessment are backed up by external assessments, as is the case in quality awards. Indeed,
process benchmarking against the winners of quality awards or other externally accredited best
practice may enable public organisations to transfer best practice into their own context and
improve the way they function. However, self-assessments are not a panacea.
71. There is a risk that the tool becomes an objective in itself, without any tangible
improvements. The use of comprehensive excellence frameworks tends to absorb significant
amounts of time, while in many cases public organisations are already aware of their
organisational weaknesses. It is important to recognise that self-assessment frameworks do not offer
any solutions; they merely provide structured self-assessments based on checklists and a scoring
system.
72. Results benchmarking is very useful for public organisations seeking to understand how their
performance or costs compare with peer organisations or alternative providers. Again, it is
an important diagnostic tool for identifying areas which need improvement. However, results
benchmarking is not easy: it requires the development and implementation of a performance
management system, with all the challenges involved in performance measurement.
Case study: Use of health benchmarking data in the UK
Using the Health Quality and Productivity Calculator, the health agency in South Gloucestershire
identified a potential 4% overspend on acute care and potential savings of £42m over four years.
The planning and finance departments worked together to develop a strategy to maximise
investments in primary and community care. The results benchmarking enabled the agency to
analyse the correlations between investments in elective and unplanned care. The benchmarking
provided detailed evidence to show that, while emergency admissions were below the national
average, per-patient costs were higher. It also showed that the agency was providing 9% more bed
days for long-term conditions patients than the national average, equating to 10,000 days a year.
Evidence from the benchmarking was used in a presentation that convinced the executive team to
shift investment from acute to community settings as part of its strategic plan.
Source: http://www.nhsbenchmarking.nhs.uk/docs/newsletter_March2010.pdf
73. A controversial issue is whether benchmarking data should be published (e.g. in league
tables) or not. In the UK there was a fair degree of resistance to the benchmarking of performance,
and to the publication of benchmarking results, in the initial stages. This was largely attributable to
two factors. First, there was a perception that it represented a politically motivated intrusion by
central government into the affairs of local government. Second, many in local government
predicted that publication of the data would have little impact and that the resources required for its
collection would therefore be expended for little gain. These concerns proved to be unfounded.
74. The Audit Commission researched the public's views and found that people valued
the information being made available, believing that it would enhance public accountability.
This was confirmed by the major press coverage which greeted publication: over 2,000 pages in
national and local newspapers, and over 100 radio and television items, were devoted to the subject.
Concerns that reporting might be biased also proved unfounded: research indicated that people
tended to interpret the performance indicators as throwing a positive light on local government and
were impressed by the range of services provided for the money spent. Press coverage was also
more positive than negative. The result is that there is now broad national and local
political acceptance of the value of the performance indicators and of accountability to the public.
75. Process benchmarking may be done internally in public organisations or in a club or
network with other organisations. This type of benchmarking allows organisations to learn how
someone else does it better and to 'pinch' their good ideas. The key challenge is not to learn from
mediocre organisations but to choose the best in class, which may be organisations providing
different public services, private or non-profit organisations, or organisations in other
countries. The information needed to establish which organisations are the best in class is often
lacking in the public sector. This is where quality or innovation awards may fulfil an important
function, identifying national champions and exposing their good practices so that other
organisations can learn from them.
76. A very instructive quality award scheme for Brazil is the family of Beacon schemes, organised
in several sectors: local authorities (1999-2011), schools (1998-2005), health (1999-2003) and
further education (1999-2011). Although central government also ran a beacon scheme from 2000
to 2002, this was rather different in nature, being essentially a 'super-branding' exercise which
gave beacon status to those central government organisational units that had already collected a
series of threshold badges, namely Charter Mark, Investors in People (reaccreditation), EFQM and
ISO 9000. (The Beacon label had already been used since 1994 in the Lotus, subsequently IBM
Lotus, Beacon Awards for excellence in technological solutions.)
77. The distinctive feature of this family of Beacon models is that award winners have a
formal responsibility to disseminate their practices. Moreover, the various Beacon schemes have
gone further than the simple publication of inspiring case studies and have adopted the 'open day'
model, whereby one-day visits or open days are held during which organisations can share their
knowledge and experience with others through a two-way exchange of information and
hands-on experience with new techniques and innovations. Beacon awards are conditional upon
winners being prepared to do the same.
78. The Beacon Council scheme lasted longest. It was one of the longest-standing policy
instruments of the Labour government, and in 2004 it was even expanded beyond local authorities
to include national parks, police and fire services, passenger transport and waste management
authorities, and town and parish councils. By 2005, the scheme had attracted over 1,200 applications
and nearly 250 beacon awards had been made. It was intended to recognise excellence or
innovation, and to disseminate good practice. Award winners must provide an excellent service in
the nominated theme; show evidence of good overall corporate performance, including favourable
recent inspections (the so-called 'corporate hurdle'); and suggest interesting opportunities for
others to learn from their practice.
79. The Beacon Schools programme identified high-performing schools across England and was
designed to build partnerships between these schools and examples of successful practice, with a
view to sharing and spreading that effective practice to other schools in order to raise standards of
pupil attainment. Each Beacon School (whether nursery, primary, secondary or special school)
received around £38,000 of extra funding a year, usually for a minimum of three years, in exchange
for a programme of activities enabling the school to collaborate with others to disseminate good
practice and raise school standards. The Beacon Schools programme was phased out in 2005 and
replaced by the Leading Edge Partnership programme for secondary schools.
80. The National Health Service Beacon programme in the UK gave awards to both hospitals
and general practitioners that exemplified local best practice and supported them to develop
and disseminate learning across the wider NHS. Research into the scheme suggested that, whilst
Beacon hospital projects had some potential in developing relatively innovative activity, they were
not perceived to be stepping-stones to wider public health action (Whitelaw et al, 2004). After the
scheme was disbanded in 2003, the NHS Modernisation Agency held annual health and social
care awards, while pursuing a range of approaches to improvement and dissemination, particularly
'collaboratives', based on an idea from the US Institute for Healthcare Improvement, which bring
together experts and practitioners to pursue dramatic organisational improvements and which have a
very different flavour from the competitive Beacon approach (Bate and Robert, 2002, p. 644).
81. The Beacon Awards in the further education sector were launched in 1994 by the
Association of Colleges, but were sponsored by the government's Learning and Skills Council
(LSC) from 1999. In 2002, the award name was changed to "Learning and Skills Beacon", open to
all post-16 providers funded by the Learning and Skills Council and inspected by post-16
inspectorates (DfES, 2002). The programme aimed to recognize imaginative and innovative teaching
and learning practice in colleges and work-based learning programmes, and to encourage the
sharing of good practice through dissemination activities and collaborative work (LSC, 2004).
There were two closely associated awards run by the LSC: the LSC Award for College Engagement
with Employers and the LSC Award for Equality and Diversity.
82. The Beacon Scheme is now spreading from the UK into another European country. In
2005-06 the associations of municipalities of Bosnia and Herzegovina and of the Republika Srpska,
in partnership with the OECD and the Council of Europe, developed a Beacon Councils scheme based
on the UK Beacon Scheme, funded by the UK and Swiss Governments.
83. The processes which lie behind these benchmarking concepts are now widely understood
(a minimal illustrative sketch follows the list):

Carry out an in-depth analysis, function by function, service by service, of all areas of the
agency's work
Agree clear protocols on measurement and ensure that all measurement processes adhere
to these protocols
Take into account the different contexts of each agency
Find the "best in class" performer
Search for this "best in class" performance in all agencies and all sectors, particularly where
an activity or function is being benchmarked, rather than a whole service
Check on the transferability of the lessons from the best-in-class provider before
attempting to introduce them in other agencies.
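To make these steps concrete, the following minimal Python sketch finds the "best in class" performer for a single benchmarked function and computes each agency's gap against it. The agencies, the figures and the cost-per-unit indicator are hypothetical assumptions rather than data from any real exercise, and a real comparison would first agree the measurement protocol and validate the data.

    # Minimal sketch of a "best in class" comparison for one benchmarked
    # function (here: cost per invoice processed). Agencies and figures
    # are hypothetical; a real exercise would first agree a measurement
    # protocol and validate the data.
    cost_per_unit = {
        "Agency A": 4.10,
        "Agency B": 2.75,   # may come from another sector entirely
        "Agency C": 3.40,
    }

    best_agency = min(cost_per_unit, key=cost_per_unit.get)
    best_cost = cost_per_unit[best_agency]

    print(f"Best in class: {best_agency} at {best_cost:.2f} per unit")
    for agency, cost in sorted(cost_per_unit.items()):
        gap = 100 * (cost - best_cost) / best_cost
        print(f"{agency}: {cost:.2f} per unit ({gap:+.0f}% vs best in class)")

    # Before copying the best performer's practices, check that its context
    # (scale, demography, statutory duties) makes the lessons transferable,
    # as the final step in the list above requires.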

84. Furthermore, it has become widely appreciated that the focus of benchmarking can range
well beyond the unit costs and productivity levels which were common in the early days of
performance comparison. In the UK, public organisations now realise that benchmarking can allow
comparisons of (Bovaird and Halachmi, 2001):
Unit costs - although this depends on some artificial distinctions being made with respect
to the allocation of joint costs (see the sketch after this list)
Processes - which require detailed process mapping
Outputs - although this often depends on an agreement to aggregate service activity levels
which exhibit a certain degree of heterogeneity
Outcomes - although this may partly rely on subjective assessment of the quality of outcomes
by different stakeholders
Policies and strategies - these are likely to differ greatly between agencies (e.g. for
ideological and political reasons), even in similar contexts. Consequently, benchmarking of
these variables may well lead to pressures of divergence (towards "relevant practice"),
compared to the pressures of convergence (towards "best practice") which may well result
from benchmarking in the other four categories (unit costs, processes, outputs, outcomes).
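The caveat on joint costs can be made concrete with a short Python sketch. The services, figures and allocation bases below are invented assumptions; the point is simply that the very same services show different unit costs depending on whether a shared overhead is allocated by staff numbers or by output volume, which is why cost allocation conventions must be agreed before unit costs are compared.

    # Hypothetical example: two services share a joint overhead (e.g. a
    # building). Their "unit costs" depend on the allocation rule chosen.
    direct_cost = {"Service X": 100_000.0, "Service Y": 60_000.0}
    staff       = {"Service X": 10,        "Service Y": 30}
    outputs     = {"Service X": 5_000,     "Service Y": 1_000}
    joint_overhead = 40_000.0

    def unit_costs(weights):
        """Allocate the joint overhead in proportion to `weights`."""
        total = sum(weights.values())
        return {
            s: (direct_cost[s] + joint_overhead * weights[s] / total) / outputs[s]
            for s in direct_cost
        }

    print("Allocated by staff numbers:", unit_costs(staff))
    print("Allocated by output volume:", unit_costs(outputs))
    # The two rules give different unit costs for identical services, so
    # benchmarking partners must agree one allocation convention before
    # comparing.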

4) Making benchmarking work: critical success factors from UK benchmarking projects
This section will discuss the lessons learnt from UK benchmarking projects based on scientific
research and identify success factors which ensure that performance and process comparisons are
translated into actual service and outcome improvements. This will include success factors both at
organisational and policy levels. The section will also describe a range of brief benchmarking case
studies from the UK to provide practical examples to the Brazilian government.
85. The experiences with collaborative benchmarking in the UK, as well as international research,
point to several key factors that explain the success of benchmarking processes. As
Graph 1 illustrates, these factors can be grouped into three themes:
Contextual factors
Organisational factors
Benchmarking factors.
Graph 1: Success factors of public sector benchmarking

d) Contextual factors
86. Performance comparison may be used by politicians to gain political advantage or to
defend their position. The existence of political competition may hinder learning and
improvement because the results of a benchmarking exercise may be interpreted differently by
political parties (see Askim et al. 2008 for evidence of the influence of political competition in local
government in Norway). Such political competition is particularly strong where there is an
adversarial two-party system. For example, in the UK league tables are used by political parties to
make political capital from their opponents' failures. Furthermore, the establishment of
benchmarking clubs between local authorities with a conservative and a leftist political majority is
politically difficult in the UK, even if there is a desire at officer level to learn from a local authority
with a different political majority. Consequently, politics gets in the way of collaborative learning
from partners, however interested they are in each other's approaches.
87. Similarly, the German experience with benchmarking shows that there is only very limited
appetite for performance comparisons between states (Länder) if they are governed by different
political party majorities. Indeed, state governments have been highly reluctant to undertake any
benchmarking between their different education systems. It was only when the OECD carried out the
PISA study, which involved benchmarking the learning outcomes of schools, that there was
transparency on which states were doing better in terms of school performance, and this then
started an intense political debate on education systems in Germany. The relevance of this critical
success factor is particularly high for those public sector organisations in which politicians are
elected by the population (for instance, local or state authorities).

e) Organisational issues
88. These factors have to do with the way in which learning is organized by the agency and
how the data is used to drive change. According to Christopher Hood, public sector organizations
can use performance information under different strategies: intelligence gathering, ranking
performers and target setting. This classification can be adapted to benchmarking: the results of
benchmarking will depend on which of these purposes the higher-level government that fosters it
is pursuing.
Target setting
89. Target setting refers to the setting of quantitative standards to be achieved, such as
maximum waiting times in hospitals or exam results for schools. In some public services, it is
possible to introduce some sort of competition against a benchmark or "yardstick" (based on the
targets set), as opposed to individual negotiations with service providers. For instance, Norway has
in the past used two different regulatory schemes to award contracts to bus companies. Dalen &
Gómez-Lobo (2003) found that contracts negotiated against a performance benchmark were more
efficient than individual contracts with each bus company.
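A hedged illustration of how such yardstick competition might work in practice: in the minimal Python sketch below, each company is assessed against a benchmark derived from the reported costs of all companies rather than against its own negotiated cost. The companies, the figures and the mean-based benchmark rule are illustrative assumptions, not the actual Norwegian scheme.

    # Illustrative "yardstick" model: each bus company is paid against a
    # benchmark derived from the costs of all companies, not against its
    # own negotiated cost. Companies, figures and the mean-based rule
    # are assumptions for illustration only.
    from statistics import mean

    reported_cost_per_km = {"Company 1": 3.30, "Company 2": 2.80, "Company 3": 3.50}

    benchmark = mean(reported_cost_per_km.values())  # the common yardstick

    for company, cost in reported_cost_per_km.items():
        margin = benchmark - cost  # positive if cheaper than the yardstick
        status = "rewarded" if margin > 0 else "penalised"
        print(f"{company}: cost {cost:.2f}/km vs benchmark {benchmark:.2f}/km "
              f"-> {status} by {abs(margin):.2f}/km")

    # Because payment depends on the industry-wide yardstick, each company
    # has an incentive to push its costs below it - unlike individually
    # negotiated contracts, where revealing low costs simply lowers the
    # payment the company can negotiate.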
90. In the target-setting strategy, achievement is normally coupled with rewards or sanctions
(for instance, by increasing or decreasing budgetary allocations to agencies that meet or do not meet
the target). Although this strategy has the advantages of focusing on priorities and motivating
organizations to achieve results, it also has disadvantages.
91. When performance management fundamentally has a control purpose, a series of
responses from the staff concerned can be expected. These involve changes in behaviours - not
only in how the service is delivered but also in data recording and data reporting. When this
information is used as a control device, organizations have incentives to game the data. For
instance, Bovaird and Halachmi (2001) refer to performance misreporting as a form of gaming.
Many agencies have a tendency to make their performance reports look more attractive,
particularly if they believe that their rivals (or partners) are doing something similar. This may be
because managers wish to preserve their managerial empires, because they wish to protect their
staff from external competition, or because they wish to protect the service (and therefore its users)
against budgetary cuts.
92. Furthermore, since rewards are attached to the achievement of targets, non-quantified
targets might be neglected in implementation. For instance, if targets are set for schools
in line with PISA results, the teaching of subjects that do not count for PISA might be
neglected by the schools, or teachers may discourage poorly-performing students from entering for
public examinations, so that their results do not depress the average achieved by the class as a
whole.
93. The more mechanistic the performance management system, the more weight is
placed on it by senior managers, and the more serious the sanctions for poor performance, the
more likely it is that all the behaviour patterns mentioned above will occur. Changes in
behaviour which essentially mean revised approaches to data collection and reporting, designed to
make final results appear more favourable, are at best misleading (because they suggest service
improvements which have not, in fact, taken place) and at worst damaging to the service (because
they undermine the integrity of the performance data in the system, so that no valid performance
assessments can be made for the service any longer). Indeed, the clearer the control mechanisms
in this situation and the more rigorously they are pursued by senior managers, the more likely it is
that staff will seek to control senior management by giving them signals which please them,
although these signals may not correspond to any underlying improvement in the service system -
this is known as the "perverse control syndrome".
Intelligence gathering
94. Intelligence gathering refers to the collection and evaluation of performance figures for
information purposes. In this case, performance is not coupled with incentives or sanctions for
good and bad performing organisations, as in the previous strategy. The intelligence gathering
strategy rests on the assumption that organisations with lower performance will take the initiative
to steer themselves towards organizational improvement.
95. There are two problems associated with intelligence gathering. On the one hand, nobody
systematically monitors whether performance actually improves. On the other hand, the data
agreed upon in the benchmarking exercise might be very thin: the data compiled in benchmarking
exercises is often very much at the minimalist end of the spectrum in terms of what is needed to
really judge the success of a service, particularly in qualitative terms (see Bovaird and Halachmi,
2001). In principle, many advocates of benchmarking favour this strategy, in the hope that
acquiring information from best performers will provide sufficient incentive for low-performing
agencies to change their behaviour. If performance measurement systems are introduced primarily
for the purposes of giving strategic direction to staff (by making it clearer what performance
results are most valued in the organisation), or to enable organisational learning (by finding out
what works and what does not work), then it is less likely that the perverse aspects of performance
measurement mentioned above will occur.
96. The purposes of performance reporting also raise concerns. These purposes normally include
accountability to internal and external stakeholders and applications for funding from financial
stakeholders. Such purposes are likely to be seen by staff as potentially threatening, which will tempt
them to report information in as favourable a light as possible, putting a gloss on their results by
whichever mechanisms they can find.
Ranking performers
97. Ranking performers is another typical benchmarking strategy. It is based on league
tables and fosters competition between diverse organisations (schools, hospitals, local
authorities) in order to increase user choice or to encourage better management. The latter is
pursued through the idea of "naming and shaming" worse performers for their inability to reach the
performance levels of other public sector organisations. This system has the advantage of
providing a permanent incentive to steer performance. However, the disadvantages may
outweigh the positive aspects, as losers in adverse contexts (for instance, a school that cannot
improve because it only receives pupils from deprived areas where learning is difficult) become
frustrated.
98. Further, there is a natural tendency to form league tables from comparative performance
data and to collect only enough information to make it possible to compile these league tables,
which is not necessarily helpful for improving performance. League tables alone can
show only that there are differences in performance, not why they have arisen or
whether lessons could be transferred from the context of the most successful agencies to the
less successful ones.
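A small Python sketch (with hypothetical schools and figures) illustrates how thin league tables are: the ranking is trivially computable from a single indicator, but nothing in the table explains why performance differs or whether lessons are transferable, because context such as pupil intake is simply absent.

    # A league table is easy to compute from a single indicator...
    exam_score = {"School A": 72, "School B": 65, "School C": 81, "School D": 58}

    league_table = sorted(exam_score.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (school, score) in enumerate(league_table, start=1):
        print(f"{rank}. {school}: {score}")

    # ...but it contains no explanatory variables. Context such as the
    # share of pupils from deprived areas (hypothetical figures) is what
    # a benchmarking exercise would need in order to interpret the ranks:
    deprivation_share = {"School A": 0.10, "School B": 0.45,
                         "School C": 0.05, "School D": 0.60}

    # School D is bottom of the table yet may be doing well given its
    # intake; the league table alone cannot show this.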
99. In sum, the likelihood that benchmarking will lead to positive organisational changes depends
to a major extent on the purposes of the performance management system in place in the
organisation.

f) Organising and planning the benchmarking exercise


Benchmarking among similar / different organizations
100. Authors do not agree on the impact of benchmarking among similar organizations (schools,
hospitals, local authorities, prisons) as opposed to benchmarking among different organizations
with similar processes. For instance, studies done in the United States and in Norway find that the
heterogeneity of the organizations under comparison is a key success factor:
"municipalities learn much from contexts that are outside their usual frames of reference"
(Askim et al. 2007, p. 311)
"if the purpose is to find new ideas for improving operations, then omitting all but those
operating in a similar fashion defeats this purpose" (Ammons and Rivenbark 2008, p. 312)
However, studies carried out in Germany emphasize the contrary point of view:
"The foundation of an effective comparison is based on a minimum level of structural
similarity of the municipalities under comparison." (Adamaschek and Baitsch, 1999, p. 37)
101. However, practice shows that both types of benchmarking are useful and can even
complement each other. Several critical success factors are common to both.
Independent external validation of comparative performance data
102. Self-assessment is normally a major element in any benchmarking process, with both
advantages and disadvantages. It has the obvious advantage of being more informed and more
improvement-oriented, because the staff themselves have most to gain from implementing any
improvements which emerge from the benchmarking process. However, self-assessment might be
blinkered, because familiarity with the service reduces the ability to see potential changes that are
uncomfortable or threatening.

103. Furthermore, even managers who are keen to promote change may be tempted to
misreport performance in a self-assessment framework, because they believe that colleagues in
other parts of the organisation are doing so and do not wish to be outshone by cheats (since their
budgets, their clients and their staff would be adversely affected if they did not join in the
cheating).
104. This clearly provides a strong rationale for independent audit of all data used in
benchmarking exercises. Of course, independent audit also has potential limitations and
disadvantages:
It is less knowledgeable about the service
It is more easily fooled by smooth talkers
It will not be (closely) involved in making recommendations work
It may be unduly sceptical of in-house providers
It may not be trusted by in-house providers (or even by external providers)
However, when done well, independent audit has the offsetting advantages of being probing,
rigorous and revealing.
Organisation of benchmarking clubs
105. Benchmarking clubs may be seen as a "middle way" regarding the degree of
independence of the benchmarking process. Membership of benchmarking clubs is voluntary, so
there is less threat than with audit. Other members are respected, so there may be less temptation
to misreport performance. Everyone has a vested interest in learning, so there may be a more
open exchange of uncontaminated information. This is reinforced if none of the club members is
particularly concerned about aiming for a high league table position. In a nutshell, clubs are a way
to bring together the social and technical organization of the benchmarking exercise. The use of
benchmarking clubs should not exclude the use of independent audits; a combination of both
strategies can be helpful. Approaches to performance comparison which do not involve at least one
of these checks on self-assessment are clearly open to question.
106. The UK public sector now has more than a decade of experience with benchmarking
clubs, and the number of examples is relatively high. The basic idea behind forming these clubs
relates to one success factor mentioned by Bulkley et al. (2010): the need to transfer knowledge out
of experience, especially the implicit knowledge that is so difficult to find elsewhere. Some lessons
can be drawn from previous work by Bovaird and Halachmi (2001) on the Best Value pilot
initiative.
107. For the organisation of benchmarking clubs, the following issues should be taken into
account:
They must contain "requisite difference" between their members, so that the initial focus
does not need to be on comparability, as was originally thought in many cases. Difference
can relate to the type of business or to the actual performance of otherwise very similar
organizations.
However, in spite of these differences, lessons must be transferable, so some element
of comparability between club members remains important.
Clubs should ideally contain members who represent the best-in-class for the service,
issue or activity being studied, or at the very least have members who have close links
with best-in-class organisations. Otherwise, the process of learning becomes very difficult.
Discussion and data sharing in the club must be open and frank, which means they will only
occur where there is trust. Benchmarking clubs are a social activity as well as a technical
exercise.
A club needs analytical expertise, but even where this is in place, it probably also needs
considerable time before that expertise can be brought to bear on the core problems and
reliable data.
Quantity and quality of resources
108. The use of benchmarking as a performance improvement tool cannot be clearly linked to
the budgetary situation of the public sector as an external contextual factor, i.e. there is no
clear indication that a budgetary surplus or deficit will catalyse the use of benchmarking in the public
sector. If benchmarking is oriented solely towards making savings, the behaviour of the staff involved
is mostly defensive and will not lead to improvements (see Bowerman & Ball, 2001).
109. Many studies agree on the need to deploy financial, human and time resources for
benchmarking (see Holloway et al, 1999; Bulkley et al, 2010; Blanc et al, 2010). Financial and
human resources are needed to develop joint performance information for comparison, as well as
to set the basis for interpreting the data and looking for the causes of low or high performance
(Askim et al, 2008). In the examples of benchmarking networks in the UK, subscription fees vary
from approximately $1,600 to $10,000.
110. The actors involved in the benchmarking process may require some training, but the quality
of the project management and the facilitation of the benchmarking exercise are key.
Engagement of the managers
111. The engagement of top civil servants (and of political appointees and politicians, depending
on the type of organisation) is closely connected to the agency's disposition to learn. Once
top managers have shown interest in the exercise, several factors influence the
outcomes of benchmarking. As several authors have confirmed, it is key that middle managers are
willing to benchmark with other organizations (Ammons & Rivenbark, 2008) and that all members
of the organization are willing to reflect upon and, whenever necessary, change their work routines
(Bulkley et al. 2010).
The use of performance indicators
112. When a performance measurement system is being designed for the first time, there is a
tendency to define too many performance indicators. This experience has been shared by
many benchmarking clubs (such as the Catalonian local authorities) and central authorities (such as
the Audit Commission). For instance, at the early stages of developing the Best Value performance
regime, the Audit Commission used around 300 performance indicators, which were later reduced
to 70. It may be difficult to resist the temptation to add performance indicators to the basket simply
because the information is easy to obtain or is relevant to only one of the partners. However,
Ammons & Rivenbark (2008) recommend focusing on strategic performance indicators that refer to
efficiency and results.

5) Making benchmarking sustainable: integrating benchmarking in wider public service and governance reforms
This chapter provides a framework for integrating benchmarking into wider public service and
governance reforms. Given the resources required to plan and carry out benchmarking projects,
it is important that they are not conceived as an additional activity but add maximum value to other
public service and governance reforms.
113. Benchmarking is no different from other management instruments and should not
become an end in itself. Like any other tool, its only purpose should be to enhance the
performance of the organization, a point that managers often forget.

g) Principles to apply in the implementation of benchmarking


114. Benchmarking must be embedded in the service/policy cycle of a public sector
organisation. Otherwise it will be a spurious exercise without any added value for the agency.
There are three key principles that should help an agency to make effective use of benchmarking.
115. First, start with outcomes and outputs in order to compare and improve inputs and
processes. This is a key element of any benchmarking exercise. Regardless of whether similar or
dissimilar agencies are compared, an agency should start by analysing to what extent the defined
outcomes and outputs have been achieved. In many cases, quantitative measures of performance are
required in order to have an objective as well as a subjective assessment of the achievement of
results. Therefore, the organization requires a solid performance management system. A
performance management system is helpful both for federal public sector organizations that
deliver services to citizens and for regulatory agencies. In the case of regulation, when the
design of regulation and its enforcement rest with the same agency, the measures
of outputs may relate to how, for instance, regulation and the management of risk are
approached by agencies from different sectors (e.g. financial, food or pharmaceutical regulation). In
this case, each agency may offer its figures on the percentage of the private market covered by its
risk regulation strategy. For both services and regulatory policies, the agency should then analyse
processes and inputs in order to ascertain how a particular performance level is achieved.
116. Secondly, try to understand the outcomes and outputs of the agency from the point of
view of users and stakeholders. Again, benchmarking is not an instrument to improve the
organization per se but an instrument to achieve results that matter to users and other stakeholders.
In this regard, the benchmarking exercise should include the views of end users as part of the
learning process. Otherwise, the improvements resulting from benchmarking may be too
focussed on internal matters, without many tangible benefits for citizens and other
stakeholders. A public sector organisation may therefore find it useful to compare satisfaction rates
and other performance data gathered from customer surveys and evaluations with other
organisations as part of a benchmarking process.
117. Thirdly, try to involve middle and frontline staff in the benchmarking exercise. As
mentioned in the previous section, benchmarking requires the active involvement of managers at all
levels of the organization at different stages of the process. Senior managers have to make sure
that the benchmarking initiative is understood, supported and used as a performance improvement
process. Middle managers are likely to take an active role in the benchmarking process by shaping
its design as well as analysing the causes of good or bad performance. Finally, frontline managers,
apart from eventually taking part in some of the preparatory meetings with the partners, are likely
to be involved in information gathering as well as offering their view on how the data should be
interpreted.

h) Phases for individual agencies that use benchmarking


118. Benchmarking should be embedded in an overall improvement process. In order to make
effective use of benchmarking in public organisations, the "Quality and Efficiency" model
developed by Governance International outlines how to embed benchmarking in the overall service
improvement process through five phases.
Graph 2: The Quality and Efficiency Cycle

119. First, define the services/policies that are likely to benefit most from
benchmarking. In this phase, consultation with users and other stakeholders may help to
establish what needs to be benchmarked, since it is clearly impossible to benchmark everything. The
idea is simple: service users, businesses and other stakeholders are best placed to define the
challenges they face and the support they need from public services. There are several instruments
that can help in this task, depending on the type of product (services or policies) for which the
benchmarking is designed. Consultation should be agreed upon with the partners of the
benchmarking exercise, so that different organizational dimensions are taken into account when
defining the services to be improved.
120. Secondly, measure and compare performance. Once the service/policy to be
benchmarked has been identified, performance should be measured in order to assess the quality
of public services and policies. Since not all service/policy dimensions are measurable, some
qualitative information must be added in a way that supports the benchmarking exercise.
121. Thirdly, manage change carefully. Once areas which need improvement have
been identified, it is necessary to work out how to improve: the exercise here consists of identifying
and interpreting the reasons behind the observed performance. Benchmarking might be followed by
peer review workshops (or other tools) in order to improve the processes that drive
organizational performance.
122. Fourthly, improve by learning from benchmarking. As this is a process that involves all
staff linked to the affected processes, a team could be formed to design an
improvement plan in more detail, covering specific problems as well as cross-cutting issues of
the organization.
123. Finally, evaluate the service/policy improvements. This last phase of the cycle involves
reviewing whether performance has really improved, which can be done by many different means,
including successive rounds of benchmarking as well as peer reviews or other instruments.

6) Resources
i) Benchmarking literature on UK experiences
APSE (Association for Public Service Excellence) (2013), Benchmarking for success. Manchester.
APSE (Association for Public Service Excellence) (2013a), Terms and conditions of performance networks
membership. Manchester.
APSE (Association for Public Service Excellence) (2012), The essential guide to performance networks Year
14 (2011/12). Manchester.
APSE (Association for Public Service Excellence) (2012a), 2010-2011 Performance indicators. Manchester.
Benington, J. (2000), The modernisation and improvement of government and public services, Public
Money and Management, April/June, pp. 3-8.
Bovaird, T. and Halachmi, A. (2001), Learning from international approaches to best value, Policy and
Politics, Vol. 29 No. 4, pp. 451-463.
Bowerman, M., Ball, A. and Francis, G. (2001), Benchmarking as a tool for the modernisation of local
government, Financial Accountability and Management, Vol. 17 No. 4, pp. 321-329.
Cowper, J. and Samuels, M. (1997), Performance benchmarking in the public sector: the United Kingdom
experience, paper prepared for an OECD meeting on public sector benchmarking.
Foot, J. (1998), How to do benchmarking: a practitioner's guide. Inter Authorities Group.
Patterson, A. and Pinch, P.L. (2000), Public sector restructuring and regional development: the impact of
compulsory competitive tendering in the UK, Regional Studies, Vol. 34 No. 3, pp. 265-275.
Walker, S., Masson, R., Telford, R. and White, D. (2007), Benchmarking in National Health Service
procurement in Scotland, Health Services Management Research, Vol. 20, pp. 253-260.

j) International research on public sector benchmarking


Ammons, D. N. & Rivenbark, W. C. (2008), Factors Influencing the Use of Performance Data to Improve
Municipal Services: Evidence from the North Carolina Benchmarking Project, Public Administration
Review, March/April, pp. 304-315.
Askim, J., Johnsen, Å. & Christophersen, K. (2008), Factors Behind Organizational Learning from
Benchmarking: Experiences from Norwegian Municipal Benchmarking Networks, Journal of Public
Administration Research and Theory, Vol. 18, pp. 297-320.
Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E. & Bulkley, K. E. (2010), Learning to Learn from
Data: Benchmarks and Instructional Communities, Peabody Journal of Education, Vol. 85, pp. 205-225.
Bulkley, K. E., Christman, J. B., Goertz, M. E. & Lawrence, N. R. (2010), Building with Benchmarks: The Role
of the District in Philadelphia's Benchmark Assessment System, Peabody Journal of Education, Vol. 85,
pp. 186-204.
Holloway, J., Francis, G. & Hinton, M. (1999), A Vehicle for Change? A Case Study of Performance
Improvement in the "New" Public Sector, The International Journal of Public Sector Management,
Vol. 12, pp. 351-365.
Kouzmin, A., Loeffler, E., Klages, H. & Korac-Kakabadse, N. (1999), Benchmarking and performance
measurement in public sectors: towards learning for agency effectiveness, The International
Journal of Public Sector Management, Vol. 12 No. 2, pp. 121-144.
Parrado, S. (1996), Una visión crítica de la implantación del benchmarking en el sector público,
Revista Vasca de Administración Pública, Núm. 45, pp. 37-62.
