
University of Stellenbosch

Economics of Education 1

Measurement problems in education

Meriel Lawrie

14570971

Dr. P de Villiers

23 March 2009
Contents

Introduction
Literature Review
Input accounting
    Direct costs
    Indirect costs
    Environmental cost factors
    Quality of inputs
Educational output measures
    Cognitive outcomes
        Quantifying Educational Attainment
        Quality and cognitive skills measures
    Non-cognitive outcomes
        Social outcomes
        Personal outcomes
Bias and measurement errors
Conclusion
List of References
Introduction

The literature concerning the economics of education relies heavily on econometric techniques to
support explanatory theories and related conclusions concerning proposed educational policy.
This implies that much of the literature is based on measurements of educational inputs, costs
and benefits. Many of the variables commonly used are, however, proxies for immeasurable
characteristics. Many of these proxies have become widely accepted, although there is still no
consensus as to which is optimal, or whether any single one is suitable in all circumstances.
Several of these proxies are directly measurable, whilst others are estimates based on samples.
These sample values and proxies pose the additional problems of measurement error and bias.
It is of great importance to consider these measurement problems, since these proxies are used to
measure efficiency, to deduce rates of return to education and to compare systems and
educational entities.
This paper is structured in such a way that it follows the flow of a cost-benefit analysis,
discussing the problems encountered with measuring each element and mentioning certain
solutions that have been used or suggested.
The next section is a brief overview of the existing literature about educational measurement
problems. This is followed by a section on measuring inputs into the educational system which is
broken down into more specific subsections concerning different inputs. Thereafter a discussion
about output measures follows. A section is then devoted to studying the effects of measurement
errors. A short conclusion then follows.

Literature Review
Literature concerning measurement problems in education dates back to at least 1970, although
very little has been written on the subject in its own right.
The literature consists largely of specific references to the applicable issues within each
cost-benefit analysis, efficiency analysis or other case study. Most of the research in this
direction has been done in order to prove or disprove the existence of bias, as in Ashenfelter
and Krueger (1994) and Card (1995), or to support theories as to the social costs and benefits
of education, as in Cohn and Geske (1990). Since educational measurements are used in many
disciplines (human resources, sociology, policy making and so forth), contributions to the
literature have come from many sources. There seems to be a need for these extensive findings
to be collated in order to allow for easier understanding and interpretation.

Input accounting
Inputs here refer not only to direct and indirect costs, but also to all other resources expended in
the educational process.

Direct costs
It is often assumed that the direct costs of education are easily measured. This is not always
the case: in most educational systems, school budgets are inaccurate as they represent planned
rather than actual expenditure. Furthermore, these budgets are not drawn up on a per-pupil
basis. This problem can be circumvented by using extensive questionnaires to determine the
resources expended per pupil, which must then be valued. This is commonly known as the
ingredients approach (Hummel-Rossi & Ashdown, 2002:11-14).
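As a rough illustration of the ingredients approach, the sketch below (in Python, with entirely
hypothetical quantities and prices) values each resource a pupil consumes and sums the result;
an actual application would draw the quantities from the questionnaires described above.

    # Ingredients approach: value every resource ("ingredient") expended per
    # pupil, then sum. All quantities and unit prices are hypothetical.
    ingredients = {
        # resource: (quantity per pupil per year, unit price)
        "teacher_hours":  (900, 25.0),   # contact hours x hourly teacher cost
        "textbooks":      (8, 120.0),    # books issued x replacement cost
        "facility_m2":    (4, 310.0),    # floor space x annualised rental value
        "admin_overhead": (1, 650.0),    # flat per-pupil administrative cost
    }

    cost_per_pupil = sum(qty * price for qty, price in ingredients.values())
    print(f"Estimated direct cost per pupil: {cost_per_pupil:.2f}")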

Indirect costs
Cohn and Geske (1990:71) mention that there are many costs brought about by education that
are not included in school budgets but are included in the private cost of education. These
include transportation costs and the costs of books and supplies, which would also have to be
estimated from survey data. Other indirect costs are not so much outlays as losses incurred.
Earnings foregone by students are one of the largest losses that need to be taken into
account. Cohn and Geske (1990:76-77) suggest estimating this cost by assuming that children
below the legal working age have no such opportunity costs, but that students above it forego
a certain proportion of the market wage they would have received. Even so, this does not
reflect the household production a child could otherwise be doing.
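A minimal sketch of this estimation rule, assuming (hypothetically) a legal working age of 15
and that students forego three quarters of the market wage; both parameters are illustrative
assumptions, not values from Cohn and Geske.

    # Foregone earnings under the Cohn-Geske style assumption: zero opportunity
    # cost below the legal working age; a fixed proportion of the market wage
    # above it. Both parameters are illustrative assumptions.
    LEGAL_WORKING_AGE = 15
    FOREGONE_PROPORTION = 0.75

    def foregone_earnings(age: int, market_wage: float) -> float:
        """Annual earnings a student gives up by remaining in school."""
        if age < LEGAL_WORKING_AGE:
            return 0.0
        return FOREGONE_PROPORTION * market_wage

    print(foregone_earnings(12, 30000.0))  # 0.0: below the legal working age
    print(foregone_earnings(17, 30000.0))  # 22500.0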
Fraumeni and Jorgenson (1992:67) recommend that the time parents spend on their children’s
education also be measured and added to the total set of inputs. This, however, is not a
simple task: one would once again have to depend on survey data, which would introduce large
measurement errors.
Indirect costs are not all borne by private citizens; they also include the depreciation of
government-owned school buildings and imputed rents. These are hard to estimate since
buildings like schools are not generally let, so there is no clear market price.
Educational institutions are usually nonprofit and as such are tax exempt. The foregone
taxes can be seen as lost public revenue, but are also seldom included in cost measures.

Environmental cost factors


When comparing the efficiency of different schools, it is important to note that, “Through no
fault of their own, some school districts must spend more than other districts to obtain the same
level of educational outcomes" (Duncombe, Ruggiero & Yinger, 1996:2). Duncombe et al.
(1996:2) mention that one has to take into account that some schools have to pay teachers
with similar skills and experience levels more to compensate for uncontrollable
circumstances. These environmental cost factors must be taken into account when schools
with different circumstances are compared. Duncombe et al. (1996:13) build an index with
which to adjust teacher salaries in an attempt to overcome this problem, using a data set
that gives the percentage of pupils with disabilities, single-parent households and language
barriers, and the proportion of households in poverty. To this end the TIMSS and PIRLS data
include a wide range of such socio-economic variables (Foy, Kennedy, Martin & Mullis, 2006;
Martin, Mullis & Olson, 2008). Even though on average such a measure may be suitable, it is
an aggregate measure and cannot be determined without error.
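The sketch below illustrates the idea of such a cost index: a district's observed teacher
salary spending is deflated by an index built from the socio-economic shares mentioned above.
The weights here are purely hypothetical stand-ins; Duncombe et al. estimate theirs
econometrically.

    # Illustrative environmental cost index. Weights are hypothetical stand-ins
    # for econometrically estimated cost effects.
    WEIGHTS = {
        "share_disabled": 0.40,
        "share_single_parent": 0.25,
        "share_language_barrier": 0.20,
        "share_poverty": 0.35,
    }

    def cost_index(district: dict) -> float:
        """Returns > 1 for districts facing above-average cost pressure."""
        return 1.0 + sum(w * district[k] for k, w in WEIGHTS.items())

    district = {"share_disabled": 0.08, "share_single_parent": 0.30,
                "share_language_barrier": 0.12, "share_poverty": 0.25}
    # Deflate observed salary spending so districts become comparable.
    adjusted_salary = 200000.0 / cost_index(district)
    print(round(cost_index(district), 3), round(adjusted_salary, 2))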

Quality of inputs
The High School and Beyond longitudinal study, among several others, includes many variables,
such as pupil/teacher ratios, length of the term, teachers' degrees, experience and test
scores, which have been used in conjunction with per-pupil expenditures as proxies for the
quality of educational inputs (Brewer & Ehrenberg, 1994:1). Todd and Wolpin (2003:F14),
however, warn against the use of too many of these proxies, as they are correlated with one
another and may in fact introduce more bias into a model than the omitted-variable bias their
inclusion removes.

Educational output measures


There is much disagreement among educational economists as to which outputs should be
considered and how to measure them. The first output that is always considered is the level of
attainment achieved and the cognitive outcomes attached to it. Thereafter follows a whole
range of non-cognitive skills and social outcomes, which are particularly difficult to
measure, as they are indirect in nature.

Cognitive outcomes
Traditionally, cognitive outcomes have been measured in two ways: first as years of education
completed, and second as achievement levels in standardized tests.

Quantifying Educational Attainment


The most widely accepted measure of attainment level is the number of years spent within the
educational system. On a macroeconomic level, in cross-country analysis, this is often
estimated on the basis of beginning-of-year enrollment or enrollment flows. Reported
enrollment rates are often inaccurate though, particularly at the secondary and tertiary
schooling levels (Krueger & Lindahl, 2001:1102). On a microeconomic level it is determined by
national surveys. These are a potential source of error, as the questions are open to
misinterpretation. In South Africa's 1996 and 2001 Censuses, for instance, it was found that
the question as to the highest level of schooling attained was probably interpreted as the
level being attempted rather than the level completed (in the 2001 Census, 28.4% of
respondents who claimed to have attained the highest grade in secondary school also claimed to
be attending secondary school at the time of the survey). When the question was split in 2007
into the highest level having been "completed" or "attended but not completed", this
discrepancy was removed (Yu, 2009:20).
Furthermore, higher educational levels are defined differently from country to country, so
international comparisons based on years of schooling are not always simple (Krueger &
Lindahl, 2001:1114). Even at a national level, years of education are not always comparable.
In countries like Germany and Denmark, for instance, schools are diversified after a certain
level into separate systems for students wishing to acquire vocational or academic training
(Hoffmeyer-Zlotnick & Warner, 2005). Therefore, after the tenth year of education, it is no
longer accurate to measure education in years. This holds for tertiary levels of education in
all countries, in that it is not possible to equate years of study at different types of
institutions, or in different courses.
It is also standard practice to use a qualification such as a degree or diploma as a measure
of educational level completed. This, however, prevents anything less than the first school
diploma from being taken into account, substantially limiting the use of the measure.
Until now our discussion of the quantity of education has been limited to formal education,
but as Cohn and Geske (1990:56) suggest, other (non-formal) types of education such as
home-schooling and on-the-job training must also be taken into account as forms of education.
In most regression analyses this problem has not been addressed, and the effect of such
education is captured by the experience variable.

Quality and cognitive skills measures


The International Association for the Evaluation of Educational Achievement has devised
intricate tests of mathematical and scientific skills (the TIMSS test) and reading skills (the
PIRLS test), the results of which are standardized in order to allow for international
comparisons. These test results, however, are still not fully comparable over time, since the
procedures were changed in the last round of testing in 2007. Furthermore, their international
comparability is questioned, since even these tests are curriculum based (Foy, Kennedy, Martin
& Mullis, 2006; Martin, Mullis & Olson, 2008).
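TIMSS and PIRLS report achievement on a scale centred on 500 with a standard deviation of 100
in the baseline assessment. The sketch below shows a simplified linear rescaling of raw scores
onto such a scale; the actual studies derive scores through far more elaborate item-response-
theory scaling, so this is an illustration of the idea only.

    # Simplified standardization of raw test scores to a 500/100 scale.
    # Real TIMSS/PIRLS scores are derived via item response theory.
    import statistics

    def rescale(raw_scores):
        mu = statistics.mean(raw_scores)
        sigma = statistics.stdev(raw_scores)
        return [500 + 100 * (x - mu) / sigma for x in raw_scores]

    print([round(s, 1) for s in rescale([42.0, 55.0, 61.0, 48.0, 70.0])])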

Non-cognitive outcomes
Many non-cognitive outcomes of education have been suggested and considered in the literature,
all of which pose measurement problems.
These outcomes can be divided into social and personal outcomes, where personal outcomes are
more easily measured, though still not without problems.
Social outcomes
De Villiers (1993:230) mentions non-cognitive skills such as tolerance, honesty, motivation to
achieve, respect for authority and obedience that have been stressed as outcomes of the
educational system. The next four paragraphs deal with indicators of these skills as used by
Jacob (2002:591). Whilst motivation and desire to achieve could be measured by the amount of
time a student spends on homework per week, it is impossible to measure this time, and the
intensity of effort during it, accurately.
During the educational process one could use the number of times a student has had to be
disciplined as a measure of his respect for authority and obedience, his honesty and belief in
justice.
A student’s social skills and organizational skills could be measured by grades achieved during
activities involving group work.
Whether a child has been kept back in school could serve as a proxy for the perceptions of his
social maturity and adaptability, but here natural ability (intelligence) also plays a large role.
None of these variables depend only on education, so they cannot be used to determine the
ceteris paribus effect of education on social outcomes. They can also only be used for
students still in the educational system, and therefore cannot reflect the effect that the
total amount of education procured had on these traits, although at least these measures could
be obtained for the same student at different ages.

Dee (2004:1697) suggests that newspaper readership be used as a measure of civic knowledge,
but this is also flawed in that a lower level of education increases the opportunity cost of
reading the newspaper, so a selection-bias problem arises. Dee (2004:1698) also claims that
more educated individuals might see voting as unlikely to change anything, without this
implying that they are less socially responsible, which discredits voter turnout as a
measurement tool.

For all the previously mentioned measures, a family background that encouraged education would
also encourage such activities, so the educational system cannot be held solely responsible
for them. This does not necessarily prevent the use of these variables in establishing whether
there is a positive correlation, but it does limit the precision of estimates of the size of
the effect.
Personal outcomes
It is possible to measure the employment and wage effects of education, although measuring
fringe benefits within the workplace is troublesome. Haveman and Wolfe (1984) found that there
exists no measure for the benefits of occupational choice associated with higher levels of
education.
As for consumption effects, Haveman and Wolfe (1984) theorize that it is impossible to
measure the increased utility of consumption caused by an increase in education.
Another personal return to education which cannot be readily measured is that of increased
household productivity as described in Cohn and Geske (1990:46).

Bias and measurement errors


Two further measurement problems in educational economics are measurement error and the bias
it causes, as well as the bias caused by omitted variables and correlated proxies.
Ashenfelter and Krueger (1994:1171) find that measurement error in schooling levels accounts
for a large proportion of the variation in the difference in the reported measures of
schooling of twins, and that this causes a significant downward bias in the estimated returns
to schooling. Card (1995:32) also finds that returns are underestimated, but questions whether
this is completely attributable to measurement error.
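The downward (attenuation) bias from measurement error can be illustrated with a small
simulation: when reported schooling equals true schooling plus noise, the OLS slope shrinks
towards zero by the reliability ratio var(S)/(var(S)+var(u)). All parameters below are
illustrative, not estimates from the twins data.

    # Attenuation bias: OLS return to schooling under misreported schooling.
    import random
    random.seed(1)

    TRUE_RETURN = 0.08
    n = 100_000
    S = [random.gauss(12, 2) for _ in range(n)]              # true schooling
    y = [0.5 + TRUE_RETURN * s + random.gauss(0, 0.3) for s in S]
    S_obs = [s + random.gauss(0, 1) for s in S]              # reported schooling

    def ols_slope(x, z):
        mx, mz = sum(x) / len(x), sum(z) / len(z)
        cov = sum((a - mx) * (b - mz) for a, b in zip(x, z))
        return cov / sum((a - mx) ** 2 for a in x)

    print(ols_slope(S, y))      # close to the true 0.08
    print(ols_slope(S_obs, y))  # close to 0.08 * 4 / (4 + 1) = 0.064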
Omitted-variable bias occurs when a significant variable that has been omitted is correlated
with a variable that has been included in the model; in this case the included variable's
estimated effect will be biased. The most commonly reported omitted-variable bias in
educational economics is ability bias. Conventional wisdom implies that this will bias the
education effect upwards, since natural ability is positively correlated with educational
attainment. Ashenfelter, Harmon and Oosterbeek (1999), however, mention that since most
ability measures are influenced by the level of educational attainment at the time of the
ability measurement, including the ability variable could in fact cause a downward bias in the
effect of education.
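The direction of this bias follows from the standard omitted-variable formula. Writing
earnings y, schooling s and ability a, a sketch in standard notation:

    y = \beta_0 + \beta_1 s + \beta_2 a + \varepsilon    (true model)

    \operatorname{plim} \hat{\beta}_1 = \beta_1 + \beta_2 \, \frac{\operatorname{Cov}(s, a)}{\operatorname{Var}(s)}    (when a is omitted)

If ability raises earnings (\beta_2 > 0) and is positively correlated with schooling
(\operatorname{Cov}(s, a) > 0), the bias term is positive and the return to education is
overstated, which is the conventional upward ability bias described above.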
Other omitted-variable biases could be caused by family-related or other unobserved
characteristics, since schooling is not independent of other factors that influence earnings
(Ashenfelter, Harmon & Oosterbeek, 1999).
As mentioned before, when adding a proxy for the social outcomes of education to a model of
returns, it is important to take into account that these proxies are also correlated with
other exogenous variables. This would cause an upward bias in the returns to education,
additional to the bias caused by omitted exogenous variables.

Conclusion
There are still many unresolved measurement problems in educational economics. Many of them
have been circumvented to some extent by the use of proxies, but these are all still
problematic in themselves and there is no consensus as to which of them should be used.
Furthermore, whenever proxies are used, the economist has to be aware of possible bias in the
results caused by them. This is of great importance especially when policy decisions are to be
based on econometric results involving education.

(2496 words)
List of References
Ashenfelter, O. and Krueger, A. (1994). 'Estimates of the Economic Return to Schooling from a
New Sample of Twins'. The American Economic Review, 84(5):1157-1173.
Ashenfelter, O., Harmon, C. and Oosterbeek, H. (1999). 'A review of estimates of the
schooling/earnings relationship, with tests for publication bias'. Labour Economics,
6(4):453-470.
Hummel-Rossi, B. and Ashdown, J. (2002). 'The state of cost-benefit and cost-effectiveness
analyses in education'. Review of Educational Research, 72(1):1-30. Available from:
http://www.proquest.com/ (Accessed March 22, 2009).

Brewer, D.J. and Ehrenberg, R.G. (1994). 'Do School and Teacher Characteristics Matter?
Evidence from High School and Beyond'. Economics of Education Review, 13(1):1-17.
Card, D. (1995). 'Earnings, schooling and ability revisited'. In: Polachek, S. (ed.),
Research in Labor Economics, 14:23-48. Greenwich, CT: JAI Press. Available from:
http://emlab.berkeley.edu/users/card/papers/earn-school.pdf (Accessed March 22, 2009).
Cohn, E. and Geske, T.G. (1990). Economics of education. Oxford: Pergamon Press.
Dee, T.S. (2004). 'Are there civic returns to education?' Journal of Public Economics,
88:1697-1720.
De Villiers, P. (1993). Incentives for efficiency in educational systems, with special reference to
the South African education system. Mimeo. EDUPOL research report: Stellenbosch.
Duncombe, W.D., Ruggiero, J. and Yinger, J.M. (1996). 'Alternative approaches to measuring
the cost of education'. In: Ladd, H.F. (ed.), Holding schools accountable: Performance-based
reform in education. Washington, DC: The Brookings Institution. Available from:
http://www-cpr.maxwell.syr.edu/efap/Publications/Alternative_Approaches.pdf (Accessed
March 22, 2009).
Fraumeni, B.M. and Jorgenson, D.W. (1992). 'Investment in Education and U.S. Economic Growth'.
The Scandinavian Journal of Economics, 94(Supplement):67. Proceedings of a Symposium on
Productivity Concepts and Measurement Problems: Welfare, Quality and Productivity in the
Service Industries. Available from: http://www.jstor.org/stable/3440246 (Accessed
March 22, 2009).
Foy, P., Kennedy, A.M., Martin, M.O. and Mullis, I.V.S. (eds.). (2006). PIRLS 2006
International Report. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston
College.
Haveman, R.H. and Wolfe, B.L. (1984). ‘Schooling and Economic Well-Being: The Role of
Nonmarket Effects’. The Journal of Human Resources, 19(3):377-407. Available from:
http://www.jstor.org/stable/145879. (Accessed March 23, 2009).
Hoffmeyer-Zlotnick, J.H.P. and Warner, U. (2005). 'How to Measure Education in Cross-National
Comparison: Hoffmeyer-Zlotnick/Warner Matrix of Education as a New Instrument'. In:
Hoffmeyer-Zlotnick, J.H.P. and Harkness, J. (eds.), Methodological Aspects in Cross-National
Research. Mannheim: GESIS-ZUMA Zentrum für Umfragen, Methoden und Analysen.
Jacob, B.A. (2002). ‘Where the boys aren’t: non-cognitive skills, returns to school and the
gender gap in higher education’. Economics of Education Review, 21: 589–598.
Krueger, A. and Lindahl, M. (2001). 'Education for Growth: Why and For Whom?' Journal of
Economic Literature, 39(4):1101-1136. Available from: http://www.jstor.org/stable/2698521
(Accessed March 22, 2009).
Martin, M.O., Mullis, I.V.S. and Olson, J.F. (eds.). (2008). TIMSS 2007 Technical Report.
Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.

Todd, P.E. and Wolpin, K.I. (2003). 'On the Specification and Estimation of the Production
Function for Cognitive Achievement'. The Economic Journal, 113(485):F3-F33. Available from:
http://www.jstor.org/stable/3590137 (Accessed March 22, 2009).

Yu, D. (2009). The comparability of Census 1996, Census 2001 and Community Survey 2007.
Unpublished.
