
History and Use of Relative Importance Indices in Organizational Research
JEFF W. JOHNSON

Personnel Decisions Research Institutes

JAMES M. LEBRETON
Wayne State University

The search for a meaningful index of the relative importance of predictors in multiple regression has been going on for years. This type of index is often desired when
the explanatory aspects of regression analysis are of interest. The authors define
relative importance as the proportionate contribution each predictor makes to R²,
considering both the unique contribution of each predictor by itself and its incremental contribution when combined with the other predictors. The purposes of
this article are to introduce the concept of relative importance to an audience of
researchers in organizational behavior and industrial/organizational psychology
and to update previous reviews of relative importance indices. To this end, the authors briefly review the history of research on predictor importance in multiple regression and evaluate alternative measures of relative importance. Dominance
analysis and relative weights appear to be the most successful measures of relative
importance currently available. The authors conclude by discussing how importance indices can be used in organizational research.
Keywords: relative importance; multiple regression analysis; dominance analysis; relative weights; organizational research

Author's Note: Correspondence concerning this article should be addressed to Jeff W. Johnson, Personnel Decisions Research Institutes, 43 Main Street SE, Suite 405, Minneapolis, MN 55414; e-mail: jeff.johnson@pdri.com.

Multiple regression analysis has two distinct applications: prediction and explanation
(Courville & Thompson, 2001). When multiple regression is used for a purely predictive purpose, a regression equation is derived within a sample to predict scores on a criterion variable from scores on a set of predictor variables. This equation can be applied
to predictor scores within a similar sample to make predictions of the unknown criterion scores in that sample. The elements of the equation are regression coefficients,
which indicate the amount by which the criterion score would be expected to increase
as the result of a unit increase in a given predictor score, with no change in any of the
other predictor scores. The extent to which the criterion can be predicted by the predictor variables (indicated by R²) is of much greater interest than is the relative magnitude
of the regression coefficients.
The other use of multiple regression is for explanatory or theory-testing purposes.
In this case, we are interested in the extent to which each variable contributes to the
prediction of the criterion. For example, we may have a theory that suggests that one
variable is relatively more important than another. Interpretation is the primary concern, such that substantive conclusions can be drawn regarding one predictor with
respect to another. Although there are many possible definitions of importance (Bring,
1994; Kruskal & Majors, 1989), this is what is typically meant by the relative importance of predictors in multiple regression.
Achen (1982) discussed three different meanings of variable importance. Theoretical importance refers to the change in the criterion based on a given change in the predictor variable, which can be measured using the regression coefficient. Level importance refers to the increase in the mean criterion score that is contributed by the
predictor, which corresponds to the product of a variable's mean and its unstandardized regression coefficient. This is a popular measure in economics (Kruskal &
Majors, 1989). Finally, dispersion importance refers to the amount of the criterion
variance explained by the regression equation that is attributable to each predictor
variable. This is the interpretation of importance that most often corresponds to measures of importance in the behavioral sciences, when the explanatory aspects of
regression analysis are of interest (Thomas & Decady, 1999).
To draw conclusions about the relative importance of predictors, researchers often
examine the regression coefficients or the zero-order correlations with the criterion.
When predictors are uncorrelated, zero-order correlations and standardized regression coefficients are equivalent. The squares of these indices sum to R², so the relative
importance of each variable can be expressed as the proportion of predictable variance
for which it accounts. When predictor variables are correlated, however, these indices
have long been considered inadequate (Budescu, 1993; Green & Tull, 1975; Hoffman,
1960). In the presence of multicollinearity, squared correlations and squared standardized regression coefficients are no longer equivalent, do not sum to R², and take on very
different meanings. Correlations represent the unique contribution of each predictor
by itself, whereas regression coefficients represent the incremental contribution of
each predictor when combined with all remaining predictors.
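This divergence is easy to demonstrate numerically. The following sketch (Python with NumPy; the correlation values are hypothetical, chosen only for illustration) solves the normal equations for two correlated predictors and shows that neither the squared correlations nor the squared betas sum to R².

```python
# Two correlated predictors: solve Rxx * beta = rxy for the standardized
# regression weights, then compare sums of squared indices with R-squared.
import numpy as np

Rxx = np.array([[1.0, 0.5],
                [0.5, 1.0]])      # hypothetical predictor intercorrelation
rxy = np.array([0.6, 0.4])        # hypothetical criterion correlations

beta = np.linalg.solve(Rxx, rxy)  # standardized regression coefficients
r2 = rxy @ beta                   # model R-squared = r'beta

print("betas:", beta.round(3))                        # [0.533 0.133]
print("R^2:", round(r2, 3))                           # 0.373
print("sum of r^2:", round(np.sum(rxy ** 2), 3))      # 0.52  (> R^2)
print("sum of beta^2:", round(np.sum(beta ** 2), 3))  # 0.302 (< R^2)
```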
To illustrate the concept of relative importance and the inadequacy of these indices
for reflecting it, consider an example from a situation in which a relative importance
index is frequently of interest. Imagine a customer satisfaction survey given to bank customers, where the researcher is interested in determining how each specific aspect of bank satisfaction contributes to customers' overall satisfaction judgments. In other
words, what is the relative importance bank customers place on teller service, loan
officer service, phone representative service, the convenience of the hours, and the
interest rates offered in determining their overall satisfaction with the bank? Regression coefficients are inadequate because customers do not consider the incremental
amount of satisfaction they derive from each bank aspect while holding the others constant. Zero-order correlations are also inadequate because customers do not consider
each bank aspect independent of the others. Rather, they consider all the aspects that
are important to them simultaneously and implicitly weight each aspect relative to the
others in determining their overall satisfaction.


Because neither index alone tells the full story of a predictor's importance,
Courville and Thompson (2001) recommended that both regression coefficients and
correlations (or the equivalent structure coefficients) be examined when interpreting
relative importance (see also Thompson & Borrello, 1985). Examining two different
indices to try to determine an ordering of relative importance is highly subjective,
which is why the search for a single meaningful index of relative importance has been
going on for years. Although a number of different definitions have been offered over
the years, we offer the following definition of relative importance:
Relative importance: The proportionate contribution each predictor makes to R², considering both its direct effect (i.e., its correlation with the criterion) and its effect when combined with the other variables in the regression equation.

This definition integrates previous definitions, highlights the multidimensional nature of the relative importance construct, and is consistent with contemporary research and
thought on this topic (cf. Budescu, 1993; J. W. Johnson, 2000a, 2001a).
A rich literature has developed in the areas of statistics, psychology, marketing,
economics, and medicine on determining the relative importance of predictor variables in multiple regression (e.g., Azen & Budescu, 2003; Budescu, 1993; Gibson,
1962; Goldberger, 1964; Green, Carroll, & DeSarbo, 1978; Healy, 1990; J. W. Johnson, 2000a; Kruskal, 1987). Because there is no unique mathematical solution to the
problem, these indices must be evaluated on the basis of the logic behind their development, the apparent sensibility of the results they provide, and whatever shortcomings
can be identified.
The purposes of this article are (a) to introduce the concept of relative importance to
an audience of researchers in organizational behavior and industrial/organizational
psychology and (b) to update previous reviews of relative importance indices
(Budescu, 1993; Kruskal & Majors, 1989). To this end, we (a) briefly review the history of research on predictor importance in multiple regression, (b) evaluate alternative measures of relative importance, (c) discuss how importance indices can be used
in organizational research, (d) present issues to consider before applying a relative
importance measure, and (e) suggest directions for future research in this area.

Brief History of Relative Importance Research


Although the proper method of measuring the relative importance of predictors in
multiple regression has been of interest to researchers for years (e.g., Englehart,
1936), the first debate on the issue in the psychology literature appeared in Psychological Bulletin in the early 1960s. Hoffman (1960) sought to statistically describe the
cognitive processes used by clinicians when making judgments about patients. He
introduced the term relative weight, which referred to the proportionate contribution
each predictor makes to the squared multiple correlation coefficient when that coefficient is expressed as the sum of contributions from the separate predictors. He showed
that the products of each variable's (x) standardized regression coefficient (βx) and its associated zero-order correlation with the criterion (rxy) summed to R², and he asserted that these products represented the "independent contribution of each predictor" (p. 120). This was an unfortunate choice of words, as the term independent can have
many different meanings. Ward (1962) objected to the use of this term, stating that the independent contribution of a predictor refers to the amount by which R² increases when the predictor is added to the model and all other predictors are held constant.
Hoffman (1962) replied that his relative weights were not intended to measure independent contribution in this sense but did not reply to Ward's criticism that negative
weights were uninterpretable.
This exchange led Gibson (1962) to suggest a possible resolution to this difference
of opinion. Noting that both conceptions of the independent contribution of a variable
are the same when all predictors are uncorrelated, he suggested a transformation of the
original variables to the set of orthogonal factors with which they have the highest
degree of one-to-one correspondence in the least-squares sense. The squared regression coefficients from a regression of the criterion on these orthogonal factors would
represent a proxy for both types of independent contribution.
A few years later, Darlington (1968) published his influential review of the use of
multiple regression. He reviewed five possible measures of predictor importance and
concluded that no index unambiguously reflects the contribution to variance of a variable when variables are correlated. He further concluded that certain indices have
value in specific situations, but Hoffman's (1960) index has very little practical value.
Despite the criticisms of Hoffman's index, arguments for its use have persisted (e.g., Pratt, 1987; Thomas, Hughes, & Zumbo, 1998).
Customer satisfaction researchers have long been interested in determining how
customers' perceptions of specific attributes measured on a survey contribute to their
ratings of overall satisfaction (Heeler, Okechuku, & Reid, 1979; Jaccard, Brinberg, &
Ackerman, 1986; Myers, 1996). Multiple regression is a frequently used technique,
although its limitations have been recognized (Green & Tull, 1975; McLauchlan,
1992). When attempts have been made to deal with multicollinearity, principal components analysis has been commonly used to create uncorrelated variables or factors
(Green & Tull, 1975; Grisaffe, 1993). Green et al. (1978) used this approach to create
an index that better reflected relative importance, but there is little evidence that this
index has ever been used.
Kruskal and Majors (1989) reviewed the concept of relative importance in many
scientific disciplines, concluding that there is widespread interest in assigning degrees
of importance in most or all scholarly fields. The correlation-type importance indices
that they reviewed were nothing out of the ordinary or particularly clever, and statistical significance was used as a measure of importance to an alarming degree. Kruskal
and Majors reviewed several measures but made no recommendations other than a call
for broader statistical discussion of relative importance.
In psychology, the topic of relative importance of predictors in multiple regression
was resurrected somewhat when Budescu (1993) introduced dominance analysis.
This is a technique for determining first whether predictor variables can be ranked in
terms of importance. If dominance relationships can be established for all predictors,
Budescu suggested the average increase in R² associated with a variable across all possible submodels as a quantitative measure of importance. This measure was computationally equivalent to a measure suggested by Lindeman, Merenda, and Gold (1980),
which had not received much attention. This was a major breakthrough in relative
importance research because it was the first measure that was theoretically meaningful
and consistently provided sensible results.
More recently, J. W. Johnson (2000a) presented a measure of relative importance
that was based on the Gibson (1962) technique of transforming predictors to their orthogonal counterparts but allowed importance weights to be assigned to the original
correlated variables. Despite being based on entirely different mathematical models,
Johnson's epsilon and Budescu's dominance measures provide nearly identical results
when applied to the same data (J. W. Johnson, 2000a; LeBreton, Ployhart, & Ladd,
2004 [this issue]). The convergence between these two mathematically different
approaches suggests that substantial progress has been made toward furnishing meaningful estimates of relative importance among correlated predictors.
The relative importance of predictors has been of interest in organizational research
(e.g., Dunn, Mount, Barrick, & Ones, 1995; Hobson & Gibson, 1983; Zedeck &
Kafry, 1977), although importance has typically been determined by examining
regression coefficients (e.g., Lehman & Simpson, 1992), by examining path coefficients (e.g., Borman, White, & Dorsey, 1995), or by creating uncorrelated variables in
paper-people studies and interpreting the correlations or standardized regression coefficients (e.g., Hobson, Mendel, & Gibson, 1981; Rotundo & Sackett, 2002). Recent
symposia at the annual conference of the Society for Industrial and Organizational
Psychology have introduced the dominance (Budescu, 1993) and epsilon (J. W. Johnson, 2000a) indices to an audience of organizational researchers (J. W. Johnson,
2000b; LeBreton & Johnson, 2001, 2002). This special issue of Organizational
Research Methods represents the most visible and current attempt to communicate
how relative importance indices can be used in organizational research.

Alternative Measures of Importance


Numerous measures of variable importance in multiple regression have been proposed. Each measure can be placed into one of three broad categories. Single-analysis
methods use the output from a single regression analysis, either by choosing a single
index to represent the importance of the predictors or by combining multiple indices to
compute a measure of importance. Multiple-analysis methods compute importance
indices by combining the results from more than one regression analysis involving different combinations of the same variables. Variable transformation methods transform
the original predictors to a set of uncorrelated variables, regress the criterion on the
uncorrelated variables, and either use those results as a proxy for inferring the importance of the original variables or further analyze those data to yield results that are
directly tied to the original variables. In this section, we review the methods that fall
into these three categories, presenting the logic behind them and the benefits and
shortcomings of each.
Single-Analysis Methods

Zero-order correlations. The simplest measure of importance is the zero-order correlation of a predictor with the criterion (rxy) or the squared correlation (r²xy). Importance is then defined as the direct predictive ability of the predictor variable when all other variables in the model are ignored. Individual predictor r²xy values sum to the full model R² when the predictors are uncorrelated, but Darlington (1990) argued that importance is proportional to rxy, not r²xy. In fact, whether rxy or r²xy more appropriately represents importance depends on how importance is defined. If importance is defined as the amount by which a unit increase in the predictor increases the criterion score, importance is proportional to rxy. If importance is defined as the extent to which variation in the predictor coincides with variation in the criterion, importance is proportional to r²xy. Neither measure, however, adequately reflects importance when predictors are correlated because they fail to consider the effect of each predictor in the context of the other predictors.
Thompson and Borrello (1985) and Courville and Thompson (2001) recommended examining the structure coefficients in a multiple regression analysis, along
with the standardized regression coefficients, to make judgments about variable
importance. A structure coefficient is the correlation between a predictor and the predicted criterion score. Because structure coefficients are simply zero-order correlations divided by the multiple correlation R, examining structure coefficients is really no different from examining zero-order correlations (Thompson & Borrello, 1985).
Standardized regression coefficients. Standardized regression coefficients (or beta
weights) are the most common measure of relative importance when multiple regression is used (Darlington, 1990). When predictors are uncorrelated, betas are equal to
zero-order correlations, and squared betas sum to R². When predictors are correlated,
however, the size of the beta weight depends on the other predictors included in the
model. A predictor that has a large zero-order correlation with the criterion may have a
near-zero beta weight if that predictor's predictive ability is assigned to one or more
other correlated predictors. In fact, it is not unusual for a predictor to have a positive
zero-order correlation but a negative beta (Darlington, 1968), making interpretation of
the beta impossible in terms of importance.
Unstandardized regression coefficients. Policy capturing is a method for statistically describing decision-making processes by regressing quantitative judgments on
the cue values representing the information available to the judge. The cues are typically designed to be uncorrelated so that regression coefficients can be used to unambiguously interpret importance, but this threatens the construct validity of the results if
uncorrelated cues do not represent the real-world situation (Hobson & Gibson, 1983).
Lane, Murphy, and Marques (1982) argued that unstandardized regression coefficients are the most appropriate measure of importance because they are invariant
across changes in cue intercorrelations. This is because, assuming the linear model
holds, changes in cue intercorrelations lead to changes in the standard deviations of the
judgments and in the correlations between judgments and cues. Lane et al. conducted a
study in which 14 participants rated 144 profiles that were generated from three different intercorrelation matrices (48 profiles from each). They found no significant differences in mean unstandardized regression coefficients across cue structure but significant differences for zero-order correlations, betas, and semipartial correlations. It is
not clear how generalizable these results are, however, because (a) only three predictors were included, (b) larger differences in cue intercorrelations could have a larger
effect, and (c) they are limited to the policy-capturing paradigm.
Usefulness. The usefulness of a predictor is defined as the increase in R² that is associated with adding the predictor to the other predictors in the model (Darlington,
1968). Like regression coefficients, this measure is highly influenced by multicollinearity. For example, if two predictors are highly correlated with each other and
with the criterion, and a third is only moderately correlated with the criterion and has low correlations with the other two predictors, the third predictor will have the largest usefulness simply because of its lack of association with the other predictors.
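As a concrete illustration, the sketch below (Python with NumPy; a minimal illustration, not any published program) computes each predictor's usefulness from a correlation matrix by subtracting the reduced-model R² from the full-model R². It uses the example correlation matrix from Table 1, presented later in this article; the square root of each usefulness value is the semipartial correlation discussed next.

```python
# Usefulness: full-model R^2 minus the R^2 of the model omitting a predictor,
# computed directly from the predictor and criterion correlations.
import numpy as np

def r_squared(Rxx, rxy, subset):
    """R^2 of the submodel containing the predictors in `subset`."""
    idx = list(subset)
    if not idx:
        return 0.0
    return rxy[idx] @ np.linalg.solve(Rxx[np.ix_(idx, idx)], rxy[idx])

# Predictor intercorrelations and criterion correlations from Table 1
Rxx = np.array([[1.0, 0.4, 0.4, 0.4],
                [0.4, 1.0, 0.6, 0.3],
                [0.4, 0.6, 1.0, 0.4],
                [0.4, 0.3, 0.4, 1.0]])
rxy = np.array([0.50, 0.40, 0.35, 0.25])

full = r_squared(Rxx, rxy, range(4))
for j in range(4):
    reduced = r_squared(Rxx, rxy, [k for k in range(4) if k != j])
    # The semipartial correlation (next section) is the square root of this.
    print(f"predictor {j + 1}: usefulness = {full - reduced:.3f}")
```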
Semipartial correlation and t statistic. The semipartial correlation is equal to the
square root of the usefulness (Darlington, 1990). Darlington (1990) and Bring (1994)
showed that this measure is superior to the standardized regression coefficient as a
measure of relative importance. Rather than being based on the variable's standard deviation, the semipartial correlation is based on the variable's standard deviation conditional on the other predictors in the model. This measure is proportional to the t statistic used to determine the significance of the regression coefficient, so t can be used
as an easily available measure of relative importance (Bring, 1994; Darlington, 1990).
This is still not an ideal measure, however, as it is affected by multicollinearity to the
same extent as are regression coefficients, and it can take on small or negative values
even when predictors have large zero-order correlations with the criterion.
The product measure. Hoffman's (1960) measure (βx rxy), termed the product measure by Bring (1996), has been criticized extensively because it shares the disadvantages of both measures of which it is composed (Bring, 1996; Darlington, 1968; Green
& Tull, 1975; Ward, 1962, 1969). Like regression coefficients, this measure can easily
be zero or negative even when a variable contributes substantially to the prediction of
the criterion (Darlington, 1968). Pratt (1987) presented a theoretical justification for
this index as a measure of relative importance and showed that it has a number of desirable properties. Three of these properties are particularly noteworthy:
1. The sum of the importance weights is equal to R².
2. When all predictors are equally correlated and have equal regression coefficients, the importance of the sum of a subset of predictors is equal to the sum of the importance weights of the subset.
3. When a subset of predictors is replaced by a linear combination of those predictors, importance weights for the remaining predictors remain unchanged.

Bring (1996) noted that variables that are uncorrelated with the criterion but add to
the predictive value of the model (i.e., suppressor variables; Cohen & Cohen, 1983)
have no importance according to the product measure. He described this result as
counterintuitive. Thomas et al. (1998) argued that suppressor variables should be
treated differently from nonsuppressors and that the contribution of the suppressors
should be assessed separately by measuring their contribution to R². Treating suppressors and nonsuppressors separately, however, ignores the fact that the two types of
variables are complexly intertwined. The importance of the nonsuppressors depends
on the presence of the suppressors in the model, so treating them separately may also
be considered counterintuitive.
By the same token, variables that have meaningful correlations with the criterion
but do not add to the predictive value of the model also have no importance under the
product measure. This is counter to our definition of relative importance, which suggests that a measure of importance should consider both the effect a predictor has in
isolation from the other predictors (i.e., the predictor-criterion correlation) and in conjunction with the other predictors (i.e., the beta weight). The product measure essentially ignores the magnitude of one of its components if the magnitude of the other
component is very low.


Another aspect of the product measure that limits its utility considerably is the fact
that negative importance values are possible and not unusual. Pratt (1987) stated that
both βx and rxy must be of the same sign for this measure to be a valid measure of importance. Thomas et al. (1998), however, argued that negative importance occurs only
under conditions of high multicollinearity. They note that negative importance of large magnitude can occur only if the variance inflation factor (a standard measure of multicollinearity) for the jth variable is large (p. 264). The variance inflation factor
(VIF) is given by VIFj = 1/(1 − R²(j)), where R²(j) is the squared multiple correlation from the regression of variable xj on the remaining x's. Removing the variable(s) with
the large VIF would eliminate redundancy in the model and leave only positive importance values. It is relatively easy, however, to identify situations in which a variable has
negative importance even when multicollinearity is low. For example, consider a scenario in which five predictors are equally intercorrelated at .30 and have the following
criterion correlations: ryx1 = .40, ryx2 = .50, ryx3 = .20, ryx4 = .40, and ryx5 = .50.

In this case, the beta weight for x3 is −.104, so βx3 ryx3 = −.021. Although the extent
to which this could be considered of large magnitude is debatable, it is a result that is
very difficult to interpret. Because the predictors are all equally intercorrelated, each
predictor has the same VIF. There is no a priori reason for excluding x3 based on multicollinearity, so the only reason for excluding it is because the importance weight does
not make sense. An appropriate measure of predictor importance should be able to
provide an interpretable importance weight for all variables in the model and should be
able to do this regardless of the extent to which the variables are intercorrelated. Pratt
(1987) showed that the product measure is not arbitrary, but we believe it still leaves
much to be desired as a measure of predictor importance.
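The sketch below (Python with NumPy) reproduces this five-predictor example. It confirms the negative beta and product weight for x3 and shows that every predictor has the same, modest variance inflation factor, so multicollinearity cannot explain the uninterpretable weight.

```python
# Five predictors equally intercorrelated at .30, with the criterion
# correlations given above; verifies the negative product weight for x3.
import numpy as np

p = 5
Rxx = 0.70 * np.eye(p) + 0.30 * np.ones((p, p))  # equal intercorrelations
rxy = np.array([0.40, 0.50, 0.20, 0.40, 0.50])

beta = np.linalg.solve(Rxx, rxy)  # standardized regression weights
product = beta * rxy              # Hoffman's product measure, beta * r

print("beta for x3:", round(beta[2], 3))               # -0.104
print("product weight for x3:", round(product[2], 3))  # -0.021
print("sum of products:", round(product.sum(), 3))     # equals R^2 (~.449)
print("VIFs:", np.diag(np.linalg.inv(Rxx)).round(3))   # identical for all five
```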
Multiple-Analysis Methods

Average squared semipartial correlation. When the predictors have a relevant, known ordering, Lindeman et al. (1980) recommended using the squared semipartial correlation of each predictor as it is added to the model as the measure of importance. In other words, if a theoretically meaningful order was x1, x2, x3, the importance measures would be r²yx1, r²y(x2·x1), and r²y(x3·x1x2), respectively. This is simply the progression of usefulness indices. Lindeman et al. pointed out that a relevant ordering of predictors rarely exists, so they suggested the average of each predictor's squared semipartial correlation across all p! possible orderings of the predictors as a more general importance index. This defines predictor importance as the average contribution to R² across all possible orderings. This index has several desirable properties, including (a) the sum of the average squared semipartial correlations across all predictors is equal to R², (b) any predictor that is positively related to the criterion will receive a positive importance weight, and (c) the definition of importance is intuitively meaningful.
Average squared partial correlation. Independent of Lindeman et al. (1980), Kruskal (1987) suggested averaging each predictor's squared partial correlation over all p! possible orderings. This measure does not sum to R² and is not as intuitive as Lindeman et al.'s (1980) average increase in R². Theil (1987) and Theil and Chung (1988) built on Kruskal's (1987) approach by suggesting a function from statistical information theory to transform the average partial correlations to average bits of
information provided by each variable. This does allow an additive decomposition of
the total information, but it does little to add to the understanding of the measure for the
typical user.
Dominance analysis. The approach taken by Budescu (1993) differs from previous
approaches in that it is not assumed that all variables can be ordered in terms of importance. His dominance analysis is a method of determining whether predictor variables
can be ranked. In other words, Budescu contended that there may well be situations in
which it is impossible to determine an ordering, so a dominance analysis should be
undertaken prior to any quantitative analysis. For any two predictor variables, xi and xj,
let xh stand for any subset of the remaining p − 2 predictors in the set. Variable xi dominates variable xj if, and only if,

R²y·xi,xh ≥ R²y·xj,xh   (1)

for all possible choices of xh. This can also be stated as xi dominates xj if adding xi to each of the possible subset models always results in a greater increase in R² than would
be obtained by adding xj. If the predictive ability of one variable does not exceed that of
another in all subset regressions, a dominance relationship cannot be established and
the variables cannot be rank ordered meaningfully (Budescu, 1993).
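A brute-force check of complete dominance follows directly from Equation 1: compare the incremental R² of the two predictors across every subset of the remaining predictors. The sketch below (Python with NumPy; a literal translation of the definition rather than Budescu's own algorithm) performs this check on the Table 1 correlation matrix presented later in this article.

```python
# Complete dominance (Equation 1): predictor i dominates predictor j only if
# its incremental R^2 is at least as large in every possible subset model.
import numpy as np
from itertools import combinations

def r_squared(Rxx, rxy, subset):
    idx = list(subset)
    if not idx:
        return 0.0
    return rxy[idx] @ np.linalg.solve(Rxx[np.ix_(idx, idx)], rxy[idx])

def completely_dominates(Rxx, rxy, i, j):
    others = [k for k in range(len(rxy)) if k not in (i, j)]
    for size in range(len(others) + 1):
        for xh in combinations(others, size):
            base = r_squared(Rxx, rxy, xh)
            gain_i = r_squared(Rxx, rxy, list(xh) + [i]) - base
            gain_j = r_squared(Rxx, rxy, list(xh) + [j]) - base
            if gain_i < gain_j:
                return False
    return True

# Does job satisfaction (predictor 0) dominate worker motivation (predictor 3)?
Rxx = np.array([[1.0, 0.4, 0.4, 0.4],
                [0.4, 1.0, 0.6, 0.3],
                [0.4, 0.6, 1.0, 0.4],
                [0.4, 0.3, 0.4, 1.0]])
rxy = np.array([0.50, 0.40, 0.35, 0.25])
print(completely_dominates(Rxx, rxy, 0, 3))
```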
The idea behind dominance analysis is attractive, but Budescu's (1993) definition
of importance is very strict. Consequently, it is usually not possible to order all predictor variables when there are more than a few predictors in the model. Recently, however, this strict definition of dominance has been relaxed somewhat (Azen & Budescu,
2003). Azen and Budescu (2003) defined three levels of dominance: (a) complete, (b)
conditional, and (c) general. Complete dominance corresponds to the original definition of dominance. Conditional dominance occurs when the average additional contribution within each model consisting of the same number of variables is greater for one
predictor than for another. General dominance occurs when the average additional
contribution across all models is greater for one predictor than for another.
The general dominance measure is the same as the quantitative measure Budescu (1993) suggested be computed if all p(p − 1)/2 pairs of predictors can be ordered (i.e., the average increase in R² associated with a predictor across all possible submodels: Cxj). The Cxj values sum to the model R², so the relative importance of each predictor can be expressed as the proportion of predictable criterion variance accounted for by that predictor. The measure is computationally equivalent to Lindeman et al.'s (1980) average squared semipartial correlation over all p! orderings.
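A direct, if inefficient, implementation of the general dominance measure is sketched below (Python with NumPy; it mirrors the definition rather than the published SAS macro). For each predictor, incremental R² values are averaged within each model size and then across sizes; applied to the Table 1 correlation matrix, the resulting weights should approximately reproduce the Cxj values reported there.

```python
# General dominance (Budescu, 1993), equivalently Lindeman et al.'s (1980)
# average squared semipartial: average incremental R^2 within each model
# size, then across the p model sizes.
import numpy as np
from itertools import combinations

def r_squared(Rxx, rxy, subset):
    idx = list(subset)
    if not idx:
        return 0.0
    return rxy[idx] @ np.linalg.solve(Rxx[np.ix_(idx, idx)], rxy[idx])

def general_dominance(Rxx, rxy):
    p = len(rxy)
    C = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):  # subsets of size 0 .. p-1 not containing j
            gains = [r_squared(Rxx, rxy, list(s) + [j]) - r_squared(Rxx, rxy, s)
                     for s in combinations(others, size)]
            C[j] += np.mean(gains)
        C[j] /= p
    return C

# Correlation matrix from Table 1 (presented later in this article)
Rxx = np.array([[1.0, 0.4, 0.4, 0.4],
                [0.4, 1.0, 0.6, 0.3],
                [0.4, 0.6, 1.0, 0.4],
                [0.4, 0.3, 0.4, 1.0]])
rxy = np.array([0.50, 0.40, 0.35, 0.25])

C = general_dominance(Rxx, rxy)
print(C.round(3))        # should roughly match Table 1: .163 .075 .045 .020
print(C.sum().round(3))  # sums to the model R^2 (about .301)
```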
The methods developed by Budescu (1993) and Lindeman et al. (1980) seem to be
effective in quantifying relative importance. The average increase in R² associated
with the presence of a variable across all possible models is a meaningful measure that
fits our definition of relative importance presented earlier. This measure averages a
variable's direct effect (considered by itself), total effect (conditional on all predictors
in the full model), and partial effect (conditional on all subsets of predictors; Budescu,
1993).
Although these methods are theoretically and intuitively appealing, they have at
least one major shortcoming: They become computationally prohibitive as the number
of predictors increases. Specifically, these methods require the computation of R² for all possible submodels. Computational requirements increase exponentially (with p predictors, there are 2^p − 1 submodels) and are staggering for models with more than
10 predictors (Neter, Wasserman, & Kutner, 1985). Lindeman et al. (1980) stated that
their method may not be feasible when p is larger than five or six, although programs
have been written to conduct the analysis with as many as 14 predictors. Azen and
Budescu (2003) offer a SAS macro available for download that performs the dominance analysis calculations, but the maximum number of predictors allowed is 10.
Although dominance represents an improvement over traditional relative importance
methods, the computational requirements of the procedure make it difficult to apply to
many situations for which it is valuable.
Criticality. Azen, Budescu, and Reiser (2001) proposed a new approach to comparing predictors in multiple regression, which they termed predictor criticality. Traditional measures of importance assume that the given model is the best-fitting model,
whereas criticality analysis does not depend on the choice of a particular model. A predictor's criticality is defined as the probability that it is included in the best-fitting
model given an initial set of predictors. The first step in determining predictor criticality is to bootstrap (i.e., resample with replacement; Efron, 1979) a large number of
samples from the original data set. Within each bootstrap sample, evaluate all 2^p − 1 submodels according to some criterion (e.g., adjusted R²). Each predictor's criticality
is determined as the proportion of the time that the predictor was included in the best-fitting model across all bootstrap samples. Criticality analysis has the advantage of not
requiring the assumption of a single best-fitting model, and it has a clear definition.
Research comparing predictor criticality to various measures of predictor importance
should be conducted to gain an understanding of how the two concepts are related.
Criticality analysis requires even more computational effort than dominance analysis
does, however, because 2^p − 1 submodels must be computed within each of 100 or
more bootstrap samples. This severely limits the applicability of criticality analysis to
situations in which only a few predictors are evaluated.
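The procedure is easy to sketch, and the sketch makes the cost obvious. In the code below (Python with NumPy; the data matrix X and criterion y are simulated placeholders, and adjusted R² is only one possible selection criterion), every one of the 2^p − 1 submodels is refit within every bootstrap sample.

```python
# Criticality sketch: proportion of bootstrap samples in which each predictor
# appears in the best-fitting submodel, selected here by adjusted R^2.
import numpy as np
from itertools import combinations

def adjusted_r2(X, y):
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    coef = np.linalg.lstsq(Xd, y, rcond=None)[0]
    resid = y - Xd @ coef
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def criticality(X, y, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    subsets = [s for size in range(1, p + 1)
               for s in combinations(range(p), size)]  # all 2^p - 1 submodels
    for _ in range(n_boot):
        idx = rng.integers(n, size=n)                  # resample with replacement
        Xb, yb = X[idx], y[idx]
        best = max(subsets, key=lambda s: adjusted_r2(Xb[:, list(s)], yb))
        counts[list(best)] += 1
    return counts / n_boot

# Hypothetical data: four correlated predictors, one of them irrelevant
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
X[:, 1] += 0.6 * X[:, 0]
y = X @ np.array([0.4, 0.3, 0.2, 0.0]) + rng.standard_normal(200)
print(criticality(X, y).round(2))
```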
Variable Transformation Methods

Transform to maximally related orthogonal variables. Gibson (1962) and R. M. Johnson (1966) suggested that the relative importance of a set of predictors can be
approximated by first transforming the predictors to their maximally related orthogonal counterparts. In other words, one creates a set of variables that are as highly related as possible to the original set of predictors but are uncorrelated with each other. The
criterion can then be regressed on the new orthogonal variables, and the squared standardized regression coefficients approximate the relative importance of the original
predictors. This approach has a certain appeal because relative importance is unambiguous when variables are uncorrelated, and the orthogonal variables can be very
highly related to the original predictors. The obvious problem with this approach is
that the orthogonal variables are only approximations of the original predictors and
may not be close representations if two or more original predictors are highly
correlated.
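The transformation itself is compact. From the singular value decomposition X = PΔQ′ of the standardized predictor matrix, Z = PQ′ gives the orthogonal variables most highly related to X in the least-squares sense. The sketch below (Python with NumPy, on simulated data) constructs Z and uses the squared betas of the criterion on Z as the proxy importance weights.

```python
# Gibson (1962) / R. M. Johnson (1966) sketch: orthogonal counterparts of X
# via the singular value decomposition, then squared betas of y on Z.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
X[:, 1] += 0.8 * X[:, 0]                  # make two predictors correlated
y = X @ np.array([0.5, 0.3, 0.2]) + rng.standard_normal(100)

X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize predictors
y = (y - y.mean()) / y.std()

P, D, Qt = np.linalg.svd(X, full_matrices=False)
Z = (P @ Qt) * np.sqrt(len(X))            # orthogonal variables, unit variance

print(np.corrcoef(Z, rowvar=False).round(6))  # identity matrix: uncorrelated
beta_z = Z.T @ y / len(y)                 # betas of y on the orthogonal variables
print("proxy importance:", (beta_z ** 2).round(3))
print("R^2:", round(np.sum(beta_z ** 2), 3))  # same R^2 as the original model
```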
Green, Carroll, and DeSarbo's (1978) index. Green et al. (1978) realized the limitations of inferring importance from orthogonal variables that may not be highly related
to the original predictors and suggested a method by which the orthogonal variables
could be related back to the original predictors to better estimate their relative importance. In their procedure, the orthogonal variables are regressed on the original predictors. Then the squared regression weights of the original predictors for predicting each
orthogonal variable are converted to relative contributions by dividing them by the
sum of the squared regression weights for each orthogonal variable. These relative
contributions are then multiplied by the corresponding squared regression weight of
each orthogonal variable for predicting the criterion and summed across orthogonal
variables to arrive at the importance weight. The sum of these importance weights across predictors is equal to R².
Green et al. (1978) showed that this procedure yields more intuitive importance
weights under high multicollinearity than do the Gibson (1962) and R. M. Johnson
(1966) methods. It has the further advantages of allowing importance to be assigned to
the original predictors and being much simpler computationally than dominance
analysis. It has a very serious shortcoming, however, in that the regression weights
obtained by regressing the orthogonal variables on the original predictors are still
coefficients from regressions on correlated variables (Jackson, 1980). The weights
obtained by regressing the orthogonal variables on the original predictors to determine
the relative contribution of each original predictor to each orthogonal variable are just
as ambiguous in terms of importance as regression weights obtained by a regression of
the dependent variable on the original variables. Green, Carroll, and DeSarbo (1980)
acknowledged this criticism but could respond only that their measure was at least
better than previous methods of allocating importance. Boya and Cramer (1980) also
pointed out that this method is not invariant to orthogonalizing procedures. In other
words, if an orthogonalizing procedure other than the one suggested by Gibson (1962)
and R. M. Johnson (1966) were used (e.g., principal components), the procedure
would not yield the same importance weights.
Johnson's relative weights. J. W. Johnson (2000a) proposed an alternative solution
to the problem of correlated variables. Green et al. (1978) attempted to relate the
orthogonal variables back to the original variables by using the set of coefficients for
deriving the orthogonal variables from the original correlated predictors. Because the
goal is to go from the orthogonal variables back to the original predictors, however, the
more appropriate coefficients are those that derive the original predictors from the orthogonal variables. In other words, instead of regressing the orthogonal variables on the original predictors, the original predictors are regressed on the
orthogonal variables. Because regression coefficients are assigned to the uncorrelated variables rather than to the correlated original predictors, the problem of correlated predictors is not reintroduced with this method. Johnson termed the weights resulting
from the combination of the two sets of squared regression coefficients epsilons (ε). They have been more commonly referred to as relative weights (e.g., J. W. Johnson, 2001a), which is consistent with Hoffman's (1960, 1962) original use of the term.
A graphic representation of J. W. Johnson's (2000a) relative weights is presented in Figure 1. In this three-variable example, the original predictors (Xj) are transformed to their maximally related orthogonal counterparts (Zk), which are then used to predict the criterion (Y). The regression coefficients of Y on Zk are represented by βk, and the regression coefficients of Xj on Zk are represented by λjk. Because the Zks are uncorrelated, the regression coefficients of Xj on Zk are equal to the correlations between Xj and Zk. Thus, each squared λjk represents the proportion of variance in Zk accounted for by Xj (J. W. Johnson, 2000a). To compute the relative weight for Xj, multiply the proportion of variance in each Zk accounted for by Xj by the proportion of variance in Y accounted for by each Zk and sum the products. For example, the relative weight for X1 would be calculated as

ε1 = λ²11β²1 + λ²12β²2 + λ²13β²3.   (2)

Epsilon is an attractive index that has a simple logic behind its development. The
relative importance of the Zks to Y, represented by β²k, is unambiguous because the Zks are uncorrelated. The relative contribution of Xj to each Zk, represented by λ²jk, is also unambiguous because the Zks are determined entirely by the Xjs, and the λjks are regression coefficients on uncorrelated variables. The λ²jks sum to 1, so each represents the proportion of β²k that is attributable to Xj. Multiplying these terms (λ²jk β²k) yields
the proportion of variance in Y that is associated with Xj through its relationship with
Zk, and summing across all Zks yields the total proportion of variance in Y that is associated with Xj.
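Following Johnson (2000a), the entire computation can be carried out from a correlation matrix: Λ = Rxx^(1/2) contains the λjk values, Λ^(−1)rxy gives the βk values, and Equation 2 combines them. The sketch below (Python with NumPy; a minimal implementation in the spirit of the published algorithm, not Johnson's own program) applies it to the Table 1 matrix.

```python
# Relative weights (epsilons) from a correlation matrix: Lambda is the
# symmetric square root of Rxx, beta holds the regressions of Y on Z, and
# epsilon_j = sum over k of Lambda[j, k]^2 * beta[k]^2 (Equation 2).
import numpy as np

Rxx = np.array([[1.0, 0.4, 0.4, 0.4],
                [0.4, 1.0, 0.6, 0.3],
                [0.4, 0.6, 1.0, 0.4],
                [0.4, 0.3, 0.4, 1.0]])
rxy = np.array([0.50, 0.40, 0.35, 0.25])

evals, evecs = np.linalg.eigh(Rxx)               # Rxx = V diag(evals) V'
Lam = evecs @ np.diag(np.sqrt(evals)) @ evecs.T  # Lambda = Rxx^(1/2)
beta = np.linalg.solve(Lam, rxy)                 # betas of Y on the orthogonal Z
eps = (Lam ** 2) @ (beta ** 2)                   # relative weights

print(eps.round(3))        # should be close to Table 1: .163 .075 .044 .020
print(eps.sum().round(3))  # sums to the model R^2 (about .301)
```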
Relative weights have an advantage over dominance analysis in that they can be
easily and quickly computed with any number of predictors. As noted earlier, dominance analysis requires considerable computational effort that typically limits the
number of predictors to 10 or fewer. A possible criticism is that Boya and Cramer's (1980) point about Green et al.'s (1978) measure not being invariant to orthogonalizing procedures also applies to relative weights.
Illustration and Interpretation of Relative Importance Methods

To illustrate the use and interpretation of relative importance methods, we generated a correlation matrix and analyzed it using several of the indices described above.
Although this is a contrived correlation matrix, the range and magnitude of the values
are consistent with the research literature. The upper portion of Table 1 contains the
correlation matrix; the lower portion contains the results of several importance analyses. In this example, organizational commitment was the criterion variable predicted
by job satisfaction, leader communication, participative leadership, and worker motivation. It is important to note that this correlation matrix was intentionally designed to
yield identical rank orders of the predictors across all importance methods. This was done to illustrate the advantages of the newer statistics, even when the rank orders were identical.


[Figure 1. Graphic representation of J. W. Johnson's (2000a) relative weights for three predictors: paths λjk connect each original predictor Xj to each orthogonal variable Zk, and paths βk connect each Zk to the criterion.]

Table 1
Example Correlation Matrix and Relative Importance Weights Calculated by Different Methods

Variable                        1     2     3     4     5
1. Organizational commitment  1.00
2. Job satisfaction            .50  1.00
3. Leader communication        .40   .40  1.00
4. Participative leadership    .35   .40   .60  1.00
5. Worker motivation           .25   .40   .30   .40  1.00

Model R² = .301.

Predictor                   r²yj    β²j   βjryj    Cj     εj   βjryj (%)  Cj (%)  εj (%)
Job satisfaction            .250   .151   .195   .163   .163      64.6    54.0    54.1
Leader communication        .160   .040   .080   .075   .075      26.5    24.8    24.9
Participative leadership    .123   .005   .025   .045   .044       8.5    14.8    14.5
Worker motivation           .063   .000   .001   .020   .020       0.5     6.5     6.5
Sum                                       .301   .301   .301     100.0   100.0   100.0

Importance indexed via squared correlations indicated that all four predictors were
important. Examination of the magnitude of the estimates indicated that job satisfaction was approximately twice as important as leader communication and participative
leadership, and job satisfaction was approximately 4 times as important as worker
motivation. However, the conclusions drawn using squared beta weights and the product measure were radically different. Using squared betas, only job satisfaction and leader communication emerged as important predictors, with job satisfaction being
approximately 4 times as important as leader communication. Using the product measure, job satisfaction and leader communication again emerged as important predictors, but now job satisfaction was approximately twice as important as leader communication. Two sets of discrepancies are illustrated in these analyses. The first set of
discrepancies was that different predictors emerged as important depending on which
statistics were applied to the correlation matrix. The second set of discrepancies was
that the magnitude of relative importance shifted dramatically depending on which
statistics were applied to the correlation matrix. Job satisfaction was less than twice as
important as leader communication according to the squared correlations, but job satisfaction was nearly 4 times as important according to the squared betas. In contrast to
the ambiguous and inconsistent conclusions obtained using traditional methods of
importance, clearer and more consistent conclusions were obtained using dominance and epsilon. These estimates were nearly identical and indicated that job satisfaction was approximately twice as important as leader communication and 4 times as
important as participative leadership. Worker motivation was a relatively unimportant
predictor of commitment.
Table 1 shows that importance weights computed using the product measure, dominance, and epsilon sum to the model R². Therefore, when these importance estimates are rescaled by dividing them by the model R² and multiplying by 100, they may be interpreted as the percentage of the model R² associated with each predictor. For example, job satisfaction accounted for 54% of the predictable variance in organizational commitment according to both dominance and epsilon but 65% according to the product measure. The characteristic of importance weights summing to R² greatly
enhances the interpretability of these weights, making them much easier to present to
people who do not possess a great understanding of statistics.
Summary

Considering all the relative importance indices just reviewed, we suggest that the
preferred methods among those currently available are Budescu's (1993) dominance analysis and J. W. Johnson's (2000a) relative weights. These indices do not have logical flaws in their development that make it impossible to consider them as reasonable measures of predictor importance. Both methods yield importance weights that represent the proportionate contribution each predictor makes to R², and both consider a
predictors direct effect and its effect when combined with other predictors. Also, they
yield estimates of importance that make conceptual sense. This is of course highly
subjective, but it is relatively easy to eliminate other indices from consideration based
solely on this criterion.
Both indices yield remarkably similar results when applied to the same data. J. W.
Johnson (2000a) computed relative weights and the quantitative dominance analysis
measure in 31 different data sets. Each index was converted to a percentage of R², and
the mean absolute deviation between importance indices computed using the two
methods was only 0.56%. The fact that these two indices, which are based on very different approaches to determining predictor importance, yield results that differ only
trivially provides some impressive convergent validity evidence that they are measuring the same construct. Either index can therefore be considered equally appropriate as a measure of predictor importance. Relative weights can be computed much more
quickly than can dominance analysis weights, however, and are the only available
choice when the number of predictors is greater than 15.

Applications of Relative Importance Methodologies in Organizational Research
Although dominance analysis and relative weights are fairly recent developments,
they have been applied in several studies relevant to organizational research. For
example, James (1998) used dominance analysis to determine the relative importance
of cognitive ability, self-report measures of achievement motivation, and conditional
reasoning tests of achievement motivation for predicting academic performance indexed via college grade point average. Similarly, Bing (1999) used dominance analysis to gain a better understanding of the relative importance of three personality attributes to the criteria of academic honors and college grade point average.
Dominance analysis has also been applied to the predictors of a wide range of job
attitudes and job behaviors. Behson (2002) used dominance analysis to assess the relative importance of several work-family and organizational constructs to a variety of
employee outcome variables (e.g., job satisfaction, work-to-family conflict). Similarly, LeBreton, Binning, Adorno, and Melcher (2004 [this issue]) used dominance
analysis to test the relative importance of job-specific affect and the Big Five personality traits for predicting job satisfaction, withdrawal cognitions, and withdrawal behavior. Baltes, Parker, Young, Huff, and Altmann (2004 [this issue]) used dominance
analysis to identify which climate dimensions were most important for predicting criteria including job satisfaction, intentions to quit, and job motivation. Eby, Adams,
Russell, and Gaby (2000) used dominance analysis to gain a better understanding of
how various employee attitudes and contextual variables relate to employees' perceptions of their organization's readiness for large-scale change. Whanger (2002) analyzed meta-analytic correlation matrices using dominance analysis to explore the relative importance of job autonomy, skill variety, performance feedback, supervisor
satisfaction, and pay satisfaction in the prediction of affective organizational commitment, general job satisfaction, and intrinsic motivation. These studies illustrate how
relative importance indices can be used to examine how various types of predictors
relate to a wide range of organizational criteria.
One of the most frequent uses of relative importance indices has been to model the
cognitive processes associated with evaluating employees. J. W. Johnson (2001b)
used relative weights to evaluate the extent to which supervisors within each of eight
job families consider different dimensions of task performance and contextual performance when making overall evaluations. Lievens, Highhouse, and De Corte (2003)
used relative weights to evaluate the relative importance of applicants' Big Five and general mental ability scores to managers' hirability decisions. Some studies have
examined different types of raters who have rated the same ratees to determine if characteristics of the rater influence perceptions of importance. For example, J. W. Johnson and Johnson (2001) used relative weights to evaluate the relative importance of 24
specific dimensions of performance to overall ratings of executives made by their
supervisors, peers, and subordinates. They found that the type of performance that was
important to supervisors tended to be very different from the type of performance that
was important to subordinates. Cochran (1999) examined how characteristics of the rater and the ratee interact to influence the relative importance of performance on specific dimensions to overall evaluations. Using relative weights, she found that male
and female raters had similar perceptions of what is important to advancement
potential when the ratee was male but that perceptions differed when the ratee was
female.
Some studies have investigated differences in relative importance across cultures.
Using relative weights, J. W. Johnson and Olson (1996) found that the relative importance of individual supervisor attributes to overall performance was related to differences between countries in Hofstedes (1980) cultural value dimensions of power distance, uncertainty avoidance, individualism, and masculinity. Similarly, Robie,
Johnson, Nilsen, and Hazucha (2001) used relative weights to examine differences
between countries in the relative importance of 24 performance dimensions to ratings of overall performance. Suh, Diener, Oishi, and Triandis (1998) used dominance
analysis to investigate the relative importance of emotions, cultural norms, and extraversion in predicting life satisfaction across cultures.
There are many ways to apply relative importance indices when analyzing survey
data. Relative importance analysis can reveal the specific areas that contribute the most
to employee or customer satisfaction, which helps decision makers set priorities for
where to apply scarce organizational resources (Lundby & Fenlason, 2000; Whanger,
2002). It can also shorten surveys by eliminating the need for direct ratings of importance. Lundby and Fenlason (2000) compared relative weights to direct ratings of
importance and employee comments. An examination of employee comments supported the notion that relative weights better reflected the importance employees
placed on different issues because a greater proportion of written comments were
devoted to those issues that received higher relative weights. Most direct ratings of
importance tend to cluster around the high end of the scale, with very little variability.
Especially with employee opinion surveys, respondents would be likely to rate every
issue as being important for fear that anything that is not given high importance ratings
will be taken away from them. Relative weights allow decision makers to allocate
scarce resources to the issues that are actually most highly related to respondent
satisfaction.
There are, of course, many other substantive questions that can be addressed by relative importance methods across a broad spectrum of organizational research
domains. Some examples include employee selection (e.g., which exercises in an
assessment center are most important for predicting criteria such as job performance,
salary, and promotion?), training evaluation (e.g., what are the most important predictors of successful transfer of training?), culture and climate (e.g., how important are
the various dimensions of culture and climate in predicting organizationally valued
criteria such as job satisfaction, turnover, organizational commitment, job performance, withdrawal cognitions, and/or perceived organizational support?), and leader
effectiveness (e.g., which dimensions of transactional and transformational leadership
are most predictive of subordinate ratings of leader effectiveness or overall firm
effectiveness?).

Conclusion
We believe that research on relative importance methods is still in its infancy but
has progressed tremendously in recent years. Additional work is needed on refining
the existing methods as well as developing multivariate extensions of methods such as
dominance analysis and relative weights. Furthermore, myriad avenues exist for integrating relative importance methods into a wide range of substantive and methodological areas of research. We hope that the research reviewed in this article and the results
of the articles presented in this feature topic act as a catalyst for additional work using
relative importance methods.
References
Achen, C. H. (1982). Interpreting and using regression. Beverly Hills, CA: Sage.
Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors
in multiple regression. Psychological Methods, 8, 129-148.
Azen, R., Budescu, D. V., & Reiser, B. (2001). Criticality of predictors in multiple regression.
British Journal of Mathematical and Statistical Psychology, 54, 201-225.
Baltes, B. B., Parker, C. P., Young, L. M., Huff, J. W., & Altmann, R. (2004). The practical utility
of importance measures in assessing the relative importance of work-related perceptions
and organizational characteristics on work-related outcomes. Organizational Research
Methods, 7, 326-340.
Behson, S. J. (2002). Which dominates? The relative importance of work-family organizational
support and general organizational context on employee outcomes. Journal of Vocational
Behavior, 61, 53-72.
Bing, M. N. (1999). Hypercompetitiveness in academia: Achieving criterion-related validity
from item context specificity. Journal of Personality Assessment, 73, 80-99.
Borman, W. C., White, L. A., & Dorsey, D. W. (1995). Effects of ratee task performance and interpersonal factors on supervisor and peer performance ratings. Journal of Applied Psychology, 80, 168-177.
Boya, Ü. Ö., & Cramer, E. M. (1980). Some problems in measures of predictor variable importance in multiple regression. Unpublished manuscript, University of North Carolina at Chapel Hill.
Bring, J. (1994). How to standardize regression coefficients. American Statistician, 48, 209-213.
Bring, J. (1996). A geometric approach to compare variables in a regression model. American
Statistician, 50, 57-62.
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114, 542-551.
Cochran, C. C. (1999). Gender influences on the process and outcomes of rating performance.
Unpublished doctoral dissertation, University of Minnesota, Minneapolis.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Courville, T., & Thompson, B. (2001). Use of structure coefficients in published multiple regression articles: β is not enough. Educational and Psychological Measurement, 61, 229-248.
Darlington, R. B. (1968). Multiple regression in psychological research and practice. Psychological Bulletin, 69, 161-182.
Darlington, R. B. (1990). Regression and linear models. New York: McGraw-Hill.
Dunn, W. S., Mount, M. K., Barrick, M. R., & Ones, D. S. (1995). Relative importance of personality and general mental ability in managers' judgments of applicant qualifications.
Journal of Applied Psychology, 80, 500-509.
Eby, L. T., Adams, D. M., Russell, J. E. A., & Gaby, S. H. (2000). Perceptions of organizational
readiness for change: Factors related to employees' reactions to the implementation of
team-based selling. Human Relations, 53, 419-442.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7, 1-26.
Englehart, M. D. (1936). The technique of path coefficients. Psychometrika, 1, 287-293.
Gibson, W. A. (1962). Orthogonal predictors: A possible resolution of the Hoffman-Ward controversy. Psychological Reports, 11, 32-34.
Goldberger, A. S. (1964). Econometric theory. New York: John Wiley & Sons.
Green, P. E., Carroll, J. D., & DeSarbo, W. S. (1978). A new measure of predictor variable importance in multiple regression. Journal of Marketing Research, 15, 356-360.
Green, P. E., Carroll, J. D., & DeSarbo, W. S. (1980). Reply to "A comment on a new measure of predictor variable importance in multiple regression." Journal of Marketing Research, 17, 116-118.
Green, P. E., & Tull, D. S. (1975). Research for marketing decisions (3rd ed.). Englewood Cliffs,
NJ: Prentice Hall.
Grisaffe, D. (1993, February). Appropriate use of regression in customer satisfaction analyses:
A response to William McLauchlan. Quirk's Marketing Research Review, 11-17.
Healy, M. J. R. (1990). Measuring importance. Statistics in Medicine, 9, 633-637.
Heeler, R. M., Okechuku, C., & Reid, S. (1979). Attribute importance: Contrasting measurements. Journal of Marketing Research, 16, 60-63.
Hobson, C. J., & Gibson, F. W. (1983). Policy capturing as an approach to understanding and improving performance appraisal: A review of the literature. Academy of Management Review, 8, 640-649.
Hobson, C. J., Mendel, R. M., & Gibson, F. W. (1981). Clarifying performance appraisal criteria. Organizational Behavior and Human Performance, 28, 164-188.
Hoffman, P. J. (1960). The paramorphic representation of clinical judgment. Psychological Bulletin, 57, 116-131.
Hoffman, P. J. (1962). Assessment of the independent contributions of predictors. Psychological Bulletin, 59, 77-80.
Hofstede, G. (1980). Culture's consequences: International differences in work-related values.
Beverly Hills, CA: Sage.
Jaccard, J., Brinberg, D., & Ackerman, L. J. (1986). Assessing attribute importance: A comparison of six methods. Journal of Consumer Research, 12, 463-468.
Jackson, B. B. (1980). Comment on "A new measure of predictor variable importance in multiple regression." Journal of Marketing Research, 17, 113-115.
James, L. R. (1998). Measurement of personality via conditional reasoning. Organizational Research Methods, 1, 131-163.
Johnson, J. W. (2000a). A heuristic method for estimating the relative weight of predictor variables in multiple regression. Multivariate Behavioral Research, 35, 1-19.
Johnson, J. W. (2000b, April). Practical applications of relative importance methodology in I/O
psychology. Symposium presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
Johnson, J. W. (2001a). Determining the relative importance of predictors in multiple regression: Practical applications of relative weights. In F. Columbus (Ed.), Advances in psychology research (Vol. 5, pp. 231-251). Huntington, NY: Nova Science.
Johnson, J. W. (2001b). The relative importance of task and contextual performance dimensions
to supervisor judgments of overall performance. Journal of Applied Psychology, 86, 984-996.
Johnson, J. W., & Johnson, K. M. (2001, April). Rater perspective differences in perceptions of
executive performance. In M. Rotundo (Chair), Task, citizenship, and counterproductive
performance: The determination of organizational decisions. Symposium conducted at the
16th annual conference of the Society for Industrial and Organizational Psychology, San
Diego, CA.
Johnson, J. W., & Olson, A. M. (1996, April). Cross-national differences in perceptions of supervisor performance. In D. Ones & C. Viswesvaran (Chairs), Frontiers of international I/O
psychology: Empirical findings for expatriate management. Symposium conducted at the
11th annual conference of the Society for Industrial and Organizational Psychology, San
Diego, CA.
Johnson, R. M. (1966). The minimal transformation to orthonormality. Psychometrika, 31,
61-66.
Kruskal, W. (1987). Relative importance by averaging over orderings. American Statistician,
41, 6-10.
Kruskal, W., & Majors, R. (1989). Concepts of relative importance in recent scientific literature.
American Statistician, 43, 2-6.
Lane, D. M., Murphy, K. R., & Marques, T. E. (1982). Measuring the importance of cues in policy capturing. Organizational Behavior and Human Performance, 30, 231-240.
LeBreton, J. M., Binning, J. F., Adorno, A. J., & Melcher, K. M. (2004). Importance of personality and job-specific affect for predicting job attitudes and withdrawal behavior. Organizational Research Methods, 7, 300-325.
LeBreton, J. M., & Johnson, J. W. (2001, April). Use of relative importance methodologies in organizational research. Symposium presented at the 16th annual conference of the Society
for Industrial and Organizational Psychology, San Diego, CA.
LeBreton, J. M., & Johnson, J. W. (2002, April). Application of relative importance methodologies to organizational research. Symposium presented at the 17th annual conference of the
Society for Industrial and Organizational Psychology, Toronto, Canada.
LeBreton, J. M., Ployhart, R. E., & Ladd, R. T. (2004). A Monte Carlo comparison of relative
importance methodologies. Organizational Research Methods, 7, 258-282.
Lehman, W., & Simpson, D. (1992). Employee substance use and on-the-job behaviors. Journal
of Applied Psychology, 77, 309-321.
Lievens, F., Highhouse, S., & De Corte, W. (2003, April). The importance of traits and abilities
in managers' hirability decisions as a function of method of assessment. Poster presented at
the 18th annual conference of the Society for Industrial and Organizational Psychology,
Orlando, FL.
Lindeman, R. H., Merenda, P. F., & Gold, R. Z. (1980). Introduction to bivariate and
multivariate analysis. Glenview, IL: Scott, Foresman and Company.
Lundby, K. M., & Fenlason, K. J. (2000, April). An application of relative importance analysis
to employee attitude research. In J. W. Johnson (Chair), Practical applications of relative
importance methodology in I/O psychology. Symposium conducted at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
McLauchlan, W. G. (1992, October). Regression-based satisfaction analyses: Proceed with caution. Quirk's Marketing Research Review, 11-13.
Myers, J. H. (1996). Measuring attribute importance: Finding the hot buttons. Canadian Journal
of Marketing Research, 15, 23-37.
Neter, J., Wasserman, W., & Kutner, M. H. (1985). Applied linear statistical models (2nd ed.).
Homewood, IL: Irwin.
Pratt, J. W. (1987). Dividing the indivisible: Using simple symmetry to partition variance explained. In T. Pukkila & S. Puntanen (Eds.), Proceedings of the second international Tampere conference in statistics (pp. 245-260). Tampere, Finland: University of Tampere.
Robie, C., Johnson, K. M., Nilsen, D., & Hazucha, J. (2001). The right stuff: Understanding cultural differences in leadership performance. Journal of Management Development, 20,
639-650.
Rotundo, M., & Sackett, P. R. (2002). The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: A policy capturing approach.
Journal of Applied Psychology, 87, 66-80.
Suh, E., Diener, E., Oishi, S., & Triandis, H. C. (1998). The shifting basis of life satisfaction
judgments across cultures: Emotions versus norms. Journal of Personality and Social Psychology, 74, 482-493.
Theil, H. (1987). How many bits of information does an independent variable yield in a multiple
regression? Statistics and Probability Letters, 6, 107-108.
Theil, H., & Chung, C. F. (1988). Information-theoretic measures of fit for univariate and
multivariate linear regressions. American Statistician, 42, 249-252.
Thomas, D. R., & Decady, Y. J. (1999). Point and interval estimates of the relative importance of
variables in multiple linear regression. Unpublished manuscript, Carleton University,
Ottawa, Canada.
Thomas, D. R., Hughes, E., & Zumbo, B. D. (1998). On variable importance in linear regression.
Social Indicators Research, 45, 253-275.
Thompson, B., & Borrello, G. M. (1985). The importance of structure coefficients in regression
research. Educational and Psychological Measurement, 45, 203-209.
Ward, J. H. (1962). Comments on "The paramorphic representation of clinical judgment." Psychological Bulletin, 59, 74-76.
Ward, J. H. (1969). Partitioning of variance and contribution or importance of a variable: A visit
to a graduate seminar. American Educational Research Journal, 6, 467-474.
Whanger, J. C. (2002, April). The application of multiple regression dominance analysis to organizational behavior variables. In J. M. LeBreton & J. W. Johnson (Chairs), Application of
relative importance methodologies to organizational research. Symposium presented at
the 17th annual conference of the Society for Industrial and Organizational Psychology,
Toronto, Canada.
Zedeck, S., & Kafry, D. (1977). Capturing rater policies for processing evaluation data. Organizational Behavior and Human Performance, 18, 269-294.
Jeff W. Johnson is a senior staff scientist at Personnel Decisions Research Institutes (PDRI). He received his
Ph.D. in industrial/organizational psychology from the University of Minnesota. He has directed and
carried out many applied organizational research projects for a variety of government and private-sector clients, with a particular emphasis on the development and validation of personnel assessment and
selection systems for a variety of jobs. His primary research interests are in the areas of personnel selection,
performance measurement, research methods, and statistics.
James M. LeBreton is an assistant professor of psychology at Wayne State University in Detroit, Michigan.
He received his Ph.D. in industrial and organizational psychology with a minor in statistics from the University of Tennessee. He also received his B.S. in psychology and his M.S. in industrial and organizational psychology from Illinois State University. His research focuses on the application of social cognition to personality
theory and assessment, applied psychometrics, and the application and development of new research methods and statistics to personnel selection and work motivation.