Thousands of management evaluations are conducted in the federal government each year. Little information exists about the nature or effectiveness of these evaluations even though they cost more than $200 million annually. This article explores the relationships between kinds of evaluations, analytic methods, and interpersonal processes and the acceptance of recommendations by decision makers. Detailed typologies based on a review of the literature provide the basis for quantitatively testing concepts about the utilization of evaluations against empirical data. Two-hour interviews were conducted with 50 evaluators and decision makers about randomly selected management evaluations. Selected characteristics of the nature, methodology, and process of evaluations were found to be related to acceptance. Some factors are structural and beyond the control of the evaluator, while others are behavioral and within the power of the evaluator to influence.
MANAGEMENT EVALUATION STUDIES

Factors Affecting the Acceptance
of Recommendations

RAY C. OMAN
Department of the Army

STEPHEN R. CHITWOOD
George Washington University

Over the past decade there has been widespread criticism of the federal government for poor management. In response to the
growing censure, there have been attempts, in the past several years, to reform basic decision processes including planning, budgeting, evaluation, personnel management, procurement, and contracting. Federal
agencies have also been subject to numerous personnel ceilings, hiring
freezes, and special budget controls in areas including travel expenditure,
paperwork, and ADP. Recently, of course, the sizable budget reductions
proposed by the administration and approved by Congress have created
an austere, cutback environment in most civilian agencies.
EVALUATION REVIEW, Vol. 8 No. 3, June 1984, 283-305
© 1984 Sage Publications, Inc.


These top-down efforts to affect decision making and management processes in the executive branch, and the opposition that these efforts have encountered in the bureaucracy, have been widely publicized, studied, and documented. However, the efforts of agencies' own analytic and evaluative staffs to affect management decision making and to bring about organization change and improvement are unexplored, especially in relation to "management evaluation." Organizations in the federal government, much like large organizations in
other sectors of society, employ specialists who analyze management,
organization, and program problems, develop recommendations, and
implement solutions. The problems dealt with typically fall into the two
general categories of operating efficiency and program effectiveness.
Thousands of specialists conduct studies to provide information and
analysis for management decisions in the federal government. These
include 30,000 economists, industrial engineers, management analysts,
operations research analysts, and program analysts. Management
analysts constitute the largest of these job series, numbering more than a
third of the total. There are nearly 10,000 program analysts, about 6,000
economists and smaller numbers of operations research analysts and
industrial engineers (U.S. Office of Personnel Management, 1978).
Management analysts are formally charged with improving organization
efficiency and effectiveness and their studies are the focus of this
research.

MANAGEMENT ANALYSIS AND EVALUATION


There are over 10,000 management analysts (MAs) in the federal government and nearly 4,000 in the Washington, D.C., Standard Metropolitan Statistical Area (SMSA). Management analysis efforts are concerned with making decisions about improving management practices. The Office of Management and Budget (OMB, 1978b) defines management analysis activities to include:
(1) Planning, developing, assessing, and modifying organizational structures and relationships, operating procedures, internal regulations, instructions, delegations of authority, and management information and control systems;
(2) Conducting or guiding assessments of operating efficiency and effectiveness, and analyses of specific administrative needs;
(3) Assessing worker productivity, achievement of performance objectives, and other quantitative measures of operational efficiency and effectiveness.


The Office of Personnel Management (OPM, 1972b: 1) Qualification Standards states:

Management analysts provide and service management in such areas as planning and policy development; work methods and procedures; manpower utilization; organizational structures; distribution of assignments; delegating of authority; information management; or similar areas with the objective of improving managerial effectiveness.
The purpose of management analysis studies is to provide input to management decision making. The management analysis function is supposed to bring expertise in management, analysis, and evaluation to complex management decisions. The OPM standards (1972b: 1) state:

The paramount qualifications (for management analysis) are a high order of analytical ability and a practical and theoretical knowledge of the functions, processes, and principles of management.

OPM Position-Classification Standards (1972a: 3) notes:

The Management Analysis occupation rests on the concept that certain functions and responsibilities of managers are susceptible to analysis and improvement by specialists who are experts in these functions, in the principles that underlie their administration, and in the techniques for their analysis.

Thus, according to the personnel standards, the management analysis function is based on a rational analytic approach to problem solving and decision making.
When one considers that management analysis is just one of several
analytic and evaluative job series devoted to internal management
improvement efforts, the high total cost of these efforts becomes
apparent. A survey by the Office of Management and Budget (1978a) reported total annual obligations of $276 million for management
analysis activities in 1978. A similar OMB survey (1977) showed that
$245 million was spent on program evaluation activities in the 1977
fiscal year.
Although management analysts are involved in hundreds of evaluations to improve decision making and to alter management practices
each year, a search of the literature has revealed little information about
the kinds of topics studied, the methods used in the conduct of the
studies, or the degree of acceptance and implementation of study
findings. Thus, this sizable group of analysts charged with assessing
operating efficiency and effectiveness represents an untapped source of information about making decisions on management practices and processes and about implementing change in the federal government.

FACTORS AFFECTING ACCEPTANCE OF RECOMMENDATIONS
A review of literature concerning factors affecting the acceptance of study findings has revealed four subject areas containing relevant sources. These areas are program evaluation, management science, applied behavioral science, and organization theory. The three areas that are most closely related to the management analysis function in the federal government are management science, program evaluation, and applied behavioral science.

In the program evaluation literature, Cox (1977) uses Mintzberg's theory of managerial behavior to relate managerial style to the problem of implementing study findings. These problems include:

• mismatch between the roles and styles of agency personnel and evaluators
• studies that are insufficiently rigorous to be convincing
• too much methodological sophistication and too little concern with concrete aspects of the program
• results are not available in time to be useful for decision making
• answers are provided to questions for which managers have no interest
• political and funding issues are often more important than evaluation results, and
• evaluation findings are not communicated to agency personnel in ways that they can be used.

The article by Cox is particularly relevant to management analysis studies because it relates the perspective of the manager (decision maker) to that of the evaluator.
Leviton and Hughes in a recent study (1981) discuss five major
concepts that existing research consistently relates to utilization. These
concepts are: (1) relevance to program and policy concerns, (2) good
communication between the producers and consumers of evaluation, (3)
the consumers' discussion and serious consideration of the results and
their implications, (4) the credibility and trustworthiness of the evaluation, and (5) user involvement and advocacy.


Carlotta Young (1978), in a research paper on evaluation utilization developed for the General Accounting Office, identified six factors in the literature associated with utilization. They are as follows:
• the political decision-making environment
• organization aspects of the management environment
• the commitment and involvement of decision makers and evaluators
• the appropriateness of the research questions
• methodology, and
• dissemination and reporting issues

An examination of the evaluation literature reveals many sources listing factors that affect utilization. Few of the hypothesized factors, however, are examined in any detail. As Patton (1977: 142) argues,

The issue at this time is not the search for a single formula of utilization success, nor
the generation of ever-longer lists of possible factors affecting utilization. The task
for the present is to identify and define a few key variables that may make a major
difference in a significant number of evaluation cases.

Following Patton's suggestion, the approach in this research effort is to focus on a few key factors, namely, the kind of study, the nature of the recommendations, and the methodology used in the conduct of the study.
There are a number of sources in program evaluation literature that pertain directly to the research questions in this proposal. Adams, in developing a prescriptive package for evaluation research in corrections, found that "weaker" research designs were more successful in effecting organization change. Adams concludes that the biggest payoffs come from "weak" or nonrigorous research designs, such as case studies and surveys. Adams (1975: 115) goes on to note:
Although the reasons for this are not fully understood, some hypotheses may be stated: (a) these non-rigorous styles better fit the decisionmaking styles and needs of administrators; (b) there is greater pressure on corrections for system improvement than for client improvement, and these studies provide adequate rationales for system change; (c) in times of rapid change, conditions are not favorable for the use of strong research designs; and (d) correctional administrators have not yet supported rigorous designs to the extent required to make them generally effective.


In addition to the research design aspect of methodology, the degree of decision-maker participation in the evaluation is another area to be explored in the dissertation research. Although participation often is listed as one of many factors affecting the utilization of evaluation findings, the degree or nature of participation is seldom defined or examined. A study by Waller et al. (1979: 11), however, appears to place user involvement in the forefront. The report concludes that,

the only characteristic of an evaluation system associated with utility was the degree of involvement of the user in an evaluation activity.

Mark Van de Vall has done research about the effect of methodology
and user participation on utilization of applied social science and social
policy findings. Van de Vall and Bolas (1979) have concluded that,
the impact of social policy research upon organizational decisions is higher when
the research sponsor and research consumer are identical or closely linked, rather
than consisting of two separate and independent organizations; and

Projects of social policy research accompanied by a steering committee consisting of representatives from the research team, the research sponsor, and research consumer(s) tend to score higher on policy impact than projects lacking a steering committee.

Van de Vall, Bolas, and Kang (1976: 172-173) examined the effect of methodology on the use of research findings in the area of industrial and labor relations in the Netherlands. Some of their conclusions that provide useful background information for this research effort are:
The use of qualitative methods in applied social research leads to a higher impact upon industrial policy making than using quantitative methods, particularly tabular analysis.

The more the methodological mixture of applied social research favors qualitative methods, the more intensively the projects are utilized in company policies.

Factors affecting the acceptance of recommendations may be grouped as follows: (1) small-scale, micro factors that are partially in the realm of the analyst's control, and (2) larger organization or macro factors that are outside of the control of the analyst performing the study. The emphasis of this research effort is on the small-scale, micro factors. Macro factors in the larger organization environment influencing the acceptance of study findings include budget cuts or increases, changes in personnel ceilings and hiring freezes, reorganizations, and changes in management personnel. Factors in the larger organization environment can affect the acceptability of recommendations. For example, in times of budget cuts, recommendations dealing with cost reduction, cost effectiveness, and the streamlining of procedures are more likely to be viewed favorably than they would be in times of personnel or budget expansion.
Micro factors, the focus of this effort, include dimensions such as
methodology, the way decisions were made about how the study would
be conducted, the kind of information (data) collected, who collected
the information, how the information (data) was analyzed and interpreted, and how recommendations were developed. Micro factors also
include the characteristics of the kind of study conducted such as the
purpose of the study, the topical content, the initiator, and the retrospective or prospective emphasis. The analyst may be in a position to
control or to alter a number of these factors.

FRAMEWORK FOR ANALYSIS


This research explores the relationship among the kinds of management evaluations, the ways that they are conducted, and the acceptance of recommendations by decision makers. To examine this question, three typologies, or classification schemes, have been established relating to (1) the nature of the study, (2) the methodology used, and (3) the interpersonal processes used in the effort.

The first classification scheme deals with the nature of management evaluation studies. The framework is based on the following factors: (1)
purpose of the study, (2) topical content area of the study, (3) initiator of
the study, (4) organizational location of the unit studied, (5) organizational location of the MA unit, and (6) the prospective or retrospective
nature of the study. The six factors and several subfactors that constitute the typology allow management evaluation studies to be placed in
a framework that allows examination of some of the relationships (see

Figure 1).
The conduct of management evaluations may be viewed in terms of
"content" and "process." The second and third typologies deal with these areas. For the purposes of this analysis, "content activities" are the analytic methodologies employed in the study. "Process" pertains to the interpersonal processes used and is concerned with who is involved in making decisions about how to conduct the evaluation and who carries out study activities.

Typologies have been developed for both the methodology or "content activities" and "interpersonal processes." Information about how these "content" and "process" activities were carried out provides a basis for classifying studies by two sets of factors: scientific-political and solitary-participatory.

Study methodology usually is based on the steps of the scientific method: problem definition, data collection, data analysis and interpretation, and development of conclusions and recommendations. A wide range of quantitative and qualitative techniques can be used to carry out these generic processes. The classification scheme for study methods is based on these various analytic techniques (see Figure 2).

Interpersonal processes refers to the ways the methodological, or "content," activities of the study are carried out. "Process" deals with who was involved in making decisions about and in carrying out the various study activities. The typology for process aspects of the evaluation consists of eight factors (see Figure 3).

METHODOLOGY
An exploratory, descriptive approach was selected for this research
because little theory has been developed about factors affecting study
acceptance. The research methodology applied involves a series of
structured, multiple case studies. Information for the cases was gathered
by interviews, questionnaires, and examination of study reports.
The twelve cabinet-level civilian departments (with all of their
bureaus) and the independent agencies of the General Services Administration (GSA), the National Aeronautics and Space Administration
(NASA), and the Veterans Administration (VA) were the organizational
entities from which particular MA units were chosen. GSA, NASA, and
VA were included from among the independent agencies because they
have relatively large numbers of management analysts.
LIMITATIONS OF THE RESEARCH

There are a number of limitations of the research that need to be recognized. Three important limitations deal with the study methods and approach. First, although "acceptance" is related to a number of factors, the quality, timeliness, and relevance of recommendations were not examined.
Second, "acceptance" was based on the opinions and judgments of those interviewed. There was written documentation about acceptance for some, but not all, of the evaluations. Third, many
interrelated factors contribute to the acceptance or nonacceptance of
evaluations. The method employed in this research examines primarily
each factor individually in its relationship to acceptance (see Figures 1, 2,
and 3). The way multiple factors affect acceptance is not explored.
Further, the level of acceptance shown in Figures 1, 2, and 3 is expressed
in percentages rather than in number of recommendations. Most of the
evaluations showing 100% acceptance had relatively few recommendations.

SELECTION OF A SAMPLING OF MANAGEMENT ANALYSIS UNITS

The distribution of MA units by agency is shown in Table 1. A stratified random sample was used to select the particular MA units for study. Within each of the agencies, one MA unit was randomly selected for study, for a total of fifteen. Thus, about 20% (15 of 73) of MA units were sampled.

Each MA unit within an agency was assigned a unique number. Then a random number table was used to select units for study. If one of the units selected chose not to participate in the research, another MA unit was selected from the same agency using the random number table.
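As an illustration, this selection procedure can be sketched in a few lines of code. The Python fragment below is only a hypothetical rendering of the procedure: the agency roster shown is invented (the actual distribution of the 73 units appears in Table 1), and a shuffled list stands in for the random number table.

    import random

    # Hypothetical roster: each agency (stratum) maps to its MA units.
    # The real counts for the 15 agencies and 73 units appear in Table 1.
    agency_units = {
        "Agriculture": ["MA-1", "MA-2", "MA-3"],
        "Commerce": ["MA-1", "MA-2"],
        "GSA": ["MA-1", "MA-2", "MA-3", "MA-4"],
    }

    def select_units(agency_units, declined=frozenset(), seed=None):
        """Pick one MA unit per agency (a stratified random sample).

        If a selected unit declined to participate, another unit is drawn
        from the same agency, mirroring the procedure described above.
        """
        rng = random.Random(seed)
        sample = {}
        for agency, units in agency_units.items():
            candidates = list(units)
            rng.shuffle(candidates)  # random ordering stands in for the number table
            chosen = next((u for u in candidates if (agency, u) not in declined), None)
            sample[agency] = chosen  # None only if every unit in the agency declined
        return sample

    print(select_units(agency_units, declined={("GSA", "MA-2")}, seed=7))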
SELECTION OF STUDIES FOR EXAMINATION

Several criteria were used in selecting the particular MA studies to be examined. The criteria for selecting studies are based on two more general factors: relevance to the research question and accessibility of information. The criteria are as follows:
(1) the purpose of the study should have been to assist in decision making;
(2) the study should be completed and sufficient time should have elapsed for information about the acceptance of recommendations to be available;
(3) the study should have been of a substantive subject large enough in scope or depth, that is, it would require a considerable amount of time and effort;
(4) the study should have been completed within the last three years;
(5) the analyst who conducted the study and who is familiar with the details should be available for interview;
(6) a written report should have been developed as part of the study; and
(7) the MA unit head and the analyst who conducted the study should be willing to identify a study recipient responsible for accepting the recommendations.


The largest study meeting these criteria, in terms of analysts' time, was selected for examination.
A minimum of three interviews were conducted in each MA unit. The
first interview was conducted with the head of the MA unit to get an
overview of the unit and to select one study for detailed examination.
Second, a structured interview with the analyst who conducted the study
was carried out. In most cases, it was necessary to conduct two
interviews with each analyst. A copy of the study report was reviewed
before the interview with the analyst(s) who conducted the study.
Finally, a decision maker responsible for accepting the recommendations
was interviewed.

FACTORS INFLUENCING STUDY ACCEPTANCE BASED ON INTERVIEW DATA AND A TABULAR ANALYSIS
Information about factors that influenced the acceptance of recommendations is based on facts about each study gathered in interviews,
and on a tabular analysis of study characteristics. The more general
factors based on interview information are presented first, followed by a
discussion of factors based primarily on the tabular analyses. Many
factors can influence the acceptance of study findings. Sometimes these
factors can operate individually, but often one factor operates in conjunction with other factors.
Larger studies that took a long period of time had a lower level of
recommendation acceptance. A few of the studies took two or more
years to complete. These studies had a low level of acceptance. Two of
these studies also were begun in one political administration and completed in another. Interviewees noted that studies bridging a change of
administration often find acceptance problematic.
Studies that were done in isolation from the unit studied had a low
level of acceptance. One study was conducted in isolation because it was
felt that disclosure of the study would result in its early demise. When
the study was presented it had a low level of acceptance. Other studies
were not done in complete isolation, but there was not much interaction
between the analyst and the decision makers in the unit studied. These
studies had lower-than-average levels of acceptance. This finding is
corroborated in the program evaluation utilization literature. John
Waller et al. (1979) found that the involvement of the user was the
primary factor associated with utilization.


Some studies were done for more than one decision maker at more
than one organization level. For example, a study was initiated by an
associate director of administration but final approval was the responsibility of the bureau administrator. Studies done for more than one
decision maker were usually initiated by a lower-level decision maker
while final approval rested with the higher-level decision maker. These
studies appeared to have a lower-than-average level of acceptance.
Studies of larger scale questions that were often complex and difficult to
define had a lower level of acceptance than studies of more focused issues.
Studies by teams of analysts appeared to have much lower levels of acceptance than those studies conducted by individual analysts working
alone. This observation appears to hold whether the study was conducted by an experienced senior analyst or a more junior analyst. A
search of the literature has not revealed other research that examines the
relationship between acceptance and whether one analyst or a group of
analysts conducted the study.
Some studies were done by individual analysts, while others were done by teams of analysts; still others were conducted by interunit study teams composed of analysts as well as personnel from the organizations being studied. Studies by interunit study teams appeared to have the highest level of recommendation acceptance. A review of the literature has not revealed research comparing the level of acceptance for studies conducted by individual analysts to that for groups of analysts or interunit study teams. However, Van de Vall and Bolas (1979) found that the use of a steering committee composed of researchers and sponsors resulted in higher utilization in social policy impact studies.

Studies that had strong leadership, whether by an individual analyst or by an interunit study team, appeared to have higher-than-average levels of acceptance. It appears that levels of acceptance are higher when there is one person who is very committed to the effort, spearheading a study.
Studies that used advanced quantitative methodologies such as inferential statistics, correlation, regression, or experimental designs,
appeared to have lower levels of acceptance if more qualitative methods
were not also used. When quantitative methods were combined with
descriptive approaches, levels of acceptance appear to be at least
average. It should be noted that our sample of studies using advanced
quantitative methods was small. Adams (1975) and Van de Vall et al.
(1976) have also found that the use of simple data analysis techniques
can result in higher acceptance.


FACTORS AFFECTING ACCEPTANCE BASED ON TABULAR ANALYSIS

The typologies developed about kind of study, study methods, and study process were examined to establish factors related to acceptance (see Figures 1, 2, and 3). These typologies contained both structural and
behavioral factors. Structural factors include, for example, the organization location of the MA unit and of the unit studied. These factors are
generally outside of the control of the analyst conducting the study.
Behavioral factors refer to those things over which the management
analyst has more control, such as who is involved in the study process
and the nature of involvement. Other factors such as methods of information collection and data analysis are partly within the control of the
management analyst. A number of structural and behavioral factors
were found to be associated with acceptance.
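As a rough illustration of the kind of tabulation that lies behind Figures 1 through 3, the Python sketch below groups studies by one factor and averages their acceptance percentages. The records shown are invented examples, not the study's actual data.

    from collections import defaultdict

    # Invented example records: each study carries one categorical factor
    # and its acceptance level as a percentage of recommendations accepted.
    studies = [
        {"purpose": "methods and procedures", "acceptance_pct": 100},
        {"purpose": "methods and procedures", "acceptance_pct": 80},
        {"purpose": "organization placement", "acceptance_pct": 40},
        {"purpose": "policy", "acceptance_pct": 50},
    ]

    def mean_acceptance_by(records, factor):
        """Average acceptance percentage for each value of a single factor."""
        groups = defaultdict(list)
        for record in records:
            groups[record[factor]].append(record["acceptance_pct"])
        return {value: sum(pcts) / len(pcts) for value, pcts in groups.items()}

    print(mean_acceptance_by(studies, "purpose"))
    # {'methods and procedures': 90.0, 'organization placement': 40.0, 'policy': 50.0}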
The kind of study influences the level of acceptance (see Figure 1).
For example, methods and procedures studies have a higher level of
acceptance than organization placement and policy studies. Studies
initiated by top agency or bureau management have a higher acceptance
than studies initiated by top administrative management. Retrospective
evaluations have a lower level of acceptance than prospective studies.
Some study methods are associated with higher or lower levels of
acceptance (see Figure 2). For example, management evaluations that
did not structure or model but simply described management problems
had a higher level of acceptance than studies that attempted to structure
the problem and develop quantitative relationships among variables but
did not present descriptive information. Similarly, Van de Vall et al.
(1976: 173-174) have observed that "the use of qualitative methods in applied social research leads to a higher impact upon industrial policy making than using quantitative methods." Studies that combined quantitative methods with descriptive approaches appeared to have average
or above-average acceptance. Our sample of studies using advanced
quantitative methods was very small.
There was some variation in acceptance based on the methods of
information collection used in the study. For example, studies that set
up experiments and attempted to gather statistical data empirically had
lower-than-average levels of acceptance, especially if qualitative methods
were not used also. Studies that used structured and unstructured
interviews appeared to have a higher-than-average level of acceptance.
Study process, referring to who was involved in making decisions about how to conduct the study as well as the actual carrying out of the study, also appears to be associated with levels of acceptance of recommendations (see Figure 3).
For example, studies in which the management analyst together with personnel in the unit studied defined
the problem, and studies in which the problem was defined by an
interunit study team, had higher-than-average levels of acceptance.
Conversely, studies defined by the person requesting the study individually and by the management analyst individually had lower-than-average levels of acceptance. The data suggest that studies defined by an
interactive process between the analyst and unit chief or personnel in the
unit studied have higher-than-average levels of acceptance.
Studies in which information collection was done by interunit study
teams or by the analyst and people in the unit studied had above-average
levels of acceptance. Those studies where teams of analysts handled
information collection had lower levels of recommendation acceptance.


In general, the data suggest that for studies in which the analyst and
personnel in the units studied participated in problem definition, data
collection, analysis, and the development of conclusions and recommendations, acceptance was well above average. The level was much
lower when these processes were handled by teams of analysts rather
than individual analysts. Further, the data suggest that the involvement
of people from the unit studied in interunit study teams produces a
higher-than-average level of acceptance. Among others, Leviton and
Hughes (1981) and Waller et al. (1979) have found positive relationships
between the level of user involvement and utility of program evaluations.

REFERENCES
ADAMS, S. (1975) Evaluative Research in Corrections: A Practical Guide. U.S.
Department of Justice, Washington, DC: Government Printing Office.
COX, G. B. (1977) "Managerial Style: implications for the utilization of program
evaluation information." Evaluation Q. 1 (August): 499-509.
LEVITON, L. C. and E.F.X. HUGHES (1981) "A review and synthesis of research on
utilization of evaluations." Evaluation Rev. 5: 525-545.

PATTON, M. Q., P. S. GRIMES, K. M. GUTHRIE, N. J. BRENNAN, B. D. FRENCH, and D. A. BLYTHE (1977) "In search of impact: an analysis of the utilization of
federal health evaluation research," in Carol H. Weiss (ed.) Using Social Research in
Public Policy Making. Lexington, MA: D. C. Heath.
U.S. Office of Personnel Management [OPM] (1978) Federal Civilian Workforce
Statistics: Occupations of Federal White Collar Workers. Pamphlet 56-14, October
31. Washington DC: Government Printing Office.
———(1972a) Position-Classification Standards (TS 9), February. Washington DC:
Government Printing Office.
———(1972b) Qualifications Standards (TS 141), February. Washington DC: Government Printing Office.

U.S. Office of Management and Budget [OMB] (1978a) Bulletin No. 78-12, April. Washington DC: Government Printing Office.
———(1978b) Resources for Management Analysis. Washington DC: Government Printing Office.
———(1977) Resources for Program Evaluation. Washington DC: Government Printing Office.
VAN de VALL, M. D. and C. BOLAS (1979) "The utilization of social policy research: an
empirical analysis of its structure and functions." 74th Annual Meeting of the
Sociological Society, Boston, Massachusetts, August 27-31.
———and T. S. KANG (1976) "Applied social research in industrial organizations: an
evaluation of functions, theory, and methods." J. of Applied Behavioral Sci. 12, 2

(April/May/June): 172-173.
WALLER, J. D. (1979) Developing Useful Evaluation Capability: Lessons from the Model
Evaluation Program, U.S. Department of Justice. Washington DC: Government
Printing Office.
YOUNG, C. J. (1978) "Evaluation utilization." Presented at the Evaluation Research
Society Second Annual Meeting, Washington, D. C., November 2-4.

Ray C. Oman, Ph.D., a supervisory management analyst, is a Branch Chief in the Plans and Policy Division of the Management Systems Support Agency, Department of the Army. He has graduate degrees from Pennsylvania State University and George Washington University, specializing in program analysis and evaluation, management, and finance and economics. One of his current research interests is the relationship
between the methodology and processes used in management studies and the acceptance
of findings by decision makers.

Stephen R. Chitwood is a Professor of Public Administration at George Washington University. He holds a Ph.D. in public administration from the University of Southern California and a J.D. from George Washington University. One of his current research
interests is the relationship between the methodology and processes used in management
studies and the acceptance of findings by decision makers.
