
Political Psychology, Vol. 25, No. 4, 2004

What Voters Do: Information Search During Election Campaigns


David P. Redlawsk Department of Political Science, University of Iowa

Learning about political candidates before voting can be a cognitively taxing task, given that the information environment of a campaign may be chaotic and complicated. In response, voters may adopt decision strategies that guide their processing of campaign information. This paper reports results from a series of process-tracing experiments designed to learn how voters in a presidential primary election adopt and use such strategies. Different voters adopt different strategies, with the choice of strategy dependent on the campaign environment and individual voter characteristics. The adoption of particular strategies can have implications for how voters evaluate candidates.
KEY WORDS: information search, decision-making, process-tracing, voting

In order to cast a meaningful vote, voters presumably must learn something about the candidates. Normatively, more information is thought to be better than less; voters should have an extensive store of knowledge arrived at through comprehensive information search that considers all candidates on all attributes. Of course, most citizens do not actually do this, and this failure to learn very much is usually considered an impediment to good citizenship. Yet to expect citizens to readily engage in extensive search flies in the face of known information-processing limits. Variously called cognitive misers (Taylor, 1981) or boundedly rational (Simon, 1957), humans simply cannot operate as purely rational calculators (Redlawsk, 2002). But, as Simon (1956) pointed out, they may not have to. Using various shortcuts, or heuristics, voters may be able to make good decisions even without learning all there is to know about the candidates (Lau & Redlawsk, 2001a). Sometimes "good enough" can be good enough. Despite the evidence that voters have little information at the ready, some political learning does in fact take place during campaigns, and voters use their knowledge, however little, to inform their decisions (Markus & Converse, 1979).
0162-895X © 2004 International Society of Political Psychology. Published by Blackwell Publishing, Inc., 350 Main Street, Malden, MA 02148, USA, and 9600 Garsington Road, Oxford, OX4 2DQ


Learning takes place in an often chaotic environment where information flows at an overwhelming pace. To tame this tide, voters can adopt information search and acquisition strategies based on both their own abilities and the complexity of the particular political environment. These strategies, some of which lead to limiting information search, are likely to have implications for how voters learn about and evaluate candidates. But although there has been debate over how much learning occurs, there has been little work directly examining how voters acquire information in the first place. This paper reports on experiments examining how voters search for and acquire information during a campaign. Information search patterns can be identified that in turn define the decision rules voters use to make sense of the campaign environment. Through the use of a dynamic process-tracing technique that tracks information search and acquisition as it happens (Lau, 1995; Lau & Redlawsk, 1997, 2001b), voters are presented with campaigns modeled on a real-world political environment. The type and amount of information acquired can be tracked in real time, allowing direct examination of the question: What do voters do when they are learning about candidates?

Theoretical Background

Behavioral decision theory provides theoretical guidance for this paper and for the dynamic process-tracing methodology. In contrast to the normative focus of rational choice, behavioral decision theory takes as its primary goal the understanding of how people actually make decisions (Payne, Bettman, & Johnson, 1993). No study of decision-making has ever shown people actually processing information with the omniscience that seems to be required of most normative models. Instead, people often settle for "good enough" once they learn an adequate amount about the choices they face. Value-maximizing behavior simply does not occur in complicated decision environments (see Dawes, 1988).
However, decision-makers generally want to do a good job and thus may develop strategies to overcome cognitive limits (Payne et al., 1993). But these strategies can result in failure to make normatively correct decisions because decision-makers often face competing goals: to make good decisions, but to use the minimum necessary cognitive resources to do so (Lau, 2003; Stroh, 1995). For some decisions, the competition between these goals is minimal. Perhaps the alternatives are few, the number of attributes limited, and the decision relatively unimportant. Making an accurate decision under such circumstances may not be taxing. But when alternatives are many or indistinct, information is overwhelming, and the decision is important, decision-makers may run up against their cognitive limits. In any case, the search for information and its integration takes cognitive effort. Much of that effort goes into comparing alternatives on a range of attributes (Rahn, 1995). People may generally adopt one of two sets of comparison rules (Payne et al., 1993). With compensatory rules, the different attributes of an alternative are explicitly compared to one another on some commensurate scale (like the economist's utility), so that a low score on one attribute can be traded off, or compensated for, by a high score on another (Lau, 1995). This process is cognitively taxing, easily creating value conflict if one alternative is preferred on one attribute while another is preferred on a different attribute (Ford, Schmitt, Schechtman, Hults, & Doherty, 1989). The difficulty increases when the trade-off is between incommensurate attributes, such as the apples-and-oranges comparison of a stand on abortion versus a stand on the Israeli-Palestinian conflict. Preferring one candidate on one of these may have to be balanced against preferring another candidate on the other. How to make the trade-off? The rational answer is to compute the utility of each attribute for each candidate and then summarize into an overall utility for each candidate. Once this is done, the decision-maker simply chooses the candidate who maximizes utility. These calculations are cognitively difficult and in many cases may be abandoned for an alternative approach.

The alternative is a noncompensatory rule. Generally speaking, people try to avoid making the value trade-offs typically required by compensatory rules (Hogarth, 1987; Lau, 1995). Rather than make direct comparisons on multiple attributes, voters may simply use a rule that considers alternatives serially, one attribute at a time. Alternatives that do not meet a minimum expectation level on an attribute are immediately discarded, thus eliminating trade-offs. The decision rule may well entail simply choosing the first candidate who meets the minimum requirements on the most important attributes. Noncompensatory rules are clearly less taxing because they rely on incomplete search (Lau, 2003).
Instead of making trade-offs, a decision-maker simplifies the environment by dropping alternatives as soon as possible. Yet in doing so, important information about some alternatives may never be examined and thus never considered in the decision calculus. It is certainly possible that a candidate who fails to meet one requirement may yet be the best alternative on every other requirement. Thus, the use of a noncompensatory rule may easily result in failure to choose the utility-maximizing alternative.

Why should we care about what rules voters use to acquire information, whether they make value trade-offs or simply limit search? The process may be conscious; that is, a voter may decide ahead of time to focus on only one candidate or on a limited set of issues. Or it may be that the information environment becomes so rich (think nine Democratic candidates in the 2004 Iowa caucuses) that full information search is not possible, no matter how conscientious the voter. In either case, the strictures of a rational decision-making process are not going to be met. If information search varies in systematic ways, then it becomes important to understand the conditions under which it does so, because decisions made


with limited information may be different from those made by more fully informed voters (Lau & Redlawsk, 1997). Thus, it is important to understand how and when voters actually adopt specific decision rules.

Information Search and Decision Rules

The decision rules used by voters can be determined by examining the information search undertaken during a campaign. Particular information search patterns imply specific decision rules and can be identified by three key search process measures (Lau, 1995, 2003). The first, depth of search, refers to the amount of available relevant information actually considered in making a decision. Search can be deep, examining nearly all attributes available for every relevant candidate. Or the focus may be on a limited set of attributes and/or a limited set of candidates. Deep search suggests a compensatory rule, whereas shallow search suggests little effort to compare candidates and few trade-offs, the hallmarks of noncompensatory rules. Next is the comparability of alternatives under consideration, indicating the extent to which a voter gathers the same information about all relevant candidates. High intercandidate comparability suggests that consideration of each candidate is roughly equal. When combined with a deep search, this provides further evidence of a compensatory rule. Alternatively, low intercandidate comparability occurs when information acquisition varies substantially between candidates and suggests the use of a noncompensatory rule, especially with shallow search.1 Finally, sequence of search (the order in which information is accessed) also provides insight into decision rules. Voters may access information randomly, or they may use one of two systematic approaches. The first, intra-attribute search, describes making transitions from an attribute for one candidate to the same attribute for another candidate.
The second pattern is intra-candidate search, in which voters search within a single candidate, learning multiple attributes before switching to another candidate. This focus on transitions is a particular strength of information board methodologies (Jacoby, Chestnut, Weigl, & Fischer, 1976). All decision rules suggest that voters should use one or the other of the two systematic search sequences.2
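As an illustration, the transition classification just described can be sketched in a few lines. This is a hedged sketch of the logic only, not the author's actual coding software; the function and variable names are invented:

```python
# Sketch: classify transitions in a process-tracing access log.
# Each access is a (candidate, attribute) pair; all names are illustrative.

def count_transitions(accesses):
    """Count intra-attribute, intra-candidate, and nonsystematic transitions."""
    counts = {"intra_attribute": 0, "intra_candidate": 0, "nonsystematic": 0}
    for (c1, a1), (c2, a2) in zip(accesses, accesses[1:]):
        if a1 == a2 and c1 != c2:
            counts["intra_attribute"] += 1   # same attribute, new candidate
        elif c1 == c2 and a1 != a2:
            counts["intra_candidate"] += 1   # same candidate, new attribute
        else:
            counts["nonsystematic"] += 1     # both change (or a repeated view)
    return counts

log = [("A", "abortion"), ("B", "abortion"),   # intra-attribute transition
       ("B", "economy"),                       # intra-candidate transition
       ("A", "defense")]                       # nonsystematic transition
print(count_transitions(log))
```

A predominance of one systematic transition type over the other is what distinguishes attribute-wise from candidate-wise search in information board data.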

1 It is possible for shallow search to be highly comparable; a voter might examine all candidates on only one or two issues. This is indicative of single-issue (or limited-issue) voting more than anything. Alternatively, one could focus exclusively on a single candidate and learn much about that candidate, showing deep search but little comparability. Even so, by definition such search could not be as deep as search that includes multiple candidates, because there is only so much information available about each candidate. Analyses here will exclude these nonstandard strategies.
2 One can make a transition from viewing one candidate on an attribute to viewing another candidate on a different attribute. Such search is not systematic, making comparisons along neither candidate nor attribute dimensions.


Taken together, these measures can be used to identify a specific decision strategy. Adoption of a strategy is a function of the decision environment (for example, the number of alternatives) and the cognitive abilities of the decision-maker.3 Given that normative models argue that voters should have a great deal of information about all alternatives, the choice of search strategy (and the decision rule implied thereby) clearly has implications for the ability of a voter to meet normative expectations.

Process Tracing

Information search studies generally use laboratory-based process-tracing techniques to track decision-making. The most common technique is the static information board, presenting subjects with a matrix of information arranged so that accessing information about alternatives (candidates, in this case) is done by clicking on a box on the computer screen.4 The participant learns about candidates and issues by choosing them in any sequence desired; thus, search is completely controllable. Information is always available and easy to access. This static board models a nearly ideal-world environment. But the real political world is not static, and such studies do not give a very good feeling for what happens in the relative chaos of a real election.

A new process-tracing technique, the dynamic information board (Lau, 1995; Lau & Redlawsk, 1997, 2001a, 2001b; Redlawsk, 2001, 2002), radically revises the static technique to better model campaigns, creating a more complicated environment where information flows over time, coming and going as the campaign progresses. In choosing to examine one piece of information, a voter may forgo the opportunity to learn something else, because the available information is always changing. As with the static board, attribute labels include a candidate's name and indicate the information to be revealed if the label is accessed.
But unlike the static board, only a small subset of a very large database is available at any one time, making the task of processing the campaign much less manageable. The relative likelihood of any piece of information becoming available is controlled, so that some information (such as party identification) appears much more often than other information (such as an obscure policy position). This design creates a closer analog to a real-world campaign than can be achieved with a traditional static board.
3 Rahn (1995) looked at how campaign information is organized in memory and found that the propensity is to organize it by attribute, except when the information is specifically presented in a candidate-salient display. Thus, the search sequence appears to directly relate to the way in which information is organized in memory.
4 There are relatively few examples of the use of a static information board to study candidate evaluation and voting. Herstein's (1981) study was the first; more recently Rahn (1995), Riggle and Johnson (1996), Mintz, Geva, Redd, and Carnes (1997), Huang (2000), and Huang and Price (2001) have all reported studies using some variant of this technique.


Hypotheses

The information board frameworks, both static and dynamic, provide a platform for examining information search and acquisition during political campaigns. It is certainly possible that search is essentially random, driven by the order in which information is available rather than by the adoption of any particular strategy. Alternatively, voters may adopt a structured search strategy (whether compensatory or noncompensatory) in an effort, which may not be successful, to best make sense of things. Whether search is something other than random may depend on the information environment itself. When information is easily managed, a structured strategy may not be needed.

Hypothesis 1: In a more difficult decision environment, information search will be more structured, as evidenced by a greater likelihood that voters will use an identifiable decision rule.

Because decision-makers are motivated to make good decisions with minimal effort (Lau, 2003), the type of decision rule used should be a function of the difficulty of the decision task and the expertise of the decision-maker. Compensatory rules require careful attention to a wide range of information, whereas noncompensatory rules are far less cognitively taxing. Thus, if a particular decision involves relatively few alternatives differing on only a handful of attributes, decision-makers should be more likely to use a compensatory rule. Similarly, a person with greater cognitive resources may be better positioned to use a compensatory rule than a person operating with little prior knowledge or experience.

Hypothesis 2: Compensatory decision rules are more likely to be used in the easiest decision environments, whereas noncompensatory rules will predominate in more complex environments. Those with greater cognitive resources will be more likely to use compensatory rules.

Are there consequences associated with the adoption of certain decision rules?
Clearly, different search strategies will result in different levels of knowledge about the available candidates. These differences may in turn affect candidate evaluation. In particular, voters using a compensatory rule and learning a lot about multiple candidates could be expected to moderate their evaluation of the preferred candidate, because they are likely to learn things they like about otherwise rejected candidates. In searching more deeply, voters are also likely to learn a greater mix of good and bad about the candidate they prefer. Conversely, voters focused mostly on only one candidate and learning little about the others may rate their preferred candidate more highly. After all, there would be little comparison to other candidates, and therefore less countervailing information to depress the evaluation of the vote choice.

Hypothesis 3: Relative to voters using noncompensatory rules, voters using compensatory rules will give their chosen candidate a lower global evaluation while giving rejected candidates higher evaluations. Overall, the use of compensatory rules should moderate candidate evaluations relative to those reached by voters using noncompensatory rules.


Data and Method

The Process-Tracing Studies

Data were collected through a series of process-tracing experiments, most of which used the dynamic information board, with one using a static board. The experiments have been described in detail elsewhere (Lau, 1995; Lau & Redlawsk, 1997) and will be only briefly described here. The total subject pool is a nonprobability sample of 656 adult citizens, 599 using the dynamic board and 57 using the static board, recruited in central New Jersey between 1994 and 1997. All subjects were eligible U.S. voters, although they did not have to be registered to vote. Subjects could not be attending college.

The studies began with the completion of a political attitudes questionnaire. Subjects then participated in a simulated presidential primary election and, in most studies, in a general election campaign.5 The candidates, although fictitious, represented a realistic spectrum of ideologies across both major political parties. Before the primary, subjects registered with a party and were subsequently constrained to vote only for candidates from that party in the primary, although information about candidates from both parties was always available. After completing the primary (which lasted about 22 minutes), subjects voted and then rated all candidates on a 101-point feeling thermometer. Next they answered questions about the difficulty of their decision and, for all but one study, learned which candidates were running in the general election and began that election. After the general election campaign, subjects again voted, evaluated candidates, and answered questions about their decision. Next, an unexpected memory test was given about the general election. Subjects were then debriefed, paid, and dismissed. This paper focuses on the primary campaigns only.6

Each experiment involved several manipulations. Of theoretical interest here is the manipulation of the number of candidates available in the voter's party during the primary election.
Subjects were randomly placed into one of two conditions: one with two ideologically distinct candidates in their party, the other with four candidates arrayed along their party's ideological spectrum. This latter condition should be significantly more difficult. In addition, in one study the information board itself was manipulated, so that some subjects worked on a static board while others used the dynamic board. This manipulation also varies the

5 See Lau and Redlawsk (1997, p. 588, fig. 2) for a summary of the typical procedure.
6 The data set merges data from four experiments. Clearly, variation from one experiment to another may influence the adoption of different strategies. In addition to the number-of-candidates manipulation discussed below, the experiments included manipulations of campaign advertising tone, the amount of campaign resources, and variations in candidate ideologies. To control for these effects, dummy variables were used indicating the experiment in all multivariate analyses, although the coefficients are not reported here. The full models are available from the author.


difficulty of the task, with the static board presenting the more manageable ideal-world environment.

Key Measures

Of the three measures of information search described earlier, two are used to determine whether compensatory or noncompensatory rules were used. The third, sequence of search, can be used to further differentiate the specific rule within these types, but such fine-grained analysis is not the purpose of this paper.

Depth of search. Depth is computed as the mean of two search measures: the number of nonredundant attributes, and the number of distinct pieces of information examined for all relevant (i.e., in-party) candidates. The first component includes only the unique attributes considered, regardless of how many times the same attribute was considered for different candidates. The second component is the total number of unique, nonredundant pieces of information accessed across all in-party candidates.7 These two measures were standardized and averaged to create a Depth of Search score on a scale from 0 to 100. A high score indicates deep search.

Comparability of search. The depth measure indicates how much relevant information was accessed, but it does not tell us anything about comparisons voters might make between candidates. The standard measure of comparability is the variance in the number of unique items accessed per alternative (Payne et al., 1993). Greater variance indicates a search focused more on one alternative, whereas smaller variance indicates a more balanced search. However, this standard measure fails to indicate how directly candidates were compared. It is clearly possible to learn 10 things about one candidate and 10 different things about a second candidate, resulting in low variance but no true comparative search. A more accurate measure is the proportion of all distinct attributes considered that were examined for all candidates in the choice set.
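The raw components behind these two measures can be sketched as follows. This is a minimal illustration of the logic, not the study's actual code; the cross-subject standardization of the depth components is omitted, and all names are invented:

```python
# Sketch: raw components of the search measures, given the set of
# (candidate, attribute) pairs a subject viewed for in-party candidates.

def depth_components(accesses):
    """Return (unique attributes viewed, distinct candidate-attribute pairs)."""
    unique_attributes = {a for _, a in accesses}  # attributes, regardless of candidate
    distinct_items = set(accesses)                # distinct pieces of information
    return len(unique_attributes), len(distinct_items)

def comparability(accesses, candidates):
    """Percentage of distinct attributes examined for ALL candidates in the set."""
    by_candidate = {c: {a for cc, a in accesses if cc == c} for c in candidates}
    all_attrs = {a for _, a in accesses}
    if not all_attrs:
        return 0.0
    shared = [a for a in all_attrs if all(a in by_candidate[c] for c in candidates)]
    return 100.0 * len(shared) / len(all_attrs)

views = {("A", "abortion"), ("B", "abortion"), ("A", "economy")}
print(depth_components(views))           # 2 unique attributes, 3 distinct items
print(comparability(views, ["A", "B"]))  # only "abortion" seen for both -> 50.0
```

In the paper's terms, the two counts returned by `depth_components` would each be standardized across subjects and averaged into the 0-100 Depth of Search score, while `comparability` directly implements the proportion-based measure.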
The higher this proportion, the more the same attributes were examined across multiple candidates. This score was also scaled from 0 to 100.

Although these two measures give specific information about how subjects searched for information, they cannot be directly compared across the static and dynamic information boards. Measures for the dynamic board subjects are computed separately from those for static board subjects, and the standardization of the variables is within information board type. This was done because the nature
7 Subjects in the four-candidate condition examined more in-party information than those facing two candidates, a reasonable response to the presence of more candidates (two = 41.8, four = 65.9; t = 12.204, p < .001). Given four candidates and limited time, the mean number of attributes examined per candidate should be lower in the four-candidate condition, as it is (two candidates = 20.5, four candidates = 16.5; t = 5.820, p < .001). Adjusting for the number of candidates would obscure the tendency of subjects with four candidates to spend more time focused on in-party candidates, an important response to the more complicated environment, so no adjustment is made.


of the boards is so different as to preclude direct comparison. Even so, the measures can be used to determine the search strategies for each board, and the resulting decision rules can be compared between the two types of information boards.

What Do Voters Do?

Table 1 describes the key search measures based on the number of candidates in the primary election for both the static and dynamic information board environments, along with the total information viewed for all candidates. Subjects using the static information board looked at only about half as much information as those using the dynamic board, but within each type of information board there is no difference in overall information search between the two- and four-candidate conditions. Turning to the search measures, in the static environment there is no difference in depth of search based on the number of candidates. But there is some difference in the comparability of search, as subjects facing only two candidates in their party tended to search for comparable information across the candidates more than did subjects facing four candidates (t = 2.068, p < .05). Such results would be expected if those in the more difficult four-candidate condition limited their search to fewer than the full set of candidates. A similar pattern exists for comparability in the dynamic board, where again there is greater comparability in the easier two-candidate condition (t = 13.229, p < .01). But although these subjects engaged in more comparable search, they searched less deeply than did those in the four-candidate dynamic condition (t = -2.679, p < .01). Facing fewer candidates makes direct comparison easier while less information is needed

Table 1. Search Strategy Components

                                   Static environment                       Dynamic environment
                           Two candidates  Four candidates     t      Two candidates  Four candidates     t
Total number of items       35.00 (15.69)   30.96 (18.36)    1.070     74.56 (27.81)   72.08 (25.02)    0.893
  examined                     n = 29          n = 28                     n = 395         n = 204
Depth of search             49.00 (21.95)   46.40 (27.70)    0.633     46.83 (19.19)   50.39 (17.95)   -2.198**
                               n = 29          n = 28                     n = 395         n = 204
Comparability of search     56.47 (35.77)   38.96 (26.99)    2.068**   57.93 (13.72)   42.85 (12.07)   13.229***
                               n = 28          n = 28                     n = 392         n = 203

Note. Table entries are means; standard deviations are in parentheses. Scale for measures is 0 to 100. **p < .05, ***p < .01.


to differentiate between preferred and nonpreferred options. Thus, there is initial evidence that the decision environment has noticeable effects on search strategies.

No standard exists for how deep or comparable search must be to be rational. Although these measures do indicate the specific decision rule pursued (Lau, 2003; Payne et al., 1993), the actual cut points are arbitrary. For this analysis, the search measures are each stratified at the median so as to place subjects into compensatory, noncompensatory, or nonstructured search. The process is an "anding" one; for example, to be placed into the compensatory rule, a subject must evidence greater depth AND higher comparability, whereas the noncompensatory rule requires shallower search AND lower comparability. With two dichotomous dimensions, there are of course 2 × 2 = 4 different possibilities. However, only two have been identified in the behavioral decision theory literature as structured strategies. The other two (deep search of limited comparability and shallow search of high comparability) are considered unstructured search (see footnote 1). Overall, a structured strategy can be identified for just over 50% (29) of subjects in the static information board condition and 66.8% (400) in the dynamic information board condition.

Hypotheses 1 and 2 suggest conditions under which information search strategies may be used. Hypothesis 1 suggests that voters in complicated decision environments are more likely to adopt a structured search strategy than those facing simpler decisions. This is easily tested by examining the cross-tabulation between structured search and the two manipulations of difficulty: the static/dynamic and two-candidate/four-candidate manipulations. As Table 2 shows, and as expected, structured search is greater in the dynamic environment's two-candidate condition, relative to the static environment (73.6% vs. 41.4%; χ2(1) = 12.714, p < .01, one-tailed).
Yet the pattern does not hold for the four-candidate condition, where no significant difference exists (55.4% vs. 60.7%; χ2(1) = 0.283, n.s.). Turning to results within each information board environment, the findings are again mixed.

Table 2. Structured Information Search

                            Static environment               Dynamic environment
Search type            Two candidates  Four candidates   Two candidates  Four candidates
Structured search        41.4% (12)      60.7% (17)        73.6% (287)     55.4% (113)
  Compensatory           27.6% (8)       25.0% (7)         42.8% (167)     16.7% (34)
  Noncompensatory        13.8% (4)       35.7% (10)        30.8% (120)     38.7% (79)
Unstructured search      58.6% (17)      39.3% (11)        27.3% (108)     44.6% (91)
Total n                  29              28                395             204

Note. Structured versus unstructured search, one-tailed tests: static two-candidate versus dynamic two-candidate, χ2(1) = 12.714, p < .01; static four-candidate versus dynamic four-candidate, χ2(1) = 0.283, n.s.; within static environment, χ2(1) = 2.131, p < .10; within dynamic environment, χ2(1) = 18.077, p < .01.
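The median-split classification behind Table 2 can be sketched as follows. This is an illustrative reconstruction of the rule as described in the text (above the median on both dimensions = compensatory; below on both = noncompensatory; mixed = unstructured), with invented names and toy scores:

```python
# Sketch of the median-split ("anding") classification of search strategies.
from statistics import median

def classify(subjects):
    """subjects: list of (depth, comparability) scores; returns rule labels."""
    d_med = median(d for d, _ in subjects)
    c_med = median(c for _, c in subjects)
    labels = []
    for d, c in subjects:
        if d > d_med and c > c_med:
            labels.append("compensatory")      # deep AND comparable search
        elif d <= d_med and c <= c_med:
            labels.append("noncompensatory")   # shallow AND uneven search
        else:
            labels.append("unstructured")      # mixed pattern (see footnote 1)
    return labels

scores = [(80, 90), (20, 10), (75, 15), (30, 85)]
print(classify(scores))
```

Note that exactly how ties at the median are assigned is an assumption here; the paper does not specify its tie-breaking rule.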

Table 3. Factors Influencing the Use of Compensatory Search

Election characteristics
  Number of candidates (1 = four)    -1.370*** (.302)
  Information board (1 = dynamic)    -0.407 (.542)
Voter characteristics
  Political expertise (1 = expert)    0.225 (.262)
  Education                           0.343*** (.084)
  Age in years                       -0.056*** (.008)

Note. Table entries are logistic regression coefficients; standard errors are in parentheses. Dependent variable coded 1 = compensatory search. Model includes dummy variables to control for each separate experiment and a constant, not shown. n = 421, χ2(9) = 157.187 (p < .001); pseudo-R2 = .312. ***p < .01.

In the static environment, there is more structured search in the four-candidate condition (60.7% vs. 41.4%; χ2(1) = 2.131, p < .10, one-tailed). But in the dynamic environment, the number of candidates has the opposite effect, as those in the more difficult four-candidate condition are less likely to engage in structured search (55.4% vs. 73.6%; χ2(1) = 18.077, p < .01, one-tailed). Support for Hypothesis 1 is mixed at best.

Hypothesis 2 considers the conditions under which compensatory or noncompensatory rules are used. Because compensatory rules are cognitively taxing, they should be used when the environment is easily managed or when sufficient cognitive resources and motivation to make the necessary comparisons exist. On the other hand, when the environment is complex, the use of a simplifying noncompensatory rule should be more likely. Table 2 provides some evidence in support of this. In both the static and dynamic environments, subjects facing four candidates were more likely to use a noncompensatory rule. In addition, subjects in the more demanding dynamic environment appeared less likely to use a compensatory rule than those in the easily managed static environment.

To explore this further, I developed a logistic regression model predicting use of a compensatory rule. The model includes the static/dynamic environment and the number of candidates as indicators of difficulty. Individual cognitive capacity is measured by age and education, whereas political expertise is included to test for domain-specific knowledge. The results are presented in Table 3. The model is reasonably strong, correctly classifying more than 76% of cases. In support of Hypothesis 2, both measures of the difficulty of the information environment are in the expected direction, although only the two-candidate/four-candidate manipulation is significant. The small number of cases (29) in the static condition probably has some effect here.
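As an interpretive aside (not part of the original analysis), coefficients like those in Table 3 can be read through the logistic response function. The sketch below uses the reported coefficients but assumes an intercept of zero, since the model's constant and experiment dummies are not reported; the resulting probabilities are therefore only illustrative of the direction and rough size of the effect:

```python
# Sketch: converting Table 3's logistic coefficients to probabilities.
# The intercept b0 = 0.0 is an ASSUMPTION (the constant is not reported).
import math

def predicted_prob(intercept, coefs, x):
    """Logistic response: P(compensatory) = 1 / (1 + exp(-(b0 + b'x)))."""
    logit = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-logit))

b0 = 0.0                                         # assumed, not reported
coefs = [-1.370, -0.407, 0.225, 0.343, -0.056]   # Table 3, in row order

# Moving from two to four candidates (first covariate 0 -> 1),
# other covariates held at zero for illustration:
p_two = predicted_prob(b0, coefs, [0, 0, 0, 0, 0])
p_four = predicted_prob(b0, coefs, [1, 0, 0, 0, 0])
print(round(p_two, 3), round(p_four, 3))
```

Whatever the true intercept, a coefficient of -1.370 cuts the odds of compensatory search by a factor of about exp(-1.370) ≈ 0.25 when moving from two to four candidates, which is the substantive point of Table 3.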
But in general, when the information environment is more difficult, voters adopt a simplifying search strategy. Interestingly, political expertise has no effect; experts are no more likely than non-experts to use a compensatory strategy. But both age and education are significant. Older people are less likely to use compensatory strategies, which makes sense given the tendency for cognitive function to decrease with age (Riggle & Johnson, 1996). Education also predicts the use of compensatory search, with better-educated voters making greater use of such search under all conditions.8

Hypothesis 3 argues that the use of a particular information search strategy has politically relevant effects. Voters who use a compensatory rule should rate their preferred candidate somewhat lower and rate their rejected candidates somewhat higher than those who use a noncompensatory rule. Two repeated-measures analysis of variance models were constructed, with the feeling thermometer for the chosen candidate and the mean feeling thermometer for all rejected candidates as the dependent variables.9 One model tested the dynamic information board condition; the other tested the static information board. Entered into the models were the type of decision rule used, the number-of-candidates manipulation, subject expertise, and, to adjust for differences in the use of the feeling thermometer scale (Brady, 1985), the mean feeling thermometer rating for all (real) politicians evaluated in the initial political attitudes questionnaire. Political expertise is used, rather than education, because the specific task is rating candidates, which should implicate the components of expertise (political knowledge, interest, involvement).10

The resulting models show support for the hypothesis. The expected pattern is found in the static environment, as subjects using a compensatory decision rule rate their preferred candidate lower and their rejected candidates higher. However, the small number of cases again means that the difference does not reach statistical significance. In the dynamic environment, significant effects for decision rules on candidate evaluation are found (F = 8.770, p < .01), but the effects are limited to rejected candidates. As Figure 1 shows, the rule used makes no difference in evaluation of the preferred candidate.
However, in evaluating rejected candidates, subjects using a compensatory rule rate them more than 6 points higher (on the 101-point scale) than those using noncompensatory rules (b = 6.099, t = -10.231, p < .001). Taken together, these results support the notion that as voters learn more about candidates, they may find more to like about their less preferred options. Those who choose to focus mostly on their preferred candidate never learn much to like about other choices. But as they learn more about their preferred choice, they either do not learn disliked information or they fail to take into account any negatives that they do learn.11
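The two families of rules discussed above can be made concrete with a toy sketch (hypothetical candidates, utilities, weights, and cutoffs, chosen purely for illustration). A compensatory weighted-additive rule lets a weak attribute be offset by strong ones, while a noncompensatory elimination-by-aspects rule discards any candidate that fails a single cutoff, so the two rules can reach different choices from the same information:

```python
# Hypothetical utility a voter assigns each candidate on each of three attributes.
candidates = {
    "Candidate A": [0.95, 0.95, 0.20],  # strong overall, but weak on attribute 3
    "Candidate B": [0.70, 0.70, 0.60],  # moderate everywhere
}

def weighted_additive(candidates, weights):
    """Compensatory rule: strengths on one attribute offset weaknesses on another."""
    scores = {name: sum(w * u for w, u in zip(weights, utils))
              for name, utils in candidates.items()}
    return max(scores, key=scores.get)

def elimination_by_aspects(candidates, cutoffs):
    """Noncompensatory rule: drop any candidate below the cutoff, attribute by attribute."""
    survivors = dict(candidates)
    for i, cut in enumerate(cutoffs):
        passing = {name: u for name, u in survivors.items() if u[i] >= cut}
        if passing:                 # never eliminate everyone
            survivors = passing
        if len(survivors) == 1:
            break
    return next(iter(survivors))

print(weighted_additive(candidates, [1, 1, 1]))        # Candidate A
print(elimination_by_aspects(candidates, [0.5] * 3))   # Candidate B
```

With equal weights, the compensatory rule chooses Candidate A, whose one weakness is outweighed elsewhere; elimination-by-aspects rejects A at that same weak attribute and settles on B. This is the mechanism behind the evaluation differences reported here: the noncompensatory voter simply never weighs A's offsetting strengths.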
8. Interaction terms for both education and expertise with the difficulty manipulations were included in an initial model but were neither significant nor substantial and were dropped from the final model.
9. In the two-candidate condition there is only one rejected candidate, and that candidate's feeling thermometer is used. In the four-candidate condition there are three rejected candidates, however, so the mean of those three individual evaluations is taken to represent evaluation of the rejected candidates.
10. Education was initially entered into the analysis, along with age and gender. No effects were found for any of these controls and they were dropped.
11. Recent research on the role of affect in decision-making (Lodge & Taber, 2000; Redlawsk, 2002) may support this latter possibility. Studies find that encountering negative information about a liked alternative may result in an increased preference for the alternative; new negative information may not generate accurate preference updating.

[Figure 1 appears here: two bar-chart panels, "Static Information Board" and "Dynamic Information Board," plotting mean feeling thermometer (FT) ratings (scale roughly 40-90) of the chosen candidate and the rejected candidates for compensatory versus noncompensatory rule users.]
Figure 1. Effects of decision rules on candidate evaluation. For static information board, n = 26; for dynamic information board, n = 350. Differences in static information board are not statistically significant (chosen, t = 0.615, n.s.; rejected, t = -1.263, n.s.). Differences in dynamic information board are significant for rejected candidates (t = -10.231, p < .001) and not significant for the chosen candidate (t = -1.106, n.s.).


Discussion

Two different process-tracing methodologies were used to capture information search as it occurred. Using these two different approaches allows the decision process to be examined in both a perfect-world analog, where information is easily obtainable, and a real-world analog, where information flows in a chaotic and uncontrollable manner. The resulting process information makes visible the search strategies voters use and the rules these strategies imply.

What is found is instructive. Identifiable decision rules can indeed be seen in most subjects, especially in the more realistic dynamic campaign environment. Some people appear to follow the normative prescription that all (or most) attributes for all candidates should be examined to make the best decision. Compensatory rules are particularly evident when the information environment is relatively simple and the decision is relatively easy. On the other hand, when the environment is complicated, search strategies often adjust as voters use noncompensatory rules to make sense of the world by simplifying it.

These simplifying strategies have implications for candidate evaluation. Those engaged in deep, highly comparable information search may find that they learn things they do not like about their preferred candidate and things they do like about others. Learning such information may make the decision environment even more difficult to manage. It certainly is easier to just assume that one will like a preferred candidate's position on all issues without bothering to look. But when voters do look at other options, they revise their evaluations accordingly, which in some circumstances may well increase ambivalence, as evaluations between preferred and less preferred candidates become less distinct.

Studying information search and acquisition may seem somewhat esoteric. Yet ultimately campaigns are about information, as candidates try to get voters to pay attention and learn something.
How voters actually go about learning something is of importance to the decisions they make in the voting booth. All political science models of the vote assume the acquisition of information, but none specify how variations in information acquisition might well implicate the vote itself. As a first step, this paper has shown that there are indeed variations in how voters acquire information and, more important, that those differences are a function of both the way in which information is structured (Rahn, 1995) and the complexity of the decision task. Only when the task is simple do voters come close to the normative rational prescription of full information search. But in what would seem to be a more realistic campaign environment, simplifying noncompensatory rules come to the fore, and voters do not learn the same amount of information about all candidates.

Many years ago Simon (1956, 1957) argued that the use of simplifying strategies, although not optimizing, results in decisions that are good enough most of the time. The question remaining is whether the same is true for voters who fail to learn everything about everyone, which is to say, of course, most voters. It is clear that information search matters, but more research is needed to establish exactly how information learned becomes a decision made.
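Simon's "good enough" idea can be made concrete with a minimal satisficing sketch (the candidates, evaluations, and aspiration level below are hypothetical): search stops at the first alternative that clears the decision-maker's aspiration level, rather than continuing until the globally best option is found.

```python
def satisfice(alternatives, aspiration):
    """Return the first alternative whose evaluation clears the aspiration level.

    Search order matters: later, possibly better alternatives are never examined,
    which is what makes the rule cheap -- and only "good enough"."""
    best = None
    for name, value in alternatives:
        if value >= aspiration:
            return name
        if best is None or value > best[1]:
            best = (name, value)
    return best[0]  # nothing cleared the bar: fall back on the best seen so far

# Hypothetical evaluations, encountered in campaign order.
encountered = [("Candidate A", 0.55), ("Candidate B", 0.70), ("Candidate C", 0.90)]
print(satisfice(encountered, 0.6))  # stops at Candidate B; C is never examined
```

The sketch shows why the strategy is usually adequate but not optimal: with an aspiration of 0.6 the voter settles on Candidate B without ever learning that Candidate C scores higher.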


ACKNOWLEDGMENTS

Data collection was partially supported by National Science Foundation grants SBR-9411162 and SBR-9221236. I gratefully acknowledge the support of the Obermann Center for Advanced Study at the University of Iowa, and the helpful comments of Rick Lau, Jason Humphrey, Joanne Miller, Jamie Druckman, and participants in the second Minnesota Symposium on Political Psychology: Campaigns and Elections, 7-9 November 2002.

Correspondence concerning this article should be sent to David P. Redlawsk, Department of Political Science, University of Iowa, Iowa City, IA 52242. E-mail: david-redlawsk@uiowa.edu

REFERENCES
Brady, H. E. (1985). The perils of survey research: Inter-personally incomparable responses. Political Methodology, 11, 269-291.
Dawes, R. M. (1988). Rational choice in an uncertain world. New York: Harcourt Brace Jovanovich.
Ford, J. K., Schmitt, N., Schechtman, S. L., Hults, B. M., & Doherty, M. L. (1989). Process tracing methods: Contributions, problems, and neglected research questions. Organizational Behavior and Human Decision Processes, 43, 75-117.
Herstein, J. A. (1981). Keeping the voter's limits in mind: A cognitive process analysis of decision-making in voting. Journal of Personality and Social Psychology, 40, 843-861.
Hogarth, R. M. (1987). Judgment and choice (2nd ed.). New York: Wiley.
Huang, L.-N. (2000). Examining candidate information search processes: The impact of processing goals and sophistication. Journal of Communication, 50, 93-114.
Huang, L.-N., & Price, V. (2001). Motivations, goals, information search, and memory about political candidates. Political Psychology, 22, 665-692.
Jacoby, J., Chestnut, R. W., Weigl, K. C., & Fischer, W. (1976). Pre-purchasing information acquisition: Description of a process methodology, research paradigm, and pilot investigation. Advances in Consumer Research, 5, 546-554.
Lau, R. R. (1995). Information search during an election campaign: Introducing a process-tracing methodology for political scientists. In M. Lodge & K. M. McGraw (Eds.), Political judgment: Structure and process (pp. 179-206). Ann Arbor, MI: University of Michigan Press.
Lau, R. R. (2003). Models of decision-making. In D. O. Sears, L. Huddy, & R. L. Jervis (Eds.), Oxford handbook of political psychology (pp. 19-59). New York: Oxford University Press.
Lau, R. R., & Redlawsk, D. P. (1997). Voting correctly. American Political Science Review, 91, 585-599.
Lau, R. R., & Redlawsk, D. P. (2001a). An experimental study of information search, memory, and decision-making during a political campaign. In J. Kuklinski (Ed.), Citizens and politics: Perspectives from political psychology (pp. 136-159). Cambridge: Cambridge University Press.
Lau, R. R., & Redlawsk, D. P. (2001b). Advantages and disadvantages of cognitive heuristics in political decision-making. American Journal of Political Science, 45, 951-971.
Lodge, M., & Taber, C. S. (2000). Three steps toward a theory of motivated political reasoning. In A. Lupia, M. McCubbins, & S. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality (pp. 183-213). New York: Cambridge University Press.
Markus, G. B., & Converse, P. E. (1979). A dynamic simultaneous equation model of electoral choice. American Political Science Review, 73, 1055-1070.
Mintz, A., Geva, N., Redd, S. B., & Carnes, A. (1997). The effect of dynamic and static choice sets on political decision-making: An analysis using the decision board platform. American Political Science Review, 91, 553-566.


Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision-maker. New York: Cambridge University Press.
Rahn, W. M. (1995). Candidate evaluation in complex information environments: Cognitive organization and comparison process. In M. Lodge & K. M. McGraw (Eds.), Political judgment: Structure and process (pp. 43-64). Ann Arbor, MI: University of Michigan Press.
Redlawsk, D. P. (2001). You must remember this: A test of the online model. Journal of Politics, 63, 29-58.
Redlawsk, D. P. (2002). Hot cognition or cool consideration? Testing the effects of motivated reasoning on political decision-making. Journal of Politics, 64, 1021-1044.
Riggle, E. D. B., & Johnson, M. M. S. (1996). Age differences in political decision-making: Strategies for evaluating political candidates. Political Behavior, 18, 99-118.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.
Simon, H. A. (1957). Models of man. New York: Wiley.
Stroh, P. K. (1995). Voters as pragmatic cognitive misers: The accuracy-effort trade-off in the candidate evaluation process. In M. Lodge & K. M. McGraw (Eds.), Political judgment: Structure and process (pp. 207-228). Ann Arbor, MI: University of Michigan Press.
Taylor, S. E. (1981). The interface of cognitive and social psychology. In J. Harvey (Ed.), Cognition, social behavior, and the environment (pp. 189-210). Hillsdale, NJ: Erlbaum.
