
Int J of Sci and Math Educ

DOI 10.1007/s10763-016-9733-y

Teachers' Roles, Students' Personalities, Inquiry Learning Outcomes, and Practices of Science and Engineering: The Development and Validation of the McGill Attainment Value for Inquiry Engagement Survey in STEM Disciplines

Ahmed Ibrahim 1,2 · Bruce M. Shore 1 · Mark W. Aulls 1

Received: 31 December 2015 / Accepted: 18 February 2016


© Ministry of Science and Technology, Taiwan 2016

Abstract Inquiry engagement is a newly defined construct that represents participation in carrying out practices of science and engineering to achieve learning outcomes; it is influenced by learners' personalities and teachers' roles. Expectancy-value theory posits that attainment values are important components of task values that, in turn, directly influence students' achievement-related choices and performance. The current paper developed and validated the McGill Attainment Value for Inquiry Engagement Survey (MAVIES) with undergraduate students in STEM disciplines. The MAVIES is a 67-item, learner-focused survey that addresses four components that are theoretically important for engaging in scientific inquiry: (a) teachers' roles, (b) students' personalities, (c) inquiry learning outcomes, and (d) practices of science and engineering. Evidence for internal consistency and for construct, content, and criterion validity was obtained from 85 undergraduates who had experience with scientific inquiry in diverse STEM fields. Confirmatory factor analyses confirmed factors that were consistent with role theory, the Big Five personality traits, revised Bloom's learning outcomes, and the Next Generation Science Standards. The MAVIES is a reliable and valid instrument for measuring undergraduate students' attainment values for scientific inquiry engagement in STEM disciplines.

Keywords: Attainment value · Inquiry engagement · NGSS · Practices of science and engineering · Scientific inquiry · STEM

* Ahmed Ibrahim
ahmed.ibrahim@ucr.edu

1 Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada

2 Department of Psychology, University of California, Riverside, USA


The word inquiry used to be standard terminology in science standards (American Association for the Advancement of Science, 1993; National Research Council [NRC], 1996, 2000). Inquiry was used almost exclusively to describe science, scientific activity, and scientific pedagogy. It was described as a "rich and complex process that is at the heart of social and scientific achievement" (Aulls & Shore, 2008, p. viii). To elucidate this concept, several authors and organizations proposed definitions, conceptualizations, frameworks, and policy documents about the meaning of inquiry (Barrow, 2006). However, the different definitions led to more confusion than agreement. Hanauer, Hatfull, and Jacobs-Sera (2009) wrote, "while there is a broad agreement over the potential significance of scientific inquiry for science education, the definition of scientific inquiry has to a certain extent been quite elusive and on a practical level difficult to implement" (p. 12). The goal of these definitions and conceptualizations was to delineate what Hanauer et al. (2009) called the theoretical construct of "the components of scientific inquiry" (p. 11). Despite different attempts to define and describe inquiry, delineating a set of components of scientific inquiry has not led to expert consensus, but rather to divergent pedagogic perspectives (Abd-El-Khalick et al., 2004), to the tendency to "reduce scientific practice to a single set of procedures" (NRC, 2012, p. 43) over other aspects of science, and to the teaching of experimental procedures over the teaching of scientific concepts (Millar & Driver, 1987). In 2012, the NRC proposed a remedy by putting forward a conceptual framework with three dimensions for science education: (a) Practices of Science and Engineering (PSEs), (b) Disciplinary Core Ideas (DCIs), and (c) Crosscutting Concepts
(CCCs). The NRC document emphasizes the concept of engagement in inquiry and
engagement in practices of science without giving a specific definition for what
exactly is meant by engagement. Indeed, although engagement is essential to inquiry,
it has also been a very widely misused and overgeneralized construct in the educational
and psychological literature (Azevedo, 2015). Research on engagement has primarily
considered it from the motivational, behavioral, and emotional perspectives and, to a
lesser extent, from a cognitive perspective (Chi & Wylie, 2014). From a motivational perspective, engagement can refer to different things depending on theoretical stance (Greeno, Collins, & Resnick, 1996) and, thus, can refer to: (a) "a decision based on expected utilities of outcomes" (p. 24) from the behaviorist view, (b) "a person's intrinsic interest in a domain of cognitive activity" (p. 25) from the cognitivist view, or (c) participation in "communities where learning is valued" (p. 26) from the situative view.
From a behavioral view, engagement refers to participation and attendance, and from an
emotional perspective, engagement refers to reactions to the school environment (Chi &
Wylie, 2014). Cognitive engagement refers to thoughtfulness and willingness to exert
effort to understand concepts and learn skills (Blumenfeld, Kempler, & Krajcik, 2006;
Chi & Wylie, 2014; Fredricks, Blumenfeld, & Paris, 2004).
In this paper, we introduce and define inquiry engagement as a multidimensional
construct that involves affective, behavioral, cognitive, epistemic, motivational,
metacognitive, personal, and social dimensions. Adding engagement to inquiry
highlights the dynamic nature of inquiry through the participation in carrying out
scientific and engineering practices. To simplify the construct and operationalize it,
we define inquiry engagement as the participation in carrying out the practices of
science and engineering to understand disciplinary knowledge. This participation
includes the learner as well as the mentor or teacher who can take different roles in


scaffolding learning and motivating the learner. Accordingly, in this paper, we defined four dimensions for inquiry engagement: (a) teachers' roles, (b) students' personalities, (c) practices of science and engineering, and (d) inquiry learning outcomes. We found that engagement in inquiry was previously used (Peterson, 2012), however, with the emphasis on inquiry phases and the inquiry cycle, which has been
widely criticized (e.g. Moeed, 2013; Rudolph, 2005; Talanquer, Tomanek, &
Novodvorsky, 2013; Tang, Coffey, Elby, & Levin, 2010; Windschitl, 2004), and
without including the student or the teacher as part of the engagement in inquiry.
The objective of this paper is to develop and validate an instrument that measures
students' assigned importance to the dimensions of inquiry engagement through the
participation in carrying out the practices of science and engineering to understand
disciplinary knowledge.

Attainment Value for Inquiry Engagement


Expectancy-value theory (Wigfield & Eccles, 2000) postulates four major values for a task: (a) attainment value or importance, (b) intrinsic value, (c) utility value or usefulness, and (d) cost. Attainment value, from the expectancy-value model of Eccles et al. (1983), is the "value an activity has because engaging in it is consistent with one's self-image" (Eccles, 2005, p. 109). Tasks are important when individuals view them as "central to their own sense of themselves, or allow them to express or confirm important aspects of self" (Wigfield & Cambria, 2010, p. 44). Intrinsic value is the enjoyment one gains from doing a task, utility value or usefulness refers to how a task fits into an individual's future plans, and cost refers to what the individual has to give up to do a task (Wigfield, Tonks, & Klauda, 2009, p. 58). Eccles, O'Neill, and Wigfield (2005) showed that "one can create reliable and valid measures of ability self-perceptions and subjective task values in various domains and that these beliefs predict subsequent achievement-related behaviors" (p. 246), and Wigfield et al. (2009) reviewed a number of instruments that were developed to measure ability beliefs, expectancies for success, and subjective values in different academic and nonacademic domains (Eccles et al., 1983; Eccles, Wigfield, Harold, & Blumenfeld, 1993; Wigfield & Eccles, 2000). According to expectancy-value theory (EVT), the subjective importance (also referred to as attainment value; AV) assigned to tasks is a main component of subjective task values, which have a direct effect on achievement-related choices and performance (Wigfield, Tonks, & Klauda, 2009). For example, Battle and Wigfield (2003) found that AV positively predicted college students' intentions to attend graduate school. Cole, Bergin, and Whittaker (2008) showed that AV significantly predicted test-taking effort and performance, and Johnson and Sinatra (2013) found that AV contributed to perceived engagement and conceptual change.
In the context of scientific inquiry, measuring students' attainment values for inquiry engagement is a new contribution. Several instruments exist to assess different aspects of scientific inquiry. For example, instruments have measured inquiry capabilities (Zachos, 2005), inquiry skills (Wenning, 2007), perceptions of inquiry (Campbell, Abd-Hamid, & Chapman, 2010), strategic demands of inquiry (Shore, Chichekian, Syer, Aulls, & Frederiksen, 2012), and understanding of inquiry (Talanquer et al., 2013). However, a new instrument that measures STEM undergraduate students' attainment value for inquiry


engagement through the four dimensions (to be explained in more detail in the next sections), and especially with an emphasis on the practices of science and engineering, is timely and needed.
Teachers' Roles
The concept of role is one of the most popular ideas in the social sciences and can be defined as "characteristic behavior patterns" (Biddle, 1986, p. 67). Role theory deals with "the organization of social behavior at both the individual and the collective levels" (Turner, 2001, p. 233). In inquiry- and learner-centered settings, teachers (and learners) take on different and diverse roles (Aulls, Kaur Magon, & Shore, 2015; Crawford, 2000; Tudor, 1993; Walker & Shore, 2015). The teacher's roles of motivator and encourager were among those identified as common between effective instructors and effective inquiry instructors, whereas the teacher's role as a model was found only among effective inquiry instructors (Aulls & Ibrahim, 2012).
Students' Personalities
Engaging in science involves being inventive, creative, systematic, reflective, sharing,
and collaborating (Aulls & Shore, 2008). The widely cited Big Five personality traits
theory contains five broad dimensions (Openness, Conscientiousness, Extraversion,
Agreeableness, and Neuroticism) that can be used to describe the human personality
(Goldberg, 1993). In a meta-analytic study with samples totaling over 70,000, mainly
in tertiary education, Poropat (2009) found that academic performance was significantly
correlated with agreeableness, conscientiousness, and openness. In the current study, we
focused on openness, conscientiousness, and extraversion as student personalities (SPs)
that are important to assess how much they are valued when engaging in scientific
inquiry.
Openness to experience refers to engagement in idea-related behavior such as
intellectuality, creativity, unconventionality, and innovation (Ashton & Lee, 2001,
2007), propensity for innovation and astuteness in solving problems (Buss, 1991,
1996), permeability, breadth, and depth of consciousness, the need to enlarge and
examine experience (McCrae & Costa, 1996, 1997), intellectuality and creativity (van
Lieshout, 2000), creativity and unusual thinking (Nettle, 2006), imagination (McCrae,
1993), and intrinsically motivated curiosity and creativity (MacDonald, 1995, 1998).
Openness to experience is strongly related to scientific creativity (Simonton, 2004).
Conscientiousness refers to engagement in task-related behaviors such as being
organized and disciplined (Ashton & Lee, 2001, 2007) and adherence to plans,
schedules, and requirements (McCrae & Costa, 1996, 1997). Komarraju, Karau,
Schmeck, and Avdic (2011) found that conscientiousness and agreeableness were
positively related with four learning styles (synthesis and analysis, methodical study,
fact retention, and elaborative processing), in a sample of 308 undergraduates.
Extraversion refers to engagement in socially related behaviors, such as expressiveness and sociability (Ashton & Lee, 2001) and is considered important for engagement
in authentic scientific inquiry. Sharing, collaborating, and contributing are essential
components of constructing scientific arguments and engaging in a scientific community, as shown by ethnographic and historical studies of laboratory groups and


research domains (Collins & Pinch, 1993; Latour & Woolgar, 1986; Mody, 2015; NRC,
2012; Pickering, 1995).
Inquiry-Learning Outcomes
Inquiry-learning outcomes (ILOs) include skills, dispositions, motivation, and development
of expertise (Saunders-Stewart, Gyles, & Shore, 2012; Shore et al., 2009). Other learning
outcomes that could result from engagement in scientific inquiry are related to educational
objectives such as comprehension, analysis, applying knowledge, evaluation, and synthesis. Saunders-Stewart et al. (2012) distilled from the literature a list of 23 student outcomes of participation in inquiry: three addressed different kinds of understanding, one made application of knowledge explicit, two specifically included self-regulation and metacognition (although the word evaluation was not used), and another mentioned authentic products (related to synthesis). Using principal components analysis with the 23 items, Saunders-Stewart, Gyles, Shore, and Bracewell (2015) identified learning competencies, motivation, student roles, and teacher roles as four components underlying these 23 outcomes, supporting our present focus on instructors' roles, student personalities, and inquiry outcomes.
Practices of Science and Engineering
Using the idea of practices represents a major turn toward ontology in science (Woolgar &
Lezaun, 2013). The Next Generation Science Standards (NGSS) emphasized that this
approach is considered a major improvement over previous views of describing science
(NRC, 2012). The NRC proposed eight Practices of Science and Engineering (PSE; Next
Generation Science Standards Lead States [NGSS], 2013; NRC, 2012), which were: (a)
asking questions (for science) and defining problems (for engineering), (b) developing and
using models, (c) planning and carrying out investigations, (d) analyzing and interpreting
data, (e) using mathematics and computational thinking, (f) constructing explanations (for
science) and designing solutions (for engineering), (g) engaging in argument from evidence, and (h) obtaining, evaluating, and communicating information.
Objectives
Based on the above review, our specific goals were to: (a) develop a learner-focused
survey instrument that measures STEM undergraduate students' attainment values for
inquiry engagement, and (b) validate this instrument based on theoretical and conceptual
grounds by examining its construct, content, and criterion validity.

Method
Item Development of the McGill Attainment Value for Inquiry Engagement
Survey (MAVIES) in STEM
We searched the literature for instruments and items that could provide an item pool for
a survey we intended to create to measure STEM students' self-assessment of how


much they value (or assign importance to) different components of inquiry engagement. For example, Pedaste et al. (2015) listed 32 articles describing instruments and
frameworks that contained and described several items and components potentially
useful to address different components of the instrument we intended to create.
We decided to use the McGill Strategic Demands of Inquiry Questionnaire (MSDIQ;
Shore et al., 2012) as a basis for constructing the item pool for the survey of STEM
students' attainment values for four components of scientific inquiry. These components were (a) teachers' roles, (b) students' personalities, (c) learning outcomes, and (d)
practices of science and engineering. The MSDIQ is a 79-item, criterion-referenced,
learner-focused questionnaire that addressed the strategic demands of inquiry, in order "to emphasize the presence of relevant knowledge, skills, or predispositions that are deemed curricular imperatives in the inquiry literature" (p. 319). We chose the MSDIQ
because it contained relevant and sufficient items that represented each of the four
components that we sought to measure. More specifically, it contained items addressing
teachers' roles, students' personalities, learning outcomes, and practices of science and
engineering. We acknowledge that the MSDIQ instrument did not address all the PSE in
the NGSS; however, the MSDIQ represented a rich repertoire of items that addressed six
of the eight PSE in the NGSS (see Tables 1, 2, 3, and 4).
An expert panel reviewed the MSDIQ items to assess the adequacy of the items as a
basis for a survey of STEM students' attainment values of the teachers' roles, students' personalities, inquiry-learning outcomes, and PSE in the context of scientific inquiry. Twelve items (including two distractors) were excluded from the analysis of the
original 79-item MSDIQ instrument because they did not discriminate among students.
After confirming the relevance of the remaining 67 items, the survey was constructed and
administered with an 11-point Likert-type scale, as in the original MSDIQ. To make sure that the students answered the questions in the way we intended (i.e. to rate the importance of engaging in each item in inquiry-based learning and teaching), we stated specifically: "Engaging in an inquiry task has several possible elements. We would like to know how you rate the importance of the following items. Each item is prefaced by the question: how important is it in inquiry-based learning and teaching to …" Thus, students answered the questions in reference to engagement in inquiry-based learning and teaching. Also, because inquiry has several meanings, we highlighted to the respondents that the pedagogical approach known as inquiry
Table 1 Item loadings for teachers' roles (TRs)

Factor                       Item  How important is it in inquiry-based learning
                                   and teaching (for the teacher) to ...                  Loading

Encouraging and Motivating         Address his or her needs and students' needs             .66
                                   Tap into the students' and his or her interests          .57
                              18   Encourage honest criticism of ideas                      .48
                              19   Encourage creative risk-taking                           .33
Modeling and Mentoring             Provide a mentor                                         .95
                                   Model skills needed for the inquiry                      .62
                                   Give the amount of time needed, be flexible with time    .52


Table 2 Item loadings for students' personalities (SPs)

Factor              Item  How important is it in inquiry-based learning
                          and teaching (for the student) to ...                 Loading

Extraversion              Share the construction of the curriculum                .75
                          Share decision-making                                   .73
                          Have co-ownership of the questions                      .66
                      28  Share emotions, feelings, ideas, and opinions           .51
                      44  Assist others to make observations                      .43
Openness              42  Keep an open mind to change                             .86
                      50  Seek different viewpoints                               .54
                      25  Feel free to use imagination                            .45
                      59  Accept that more than one solution may be appropriate   .22
Conscientiousness     24  Have back-up plans at the end should the project stall  .94
                      23  Have different plans in advance to accomplish the task  .79
                      21  Set aside preparation time                              .50
                      10  Organize time and space                                 .28
has many possible meanings. Our intention was to make the students focus on the
tasks that constitute inquiry across several possible definitions of inquiry that they may
hold. In other words, we acknowledged the fact that students may have different
definitions and conceptions of inquiry, which is very plausible given that inquiry has
several definitions in the literature and by experts (NRC, 2012). Our focus was
to measure students' ratings of importance of the content of the items across the
Table 3 Item loadings for inquiry learning outcomes (ILOs)

Factor                        Item  How important is it in inquiry-based learning
                                    and teaching (for the student) to ...               Loading

Understanding conceptual        15  Understand key concepts                                .73
  knowledge                     20  Connect old and new knowledge                          .58
                                13  Make a concept map or web or cluster                   .40
Understanding metacognitive     40  Understand how preconceptions affect learning          .70
  knowledge                     41  Be aware of how the inquiry affects me personally      .70
                                43  Address doubts directly                                .40
Application                     39  Apply previous knowledge to new concepts               .72
                                    Extend inquiry beyond the classroom                    .66
                                60  Apply new knowledge to future experiences              .60
Evaluation                      66  Evaluate the inquiry experience                        .83
                                64  Reflect upon the inquiry experience                    .74
                                46  Value personal judgment                                .64
Creating                        53  Interact with or manipulate surroundings               .88
                                52  Construct new knowledge                                .73
                                67  Follow up the project with new questions               .41
Table 4 Item loadings for practices of science and engineering (PSE)

Factor                        Item  How important is it in inquiry-based learning
                                    and teaching (for the student) to ...               Loading

Obtain and evaluate             38  Separate relevant and irrelevant information           .69
  information                   37  Search the Internet and World Wide Web                 .67
                                36  Search for resources beyond textbooks                  .65
Define the problem              12  Divide the task into a coherent sequence of
                                    doable steps                                           .75
                                11  Understand the goal of the task                        .73
                                16  Understand instructions                                .59
                                26  Restate or reformat the problem                        .30
Plan investigation              17  Describe problem-solving strategies                    .86
                                22  Make a plan                                            .72
                                14  Foresee possible outcomes of the activity              .65
                                27  Make suggestions                                       .52
                                29  Develop expectations of what will happen next          .41
                                32  Identify where to obtain data                          .24
Carry out investigation         34  Record data                                            .94
                                35  Classify data                                          .90
                                31  Make careful observations                              .50
Analyze and interpret data      47  Verify data or information                             .78
                                45  Find patterns in data                                  .67
                                51  Test ideas and hypotheses                              .65
                                33  Recognize hidden meanings in data                      .60
Construct explanations and      63  Question the findings                                  .75
  engage in argument from       62  Explain results                                        .70
  evidence                      61  Record methods, results, and conclusions               .68
                                49  Anticipate and respond to arguments in opposition
                                    to one's views                                         .64
                                48  Compare data with someone else's                       .60
                                65  Discuss what has been learned compared to what
                                    was known before                                       .49
                                30  Offer hypotheses about outcomes                        .35
Communicate knowledge           56  Organize the presentation of the project               .83
                                55  Consider diverse means of communication                .71
                                54  Communicate one's learning with others                 .70
                                58  Use vocabulary appropriate to the audience and topic   .65
                                57  Present data in tables and graphs                      .63
variability of inquiry. For example, when students were asked to rate the importance of
understanding key concepts, the focus was on the rating of importance assigned to
understanding across multiple possible different views of the meaning of inquiry
because the different ratings (despite the possible different views of the meaning of


inquiry) would be related to the single domain of inquiry-based learning and teaching.
The main point was to make sure that students' responses were accurate in that they referred to a single domain of inquiry, not multiple domains, while acknowledging the possible variability in precision given students' different conceptions of inquiry. Acknowledging these facts and stating them in the survey strengthened the validity of the instrument. For a discussion of the difference between accuracy and precision, please refer to Bevington and Robinson (2003).
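As a purely hypothetical illustration (the toy ratings, the mean-score rule, and the function name below are assumptions, not procedures described in this paper), per-factor attainment-value scores and the internal consistency of a subscale could be computed from 11-point ratings like these as follows:

```python
import numpy as np

def cronbach_alpha(X):
    # Cronbach's alpha for a respondents-by-items ratings matrix
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# toy 11-point (0-10) importance ratings: 5 respondents x 3 items of one factor
ratings = np.array([
    [9, 8, 10],
    [7, 7, 6],
    [10, 9, 9],
    [5, 6, 5],
    [8, 8, 9],
], dtype=float)

factor_scores = ratings.mean(axis=1)   # one attainment-value score per respondent
alpha = cronbach_alpha(ratings)        # internal consistency of the subscale
print(factor_scores)
print(round(alpha, 3))  # 0.934
```

A mean over a factor's items is only one plausible scoring rule; sum scores or factor-score estimates from the CFA would be equally defensible.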
In our conceptualization of the MAVIES instrument, we envisaged an instrument
that could be used to assess how much students assign value (importance) to components of engagement in inquiry. The PSE in MAVIES are treated as separate and
independent activities that correspond to the PSE in the NGSS framework. We did not impose any sequence, in deference to critiques in the literature of stepwise, discrete, sequential, stage, or phase organizations of the different components involved in scientific inquiry. Sequences do not represent authentic inquiry, can distract students,
and can give them an unrealistic view about engaging in scientific inquiry or practices
of scientists and engineers (Moeed, 2013; Rudolph, 2005; Talanquer et al., 2013; Tang,
Coffey, Elby, & Levin, 2010; Windschitl, 2004). We combined constructing explanations and engaging in argument from evidence because we took a pluralistic
approach to explanation and reasoning (Lombrozo, 2009, 2010). This nonsequential
theoretical stance influenced our analytic steps in proposing and analyzing factors of
attainment values separately.
Instrument Administration
A typical-case sampling strategy (Kuzel, 1999; Patton, 1990) was used, which "highlights what is normal or average" and "increases confidence in conclusions" (Miles & Huberman, 1994, p. 28). We sought to include a large sample of undergraduates in as
many STEM disciplines as possible, so the instrument could be used with students
enrolled in any STEM field, not limited to a particular discipline or university year. We
treated the sample as a homogenous group representing undergraduate students in
STEM fields. The sample comprised 85 undergraduate students in architecture, biochemistry, biology, chemical engineering, chemistry, computer engineering, computer
science, electrical engineering, environmental science, materials engineering, mathematics, mechanical engineering, and physics. This sample size (N = 85) relative to the
size of the subscale with the maximum number of items (q = 7) exceeds the minimum
suggested value for the ratio of sample size (N) to number of parameters to be estimated
(q) (Jackson, 2003; Kline, 2011), because we treated each subscale as a separate instrument, assuming orthogonality between subscales on the theoretical grounds, discussed in the previous section, that the different components of inquiry engagement operate orthogonally in authentic, realistic engagement in scientific inquiry. Accordingly, the sample size was adequate and satisfied the N:q guideline within each subscale. Students in our sample indicated
that they had on average 4.1 years (SD = 2.6) experience in inquiry-based education,
ranging from 1 to 11 years; the median and mode were 5 years.
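The N:q check described above reduces to simple arithmetic; a sketch (q is taken here as the item count of the largest subscale, following the text, and the 10:1 threshold is only a commonly cited rule of thumb):

```python
# N:q rule-of-thumb check for the largest subscale (q interpreted as the
# number of items, following the text; other sources define q as the
# number of free parameters, which would give a smaller ratio).
N = 85   # undergraduates sampled
q = 7    # items in the subscale with the most items
ratio = N / q
print(round(ratio, 1))  # 12.1, above a commonly cited 10:1 minimum
```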
We used the Monte Carlo approach to determine the adequacy of the sample
size for statistical power (Brown, 2006). Because of our theoretical stance, we
treated all item clusters as belonging to single factors representing the four areas


of importance for engaging in scientific inquiry (teachers' roles, students' personalities, ILOs, and PSE). Each factorial model tested consisted of a single factor
and its underlying items. In the Monte Carlo population model, we used equal
loadings for the items on the single factor of each model tested. We used 10,000
replications and a random seed. The Monte Carlo power analysis with N = 85
supported that this sample size would give us 100 % power in each confirmatory
factor analysis (CFA) model tested.
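The population model behind each Monte Carlo replication can be sketched as follows (the loading value of .7 is illustrative; the paper specifies only equal loadings, 10,000 replications, and a random seed, and the full power analysis would additionally fit a CFA to each simulated sample):

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # a random seed, as in the paper's setup

n_items, loading, N = 7, 0.7, 85
lam = np.full((n_items, 1), loading)        # equal loadings on a single factor
theta = np.eye(n_items) * (1 - loading**2)  # uniquenesses giving unit item variances
sigma = lam @ lam.T + theta                 # model-implied covariance matrix

# one Monte Carlo replication: simulate N = 85 respondents from the population model
sample = rng.multivariate_normal(np.zeros(n_items), sigma, size=N)
print(sample.shape)  # (85, 7)
```

Power is then estimated as the proportion of replications in which the fitted model recovers the loadings as statistically significant.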
Establishing Construct Validity
Construct validity refers to "whether a given measure, or operational definition, actually assesses the underlying conceptual variable, or construct, that the measure is intended to represent" (Bryant, 2000, p. 111), or the "degree of agreement with theoretical expectations" (Knapp & Mueller, 2010, p. 340). The factors of each of the four clusters of the MAVIES
represented constructs that are strongly supported theoretically and conceptually.
Teachers' roles are supported in the inquiry literature (Aulls & Ibrahim, 2012; Saunders-Stewart, Gyles, Shore, & Bracewell, 2015; Walker & Shore, 2015) and by role theory (Turner, 2001) in social psychology. Students' personalities (openness, conscientiousness, and extraversion) correspond to three of the five factors of the Five Factor Model (FFM; Goldberg, 1993). Students' personalities are positive predictors of academic performance (Poropat, 2009). The inquiry learning outcomes (understanding conceptual knowledge, understanding metacognitive knowledge, application, evaluation, and creation) are based on the cognitive learning objectives in Bloom's taxonomy and its revision (Anderson et al., 2001; Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956; Krathwohl, 2002). Finally, the PSE are based on the NRC's (2012) NGSS conceptual
model and framework for science education.
Establishing Content Validity Using Confirmatory Factor Analysis
Content validity refers to expert "judgments of the representativeness of items with respect to the skills, knowledge, etc., domain to be covered" (Knapp & Mueller, 2010, p. 340). First, we used experts' judgments of the representativeness of MAVIES items for
each of the factors in its four components. Three experts in scientific inquiry, three
experts in higher education, and three experts in science education agreed 100 % that
the items of the instrument represented corresponding factors. Second, we used CFA
for the psychometric evaluation of the instrument and for validation (Brown, 2006).
CFA requires theoretical or conceptual frameworks for specifying the item-factor
relations (Brown, 2006). In the four MAVIES components, we relied on adequate
theories to specify the CFA models.
CFA Model Specification. Item-factor relations were identified conceptually, based on experts' judgments referring to relevant theories before CFA testing. We used role theory for teachers' roles, the FFM for students' personalities, Bloom's taxonomy of educational objectives (and its revision) for the ILOs, and the NRC (2012) framework for PSE. All factors in the models for the four components of the instrument
were considered latent rather than emergent. A latent factor explains variance in its measured indicator variables and induces covariance among them, whereas, for emergent factors, measured variables are considered causal indicators of the construct (Mueller & Hancock, 2010, p. 373).
CFA Model-Fit Indices. Data-model fit indices were used to examine goodness-of-fit.
West, Taylor, and Wu (2012) reviewed fit indices for covariance-structure models (for
a complete list and review of fit indices, please see Table 13.1 in West et al., 2012).
Commonly cited fit indices include: (a) χ², (b) the standardized root mean square
residual (SRMR), (c) the root mean square residual (RMR), (d) the root mean square
error of approximation (RMSEA), (e) the comparative fit index (CFI), and (f) the
Tucker-Lewis index (TLI). Several fit indices have limitations, such as dependence on
sample size. West et al. concluded that fit indices "worthy of consideration" (p. 219)
are (a) the SRMR (given its standardized metric), (b) the RMSEA (for sample sizes
over 200), (c) the TLI, and (d) the CFI. Recommended cutoff criteria (Brown, 2006;
Geiser, 2012; Hu & Bentler, 1998; Schermelleh-Engel, Moosbrugger, & Müller, 2003;
West et al., 2012) for determining goodness-of-fit have been provided; however, these
should be used with caution because "appropriate cutoff standards may be specific to
particular models and data sets" (West et al., 2012, p. 220). Using acceptable
goodness-of-fit indices to assess model fit provides initial (tentative) support for the
notion that "[a] model was properly specified" (Brown, 2006, p. 113) because, even
with acceptable goodness-of-fit indices, a model may still be inadequate due to
parameter estimates that are unreasonable or out of range (Heywood cases).
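As an illustration only, the RMSEA, CFI, and TLI can be computed directly from the χ² statistics of the hypothesized and baseline (independence) models using their standard formulas; the numerical values below are hypothetical and are not drawn from the present data.

```python
import math

def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """Compute RMSEA, CFI, and TLI from chi-square statistics.

    chi2_m, df_m: chi-square and df of the hypothesized model
    chi2_0, df_0: chi-square and df of the baseline (independence) model
    n: sample size
    """
    # RMSEA: per-degree-of-freedom population misfit, scaled by sample size
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    # CFI: improvement over the baseline model, bounded in [0, 1]
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_m - df_m, chi2_0 - df_0, 1e-4)
    # TLI (non-normed fit index): penalizes model complexity; can exceed 1
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1)
    return rmsea, cfi, tli

# Hypothetical run: model chi2 = 45.0 on 40 df, baseline chi2 = 800.0 on 55 df, n = 85
rmsea, cfi, tli = fit_indices(45.0, 40, 800.0, 55, 85)
```

With these hypothetical inputs the model would meet common cutoffs (RMSEA below .06, CFI and TLI above .95), although, as noted above, such cutoffs should be interpreted with caution.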
CFA Model MIs, EPCs, and Respecification. Goodness-of-fit was further verified
by examining (a) modification indices (MIs) and (b) expected parameter changes
(EPCs). MI values, together with EPCs, can highlight a misspecified model when
the MIs are above the cutoff value. An initial model could be misspecified
(leading to unacceptable fit indices or residual error, MIs, or EPCs) due to (a) the
pattern of indicator factor loadings or (b) uncorrelated versus correlated measurement
errors (Brown, 2006). The recommended cutoff values are (a) 2.58 for normalized
residuals (corresponding to α = .01) and (b) 3.84 (usually approximated to 4.00), the
critical value of χ² at p < .05 and 1 df, for MIs. These cutoffs are sensitive to sample
size, so they were "interpreted with caution" (Brown, 2006, p. 118). Evaluating
MIs in conjunction with standardized EPCs, and in light of the conceptual model,
can lead to a respecification of the model that both improves model-fit indices
and removes the specific MIs that caused the alert. It is important that the
respecification is "conceptually and statistically viable" (Brown, 2006, p. 122) to avoid
the "specification searches" problem of revising a fitted model based on MIs
(MacCallum, Roznowski, & Necowitz, 1992). When correlated measurement errors
(error covariances) are not specified, (a) the covariation among indicators that load on
a specific factor is accounted for by the latent dimension only and (b) all measurement
error is random (Brown, 2006). One way to respecify a model is to allow errors
to be correlated in the model specification and then assess the resulting fit. In such
cases, correlated errors are specified "based on the notion that some of the covariance
in the indicators not explained by the latent variable is due to another exogenous
common cause" (Brown, 2006, p. 157). For example, in questionnaire analysis,
correlated errors can arise due to (a) items that are very similarly worded, (b) items
that are reverse worded, or (c) items that are "differentially prone to social desirability" (Brown, 2006, p. 181).
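The screening logic described above can be sketched as a small helper: candidate parameters whose MI exceeds the χ² critical value are retained and ranked by the magnitude of their standardized EPC. The parameter labels and values are hypothetical, not output from the present analyses, and any flagged parameter would still require the conceptual justification discussed above.

```python
# Critical value of chi-square at p < .05 with 1 df
MI_CUTOFF = 3.84

def flag_misspecifications(candidates, mi_cutoff=MI_CUTOFF):
    """candidates: list of (parameter_label, mi, standardized_epc) tuples."""
    flagged = [c for c in candidates if c[1] > mi_cutoff]
    # Rank by |EPC|: larger expected changes suggest more consequential
    # respecifications, subject to conceptual viability (Brown, 2006)
    return sorted(flagged, key=lambda c: abs(c[2]), reverse=True)

# Hypothetical MI/EPC output for three candidate error covariances
candidates = [("e12 ~~ e13", 9.7, 0.41),
              ("e5 ~~ e9", 4.2, 0.12),
              ("e2 ~~ e4", 2.1, 0.30)]
flagged = flag_misspecifications(candidates)
```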
CFA Factor Loadings and Error Residuals. The standardized residual variance for
indicators (observed or manifest variables) is calculated as 1 − (variance accounted for
by the latent factor) and provides "the proportion of variability in each manifest variable
that is not accounted for by the corresponding latent factor" (Geiser, 2012, p. 50).
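In a standardized solution this quantity reduces to one minus the squared standardized loading, as the following minimal sketch shows (the loading of .80 is illustrative only):

```python
def standardized_residual_variance(loading):
    # Residual variance = 1 - (squared standardized factor loading),
    # i.e., the share of indicator variance the latent factor does not explain
    return 1 - loading ** 2

# e.g., a standardized loading of .80 leaves 36% of the variance unexplained
resid = standardized_residual_variance(0.80)
```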
Factor Scores and Correlations. We computed scores for each latent factor as the
mean of all indicator items that contributed to that factor. We used these scores in the
regression analyses assessing criterion validity. Correlations among factors, and the
significance of each correlation, were also computed.
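The scoring rule is simple: average each respondent's indicator items per factor, then correlate the resulting factor scores. The sketch below uses hypothetical responses from five students on two illustrative three-item factors; it is not the study's data.

```python
import numpy as np

# Hypothetical responses (rows = students, columns = items of one factor)
openness_items = np.array([[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 3], [4, 4, 5]])
conscientiousness_items = np.array([[5, 4, 4], [2, 3, 3], [5, 5, 4], [3, 2, 3], [4, 5, 4]])

# Factor score = mean of the indicator items contributing to that factor
openness = openness_items.mean(axis=1)
conscientiousness = conscientiousness_items.mean(axis=1)

# Pearson correlation between the two factor scores
r = np.corrcoef(openness, conscientiousness)[0, 1]
```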

Establishing Criterion Validity Using Linear Regression


Criterion validity refers to "how accurately an instrument predicts a well-accepted
indicator of a given concept, or a criterion" (Bryant, 2000, p. 106), or the "degree of
agreement with a gold standard" (Knapp & Mueller, 2010, p. 340). To investigate
criterion validity, we assessed how well the factors in MAVIES could predict learning
strategies, or approaches to learning (Entwistle, 1991). Approaches to learning have been
a popular area of research in educational psychology (Clinton, 2014). They represent a
qualitative aspect of learning: how humans experience a learning task (Ramsden, 2003).
We hypothesized that the higher the importance assigned to SPs, ILOs, and PSE, the less
likely students would be to engage in surface-learning strategies. We used the Surface-Learning Strategies (SLSs) factor from the Learning Process Questionnaire (LPQ)
(Biggs, 1987) as our dependent variable and hypothesized negative slopes in the linear
regressions of SLSs on PSE, ILOs, and SPs.
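The hypothesized negative slope can be checked with an ordinary least-squares fit of SLS scores on a factor score. The sketch below uses hypothetical scores for eight students, not the study's data; a negative fitted slope would be consistent with the hypothesis.

```python
import numpy as np

# Hypothetical attainment-value factor scores (predictor) and
# Surface-Learning Strategies scores (criterion) for eight students
factor_score = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.2, 4.6, 5.0])
sls_score = np.array([22.0, 21.0, 20.5, 19.0, 18.5, 18.0, 17.0, 16.5])

# Simple linear regression: SLS = intercept + slope * factor score
slope, intercept = np.polyfit(factor_score, sls_score, 1)

# R-squared: proportion of SLS variance explained by the factor score
predicted = intercept + slope * factor_score
ss_res = ((sls_score - predicted) ** 2).sum()
ss_tot = ((sls_score - sls_score.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot
```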
Establishing Reliability
For latent factors, internal consistency refers to "how well the items that make up an
instrument or one of its subscales fit together" (Pett, Lackey, & Sullivan, 2003, p. 175).
Bandalos and Finney (2010) wrote that "internal consistency estimates for multidimensional
instruments should be obtained for the dimensional level at which the scale will be
interpreted and used" (p. 105). Accordingly, Cronbach's alpha was computed for each
latent factor.
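Cronbach's alpha for one subscale follows the standard formula α = k/(k − 1) × (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the subscale total. The responses below are hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (5 students x 4 items) on a 5-point scale
responses = [[4, 5, 4, 5], [2, 2, 3, 2], [5, 5, 4, 5], [3, 3, 3, 2], [4, 4, 5, 4]]
alpha = cronbach_alpha(responses)
```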

Results
Content Validity Using Confirmatory Factor Analyses (CFA)
All fit and modification indices were better than recommended cutoff criteria (Brown,
2006; Hu & Bentler, 1998; Schermelleh-Engel et al., 2003; West et al., 2012). See
Tables 1, 2, 3, and 4.


Criterion Validation
Criterion validity was supported by the consistent prediction of learning strategies
(Biggs, 1987) from the MAVIES factors. Surface-Learning Strategies (SLSs) correlated
consistently and negatively with the attainment values of the SPs, ILOs, and PSE (see
Table 5).
Reliability
Internal-consistency reliability was estimated with Cronbach's alpha, computed for
each latent factor of the instrument. Cronbach's alpha was .93 for the entire set and
above .92 for all factors in the four components of MAVIES. These reliability
estimates indicated that the MAVIES factors are homogeneous, which was expected
given the theoretical underpinnings of the instrument's components.

Discussion and Conclusions


The Development and Validation of the MAVIES Instrument
Our goal was to develop and validate a learner-focused survey instrument that measures
STEM students' attainment values for four components of inquiry engagement: (a)
teachers' roles, (b) students' personalities, (c) inquiry learning outcomes, and (d)
practices of science and engineering in scientific inquiry. According to expectancy
value theory, the higher the importance assigned to each component, the higher the
achievement and performance for each of them. This theoretical perspective motivated
creating an instrument that would predict students' achievement and performance.

Table 5 The regression analyses for MAVIES factors and surface-learning strategies

Factor                                          Intercept  Coefficient  F value        p value  R²
Extraversion                                    18.2       −0.5         F(1,62) = 2.2  .20      .03
Openness                                        23.5       −1.1         F(1,61) = 4.9  .03      .07
Conscientiousness                               18.8       −0.6         F(1,62) = 2.2  .10      .03
Conceptual knowledge                            22.4       −1.0         F(1,62) = 7.9  .01      .11
Metacognitive knowledge                         18.9       −0.6         F(1,62) = 2.0  .16      .03
Application                                     21.0       −0.8         F(1,62) = 2.9  .09      .05
Evaluation                                      17.2       −0.4         F(1,62) = 0.9  .30      .02
Creation                                        16.7       −0.3         F(1,61) = 0.7  .40      .01
Define the problem                              20.5       −0.7         F(1,62) = 2.1  .20      .03
Obtain and evaluate info                        17.9       −0.4         F(1,62) = 1.1  .29      .02
Plan investigations                             19.8       −0.7         F(1,61) = 1.7  .20      .01
Carry out investigations                        18.4       −0.5         F(1,61) = 1.7  .20      .03
Analyze and interpret data                      18.0       −0.4         F(1,62) = 1.1  .30      .02
Construct explanations and engage in argument   18.7       −0.5         F(1,62) = 1.1  .30      .02
Present and communicate                         19.4       −0.7         F(1,62) = 2.8  .06      .05

The
CFA confirmed our hypothesized structures for MAVIES. Regarding teachers' roles,
we chose items that addressed teachers' roles as encouragers and motivators, as well as
their roles as models and mentors. Although instructors can have a multitude of roles,
especially in inquiry-learning environments, these two roles specifically addressed the
motivational aspect (encouragement and motivation) and the scaffolding (modeling and
mentoring) of instruction for engaging in scientific inquiry.
Regarding students' personality, we chose openness, conscientiousness, and extraversion
from the FFM, through items that represented these characteristics in scientific inquiry.
Including factors representing the attainment values of teachers' roles and students'
personalities means that we addressed both how students value teachers' roles and
students' own views about which qualities they perceive as important during engagement
in scientific inquiry. Regarding the learning outcomes, we chose conceptual and
metacognitive understanding, application, evaluation, and creation, based on the
educational objectives and learning outcomes in the cognitive domain from Bloom's
original and revised taxonomies. Regarding the practices of science and engineering, we
chose the PSE based on the NGSS.
MAVIES is a reliable and valid instrument that can be used with undergraduate
students in STEM disciplines to assess their attainment values for inquiry engagement.
The instrument fills a gap in the literature by providing a measure of the motivational
construct of attainment value, which is, in turn, central to expectancy value theory, in
the context of scientific inquiry. The MAVIES instrument is also consistent with the
NGSS (NGSS Lead States, 2013).
Theoretical Implications
The MAVIES instrument is useful for measuring STEM students' self-assessments of
the value they place on four components of engaging in scientific inquiry: teachers'
roles, students' personalities, inquiry learning outcomes, and practices of science and
engineering. These four components constitute some of the essential building
blocks of a theory of scientific inquiry in higher education. Inquiry engagement, as
defined and described in this paper, represents a necessary link between inquiry and the
practices of science and engineering.
Teachers' Roles. The CFA of teachers' roles showed that these roles (for which
attainment values were assigned) included (a) teachers' roles as encouragers and
motivators and (b) teachers' roles as models and mentors. These roles are
specifically vital in inquiry. Aulls and Ibrahim (2012) found that, when comparing
effective inquiry instructors to "just effective" instructors, students reported
a higher frequency of roles as encouragers and motivators for effective inquiry
instructors than for "just effective" instructors. Considering instructors' roles
as encouragers and motivators is important in an instrument that seeks to
measure students' assessments of the importance of teachers' roles during
engagement in scientific inquiry. In their qualitative case study, Aulls and
Ibrahim's students also mentioned that the roles of effective inquiry instructors, but
not of instructors in general, include those of models and mentors. These roles
are important because they resemble two distinct leadership roles
that Bales and Slater (1955) hypothesized to emerge in task-oriented groups.
The task or instrumental (adaptive-instrumental) leader focuses on goal achievement,
whereas the expressive (integrative-expressive) leader focuses on "internal
integration and the expression of emotional tensions" (Slater, 1955, p. 308).
Within inquiry, the teacher's role as encourager and motivator could be seen as
more emotional and linked to the expressive functional role, whereas the
teacher's role as mentor and model could be seen as related more to advancing
the goal of inquiry through mentoring and modeling. Indeed, modeling to
scaffold learning has been reported to be a key dimension of inquiry-based
instruction of undergraduate STEM students (Spronken-Smith, 2010; Spronken-Smith & Walker, 2010).
Although, in the context of inquiry, teachers normally change their roles (Tudor,
1993), assume unique new roles (Aulls & Ibrahim, 2012), or diversify their roles by
"taking on a larger, more varied number of roles and roles that are not necessarily
traditional in nature" (Walker & Shore, 2015, p. 4), it might be difficult for teachers to
relinquish their roles of model and mentor or of motivator and encourager, to exchange
them with students, or to give them up entirely. Any theorization about teachers' roles in
the context of inquiry needs to take into consideration the two role categories of
encourager and motivator and of mentor and model.
Personality. The CFA revealed that students' personalities (for which attainment
values were assigned) included (a) openness, (b) conscientiousness, and (c) extraversion.
Integrating these characteristics within the theory of scientific inquiry
offers a theoretical advance and a contribution to linking the social and educational
psychology of learning in the context of scientific inquiry. Inquiry promotes
creativity, innovation, and curiosity, so it is natural that higher degrees of openness
could lead to a more innovative and creative inquiry experience. Inquiry also
requires promptness and structure to guide exploration and experimentation, but
researchers have not yet studied the effects of STEM undergraduates' personality
characteristics on their scientific practices and scientific inquiry. Finally, social-constructivist theories center on collaboration and interaction, which also are related
to extraversion. Examining the importance assigned by students to these dimensions
of personality could shed further light on their practices of science and their inquiry-learning outcomes.
Learning Outcomes. Learning outcomes are important for learning, teaching, and
assessment. Although performance assessment (Shavelson, Solano-Flores, &
Ruiz-Primo, 1998; Wilson & Sloane, 2000), procedural understanding (Gott &
Duggan, 2002; Roberts & Gott, 2003, 2004, 2006), and understanding of scientific
inquiry (Lederman et al., 2014) are needed in scientific inquiry-learning environments,
assessment of learning according to learning goals or instructional objectives is also
important for formative and summative assessments. Bloom's taxonomy of educational
objectives (and its revisions) offers a framework for classifying "statements of what we
expect or intend students to learn as a result of instruction" (Krathwohl, 2002, p. 212).
These levels have been used in several studies to assess the effects of instruction or the
effects of levels of inquiry on learning (e.g., Redden, Simon, & Aulls, 2007;
Spronken-Smith, Walker, Batchelor, O'Steen, & Angelo, 2012). An integrative
framework for the assessment of inquiry should include assessment of different aspects
of inquiry, including inquiry-learning outcomes.
Practices. The NGSS introduced the idea of using practices as a way of engaging
in scientific activity. Other related conceptual frameworks explicating the
dimensions of scientific practices are starting to emerge in the science education
literature. For example, Stroupe (2015) examined the dimensions of disciplinary
work and how novices learn to participate in valued community activities, arriving
at a conceptualization of scientific practices with four dimensions: conceptual,
social, epistemic, and material. Berland et al. (2015) described the Epistemologies
in Practice (EIP) framework for characterizing how students can engage
meaningfully in scientific practices, emphasizing students' epistemic goals for their
knowledge-construction work and their epistemic understanding of how to engage
in that work. Jaber and Hammer (2015) introduced the notions of epistemic affect
and epistemic motivation as part of engaging in science, which is consistent with
our overall definition and proposal of a multidimensional construct of inquiry
engagement that includes epistemic engagement, although our instrument did not
include epistemic affect or epistemic motivation. These models and frameworks
contribute to the development of theories of practice and theories of scientific
practices that, in turn, enhance theories of inquiry and scientific inquiry in higher
education. The discussion of practices seems to be in its early stages. Our
contribution to this discussion has shown that STEM undergraduate students (a) can
reliably assign attainment values to PSE and (b) are less likely to use surface
approaches to learning when they rate the PSE as more important.

Practical Implications
MAVIES can be used to measure STEM students' assignments of importance to
components of engaging in scientific inquiry in experiments that assess the effects of
instruction. Instructors can also use MAVIES prior to instruction to guide curriculum
preparation and the design of instructional activities. MAVIES can be used in program
evaluation by asking whether different STEM programs or courses are associated with
significantly different student ratings of the importance of the components of scientific
inquiry.
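The program-evaluation question could be examined with a one-way ANOVA comparing mean MAVIES ratings across programs. The sketch below computes the F statistic from first principles; the program names and ratings are hypothetical, not drawn from the study.

```python
import numpy as np

# Hypothetical mean MAVIES ratings from students in three STEM programs
groups = {
    "biology":     np.array([4.1, 4.4, 3.9, 4.6, 4.2]),
    "engineering": np.array([3.5, 3.8, 3.2, 3.6, 3.4]),
    "physics":     np.array([4.0, 3.7, 4.3, 3.9, 4.1]),
}

# One-way ANOVA: between-group vs. within-group mean squares
all_scores = np.concatenate(list(groups.values()))
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
df_between = len(groups) - 1                 # 2
df_within = len(all_scores) - len(groups)    # 12
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

A large F statistic (relative to the critical value of F for the given degrees of freedom) would indicate significantly different ratings across programs, which would then warrant follow-up comparisons.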
Limitations and Future Research
MAVIES included subscales addressing six of the eight PSE in the NGSS.
MAVIES did not include items assessing the NGSS practices of (a) developing and
using models and (b) using mathematics and computational thinking. In future
revisions of the instrument, we shall add relevant items to represent the practices that
were not included and to increase the number of items for factors that are composed of
three items. Future research is also planned to validate the instrument with samples in
the humanities and social science disciplines and to examine the significance of
differences between disciplines in undergraduates' self-assessments of the importance
assigned to the different components of engagement in scientific inquiry, as identified
in the current paper.

References
Abd-El-Khalick, F., BouJaoude, S., Duschl, R. A., Lederman, N. G., Mamlok-Naaman, R., Hofstein, A.,
Tuan, H.-L. (2004). Inquiry in science education: International perspectives. Science Education, 88, 397
419.
American Association for the Advancement of Science. (1993). Benchmarks for scientific literacy. New York,
NY: Oxford University Press.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R.,
Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Blooms
taxonomy of educational objectives (complete edition) New York, NY: Longman.
Ashton, M. C. & Lee, K. (2001). A theoretical basis for the major dimensions of personality. European
Journal of Personality, 15, 327353. doi:10.1002/per.417.
Ashton, M. C. & Lee, K. (2007). Empirical, theoretical, and practical advantages of the HEXACO Model of
personality structure. Personality and Social Psychology Review, 11, 150166. doi:10.1177/
1088868306294907.
Aulls, M. W., & Ibrahim, A. (2012). Pre-service teachers perceptions of effective inquiry instruction: Are
effective instruction and effective inquiry instruction essentially the same? Instructional Science, 40, 119
139. doi:10.1007/s11251-010-9164-z.
Aulls, M. W., & Shore, B. M. (2008). Inquiry in education (Vol. 1): The conceptual foundations for research as
a curricular imperative. New York, NY: Erlbaum.
Aulls, M. W., Kaur Magon, J., & Shore, B. M. (2015). The distinction between inquiry-based instruction and
non-inquiry-based instruction in higher education: A case study of what happens as inquiry in 16
education courses in three universities. Teaching and Teacher Education, 51, 147161. doi:10.1016/j.
tate.2015.06.011.
Azevedo, R. (2015). Defining and measuring engagement and learning in science: conceptual, theoretical,
methodological, and analytical issues. Educational Psychologist, 50, 8494. doi:10.1080/00461520.2015.
1004069.
Bales, R. F. & Slater, P. E. (1955). Role differentiation in small decision-making groups. In T. Parsons & R. F.
Bales (Eds.), Family, socialization, and interaction processes (pp. 259306). Glencore, IL: Free Press.
Bandalos, D. L. & Finney, S. J. (2010). Factor analysis: Exploratory and confirmatory. In G. R. Hancock & R.
O. Mueller (Eds.), The reviewers guide to quantitative methods in the social sciences (pp. 93114). New
York, NY: Routledge.
Barrow, L. (2006). A brief history of inquiry: From Dewey to standards. Journal of Science Teacher
Education, 17, 265278. doi:10.1007/s10972-006-9008-5.
Battle, A. & Wigfield, A. (2003). College womens value orientations toward family, career, and graduate
school. Journal of Vocational Behavior, 62, 5675. doi:10.1016/S0001-8791(02)00037-4.
Berland, L. K., Schwarz, C. V., Krist, C., Kenyon, L., Lo, A. S. & Reiser, B. J. (2015). Epistemologies in
practice: Making scientific practices meaningful for students. Journal of Research in Science Teaching.
doi:10.1002/tea.21257.
Bevington, P. R. & Robinson, D. K. (2003). Data reduction and error analysis for the physical sciences.
Boston, MA: McGraw-Hill.
Biddle, B. J. (1986). Recent developments in role theory. Annual Reviews in Sociology, 12, 6792.
Biggs, J. D1987]. Learning Process Questionnaire manual: Student approaches to learning and studying.
Retrieved from http://www.eric.ed.gov/ERICWebPortal/Home.portal?_nfpb=true&ERICExtSearch_
SearchType_0=no&_pageLabel=ERICSearchResult&ERICExtSearch_SearchValue_0=no%
3Aed308199&spelling=yes.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H. & Krathwohl, D. R. (1956). Taxonomy of educational
objectives: The classification of educational goals. New York, NY: McKay.
Blumenfeld, P. C., Kempler, T. M. & Krajcik, J. S. (2006). Motivation and cognitive engagement in learning
environments. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 475488).
Cambridge, England: Cambridge University Press.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford Press.

Ibrahim et al.
Bryant, F. B. (2000). Assessing the validity of measurement. In L. G. Grimm & P. R. Yarnold (Eds.), Reading
and understanding more multivariate statistics (1st ed., pp. 99146). Washington, DC: American
Psychological Association.
Buss, D. M. (1991). Evolutionary personality psychology. Annual Review of Psychology, 42, 459491.
Buss, D. M. (1996). Social adaptation and five major factors of personality. In J. S. Wiggins (Ed.), The five
factor model of personality: Theoretical perspectives (pp. 180207). New York, NY: Guilford Press.
Campbell, T., Abd-Hamid, N. & Chapman, H. (2010). Development of instruments to assess teacher and
student perceptions of inquiry experiences in science classrooms. Journal of Science Teacher Education,
21, 1330. doi:10.1007/s10972-009-9151-x.
Chi, M. T. H. & Wylie, R. (2014). The ICAP framework: linking cognitive engagement to active learning
outcomes. Educational Psychologist, 49, 219243. doi:10.1080/00461520.2014.965823.
Clinton, V. (2014). The relationship between students preferred approaches to learning and behaviors during
learning: An examination of the process stage of the 3P model. Instructional Science, 42, 817837. doi:
10.1007/s11251-013-9308-z.
Cole, J. S., Bergin, D. A. & Whittaker, T. A. (2008). Predicting student achievement for low stakes tests with
effort and task value. Contemporary Educational Psychology, 33, 609624. doi:10.1016/j.cedpsych.2007.
10.002.
Collins, H. L. & Pinch, T. (1993). The golem: What everyone should know about science. Cambridge,
England: Cambridge University Press.
Crawford, B. A. (2000). Embracing the essence of inquiry: New roles for science teachers. Journal of
Research in Science Teaching, 37, 916937. doi:10.1002/1098-2736(200011)37:9<916::AID-TEA4>3.
0.CO;2-2.
Eccles, J. S. (2005). Subjective task value and the Eccles et al. model of achievement-related choices. In A. J.
Elliott & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 105121). New York, NY:
Guilford Press.
Eccles, J. S., Adler, T. F., Futteman, R., Goff, S., Kaczala, C. M. & Meece, J. (1983). Expectancies, values, and
academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motives (pp. 75146). San
Francisco, CA: Freeman.
Eccles, J. S., ONeill, S. A. & Wigfield, A. (2005). Ability self-perceptions and subjective task values in
adolescents and children. In K. A. Moore & L. H. Lippman (Eds.), What do children need to flourish?
(Vol. 3, pp. 237249). New York, NY: Springer.
Eccles, J. S., Wigfield, A., Harold, R. D. & Blumenfeld, P. D1993]. Age and gender differences in childrens
self- and task perceptions during elementary School. Child Development, 64, 830847. Retrieved from
http://www.jstor.org/stable/1131221.
Entwistle, N. (1991). Approaches to learning and perceptions of the learning environment. Higher Education,
22, 201204.
Fredricks, J. A., Blumenfeld, P. C. & Paris, A. H. (2004). School engagement: Potential of the concept, state of
the evidence. Review of Educational Research, 74, 59109. doi:10.3102/00346543074001059.
Geiser, C. (2012). Data analysis with Mplus. New York, NY: Guilford Press.
Goldberg, L. R. (1993). The structure of phenotypic personality traits. American Psychologist, 48, 2634. doi:
10.1037/0003-066X.48.1.26.
Gott, R. & Duggan, S. (2002). Problems with the assessment of performance in practical science: which way
now? Cambridge Journal of Education, 32, 183201. doi:10.1080/03057640220147540.
Greeno, J. G., Collins, A. M. & Resnick, L. B. (1996). Cognition and learning. In D. C. Berliner & R. C.
Calfee (Eds.), Handbook of educational psychology (pp. 1546). New York, NY: Prentice Hall.
Hanauer, D. I., Hatfull, G. F. & Jacobs-Sera, D. (2009). Assessing scientific inquiry. New York, NY: Springer.
Hu, L.-T. & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to
underparameterized model misspecification. Psychological Methods, 3, 424453. doi:10.1037/1082989x.3.4.424.
Jaber, L. Z., & Hammer, D. (2015). Engaging in science: A feeling for the discipline. Journal of the Learning
Sciences, 147. doi:10.1080/10508406.2015.1088441
Jackson, D. L. (2003). Revisiting sample size and number of parameter estimates: Some support for the N:q
hypothesis. Structural Equation Modeling, 10, 128141. doi:10.1207/S15328007SEM1001_6.
Johnson, M. L. & Sinatra, G. M. (2013). Use of task-value instructional inductions for facilitating engagement
and conceptual change. Contemporary Educational Psychology, 38, 5163. doi:10.1016/j.cedpsych.2012.
09.003.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY:
Guilford Press.

Attainment Value for Inquiry Engagement Survey in STEM Disciplines


Knapp, T. R. & Mueller, R. O. (2010). Reliability and validity of instruments. In G. R. Hancock & R. O.
Mueller (Eds.), The reviewers guide to quantitative methods in the social sciences (pp. 337342). New
York, NY: Routledge.
Komarraju, M., Karau, S. J., Schmeck, R. R. & Avdic, A. (2011). The Big Five personality traits, learning
styles, and academic achievement. Personality and Individual Differences, 51, 472477. doi:10.1016/j.
paid.2011.04.019.
Krathwohl, D. R. (2002). A revision of Blooms taxonomy: An overview. Theory Into Practice, 41, 212218.
doi:10.1207/s15430421tip4104_2.
Kuzel, A. J. (1999). Sampling in qualitative inquiry. In B. F. Crabtree & W. L. Miller (Eds.), Doing qualitative
research (2nd ed., pp. 3345). Thousand Oaks, CA: Sage.
Latour, B. & Woolgar, S. (1986). Laboratory life: The construction of scientific facts. Princeton, NJ: Princeton
University Press.
Lederman, J. S., Lederman, N. G., Bartos, S. A., Bartels, S. L., Meyer, A. A. & Schwartz, R. S. (2014).
Meaningful assessment of learners understandings about scientific inquiryThe views about scientific
inquiry (VASI) questionnaire. Journal of Research in Science Teaching, 51, 6583. doi:10.1002/tea.
21125.
Lombrozo, T. (2009). Explanation and categorization: How why? informs what?. Cognition, 110, 248
253. doi:10.1016/j.cognition.2008.10.007.
Lombrozo, T. (2010). Causal-explanatory pluralism: How intentions, functions, and mechanisms influence
causal ascriptions. Cognitive Psychology, 61, 303332. doi:10.1016/j.cogpsych.2010.05.002.
MacCallum, R. C., Roznowski, M. & Necowitz, L. B. (1992). Model modifications in covariance structure
analysis: The problem of capitalization on chance. Psychological Bulletin, 111, 490504. doi:10.1037/
0033-2909.111.3.490.
MacDonald, K. (1995). Evolution, the five-factor model, and levels of personality. Journal of Personality, 63,
525567. doi:10.1111/j.1467-6494.1995.tb00505.x.
MacDonald, K. (1998). Evolution, culture, and the five-factor model. Journal of Cross-Cultural Psychology,
29, 119149. doi:10.1177/0022022198291007.
McCrae, R. R. (1993). Openness to experience as a basic dimension of personality. Imagination, Cognition
and Personality, 13, 3955. doi:10.2190/h8h6-qykr-keu8-gaq0.
McCrae, R. R. & Costa, P. T. (1996). Toward a new generation of personality theories: Theoretical contexts for
the Five-Factor Model. In J. S. Wiggins (Ed.), The Five Factor Model of personality: Theoretical
perspectives (pp. 5187). New York, NY: Guilford Press.
McCrae, R. R. & Costa, P. T. (1997). Conceptions and correlates of openness to experience. In R. Hogan, J. A.
Johnson & S. Briggs (Eds.), Handbook of personality psychology (pp. 825847). San Diego, CA:
Academic.
Miles, M. B. & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand
Oaks, CA: Sage.
Millar, R. & Driver, R. (1987). Beyond processes. Studies in Science Education, 14, 3362. doi:10.1080/
03057268708559938.
Mody, C. C. M. (2015). Scientific practice and science education. Science Education, 99, 10261032. doi:10.
1002/sce.21190.
Moeed, A. (2013). Science investigation that best supports student learning: teachers understanding of science
investigation. International Journal of Environmental and Science Education, 8, 537559.
Mueller, R. O. & Hancock, G. R. (2010). Structural equation modeling. In G. R. Hancock & R. O. Mueller
(Eds.), The reviewers guide to quantitative methods in the social sciences (pp. 371384). New York, NY:
Routledge.
National Research Council. (1996). National Science Education Standards. Washington, DC: National
Academies Press.
National Research Council. (2000). Inquiry and the National Science Education Standards: A guide for
teaching and learning. Washington, DC: National Academies Press.
National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: National Academies Press.
Nettle, D. (2006). The evolution of personality variation in humans and other animals. American Psychologist,
61, 622631. doi:10.1037/0003-066X.61.6.622.
Next Generation Science Standards Lead States. (2013). Next Generation Science Standards: For states, by
states (Vol. 1). Washington, DC: National Academies Press.
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.

Ibrahim et al.
Pedaste, M., Meots, M., Siiman, L. A., de Jong, T., van Riesen, S. A. N., Kamp, E. T., . Tsourlidaki, E.
(2015). Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research
Review, 14, 4761. doi:10.1016/j.edurev.2015.02.003
Peterson, C. A. (2012). Mentored engagement of secondary science students, plant scientists, and teachers in
an inquiry-based online learning environment. (Unpublished doctoral dissertation), Texas A&M
University, College Station, TX.
Pett, M. A., Lackey, N. R. & Sullivan, J. J. (2003). Making sense of factor analysis: The use of factor analysis
for instrument development in health care research. Thousand Oaks, CA: Sage.
Pickering, A. (1995). The mangle of practice: Time, agency, and science. Chicago, IL: University of Chicago
Press.
Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance.
Psychological Bulletin, 135, 322338. doi:10.1037/a0014996.
Ramsden, P. (2003). Approaches to learning. In P. Ramsden (Ed.), Learning to teach in higher education (2nd
ed., pp. 3961). London, England: Routledge.
Redden, K. C., Simon, R. A., & Aulls, M. W. (2007). Alignment in constructivist-oriented teacher education: Identifying pre-service teacher characteristics and associated learning outcomes. Teacher Education Quarterly, 34, 149–164. Retrieved from http://www.jstor.org/stable/23478999.
Roberts, R., & Gott, R. (2003). Assessment of biology investigations. Journal of Biological Education, 37, 114–121. doi:10.1080/00219266.2003.9655865.
Roberts, R., & Gott, R. (2004). A written test for procedural understanding: A way forward for assessment in the UK science curriculum? Research in Science & Technological Education, 22, 5–21. doi:10.1080/0263514042000187511.
Roberts, R., & Gott, R. (2006). Assessment of performance in practical science and pupil attributes. Assessment in Education: Principles, Policy & Practice, 13, 45–67. doi:10.1080/09695940600563652.
Rudolph, J. L. (2005). Epistemology for the masses: The origins of the scientific method in American schools. History of Education Quarterly, 45, 341–376. doi:10.1111/j.1748-5959.2005.tb00039.x.
Saunders-Stewart, K. S., Gyles, P. D. T., Shore, B. M., & Bracewell, R. J. (2015). Student outcomes in inquiry: Students' perspectives. Learning Environments Research, 18, 289–311. doi:10.1007/s10984-015-9185-2.
Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research, 8, 23–74. Retrieved from http://www.dgps.de/fachgruppen/methoden/mpr-online/issue20/art2/mpr130_13.pdf.
Shavelson, R. J., Solano-Flores, G., & Ruiz-Primo, M. A. (1998). Toward a science performance assessment technology. Evaluation and Program Planning, 21, 171–184. doi:10.1016/S0149-7189(98)00005-6.
Shore, B. M., Birlean, C., Walker, C. L., Ritchie, K. C., LaBanca, F., & Aulls, M. W. (2009). Inquiry literacy: A proposal for a neologism. Learning Landscapes, 3, 139–155. Retrieved from http://www.learninglandscapes.ca/.
Shore, B. M., Chichekian, T., Syer, C. A., Aulls, M. W., & Frederiksen, C. H. (2012). Planning, enactment, and reflection in inquiry-based learning: Validating the McGill Strategic Demands of Inquiry Questionnaire. International Journal of Science and Mathematics Education, 10, 315–337. doi:10.1007/s10763-011-9301-4.
Simonton, D. K. (2004). Creativity in science: Chance, logic, genius, and Zeitgeist. Cambridge, England:
Cambridge University Press.
Slater, P. E. (1955). Role differentiation in small groups. American Sociological Review, 20, 300–310. Retrieved from http://www.jstor.org/stable/2087389.
Spronken-Smith, R. (2010). Undergraduate research and inquiry-based learning: Is there a difference? Insights from research in New Zealand. CUR [Council on Undergraduate Research] Quarterly, 30, 28–35. Retrieved from http://www.cur.org/assets/1/7/Spronken-Smith.pdf.
Spronken-Smith, R., & Walker, R. (2010). Can inquiry-based learning strengthen the links between teaching and disciplinary research? Studies in Higher Education, 35, 723–740. doi:10.1080/03075070903315502.
Spronken-Smith, R., Walker, R., Batchelor, J., O'Steen, B., & Angelo, T. (2012). Evaluating student perceptions of learning processes and intended learning outcomes under inquiry approaches. Assessment and Evaluation in Higher Education, 37, 57–72. doi:10.1080/02602938.2010.496531.
Stroupe, D. (2015). Describing science practice in learning settings. Science Education, 99, 1033–1040. doi:10.1002/sce.21191.
Talanquer, V., Tomanek, D., & Novodvorsky, I. (2013). Assessing students' understanding of inquiry: What do prospective science teachers notice? Journal of Research in Science Teaching, 50, 189–208. doi:10.1002/tea.21074.
Tang, X., Coffey, J. E., Elby, A., & Levin, D. M. (2010). The scientific method and scientific inquiry: Tensions in teaching and learning. Science Education, 94, 29–47. doi:10.1002/sce.20366.
Tudor, I. (1993). Teacher roles in the learner-centred classroom. ELT Journal, 47, 22–31.
Turner, R. H. (2001). Role theory. In J. H. Turner (Ed.), Handbook of sociological theory (pp. 233–254). New York, NY: Kluwer. Retrieved from http://www.social-sciences-and-humanities.com/pdf/Handbook-of-Sociological-Theory.pdf.
van Lieshout, C. F. M. (2000). Lifespan personality development: Self-organising goal-oriented agents and developmental outcome. International Journal of Behavioral Development, 24, 276–288. doi:10.1080/01650250050118259.
Walker, C. L., & Shore, B. M. (2015). Understanding classroom roles in inquiry education: Linking role theory and social constructivism to the concept of role diversification. SAGE Open, 5, 1–13. doi:10.1177/2158244015607584.
Wenning, C. J. (2007). Assessing inquiry skills as a component of scientific literacy. Journal of Physics Teacher Education Online, 4, 21–24.
West, S. G., Taylor, A. B., & Wu, W. (2012). Model fit and model selection in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 209–231). New York, NY: Guilford Press.
Wigfield, A., & Cambria, J. (2010). Students' achievement values, goal orientations, and interest: Definitions, development, and relations to achievement outcomes. Developmental Review, 30, 1–35. doi:10.1016/j.dr.2009.12.001.
Wigfield, A., & Eccles, J. S. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25, 68–81. doi:10.1006/ceps.1999.1015.
Wigfield, A., Tonks, S., & Klauda, S. L. (2009). Expectancy-value theory. In K. R. Wentzel & A. Wigfield (Eds.), Handbook of motivation at school (pp. 55–75). New York, NY: Routledge.
Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13, 181–208. doi:10.1207/S15324818AME1302_4.
Windschitl, M. (2004). Folk theories of "inquiry": How preservice teachers reproduce the discourse and practices of an atheoretical scientific method. Journal of Research in Science Teaching, 41, 481–512. doi:10.1002/tea.20010.
Woolgar, S., & Lezaun, J. (2013). The wrong bin bag: A turn to ontology in science and technology studies? Social Studies of Science, 43, 321–340. doi:10.1177/0306312713488820.
Zachos, P. (2005). Pendulum phenomena and the assessment of scientific inquiry capabilities. In M. R. Matthews, C. F. Gauld, & A. Stinner (Eds.), The pendulum (pp. 349–362). Dordrecht, The Netherlands: Springer.