
Student Ratings

Frequently Asked Questions

Why evaluate teaching?
Where can data be collected to evaluate teaching?
Why use student ratings?
Why administer student ratings online?
What is the response rate for online student ratings?
How can the online student ratings support teaching?
Who has provided input into the development of the online student-rating system?
How were items on the student rating form developed?
Why are the two global items (rate the instructor, rate the course) included on the form?
Why do the average ratings for global items (rate the instructor, rate the course) sometimes differ from average ratings for other items on the rating form?
Can online student ratings be used for mid-course student feedback?
Are other universities using online student ratings?
Where can I learn more about student ratings of instruction?

Why evaluate teaching?

In general, the evaluation of teaching serves two broad purposes:

1. Provide feedback on faculty's teaching from students, peers, and supervisors to understand strengths and weaknesses and improve teaching.
2. Collect data to evaluate faculty and courses for decisions about faculty rank and status and about courses, programs, and faculty assignments.

Where can data be collected to evaluate teaching?

There are three primary data sources for collecting data on teaching, each offering a
unique perspective:

1. Students are in the best position to report on the day-to-day functioning and
activities of a course and provide feedback on their own learning experiences
(Chism, 1999; Theall & Franklin, 2001).
2. Peers are in the best position to provide feedback on course content and design
and an instructor's subject matter expertise (Chism, 1999; Hutchings, 1994;
Johnson & Ryan, 2000).
3. Supervisors (e.g., dept. chairs, deans, university administrators) are in the best
position to synthesize and confirm student and peer feedback and evaluate
instructor performance in light of department, college, and university goals
(Chism, 1999; Diamond, 1994).

Why use student ratings?

Students are one of the three primary data sources in evaluating teaching: students,
peers, and supervisors (see "Where can data be collected to evaluate teaching?").
Students should not be expected to be the primary source of feedback on course
content or the overall contribution of a faculty member in a department or college. On
the other hand, neither peers nor supervisors are in a good position to know what goes
on day-to-day in the classroom and how the course is experienced by students.
Common sense, as well as research, reveals that students are the most valid and
reliable source for this type of information (McKeachie & Kaplan, 1996; Theall &
Franklin, 2001).

Data from students can be gathered in a number of ways, including individual interviews,
focus groups, measures of student learning (assignments, exams), and student ratings
of instruction. Of these methods, student ratings are usually preferred. Student
ratings are more feasible (and typically more reliable) than individual interviews or focus
groups. In addition to student ratings, assessments of student learning may be an
important source of information in evaluating teaching. Some institutions are making significant
progress in this area (Barr & Tagg, 1995; Johnson, 2009; North, 1999; Tagg, 2003).

Student ratings are the most researched method for gaining feedback on
teaching. There are over 1,500 published articles dealing with research on student
ratings of instruction (Cashin, 1995; McKeachie & Kaplan, 1996). This research shows
that student ratings are generally a reliable and valid method for gathering data on
teaching (Cashin, 1995; Marsh, 1997; Ory, 2001; Theall & Franklin, 2001)--much more
so than any other teaching evaluation method (McKeachie & Kaplan, 1996). However,
student ratings are certainly not a perfect measure of teaching. To help substantiate and
extend data from student ratings, the teaching evaluation process should include the
triangulation of results from student ratings, peer review, and supervisor evaluations
(Johnson & Ryan, 2000; Kahn, 1993; Marsh & Dunkin, 1992; Wagenaar, 1995).


Why administer student ratings online?

More helpful feedback for instructors--Online reporting provides more complete, in-depth reports that are easy to access and interpret. It also allows reports to include links to online resources on specific areas of teaching and learning.
Quicker feedback to professors--The online system allows professors to view student-rating results as soon as grades are submitted. This provides timely feedback that can be used in preparation for the following semester.
Anonymity of student comments--Because student comments on the rating forms are typed, professors cannot identify a student's response by his or her handwriting.
Longer and more thoughtful student responses--Because forms are completed outside of class, students don't feel pressured to complete the forms quickly. In addition, students can easily type their comments rather than write them by hand. This increases the number and length of student comments.
Class-time savings--When student ratings are done online, class time is not needed to complete rating forms.
Widespread evaluation--Online administration of the student-rating form provides students the opportunity to rate all of their courses each semester. It also provides faculty members with student feedback on every course they teach.
Cost reduction--With online administration there is no need for paper forms; thus, the costs of producing, distributing, and processing these forms are eliminated. Over time, the costs of setting up and maintaining the online rating system are less than those of continuing to operate the current paper-pencil system.
Efficiency and accuracy--Online questionnaire administration and data processing produce fewer errors because automation reduces manual steps such as collecting forms, scanning, and distributing reports.
Flexibility--Forms and reports are more easily modified or customized to meet various needs.


What is the response rate for online student ratings?

Response rates can be a challenge because students must take time outside of class to complete online rating forms. Response rates for recent BYU semesters ranged from 59% to 66%. In the pilots of the student rating system, several strategies for increasing response rates were tested. It was clear that some strategies must be employed to increase response rates; with no strategies in place, response rates were low.

Some strategies to increase response rates have been identified in BYU pilot studies:

1. Response rates increase when students know about the student rating
system and how to use it. A number of strategies have been implemented
to inform students about the online rating system and its use.
2. Response rates increase when completing the rating form is given as a
class assignment. This is true regardless of whether or not actual points
are given for completing the rating forms.

3. Response rates increase when students understand how rating results are used. Various methods are used to help students understand the different uses of student-rating results and that student responses do make a difference.

4. Response rates increase when students receive some type of incentive for completing their ratings (e.g., seeing their grades early online).

How can the online student ratings support teaching?

The online student rating system is designed to promote the improvement of teaching.
Resources to support that improvement are:

1. Links to resources and strategies for improving teaching and learning. Every item/topic on the rating form has a page devoted to answering common teaching concerns. These resources are currently located at http://ctl.byu.edu/home/information/student-ratings-teaching-improvement/

2. Consultants. Research has shown that teaching improvement is greatly enhanced when instructors discuss student-rating results with a colleague or faculty development consultant (Brinko, 1993; Hoyt & Pallett, 1999; McKeachie & Kaplan, 1996). Contact your CTL consultant for an appointment.


Who has provided input into the development of the current online student-rating system?

Faculty--During Winter Semester 2002, all faculty at BYU were sent email messages
directing them to a website with information on the proposed online student rating
system. This website included a copy of the new rating form, a list of Frequently Asked
Questions (FAQs), and an opportunity to provide feedback on the new rating form and
online student rating system. Fifty-seven faculty members responded. These responses
were analyzed and used in the development of the online student ratings.

Faculty Advisory Council--The Faculty Advisory Council has provided ongoing input to
the development of online student ratings at BYU. This council approved an early
version of the BYU rating form and has continued to provide periodic feedback since
that time. In Winter 2002, the Faculty Advisory Council helped in revising the online
rating form.

Department Chairs--During Winter Semester 2002, all BYU department chairs were
invited to meet with AAVP Richard Williams to discuss and provide feedback on online
student ratings. Sessions were held on multiple days to accommodate individual
schedules. Chairs received a description of how the form was developed and articles
summarizing the national research on student ratings of instruction. Department chairs
have also given feedback on online ratings in the Department Chair Seminars.

Deans and Associate Deans--In Deans Council, BYU deans provided recommendations and approved current plans for the implementation of online student
ratings. Associate deans have repeatedly discussed online student ratings and given
recommendations in the University Faculty Development Council and the University
Learning, Teaching, and Curriculum Council.

Students--During Winter Semester 2002, all students at BYU were sent email
messages directing them to a website with information on the proposed online student
rating system. This website included a copy of the new rating form, a list of Frequently
Asked Questions (FAQs), and an opportunity to provide feedback on the new rating
form and online student rating system. Six hundred forty students responded. All
responses were analyzed and used in further revision of the online rating form and
system. In addition, students participating in online-student-rating pilots were asked to
give feedback. During the Fall 2000 pilot, over 1,800 students responded to a
questionnaire sent to pilot participants. In addition, 40 students participated in student
focus groups. Student feedback was analyzed and used in developing the online
student ratings.

BYU Student Association and Student Advisory Council--The BYU Student Association (BYUSA) and the Student Advisory Council (SAC) have reviewed and given
feedback on the online student ratings. Representatives from the SAC were members of
the original Lee Hendrix student-ratings committee in 1996. BYUSA and SAC
representatives met in a series of meetings to discuss implementation of online student
ratings. They provided many ideas for, and expressed their support of, the current rating system.

Student Ratings Task Force--Most recently (2009-2010), Associate Vice President Jeff Keith organized a task force with faculty representatives from each college to look at ways to improve the BYU student rating system.


How were items on the student rating form developed?

In 1995, President Rex Lee commissioned a committee to begin work on a new BYU
student rating form. The committee was chaired by Lee Hendrix of the Statistics
Department and consisted of faculty, students, and administrators. Additional efforts
built on the work of this committee. An in-depth analysis was conducted on the research
on teaching and learning, research on student ratings of instruction, and specific BYU
needs. From this analysis, essential categories of student rating items were identified.
Within each category, items were chosen to best represent the category and align with
BYU needs. Categories that were most important to teaching and learning (as indicated by the research) were given more items. Research was conducted on the form, including inter-item correlations and factor analyses. Versions of the form were reviewed and approved two
separate times by the Faculty Advisory Council. Outside experts were consulted on the
content and layout of the form. Finally, the form was beta-tested with students to
examine their interpretations and perceptions. Throughout this process, the online
student-rating form was revised according to feedback and research results. The form
and rating process are currently being examined by a University task force to see where
further improvements can be made.

Why are the two global items ("rate the instructor," "rate the course") included on the form?

Research shows that responses to overall items (e.g., rate the course, rate the
instructor) generally have a higher correlation to measures of student learning than do
individual items or groups of individual items on rating forms (Marsh, 1994; Theall,
Scannell, & Franklin, 2000). This has been replicated in numerous research studies and
in meta-analyses of multiple studies (Ali & Sell, 1998; Koon & Murray, 1995; Zong,
2000).
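
To make these correlation findings concrete, here is a minimal sketch in Python (3.10 or later, for statistics.correlation). All of the numbers are invented for illustration; they are not BYU data or results from the studies cited above.

    # Illustrative sketch only: the section-level averages below are invented,
    # not actual BYU data or results from the cited studies.
    from statistics import correlation  # Pearson's r; requires Python 3.10+

    # Hypothetical per-section averages for one global item, one specific item,
    # and an independent measure of student learning (e.g., a common exam).
    global_item = [3.2, 3.8, 4.1, 4.5, 4.9, 5.2, 5.6, 6.1]
    specific_item = [4.0, 3.5, 4.6, 4.2, 5.0, 4.4, 5.3, 5.1]
    learning = [62, 68, 71, 75, 80, 83, 86, 92]

    # With these invented numbers, the global item tracks learning closely
    # (r near 0.99) while the specific item tracks it more loosely (r near 0.8),
    # the same pattern the research summarized above reports.
    print(f"global item vs. learning:   r = {correlation(global_item, learning):.2f}")
    print(f"specific item vs. learning: r = {correlation(specific_item, learning):.2f}")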


Why do the average ratings for global items ("rate the instructor," "rate the course") sometimes differ from average ratings for other items on the rating form?

Differences between global ratings and the average of individual item ratings on the form occur
for a number of reasons:

1. The global items on rating forms are intended to be normative (i.e., "compared to other courses you have taken"). The specific items are less normative in that they focus on specific aspects of a course or actions of an instructor. Therefore, the global and specific items are asking for different types of responses.

2. Even though the number of points is the same on the global and specific item rating scales, these points are labeled differently. A Likert scale asking for agreement or disagreement with a given statement (on individual items) is not the same as rating a course or instructor as good or poor (on global items).

3. The individual items on the rating form are a sampling of important areas
of teaching; it is impossible to include all important areas of teaching on a
short student-rating form. When students provide an overall course or
instructor rating, they may consider aspects of teaching and learning that
are not represented in the individual items on the form. Therefore, results
of overall items and averages of specific rating items are usually different.
This phenomenon is observed on rating forms across the country. (For
more information on the validity of global items, see "Why are the two
global items included on the form?")

4. An average of the scores for all individual items on a rating form does not take into account that some individual items are more important than others to the overall quality of the course or instructor. To determine an appropriate average of individual items, a weighting scheme for individual item scores would be needed (see the sketch after this list). If a weighting scheme were developed, it would have to be adjusted for individual courses because the most important aspects of teaching are not necessarily the same for every course. Determining weighting schemes for individual courses would be a very difficult process. Of course, all discussion about a weighting scheme is based on the assumption that all important aspects of teaching are represented in the individual items on the rating form, which is not possible on a rating form of reasonable length.
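
To illustrate the arithmetic in point 4, here is a minimal sketch in Python. The item scores and weights are invented for illustration only; BYU's rating form does not apply a weighting scheme like this.

    # Illustrative sketch only: scores and weights are invented, and no such
    # weighting scheme is actually applied to the BYU rating form.
    item_scores = [6.5, 5.0, 7.2, 4.8, 6.9]   # hypothetical per-item averages
    weights = [0.35, 0.05, 0.30, 0.10, 0.20]  # hypothetical importance weights

    simple_mean = sum(item_scores) / len(item_scores)
    weighted_mean = sum(s * w for s, w in zip(item_scores, weights)) / sum(weights)

    # The two means differ (here roughly 6.1 vs. 6.5) whenever strong and weak
    # items carry unequal weights, so a global rating need not match the plain
    # average of the individual items.
    print(f"simple mean of items:   {simple_mean:.2f}")
    print(f"weighted mean of items: {weighted_mean:.2f}")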


Can online student ratings be used for mid-course student feedback?

The online rating form is used only for end-of-course evaluations. The form is designed
to elicit general feedback from students about the course as a whole.

However, instructors can use the Mid-course Evaluation Tool to receive feedback during
the course of the semester.


Are other universities using online student ratings?

Yes. Many institutions are using online rating systems for part or all of their courses. For
more information, see a partial list at http://OnSET.byu.edu.


Where can I learn more about student ratings of instruction?

Here is a list of references that can be used to study student ratings in more depth:

Information on the Web:

Student Ratings of Teaching: The Research Revisited
William E. Cashin, IDEA Center
http://www.idea.ksu.edu/papers/Idea_Paper_32.pdf

What's the Use of Student Ratings of Teaching Effectiveness?
University of Illinois
https://oira.syr.edu/Assessment/StudentRate/Use.htm

Ratings Myths and Research Evidence
Michael Theall
https://studentratings.byu.edu/info/faculty/myths.asp

Embracing Student Evaluations of Teaching: A Case Study
Timothy J. Gallagher, Kent State University
http://dept.kent.edu/fpdc/pdf_files/gallagher.PDF

Questions Frequently Asked About Student Rating Forms: Summary of Research Findings
Matthew Kaplan, Lisa A. Mets, Constance E. Cook, University of Michigan
http://www.crlt.umich.edu/tstrategies/studentratingfaq.php

References:

Aleamoni, L. (1999). Student rating myths versus research facts from 1924 to 1998. Journal of Personnel Evaluation in Education, 13(2), 153-166.

Ballantyne, C. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. In T.D. Johnson & D.L. Sorenson (Eds.), New directions for teaching and learning: Online student ratings of instruction, 96 (pp. 103-112). San Francisco: Jossey-Bass.

Barr, R.B. & Tagg, J. (1995, November/December). From teaching to learning: A new paradigm for undergraduate education. Change, 13-25.

Bernstein, D. (1995, August 21). Establishing effective instruction through peer review
of teaching. A distillation of a FIPSE proposal.

Braskamp, L.A. & Ory, J.C. (1994). Establishing the credibility of evidence. In Assessing faculty work: Enhancing individual and institutional performance (pp. 95-104). San Francisco: Jossey-Bass.

Brinko, K.T. (1993, September-October). The practice of giving feedback to improve teaching: What is effective? Journal of Higher Education, 64(5), 574-593.

Cashin, W.E. (1995, September). Student ratings of teaching: The research revisited. IDEA Paper No. 32. Center for Faculty Evaluation and Development, Kansas State University.

Chism, N.V. (1999). Peer review of teaching. Bolton, MA: Anker Publishing.

Clark, S.J., Reiner, C. M., & Johnson, T.D. (2006). Online course-ratings and the
Personnel Evaluation Standards. In D. D. Williams, M. Hricko, & S. L Howell (Eds.),
Online Assessment, Measurement, and Evaluation: Emerging Practices, Volume III (pp.
61-75). Hershey, PA: Idea Group Publishing.

Diamond, R.M. (1994). Documenting and assessing faculty work. In Serving on promotion and tenure committees: A faculty guide (pp. 13-21). Syracuse University. Bolton, MA: Anker Publishing.

Hoffman, K.M. (2003). Online course evaluation and reporting in higher education. In T.D. Johnson & D.L. Sorenson (Eds.), New directions for teaching and learning: Online student ratings of instruction, 96, 25-30.

Hoyt, D.P. & Pallett, W.H. (1999, November). Appraising teaching effectiveness: Beyond student ratings. IDEA Paper No. 36. Center for Faculty Evaluation and Development, Kansas State University.

Johnson, T.D. & Ryan, K.E. (2000, Fall). A comprehensive approach to the evaluation of college teaching. New Directions for Teaching and Learning, 83, 109-123. San Francisco: Jossey-Bass.

Johnson, T.D. & Sorenson, D.L. (Eds.). (2003). New directions for teaching and learning: Online student ratings of instruction, 96. San Francisco: Jossey-Bass.

Kahn, S. (1993). Better teaching through better evaluation: A guide for faculty and
institutions. To Improve the Academy, 12, 111-127.

Koon, J. & Murray, H.G. (1995). Using multiple outcomes to validate student ratings of overall teacher effectiveness. Journal of Higher Education, 66(1), 61-81.

Marsh, H.W. (1994). Weighting for the right criteria in the Instructional Development and Effectiveness Assessment (IDEA) system: Global and specific ratings of teaching effectiveness and their relation to course objectives. Journal of Educational Psychology, 86(4), 631-648.

Marsh, H.W. & Dunkin, M.J. (1992). Students' evaluations of university teaching: A
multidimensional approach. In J.C. Smart (Ed.), Higher education: Handbook of theory
and research (Vol. 8, pp. 143-233). New York: Agathon Press.

Marsh, H.W. & Roche, L.A. (1997). Making students' evaluations of teaching effectiveness effective. American Psychologist, 52(11), 1187-1197.

McKeachie, W.J. & Kaplan, M. (1996, February). Persistent problems in evaluating college teaching. AAHE Bulletin, 5-8.

North, J.D. (1999). Administrative courage to evaluate the complexities of teaching. In P. Seldin (Ed.), Changing practices in evaluating teaching. Bolton, MA: Anker Publishing.

Ory, J.C. & Ryan, K.E. (2001). How do student ratings measure up to a new validity framework? In M. Theall, P.C. Abrami, & L.A. Mets (Eds.), The student ratings debate: Are they valid? How can we best use them? New Directions for Institutional Research, no. 109. San Francisco: Jossey-Bass.

Sanders, W.L. (2000, July 21). Value-added assessment from student achievement data: Opportunities and hurdles. Jason Millman Award speech, CREATE National Evaluation Institute, San Jose, CA.

Tagg, J. (2003). The Learning Paradigm College. Bolton, MA: Anker Publishing.

Theall, M. & Franklin, J. (2001). Looking for bias in all the wrong places: A search for
truth or a witch hunt in student ratings of instruction? New Directions for Institutional
Research, no. 109, 45-56.

Wagenaar, T.C. (1995). Student evaluation of teaching: Some cautions and suggestions. Teaching Sociology, 23, 64-68.

Zong, S. (2000). The meaning of expected grade and the meaning of overall ratings of
instruction: A validation study of student evaluation of teaching with hierarchical linear
models. Dissertation Abstracts International, 61(11), 5950B. (UMI No. 9995461)
