Internet Learning Volume 5, Number 1, Fall 2016/Winter 2017 | Page 10
Online Graduate Course Evaluation from Both Students’ and
Peer Instructors’ Perspectives Utilizing Quality Matters™
tuck, 2015). Regarding student perspectives,
researchers typically have
taken two approaches. One is to investigate
student satisfaction with courses
and see whether the courses meet QM
standards (Aman, 2009). The other is to
compare students’ and peer reviewers’
evaluations and see how consistent they
are (You et al., 2014).
This study was conducted in Indiana
University’s graduate program in
adult education. That degree program
originated in 1947 as a community service
program providing off-campus,
non-credit courses in adult education
(Treff, 2008). Through a series of organizational
revisions, the program was restructured
within the academy, offering
both Doctor of Philosophy and Doctor
of Education degrees. When the program
moved from the School of Education to
the School of Continuing Studies in the
1980s, doctoral students were no longer admitted;
the program became a Master of Science
in Adult Education administered
from the Indianapolis campus (IUPUI),
and was converted to an online format
in 1998. In 2012, the program returned
to the School of Education in Bloomington
as part of Instructional Systems
Technology.
In 2015, the program underwent
a self-study in an effort to improve
the quality of the program. That
study involved interviews and surveys
of alumni, currently enrolled students,
and program faculty. In our self-study,
we felt it important to gain the perspectives
of our students, faculty, and
outside observers. This current study,
which focuses more directly on specific
online courses, is consistent with
our overall quality improvement effort.
Our objective was to improve our online
courses by comparing them to QM
standards, identifying areas of strength
and weakness in our courses, and
determining whether instructor perceptions
are congruent with student perceptions
of our courses, thereby improving the
quality of our graduate adult education
online program.
Methods
This study examined students’ evaluations
of online courses in comparison
to peer instructors’ evaluations of the
same online courses. The evaluations
followed the QM standards.
Measurement
The course evaluation data were collected
from two cohorts: students and
peer instructors. Evaluation items were
adapted from the QM standards. There
were 21 evaluation items organized into
8 categories: (1) course overview and introduction,
(2) learning objectives, (3)
assessment and measurement, (4) instructional
materials, (5) course activities
and learner interaction, (6) course
technology, (7) learner support, and (8)
accessibility and usability. Each evaluation
item was rated with a 5-point
Likert scale (1 = strongly disagree, 5 =
strongly agree).
Although the two cohorts used
the same evaluation items, the organization
and procedure were different.
The peer instructors used an evaluation