Global Security and Intelligence Studies Volume 4, Number 1, Spring/Summer 2019 | Page 23
this was mostly done as an uncompensated activity, though some were given additional
compensation for work during the summer. As a result, several program
directors expressed a desire to keep the additional work associated with assessment
away from faculty. One program director’s philosophy was to keep assessment
as “a background process that faculty are not involved with, and perhaps not
even aware of.” Another program director expressed a different motivation
for keeping faculty away from the assessment process: they did not want faculty
to feel pressure to “teach to the test” (Background, Telephone interview with
author, January 24, 2018).
This dynamic, in which faculty are largely disconnected from the implementation
process or uncompensated for the effort, explains the ambivalent perceptions
of assessment in the interviews for this study. Some see it as important
and want to get more involved, while others see it as taking faculty resources away
from teaching (Background, Telephone interview with author, January 24, 2018).
Some directors described the process as a burdensome requirement that often did
not yield useful information for program improvement. To the extent that the faculty
were supportive of the assessment effort, it was due to the institution’s persuasion
of the faculty that the process could provide information on student learning
that could help to improve the quality of the program (Carl Jensen, Telephone
interview with author, December 21, 2017).
Thus, the broader themes of assessment in intelligence studies programs appear
consistent with academia writ large, as seen in the literature. First, regional
accreditation is an important driver of institutional efforts, and that drive shapes the
development of assessment tools within intelligence studies programs. Second,
faculty support for assessment in intelligence studies programs is ambivalent, as
the implementation of assessment tools can be seen more as burden than benefit.
However, this study also seeks to explore the degree of consistency of assessment
tools across intelligence studies programs, to see whether the actual educational experiences
these programs offer are comparable. The first step in that exploration is to
determine whether the programs all begin with similar learning objectives.
Program Objectives
In the debate over the role of assessment, there is a concern that the measurement
of outcomes could subvert the management of curricula. As Allen notes,
“assessment should not be the tail that wags the dog” (Allen 2004, 13). Ideally, a
program should be driven by the stated learning outcomes that are articulated in
advance. These should then be reflected in the curriculum and student learning
activities. Only then should assessment activities evaluate how well the activities
performed in meeting the initial program objectives. This “goal, treatment, assessment”
structure is a basic and intuitive model for managing a program (academic
or otherwise).