The University of Arizona South employs these indirect assessment measures
at both ends of the process. In the National Security Policy course, the gateway
class for the Intelligence Studies major there, students take a survey that asks
them to assess their content knowledge related to the SLOs for the program. This
pre-survey reveals the initial preparedness of students in the program and provides
a baseline from which to assess the program objectives at the end of the
process. At the end of the curriculum, students take a post-survey, which helps to
assess the growth in student learning. For instance, the survey asks the students
to identify the three most important things that they learned in the program. In
addition to students' assessment of their own learning, this post-survey is also an opportunity
to gather student input on the delivery of the curriculum (UAS n.d.).
That said, the utilization of indirect methods of assessment can vary widely.
Some programs, such as Coastal Carolina University or Angelo State, appear
not to utilize these measures at all. Others, such as the Citadel and Arizona, utilize
them in conjunction with direct measures. And while they are not included in
this study, some programs rely primarily on indirect measures for the data
used in their assessment systems (Background, E-Mail Message to Author,
February 23, 2018).
A final comparison for this study relates to the level of the degree program.
Michael Collier suggested that variations in intelligence studies programs should
be driven more by the level of education and proficiency (e.g., graduate
versus undergraduate), rather than by the issue of specialists versus
generalists (Collier 2005, 33). A natural extension of that argument could include
variations in assessment practices. While there is some variation among specific
methods, all programs in the sample tend to focus on direct, qualitative measures
of program assessment. It is possible that the criteria for assessing adequate progress
could vary by degree level, but there is not a substantial difference in assessment
methods.
Conclusions
To be sure, there are cross-pressures in the discussion of program assessment
in the development of a new field of study. On the one hand, as the field
evolves, questions of identity and larger purpose arise. What does it mean
to study “intelligence”? This leads to a desire for common instructional objectives
that transcend any single institution. Discussions of “model curricula,”
or of certification/accreditation by an organization that promotes subject-area
expertise, reflect this position. Even with the understanding that intelligence
studies is a multi-disciplinary field with a variety of specializations, such as
law enforcement intelligence, competitive intelligence, or national security intelligence,
there are fundamental learning objectives that are common throughout the
field. For instance, any program that studies intelligence will attempt to advance