a baseline of core expertise about how intelligence is developed and employed in
its given area.
On the other hand, while there should be this commonality among intelligence
studies programs, each program is housed within a larger academic institution
that espouses its own educational mission, one that often includes maintaining
regional accreditation. A program whose objectives run counter to the larger
mission of its college or university is likely to have problems sustaining itself.
If nothing else, since the assessment process is often driven by the institution's
pursuit of academic accreditation, the intelligence studies program's assessment
effort would need to support this institutional mission. While it is always possible
to pursue a subject-specific accreditation for the program in addition to the
institutional effort, the latter will always hold sway.
This leads to a question of purpose for the assessment program—who does
it primarily serve? Ideally, the assessment plan would serve both the program-level
objectives of the department and the institutional-level objectives related to mission
and accreditation. However, the assessment plan will invariably have to prioritize
one of these objectives over the other, and that typically means institutional
needs prevail. Other academic disciplines, engineering, law, and public
administration among them, have found ways to integrate their assessment of
subject-matter expertise with the broader assessment objectives of a college or
university. Perhaps
the growth of professional associations in the field of intelligence studies will
influence these program-level assessment structures in the years to come.
This research reflects an initial attempt to understand the role of assessment
in intelligence studies programs. A prominent limitation of this study is the modest
sample size. Even in a field with a limited number of programs, a larger sample
would be useful in validating the generalizations noted in this paper.
Additionally, the inclusion of more online programs would be useful in exploring
the challenges related to the mode of delivery for the educational experience.
For instance, the graduate program at Angelo State University notes the difficulty
in evaluating oral communication skills in an online environment (Mullis 2017,
12). Notre Dame College noted a similar concern with its graduate program
(Gregory Moore, Telephone interview with author, December 20, 2017). With the
growth of online programs and improved technological tools, whether this will
continue to be an inhibiting factor in assessment remains unclear.
Another avenue for further exploration would be other areas of assessment.
This study has focused on SLOs. A related area of assessment
would be graduation measures related to placement. While the pedagogical issue
of accurately measuring student development is important, the issue of the
practical impact of our field of study is of great significance to its longevity.
Some scholars, such as Michael Landon-Murray, have begun exploring
this important, but methodologically challenging, question (Landon-Murray
2013, 771).