Global Security and Intelligence Studies Volume 4, Number 1, Spring/Summer 2019 | Page 14
Forging Consensus? Approaches to Assessment in Intelligence Studies Programs
That guide is an illustration of what assessment is supposed to produce.
Assessment is a systematic process in higher education that uses empirical data on
student learning to refine programs and improve student learning (Allen 2004,
5). In this case, the program at UTEP had a clearly defined standard with regard
to writing skills, data on student assignments across several courses that suggested
that this standard was not being met, and a desire to do something about it.
The program also altered the type and frequency of writing assignments in some
courses to give students more opportunities to develop their communication skills
(Larry Valero, Telephone interview with author, November 6, 2017). One program
director described program assessment with a “dip stick” metaphor—you do not
need a dip stick to make a car run, but you use it every now and again to see how
everything is running (Background, Telephone interview with author, January 24,
2018).
This essential framework for assessment is widely understood. First, student
learning outcomes (SLOs) are clearly articulated for the program or major.
Then, data on relevant student activities are collected and evaluated from across
the courses in the program. Finally, programmatic change is instituted (if needed)
in response to the data in order to address any deficiencies. To be sure, it is important
that the process is well articulated. As Rodgers et al. note, “the quality of
assessment is important because influential decisions, such as curricular changes,
should be informed by quality information” (Rodgers et al. 2013, 384).
However, there are some important questions that underlie this assessment
activity. First, who is assessment for? While there is a clear benefit to an individual
program, assessment efforts are often driven by the institution that houses the program.
Their priorities surely overlap, but to what degree? Related to this question,
how much commonality is there between intelligence studies programs in how
they assess? If they largely share a common set of program objectives, do they also
share a common set of assessment tools? Assessment reflects the “ground truth” of
what a program’s objectives truly are. Former U.S. Vice President Joe Biden
famously said, “Don't tell me what you value, show me your budget, and I'll tell
you what you value” (Goodreads, n.d.). Assessment reflects the true programmatic
objectives in a similar fashion. A last question relates to the level of the degree.
Many would agree that the SLOs in graduate-level programs in the field should
reflect a higher level of competency than undergraduate programs. However, is
that difference reflected in the assessment process?
By surveying a group of intelligence studies programs at civilian institutions
in the United States, this study addresses these questions. What it finds is that the
assessment process is typically driven from the institutional level with varying degrees
of input from the program. This naturally leads to a diversity of assessment
measures across different intelligence programs. And
while graduate-level programs are expected to require a higher level of