Global Security and Intelligence Studies Volume 4, Number 1, Spring/Summer 2019 | Page 18
Forging Consensus? Approaches to Assessment in Intelligence Studies Programs
States. Starting in the 1970s, there was heightened political and public pressure on
higher education institutions to explain what they were trying to do and to provide
evidence they were actually doing it (Angelo and Cross 1993). Accreditation became
the standard credential by which an institution could demonstrate the expected level
of academic quality in its efforts. Yet, how could these accrediting organizations
validate the quality of these institutions beyond graduation rates? Additional
evidence was needed. Hence, the “culture of assessment” arose as a way to provide
empirical evidence in support of institutional claims of student learning to achieve
accreditation.
This is not to suggest that colleges and universities were not attempting to
assess student learning before the growth of accreditation concerns. The initial
attempts to demonstrate student learning at the institutional level in the United
States were with standardized tests. The Carnegie Foundation led this movement
in the early portion of the twentieth century (Shavelson 2007, 6). Designed
to measure overall comprehension, some of these assessments could be quite
lengthy. For instance, a study of Pennsylvania students in 1928 used a test that
contained 3,200 questions and took over 12 hours to complete (ibid.). However, these
tests seemed to demonstrate student learning and so the focus on testing grew.
By the late 1970s, there were a variety of test providers in operation. These organizations,
like the Educational Testing Service (ETS), provided a cost-effective
manner for institutions to evaluate prospective students, as well as to assess the
impact of their educational programs on the students who graduated from their
institution.
However, there was concern that these assessment tools were limited in
scope. While they measured knowledge-level objectives, they did
not assess other important educational objectives such as communication, critical
thinking, and problem solving—things that were viewed as particular strengths
in American higher education. By the 1970s, many faculty saw objective testing
as too limited. For them, life was not a multiple-choice test (Shavelson 2007, 12).
Indeed, this continues to be a concern. In a 2009 study, Cole and De Maio noted
that “standardized tests are more efficient in measuring student learning outcomes
but lack contextual information” (Cole and De Maio 2009, 294).
This led to a focus on more holistic test instruments such as the Collegiate
Learning Assessment. Developed by the Council for Aid to Education (CAE), this
assessment tool is a “test of reasoning and communication skills at the institutional
level to determine how the institution as a whole contributes to student development”
(National Institute for Learning Outcomes Assessment, n.d.). This more
holistic focus also led to more qualitative forms of assessment, such as portfolios,
that were harder to objectively assess in the aggregate. That said, the validation
from an accrediting authority provided a measure of proof that these assessment
tools were effective.