However, whilst open book exams and case studies can take
students beyond the mere recall of knowledge, their
authenticity is still limited. In reality, when we solve problems
we are unlikely to do so in one sitting; we rarely read what little
information we have and make a decision then and there. We
would first work out what information we did not have and go and
find it, and in real life the problem itself is often not clearly
articulated for us either. We would therefore interact with our
environment, investigate further, try things out, ask questions
and explore different avenues before making recommendations.
Yet even though the concept of the examination has moved on
from the standard three-hour essay-based paper, the artificial
construct within which the student operates can often only test
what they think they would do, not what they would actually do.
Students in their study believed that continuous assessment
was a better predictor of their abilities and a better way of demonstrating them.
The Objective Structured Clinical Exam (OSCE) used in
healthcare provides opportunities for problem solving,
decision making and some interaction with the environment.
The OSCE is made up of a series of tests, similar to circuit
training in P.E. classes. Each of these tests, usually
referred to as stations, involves a different activity: for example,
counselling a patient on a procedure, reading an x-ray or writing
a chart note. The student moves around the circuit completing
each test as they go (Yudkowsky 2009). This type of exam
provides more authenticity than traditional formats.
Miller (1990) states that there are four areas of activity that
assessment needs to test: knows, knows how, shows how and does.
Whilst he was writing primarily about clinical assessment,
this is equally transferable to many disciplines, particularly
those related to specific professional practice. Examinations are
therefore important for testing the 'knows' and 'knows how' elements
of assessment, but they cannot test actual performance and
what the student actually does; that requires a different type
of assessment. Brown and Glasner argue that "exams can be
a useful element of a mixed diet of assessment, so we do not
want to throw the baby away with the bathwater" (1999: 9).
The argument herein, therefore, is not that exams are no longer
appropriate but that they need to be used for the right purpose:
their use needs to be authentic, in that they are a valid form of
assessment for the intended learning outcomes, and not a
default form of assessment adopted because of issues of time, resources,
plagiarism or historical legacy.
What the students think
Flint and Johnson (2011) argue that students find exams unfair
and that exams are one of the most stressful and problematic
forms of assessment. In their research, one of the
criteria students use to judge an assessment as fair is that it
enables them to evidence and demonstrate their capability.
Exams, it is felt, do not enable them to do this.
Some of the key comments from students in the Flint and
Johnson study are summarised below:
• Exams only test memory.
• There is not enough guidance on what will be in the exam.
• It is unreasonable to expect a student to evidence their
learning from a whole module or academic term in a two-hour
paper that is based primarily on what they can remember
rather than on what they can do with that knowledge.
• Exams are based on luck: what you could remember, what you
had revised and what the questions happened to be.
• There is a general lack of feedback from exams, so students
learn nothing from them.
• Exams cause more stress and anxiety than other forms
of assessment.
In their research one student preferred exams,
whilst others understood why tutors preferred them
(because students were less likely to plagiarise). The majority of students
in the study felt that exams were unfair and lacked the validity to
test capability. As has been shown, the research in the field is
overwhelmingly critical of exams and strongly argues for more
authentic assessment; yet if the research largely favours
moving away from exams, why is the Higher Education sector
still so dependent upon them?
The Assimilate project (Brown 2012) identified some of the
challenges and constraints that lecturers have faced in trying
to move away from exam-based assessment. These included
restrictions imposed by existing module learning outcomes
that are not easy to change owing to lengthy quality assurance
processes, and a cultural climate of conservative attitudes
among validating panels and professional bodies. They also
found that lecturers' own experience, and what they perceived
as appropriate assessment (because that is what they did as
students), has an impact too.
They also identified that the university's approach to innovation in
assessment is significant: if an institution has a conservative
attitude and is risk averse, it is likely to stick with traditional
methods. Finally, the research identified the need for training and
faculty development so that lecturers understand what tools are available to them
and how these can be used to design more authentic assessment.