Journal of Academic Development and Education JADE Issue 9 | Page 8
EDITORIAL
GEMMA WITTON
higher education market where the mass capture of lectures is
happening all across the sector, do we have any choice but to do
the same?
I am a regular attendee at conferences and events focused on the use
of capture technologies in education, and I am struck by how often the
discussion turns to ways of encouraging reluctant members of staff to
agree to a capture-all approach. “Maybe if we make the recording and
distribution of sessions an automated and passive activity, it will
happen without them really noticing?” “Maybe if we only record audio
and slides, they won’t have to worry about what they look like on
camera?” “Is there some other way to encourage the recording of more
and more sessions so that we can report a percentage increase back to
senior management this year?” In the
current political climate, which places academics under intense
scrutiny through the National Student Survey (NSS) and the
Teaching Excellence and Student Outcomes Framework (TEF), is it
any wonder that some members of academic staff are suspicious of
the motives for recording all their classes, when the pedagogical
literature has yet to provide convincing evidence of enhanced
learning experiences or of improvements in attainment and
progression? (Franklin et al., 2011; Hadgu et al., 2016; Johnston et
al., 2013; Leadbeater et al., 2013)
It is when we think about capture technologies in relation to
some of these externally driven measures of success that our
approach to using Panopto becomes a little muddied. For example,
despite having little or no positive effect on attainment or degree
classification, traditional lecture capture can provide a big win on
the NSS for student satisfaction within the ‘Teaching on my course’
section.
If we also consider the TEF, one of the ways it measures teaching
excellence is through HESA student continuation data. At the
University of Wolverhampton our student demographic is such
that, relative to the sector, a large proportion of our students are
mature, have caring responsibilities and/or contribute to their
household income. Given the other factors at play for these
students, the quality of teaching is unlikely to be the deciding
factor in whether they continue their studies. A capture-all policy
for lectures may have a positive impact on progression rates for
these groups by increasing the flexibility of their programme of
study and helping them fit their studies around their other
commitments. For me, this is perhaps the most convincing argument
for adopting a capture-all approach; nevertheless, I would also argue
that it would not make our teaching any more or less excellent.
There has been plenty of discussion, particularly around the TEF,
suggesting that the importance given to particular metrics might
actually be detrimental to the quality of teaching. If academics
feel they cannot experiment or take risks to improve the quality
of their courses for fear of a backlash in their NSS scores, or of
losing the gold logo from their prospectus, innovation will almost
certainly be stifled. That does not seem conducive to an environment
in which teaching staff are satisfied and motivated to provide
anything other than vanilla-flavoured courses for mass consumption.
On top of this, we have the ongoing reduction in the Disabled
Students’ Allowances (DSA) to consider. It becomes particularly
hard to promote an agenda of academic autonomy and purposeful
capture of meaningful educational media on the one hand while,
on the other, finding yourself needing to provide a technological
alternative to note-takers on an institution-wide scale.
So, as our project matures and its operational and technical aspects
become business-as-usual, I find myself asking what our measures of
success should really look like. If the University were an institution
with an opt-out policy and a capture-all approach, this would be
simple: the percentage of capture-enabled teaching rooms, the total
number of modules engaged, and the quantity of hours recorded and
viewed are all easy data to gather. However, if we want to continue
to justify our current philosophy, our metrics need to offer more
than that: we need to demonstrate positive impacts on students’
learning experiences. We cannot place undue importance on data
simply because they are easy to gather, but we do need something
that demonstrates progress and ongoing value. If we are promoting
purposeful use of capture, then shouldn’t our measures of success
reflect that ideology too?
References
Bos, N., Groeneveld, C., van Bruggen, J., & Brand-Gruwel, S. (2015). The use
of recorded lectures in education and the impact on lecture attendance and
exam performance. British Journal of Educational Technology. DOI: 10.1111/
bjet.12300
Franklin, D., Gibson, J., Samuel, J., Teeter, W., & Clarkson, C. (2011). Use of
lecture recordings in medical education. Medical Science Educator, 21(1), 21.
Hadgu, R. M., Huynh, S., & Gopalan, C. (2016). The use of lecture capture and
student performance in physiology. Journal of Curriculum and Teaching, 5(1).
DOI: 10.5430/jct.v5n1p11
Johnston, A., Massa, H., & Burne, T. (2013). Digital lecture recording: a
cautionary tale. Nurse Education in Practice, 13(1), 40–47.
Leadbeater, W., Shuttleworth, T., Couperthwaite, J., & Nightingale, K. (2013).
Evaluating the use and impact of lecture recording in undergraduates:
evidence for distinct approaches for different groups of students. Computers
& Education, 61, 185–192.