IN BRIEF
CREATING RESPONSIBLE ARTIFICIAL INTELLIGENCE
Researchers will tackle the problem of gender and ethnic bias in recruitment as part of a new £1m project.
The study, BIAS – Responsible AI for Labour Market Equality,
will look at how artificial intelligence can lead to unintentional
bias in the increasingly digitised processes of job advertising,
hiring and professional networking.
The researchers will work with industrial partners to
understand gender and ethnic bias within HR processes
such as hiring and professional networking. They will analyse
data from across hiring and recruitment platforms and
develop new tools and protocols to mitigate such bias,
allowing companies, HR departments and recruitment
agencies to tackle these issues in future recruitment.
Professor Monideepa Tarafdar, Professor of Information
Systems and Co-Director of the Centre for Technological
Futures at Lancaster University Management School, will lead
the research as principal investigator, working with Lancaster
colleagues Dr Yang Hu, from Sociology, and Dr Bran Knowles,
from the School of Computing and Communications.
The research ties in with the UK’s Industrial Strategy, which
names “putting the UK at the forefront of the AI and data
revolution” as one of its Grand Challenges, and with the UK’s
AI Sector Deal, which aims to “boost the UK’s global position as
a leader in developing AI technologies”. It also speaks directly
to the goal of Canada’s Social Sciences and Humanities
Research Council (SSHRC) of tackling persistent ethnic and
gender disparities in workforce selection and development.
The project aims to develop a protocol for responsible
and trustworthy AI that reduces labour-market inequalities
by tackling gender and ethnic/racial biases in job advertising,
hiring and professional networking processes.