ANNEX B. SAN FRANCISCO DECLARATION ON RESEARCH ASSESSMENT
Available from: https://sfdora.org/read/
There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment. We invite interested parties across all scientific disciplines to indicate their support by adding their names to this Declaration.
The outputs from scientific research are many and varied, including: research articles reporting new knowledge, data, reagents, and software; intellectual property; and highly trained young scientists. Funding agencies, institutions that employ scientists, and scientists themselves, all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.
The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters*, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].

Below we make a number of recommendations for improving the way in which the quality of research output is evaluated. Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer-reviewed research paper will remain a central research output that informs research assessment. Our recommendations therefore focus primarily on practices relating to research articles published in peer-reviewed journals but can and should be extended by recognizing additional products, such as datasets, as important research outputs. These recommendations are aimed at funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers.
A number of themes run through these recommendations:
• the need to eliminate the use of journal-based metrics, such as Journal Impact