How can academia kick its addiction to the impact factor?
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (Stephen Curry’s writing on the topic is a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, publishers, funders, Thomson Reuters, and basically every player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What did become clear, though, or at least clearer, is that despite everything that has been written about its detrimental effects in academia, the impact factor is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped up, saying that publishers recognise the problems, but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one; it is a simple way of acknowledging that a metric is problematic while declining to take any responsibility for its fundamental misuse. The same goes for academics.
Oh, I didn’t realise it was that simple. Problem solved.
Eventually, we agreed that finding a universal solution to impact factor misuse is difficult. If it were so easy, there’d be start-ups stepping in to capitalise on it!
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing: it just deflects accountability without changing anything and, worse, facilitates what is known to be a broken system.
So here are eight ways to kick that
nasty habit! The impact factor is often referred to as an addiction for
researchers, or a drug, so let’s play with that metaphor.
- Detox on the Leiden Manifesto
The Leiden Manifesto provides a great set of principles for more rigorous research evaluation. If these best-practice principles could be converted into high-level policy for institutes and funders, with a major push for their implementation coming from the research community, we could see a real and substantial change in the assessment ecosystem. With this would come a concomitant change in how research develops and interacts with society. Evaluation criteria must be based on high-quality and objective quantitative and qualitative data, and the Leiden Manifesto lays out how to do this.
- Take a DORA nicotine patch
The San Francisco Declaration on Research Assessment (DORA) was started in 2012 by a group of editors and publishers of scholarly journals in order to tackle malpractice in research evaluation. It recognised the inadequacies of the impact factor as a measure of scientific quality, and provided a series of recommendations for improving research evaluation. These include: eliminating the use of journal-based metrics; assessing research based on its own merits; and exploring new indicators of significance.
To date, 7985 individuals and 589 organisations have signed DORA. Those figures are lower than, respectively, the number of researchers boycotting Elsevier and the number of global open access policies, so there is still much scope for communicating and implementing these recommendations.
- Attend ‘objective evaluation’ clinics and bathe in a sea of metrics
In 2015, a report called ‘The Metric Tide’
was published following an Independent Review of the Role of Metrics in
Research Assessment and Management. This was set up in April 2014 to
investigate the current and potential future roles that quantitative
indicators can play in the assessment and management of research.
They found that peer review and qualitative indicators should form the basis for evaluating research outputs and individuals, with careful use of metrics as a supplement. This will help to capture more diverse aspects of research, and limit concerns arising from the gaming and misuse of metrics such as the impact factor. They also advocated the responsible use of metrics based on dimensions of transparency, diversity, robustness, reflexivity, and humility – five traits, none of which the impact factor possesses.
- Vape your way to a deeper understanding of impact factors
To summarise the well-documented limitations of the impact factor (from DORA):
- Citation distributions within journals are highly skewed (a quick illustration follows this list);
- The properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews;
- Journal Impact Factors can be manipulated (or “gamed”) by editorial policy;
- Data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.
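To see why that first point matters, here is a minimal sketch in Python using invented citation counts for a hypothetical journal (the numbers are purely illustrative, not real data). The two-year impact factor is essentially the mean number of citations per citable item, so a couple of highly cited papers can drag it far above what a typical paper in the journal actually receives.

```python
# Toy illustration: why a skewed citation distribution makes the impact factor misleading.
# The citation counts below are invented; real distributions are typically even more skewed.

from statistics import mean, median

# Citations received this year by the 20 articles a hypothetical journal
# published over the previous two years.
citations = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 8, 40, 75]

# The two-year impact factor is (roughly) the mean: total citations divided
# by the number of citable items.
impact_factor = mean(citations)    # ~8.05, driven almost entirely by two outlier papers
typical_paper = median(citations)  # 2.5, what a typical paper in the journal actually gets

print(f"Impact factor (mean): {impact_factor:.2f}")
print(f"Median citations:     {typical_paper:.1f}")
print(f"Share of citations from the top two papers: {(40 + 75) / sum(citations):.0%}")
```

The point is simply that a journal-level average says very little about any individual paper published in that journal.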
Recent research has also shown that impact factors are strongly auto-correlated,
becoming a sort of self-fulfilling prophecy. It is deeply ironic that
researchers, supposedly the torch-bearers of reason, evidence, and
objectivity, persistently commit to using a metric that has been so
consistently shown to be unreasonable, secretive, and statistically
weak. To learn more, see Google.
Understanding thine enemy is the first step to being able to defeat them.
- Chew on the gummy content of the paper
Knowledge is nicotine for researchers. There will never be a metric that assesses the quality of a paper better than actually reading it. ‘Read the damn paper’ has even become a bit of a rallying cry for the anti-impact factor community, which makes perfect sense. However, there are often situations in which huge swathes of papers and other research outputs have to be assessed, and so short-hand alternatives to reading papers – such as the impact factor, or the journal title – are used as proxies for quality. This becomes a problem when quality and prestige diverge, as is very common, because the proxy no longer reflects the trait it is supposed to measure. Solutions exist, such as employing more people in assessments, or asking for submission of only a few key research outputs; both make it possible to actually digest the content of an article or other output and to make more informed assessments of research. It is vastly unfair and inappropriate that researchers, funders, and other bodies are put in a position where they cannot commit to this and are forced to use inappropriate shortcuts instead. However, when time and volume are not an issue, there is simply no excuse for evaluating work based on poor proxies.
- Just quit. Go cold turkey.
As someone who used to smoke, I finally
quit by going cold turkey. Partially because I could no longer afford to
keep up the habit as a student, but that’s beside the point. The point
is to make a personal commitment to yourself that you will no longer
succumb to the lures of the impact factor. Reward yourself with cupcakes
and brownies. You owe it to yourself to be objective, to be critical,
and to be evidence-informed about your research, and this includes how
you evaluate your colleagues’ work too. Commitments like this can be
contagious, and it always helps to have the support of your colleagues
and research partners. Create a poster like “This is an impact factor
free work environment” and stick it somewhere everyone can see!
- Don’t hang around other impact factor junkies
The first rule of impact factors is we
don’t talk about impact factors (irony of this post fully appreciated).
This is ‘how to kick an addiction 101’. When you quit
smoking/drugs/coffee, the last thing you want is to be hanging around
others who keep doing it. It’s bad for your health, and just drags you
right back down the path of temptation. If someone insists on using the
impact factor around you, explain to them everything in this post. Or
just leave. They’re simply adopting bad practices, and you don’t want to
or have to be part of that. If it’s your superior, a long, frank discussion about the numerous problems with, and alternatives to, the impact factor is well worth your time. Scientists are well known for
being completely reasonable and open to these sorts of discussion, so no
problems there.
- Take a methadone hit of sweet sweet altmetrics
While this has sort of been covered by the first three points, altmetrics, or alternative metrics, are a great way of assessing how your research has been disseminated on social channels. As such, they are a sort of pathway or guide to ‘societal impact’, and provide a nice complement to citation counts, which are often used as a proxy for ‘academic impact’. Importantly, they operate at the article level, so they do not suffer the enormous shortcomings of journal-level metrics such as the impact factor, and they offer a much more accurate insight into how research is being re-used.
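To make the ‘article level’ point concrete, here is a minimal sketch in Python. It assumes the public Altmetric details endpoint (https://api.altmetric.com/v1/doi/&lt;doi&gt;) is available and returns the field names used below; the DOI is just a placeholder, so treat this as a rough illustration rather than a definitive integration.

```python
# Minimal sketch: fetch article-level attention data for a single DOI.
# Assumes the free Altmetric details endpoint is reachable; the DOI below is
# only a placeholder, and the response field names are assumptions.

import requests

def article_level_metrics(doi: str) -> dict:
    """Return a small summary of altmetrics for one article, or {} if none recorded."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    response = requests.get(url, timeout=10)
    if response.status_code == 404:
        return {}  # no attention data recorded for this DOI
    response.raise_for_status()
    data = response.json()
    # Keep only a few indicative, article-level fields (names assumed).
    return {
        "title": data.get("title"),
        "altmetric_score": data.get("score"),
        "tweets": data.get("cited_by_tweeters_count"),
        "news_stories": data.get("cited_by_msm_count"),
    }

if __name__ == "__main__":
    # Placeholder DOI purely for illustration.
    print(article_level_metrics("10.1000/example-doi"))
```

Unlike the impact factor, everything returned here describes one specific output, which is the whole point of article-level metrics.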
What other solutions can we implement to eliminate the impact factor and make academic assessment and publishing a fairer, more transparent, and evidence-informed process? The Metric Tide report, DORA, and the Leiden Manifesto are all great steps towards this goal, but the question remains of how we embed their recommendations and principles in academic culture.
We should be very aware that there is
absolutely nothing to lose from employing these recommendations and
partial solutions. What we can gain though is an enriched and informed
process of evaluation, which is fair and benefits everyone. That’s
important.