Are citation rates the best way to assess the impact of research?
The United Kingdom’s Medical Research Council (MRC) - the
equivalent of Australia’s National Health and Medical Research Council -
recently released its 2014-15 economic impact report, details of which make for interesting reading.
Since 2006, research funded by the MRC has resulted in more than 94,000 publications, 63,000 of which (67%) were peer-reviewed.
The traditional starting point for considering the scientific impact
of research is its citations: the number of times other research papers
and editorials subsequently cite a given paper, either over its lifetime
or within a given number of years after publication.
Different numbers of researchers are involved in different fields of
research. So when a relatively small number of scientists work in a
study area, even if they write a spectacularly important paper, it can
still receive a small fraction of the citations a comparably important
paper receives in an area where more researchers are involved.
It would be unreasonable and misleading to conclude that a leading
researcher in a small field had less scientific impact than one in a big
field. Also, as can be seen from this
table, a paper that has been published for a short time has naturally
had less time to be cited than one that has been out for longer.
Normalised citation impact
For these reasons, citation analysts use the normalised citation
impact (NCI) to adjust citation volumes in different fields. This allows
them to be compared in analyses of entire research funding schemes,
national research and international activity.
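The normalisation idea can be sketched in a few lines: a paper's citation count is divided by the average count for papers in the same field and publication year, so a score of 1.0 equals the world average for that field. The papers and citation counts below are made-up illustrations; real NCI calculations, such as those behind the MRC report, draw on large bibliometric databases and apply further adjustments.

```python
# Illustrative sketch of a normalised citation impact (NCI) calculation.
# All papers and citation counts here are hypothetical examples.
from statistics import mean

# Hypothetical papers: (field, publication year, citation count)
papers = [
    ("immunology", 2010, 40),
    ("immunology", 2010, 10),
    ("health economics", 2010, 8),
    ("health economics", 2010, 2),
]

# Compute the average citation count for each (field, year) group.
groups = {}
for field, year, cites in papers:
    groups.setdefault((field, year), []).append(cites)
baselines = {key: mean(vals) for key, vals in groups.items()}

# NCI = paper's citations / field-year baseline; 1.0 is the world average.
for field, year, cites in papers:
    nci = cites / baselines[(field, year)]
    print(f"{field} ({year}): {cites} citations -> NCI {nci:.2f}")
```

Note how the immunology paper with 40 citations and the health economics paper with 8 citations end up with the same NCI, even though their raw counts differ five-fold: that is exactly the field-size distortion the normalisation is meant to remove.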
The MRC report provides a graph
showing the NCIs for all MRC-funded research publications for the
sample period 2006-13. The average NCI for all papers in health
and medical fields across these eight years was a seemingly modest
2.08 - yet this was more than twice the world average.
The report notes that of more than six million papers, more than a
fifth (21%) had not been cited, while only 3% of MRC-funded research
had no citations to date.
What counts as a high and very high citation count in the MRC data is
also interesting: “highly cited” means a paper with just four or more
citations, and “very highly cited” is anything with eight or more.
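Those thresholds can be expressed as a small classification function. This is only a sketch of the bands as the MRC report describes them; the “uncited” and “cited” labels for papers below the four-citation threshold are illustrative additions, not categories from the report.

```python
# Classify papers using the citation bands reported by the MRC:
# 4+ citations = "highly cited", 8+ = "very highly cited".
# The lower bands ("uncited", "cited") are illustrative assumptions.
def classify(citations: int) -> str:
    """Label a paper by citation count using the MRC report's bands."""
    if citations >= 8:
        return "very highly cited"
    if citations >= 4:
        return "highly cited"
    if citations == 0:
        return "uncited"
    return "cited"

for n in (0, 3, 5, 12):
    print(f"{n} citations -> {classify(n)}")
```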
The situation in Australia is more opaque. A 2013 report
provides lots of comparative data showing that Australian health and
medical researchers punch well above their weight compared to other
nations, given Australia's small population.
But nowhere could I find data comparable to that provided by the MRC report. However, the NHMRC report notes:
The citation distribution among publications is very skewed. While
very few publications achieve high citation counts, a vast majority
receive very few or no citations at all.
Citation is, of course, only a measure of interest in your work by other researchers.
Journals that provide all or some of their papers as open access
often have daily updated counters showing the number of readers who
have visited the publication. Not all of these visitors will have read
the paper; some land on it while searching for something else, so the
data exaggerate actual readership.
My most-read paper is one looking at the incidence of gun massacres and deaths in the ten years after the 1996 Australian gun law reforms.
It has received some 136,946 views since 2006, 82,312 of which came in
December 2012 in the aftermath of the Sandy Hook school shootings in the
United States, after I tweeted a link to the paper and it went viral.
Ironically, the paper wasn’t funded by any research grant.
Altmetric
A metric increasingly used by researchers since 2013 to demonstrate
wider interest in their work is Altmetric. The Altmetric score provides
an index of the extent to which a research paper is being circulated
and discussed on social media and covered in the news media.
My 2006 firearms paper has a stratospheric Altmetric score of 2,118.
The 100 highest Altmetric scoring papers across all research fields in
2015 are listed here.
Had mine been published in 2015 with the same attention it has received,
it would have had the seventh-highest Altmetric score that year.
The MRC report provides data on a range of social impacts that go
well beyond citation metrics, assessing the direct impact of research
on outcomes of social and economic value. These include:
- More than 4,400 instances of influence on policy and practice - 416 new in 2014, including 472 citations in clinical guidelines.
- More than 1,000 products and interventions developed - 126 new in 2014.
- The creation or growth of 88 companies - seven new in 2014.
- Approximately 1,081 patents - 37 filed or granted in 2014, with discoveries related to 246 (23%) of these patents already licensed worldwide.
In a study we published in 2015 examining the “impact in society” of
NHMRC-funded intervention research in health and medicine, we found 38%
of research projects could demonstrate some level of social or health
service impact.
We investigated the characteristics of those projects that demonstrated impact and compared them to those that didn’t.
Our study indicated that sophisticated approaches to intervention
development, dissemination and translation are in fact widespread
among experienced researchers, and can achieve policy and practice
impacts.
However, it was the links between the intervention results, further
dissemination actions by researchers and a variety of contextual factors
after the research that ultimately determined whether a study had
policy and practice impacts.
Given the complicated interplay between various factors, there (alas)
appears to be no simple formula for determining which intervention
studies should be funded in order to achieve optimal policy and practice
impacts.
The judgement of which research applications should be funded is, and is likely to remain, a very inexact science.