Sunday, 24 July 2016


 Source: http://www.ascb.org/a-new-and-stunning-metric-from-nih-reveals-the-real-nature-of-scientific-impact/

A New and Stunning Metric from NIH Reveals the Real Nature of Scientific Impact





What if I told you that nearly 90% of the publications that have
profoundly influenced the life sciences did not appear in a
high-impact-factor journal? If you signed the San Francisco Declaration on Research Assessment (DORA),
you probably aren’t surprised. If you haven’t signed DORA, it may be
time for you to reconsider the connection between true breakthrough
papers and so-called journal impact factors (JIFs).


Today we received strong evidence that significant scientific impact
is not tied to the publishing journal’s JIF. First results from a
new analytical method that the National Institutes of Health (NIH) is
calling the Relative Citation Ratio (RCR) reveal that almost 90% of
breakthrough papers first appeared in journals with relatively modest
impact factors. According to the RCR, these papers exerted major
influence within their fields, yet their impact was overlooked, not
because they were irrelevant, but because of the widespread use of the
wrong metrics to rank science.


In the initial RCR analysis carried out by NIH, high-impact-factor
journals (JIF ≥ 28) account for only 11% of papers with a high RCR (3
or above). Here is hard evidence for what DORA supporters have been
saying since 2012: using the JIF to credit influential work means
overlooking the other 89% of similarly influential papers published in less
prestigious venues.


The RCR is the creation of an NIH working group led by George
Santangelo in the Office of the NIH Director. Santangelo has just
posted an article describing the RCR metric on bioRxiv, the Cold Spring Harbor preprint repository. I believe that the Santangelo proposal would significantly advance research assessment.


This marks a significant change in my own thinking. I am firmly
convinced that no single metric can serve all purposes. There is no
silver bullet in research evaluation; qualitative review by experts
remains the gold standard for assessment. And yet I would bet that this
new metric will gain currency, contributing to a new and better
understanding of impact in science. The RCR gives us a new, sophisticated analytical tool,
one that I hope will drive another nail into the coffin of the phony metric
of the journal impact factor. As I and many others have said many times,
the JIF is the wrong way to assess article-level impact or, even worse,
individual productivity.


So, what is this new RCR metric? The Relative Citation Ratio may seem
complicated at first glance, but the concept is simple and very clever.
Key to the RCR is the concept of the co-citation network. In essence, this
new metric compares the citations an article receives against a custom-built
citation network relevant to that particular paper. The
relevant network is defined as the entire collection of papers
referenced by the papers that cite the article in question. This network
underlies the denominator of the RCR, while the numerator is simply
the citations received by the article itself.
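
To make the idea concrete, here is a minimal sketch in Python of how such a co-citation network could be assembled. The dictionaries of citation links are hypothetical stand-ins for the PubMed citation data NIH actually uses; this is an illustration of the concept, not NIH’s implementation.

```python
def co_citation_network(article, cited_by, references):
    """Collect the papers that define the 'field' of `article`:
    everything referenced by the papers that cite `article`.

    cited_by[p]   -> set of papers that cite paper p
    references[p] -> set of papers that paper p cites
    """
    network = set()
    for citing_paper in cited_by.get(article, set()):
        network |= references.get(citing_paper, set())
    network.discard(article)  # the article is not part of its own field
    return network

# Toy example: A is cited by X and Y; X also cites B and C, Y cites C and D.
cited_by = {"A": {"X", "Y"}}
references = {"X": {"A", "B", "C"}, "Y": {"A", "C", "D"}}
print(sorted(co_citation_network("A", cited_by, references)))  # ['B', 'C', 'D']
```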


The values used to calculate the denominator from the defined citation
network are based on the journal citation rate (JCR), the same quantity
used to compute the journal impact factor. But it is important to note that
the RCR, besides being built on the co-citation network, places this
journal metric in the denominator, NOT in the numerator.
This makes the RCR a robust article-level metric, normalized both to the
citations in the custom-built field of relevance and to the
citations that journals in that network are expected to receive. The RCR is then
scaled to make comparisons easier; to do so, the authors use the
cohort of NIH R01-funded papers as the benchmark set.
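
As a rough illustration of how these pieces fit together, here is a hedged sketch: the article’s citation rate divided by an expected rate derived from the field citation rate, with the expectation calibrated on the R01 benchmark cohort. The simple ordinary-least-squares fit and all the numbers below are my own simplifications; the exact regression NIH uses is specified in the Santangelo group’s preprint.

```python
from statistics import mean

def field_citation_rate(journal_rates):
    # Average citations/year of the journals represented in the
    # article's co-citation network (input to the RCR denominator).
    return mean(journal_rates)

def fit_benchmark(benchmark_acr, benchmark_fcr):
    # Fit expected article citation rate (ACR) as a linear function of
    # field citation rate (FCR) over the NIH R01-funded benchmark cohort.
    mx, my = mean(benchmark_fcr), mean(benchmark_acr)
    slope = (sum((x - mx) * (y - my) for x, y in zip(benchmark_fcr, benchmark_acr))
             / sum((x - mx) ** 2 for x in benchmark_fcr))
    return slope, my - slope * mx

def rcr(acr, fcr, slope, intercept):
    # Citations/year the article actually receives, relative to what an
    # average R01-funded paper in the same field would be expected to get.
    return acr / (slope * fcr + intercept)

# Toy numbers: an article cited 12 times/year in a field whose journals
# average 4 citations/year, against a made-up benchmark cohort.
fcr = field_citation_rate([3.0, 4.0, 5.0])
slope, intercept = fit_benchmark([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
print(rcr(12.0, fcr, slope, intercept))  # 1.5: 50% above field expectation
```

An RCR of 1.0 thus means a paper is performing exactly as expected for an R01-funded paper in its field, which is what makes values comparable across fields.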


NIH will provide full access to the algorithms and data needed to calculate
the RCR, making this a highly transparent and accessible tool for the
whole scientific community. This is a fundamental change in assessment,
and it is incredibly exciting.
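
For readers who want to look up RCR values for their own papers once NIH’s data service is available, a hedged sketch of what a lookup might look like is below. The iCite endpoint URL and the relative_citation_ratio field name are assumptions based on my understanding of NIH’s public tool; verify both against NIH’s current documentation before relying on them.

```python
import json
from urllib.request import urlopen

def fetch_rcr(pmids):
    # Query NIH's iCite service for RCR values by PubMed ID.
    # Endpoint and field names are assumptions; check the live docs.
    url = "https://icite.od.nih.gov/api/pubs?pmids=" + ",".join(map(str, pmids))
    with urlopen(url) as resp:
        payload = json.load(resp)
    return {rec["pmid"]: rec.get("relative_citation_ratio")
            for rec in payload["data"]}

# Example with placeholder PubMed IDs:
# print(fetch_rcr([12345678, 23456789]))
```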


I am not a bibliometrician, so I don’t pretend to have all the skills
to evaluate the metric’s algorithm in detail. But I am familiar with
research evaluation and, after reading this paper carefully, I am
convinced it adds something important to our toolbox in the thorny field
of research assessment. I am reminded of something that the legendary
Nobel laureate Renato Dulbecco once told me: based on the JIF metric, not
only would Dulbecco never have been awarded the Nobel Prize, but he
probably would never have gotten a job, since his landmark papers were
published in rather obscure journals.


This is exactly the point underscored by this new RCR analysis.
Highly innovative ideas, such as new concepts, technologies, or methods, are often
of immediate interest only to a very small group of scientists within
a highly specialized area. These seemingly arcane advances attract
little notice outside that subfield. Yet on the meandering roads of
research, an obscure breakthrough with seemingly little relevance to
outsiders may reorient an entire field. What began as a curiosity-driven
observation reported in an obscure journal may roll on to become a
landmark discovery. I believe the RCR addresses this problem. My
concerns about miracle metrics were assuaged by NIH’s careful
benchmarking of the RCR against expert qualitative review of
RCR-scored papers, which found strong concordance.


After years of blasting one-size-fits-all metrics, I find myself in
the uncomfortable position of cheering for a new one. Yes, the RCR must
be road tested further. It must be tried in multiple fields and
situations, and modified, if necessary, to address blind spots. And I
still hold that qualitative review by experts remains the gold standard
for individual assessment. But from this early report by the Santangelo
group, I am convinced that here is a metric that reflects how science
really evolves: in laboratories, at scientific meetings, and in obscure
journals. It evaluates science by putting discoveries into a meaningful
context. I believe that the RCR is a road out of the JIF swamp.


Well done, NIH.



Stefano Bertuzzi

Dr. Stefano Bertuzzi is the Executive Director of
the American Society for Cell Biology. In this position he is
responsible, with the ASCB Board, for strategic planning and all
operations at the Society to serve the needs of its ~9,000 members and
to promote the field of cellular biology and basic science. Email: sbertuzzi@ascb.org




