Wednesday, 10 February 2016

Source: http://scholarlykitchen.sspnet.org/2016/02/10/citable-items-the-contested-impact-factor-denominator/

Citable Items: The Contested Impact Factor Denominator

[Figure: The Rise of Editorial Material. Papers published in the NEJM, JAMA, The Lancet, and The BMJ. Data source: Web of Science.]

Discussing the Journal Impact Factor inevitably leads one down a rabbit hole. While the numerator of the ratio (total citations to the journal) is clear enough, the denominator (citable items) causes great confusion, and getting a clear answer to its construction requires real work.


This post is about the Impact Factor denominator: how it is defined, why it is inconsistent, and how it could be improved.
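
For readers who want the mechanics spelled out, here is a minimal sketch of the calculation in Python. The figures are invented, but the asymmetry is real: the numerator counts citations to everything the journal published, while the denominator counts only the items deemed citable.

```python
# Minimal sketch of a 2014 Impact Factor calculation. All figures are
# invented for illustration.

# Numerator: citations received in 2014 to ANY content the journal
# published in 2012 and 2013, including editorials and news.
citations_2014 = 12_000

# Denominator: only the 2012-2013 items classified as citable
# (i.e., the "Article" and "Review" document types).
citable_items_2012_2013 = 900

impact_factor = citations_2014 / citable_items_2012_2013
print(f"2014 Impact Factor: {impact_factor:.3f}")  # 13.333
```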


In their paper, The Journal Impact Factor Denominator: Defining Citable (Counted) Items,
Marie McVeigh and Stephen Mann describe how Thomson Reuters determines
what makes a citable item. Their guidelines include such characteristics
as whether a paper has a descriptive title, whether there are named
authors and addresses, whether there is an abstract, the article length,
whether it contains cited references, and the density of those cited references.


The assignment of journal content into article types is not conducted for each new paper but is done at the section
level, based on an initial analysis of a journal and its content. For
example, papers listed under Original Research are assigned to the
document type “Article,” Mini-Reviews are assigned to “Review,” Editor’s
Choice, to “Editorial Material,” etc. The rules about how sections are
defined are kept in a separate authority file for each journal.
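
In effect, the authority file behaves like a lookup table keyed on section names. The sketch below is hypothetical, not Thomson Reuters' actual implementation; the section names simply echo the examples above.

```python
# A hypothetical authority file for one journal. Classification happens
# at the section level, so every paper inherits the document type of
# the section in which it appears.
authority_file = {
    "Original Research": "Article",
    "Mini-Reviews": "Review",
    "Editor's Choice": "Editorial Material",
}

def classify(section_name: str) -> str:
    # Sections missing from the authority file are the grey zone the
    # rest of this post is about: an indexer decides ad hoc.
    return authority_file.get(section_name, "unclassified")

print(classify("Original Research"))  # Article
print(classify("Perspectives"))       # unclassified
```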


The vast majority of journals are simple to classify, consisting mainly of original articles accompanied by an editorial, a bit of news, and perhaps a correction or obituary. For some journals, however, there exists a grey zone of article types (perspectives, commentaries, essays, highlights, spotlights, opinions, among others) that could be classified either as Article or as Editorial Material.


This is where the problem begins.


Journals change after their initial evaluation and some editors take
great liberties in starting new sections, if only for a single issue. In
the absence of specific instruction from the publisher, an indexer at Thomson Reuters will evaluate the new papers and determine how they should be classified, but does not update the authority file.


From time to time, Thomson Reuters will receive requests to
re-evaluate how a journal section is indexed. Most often, these requests
challenge the current classification schema and maintain that papers
presently classified as “Article,” and thus counted as citable, should really be classified as “Editorial Material,” which is not counted. A
reclassification from Article to Editorial Material does nothing to
reduce citation counts in the numerator of the Impact Factor calculation
but reduces the number in its denominator. Depending on the size of the
section, this can have a huge effect on the resulting quotient. For
elite medical journals, Editorial Material now greatly
outnumbers Article publication (see figure above).
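
A toy calculation makes this asymmetry concrete. The numbers below are invented, but the mechanics mirror what a successful reclassification request accomplishes.

```python
# Reclassifying k papers from "Article" to "Editorial Material" leaves
# the numerator untouched (citations to those papers still count) but
# shrinks the denominator. All numbers are invented for illustration.
citations = 10_000     # numerator: unchanged by reclassification
citable_items = 700    # denominator before reclassification
k = 200                # papers moved out of the "Article" class

before = citations / citable_items        # 14.286
after = citations / (citable_items - k)   # 20.000
print(f"Impact Factor: {before:.3f} -> {after:.3f}")
```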


Journal sections can evolve as well, swelling with content that
wasn’t there when the section was first classified and codified in the
journal’s authority file. The paper “Can Tweets Predict Citations?” reports original research and includes 102 references (67 self-citations were removed before indexing after I called this paper into question), but it was published as an editorial and indexed by Thomson Reuters as Editorial Material. The editor of JMIR often uses his editorial section for other substantial papers (see examples here and here).


Thomson Reuters’ approach of classifying by section and revising on demand can also lead to inconsistencies in how similar content is
classified across journals. For example, the Journal of Clinical Investigation (JCI) publishes a number of short papers (about 1000 words) called Hindsight, whose purpose is to provide historical perspectives on landmark JCI papers. Hindsight papers are classified as “Article,” meaning that they contribute to the journal’s citable item count.


Science Translational Medicine publishes a journal section
called “Perspectives” and another called “Commentary.” These papers are
generally a little longer and contain more citations than JCI’s Hindsight papers and are also classified as “Article.”


In contrast, PLOS Medicine publishes an article type called “Perspective,” which covers recent discoveries, often ones published in PLOS Medicine. While statistically similar to JCI’s Hindsight papers in article length and number of references, Perspectives are classified as “Editorial Material,” meaning they do not count towards PLOS Medicine’s citable item count. PLOS Medicine
also publishes papers under Policy Forum, Essay, and Health in Action,
all of which Thomson Reuters classifies as “Editorial Material.”


What would happen to the Impact Factors of these journals if we reclassified some of these grey categories of papers?


If we reclassified Hindsight papers in the Journal of Clinical Investigation as
“Editorial Material” and recalculated its 2014 Impact Factor, the
journal’s score would rise marginally, from 13.262 to 13.583. The
title would retain its third place rank among journals classified under
Medicine, Research & Experimental. If we reclassified Commentary and
Perspective papers in Science Translational Medicine as
“Editorial Material,” the journal’s Impact Factor would rise nearly 3
points, from 15.843 to 18.598. The journal would still retain second
place in its subject category. However, if we reclassified Perspective,
Policy Forum, Essay, and Health in Action papers in PLOS Medicine
from “Editorial Material” to “Article,” its Impact Factor would drop by
nearly half, from 14.429 to 8.447, giving it a standing similar to that of BMC Medicine (7.356).


Should these results be surprising?


If we consider that article classification at Thomson Reuters is
determined during an initial evaluation, based on guidelines and not
hard rules, is made at the journal section level rather than the
individual article level, and requires an event to trigger a reanalysis,
we shouldn’t be surprised by inconsistencies in article type
classification across journals.


While I have no doubt that Thomson Reuters attempts to maintain the
integrity and independence of their indexers — after all, trust in their
products, especially the Journal Citation Reports (the
product that reports journal Impact Factors) is dependent upon these
attributes — the process created by the company results in an
indeterminate system that invites outside influence. When this happens,
those with the resources and stamina to advance their classification
position have an advantage in how Thomson Reuters reports its
performance metrics.


Is it possible to reduce bias in this system?


To me, the sensible solution is to take the human element out of article type classification and put it in the hands of an algorithm. Papers would be classified individually at the point of indexing rather than by section from an authority file.
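
As a sketch of what explicit, per-article rules might look like, consider the characteristics McVeigh and Mann describe: named authors, an abstract, article length, cited references. The thresholds below are mine and purely illustrative; the point is that the rules are deterministic and applied to each paper, not to its section.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    has_named_authors: bool
    has_abstract: bool
    word_count: int
    reference_count: int

def is_citable(paper: Paper) -> bool:
    # Illustrative thresholds only; a production rule book would need
    # community-agreed values published alongside the metric.
    return (
        paper.has_named_authors
        and paper.has_abstract
        and paper.word_count >= 1500
        and paper.reference_count >= 10
    )

# A short historical perspective, in the style of JCI's Hindsight:
hindsight = Paper(has_named_authors=True, has_abstract=False,
                  word_count=1000, reference_count=8)
print(is_citable(hindsight))  # False: would not enter the denominator
```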


The indexing algorithm may be black-boxed, meaning that the rules
for classification are opaque and can be tweaked at will to prevent
reverse engineering on the part of publishers, not unlike the approach
Google takes with its search engine or Altmetric with its donut score. For
an industry that puts so much weight on transparency, this seems like
an unsatisfactory approach, however. When classification rules are
explicit and spell out exactly what makes a citable and non-citable
item, ambiguity and inconsistency are removed from the system. Editors
no longer need to worry about how new papers will be treated. When
everyone has the rules and the system defines the score, the back door
to lobbying is closed and we have a more level playing field for all
players.


Under such a transparent algorithmic model, editors may decide, on principle, to continue publishing underperforming material that drags down their ranking because this content (e.g. perspectives, commentary, policy and teaching resources, among others) serves important reader communities. An explicit rule book would also help editors modify such papers so that they can still be published but not counted as citable items, for example, by stripping out the reference section and including only inline URLs or footnotes.


Understandably, such a move would result in major adjustments to the
Impact Factors and rankings of thousands of journals, especially the
elite journals that produce the vast majority of grey content.
Historical precedent, however, especially when it comes to a metric that attempts to rank journals by their importance in the scholarly record, should not stand in the way of such necessary changes. Unlike many of my colleagues who call for the end of the Journal Impact Factor, I think it’s a useful metric and sincerely hope that the company is listening to constructive feedback and is dedicated to building a better, more authoritative product.



