Getting the Impact Factor Genie Back in the Box
Posted June 5, 2017
On occasion, The Official PLOS Blog presents Thought Leadership interviews with scientists leading the way on issues integral to the transformation of science communication and the advancement of Open Science. Previous interviewees include Bruce Alberts and Trevor Bedford. Here we present our conversation with Sandra Schmid of the University of Texas Southwestern Medical Center.
Over the years, Sandra Schmid has gained a reputation for academic strength and leadership, most recently as Professor and Chairman of the Department of Cell Biology at the University of Texas Southwestern (UTSW) Medical Center. She has also gained a reputation for candor on a range of issues, including the position of postdocs ("if it were a job, we'd pay you better and give you retirement benefits"), the training of faculty ("few of us as mentors, as Principal Investigators, were ever taught how to run a lab or how to mentor individuals") and how she participates in open discussion of research before publication ("mostly over beers").
Schmid has been particularly vocal about the misuse of journal impact
factors (JIFs) as a way to evaluate researchers and, as she claims,
“the unfortunate consequences to the scientific community of their
misuse.” At UTSW, Schmid’s home institution, there has been no overt
discussion among the leadership regarding JIFs and where faculty should
choose to publish. There is no formalized preference for high impact
journals. "In fact, we celebrated the founding of eLife [a journal that rejects the use of JIFs] and have faculty on the Editorial
Board of the journal,” says Schmid. The JIF was “never intended to and
indeed does not measure the quality or impact of the individual papers
in a journal,” says Schmid. It was originally developed and
commercialized by Eugene Garfield to help librarians decide on which
journals they should spend their subscription dollars.
“Individuals and institutions are being spuriously judged
– by other scientists, funders, governing bodies and administrators –
based indirectly on JIF, rather than directly on the quality and impact
of their work,” Schmid wrote in “Negative Consequences of the Misuse of Journal Impact Factors for Scientific Assessment” as part of the 8th Forum on the Internationalization of Sciences and Humanities.
Flawed Statistics
The JIF is calculated as the average number of citations received in a given year by the articles a journal published during the preceding two years. One major problem with the JIF is that citation counts are highly skewed: a small number of highly cited papers pulls the average upward, while most articles receive far fewer citations. With such skewed distributions, the average is a misleading summary of a typical paper. "Indeed," writes Schmid, there are journals that "flaunt
their JIF in marketing material to authors that would ironically not
accept papers reporting such flawed statistics.” This skewed
distribution was clearly demonstrated last year through a collaboration between researchers and publishers, including the Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science (see Measuring Up: Impact Factors Do Not Reflect Article Citation Rates). The analysis, posted on bioRxiv,
showed that the citation distributions of journals with clearly distinct impact factors overlap substantially; in other words, all journals publish many papers with similarly low citation counts and only a few highly cited papers.
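To make the statistical point concrete, here is a minimal Python sketch of how a skewed distribution lets a JIF-style mean misrepresent the typical paper. The log-normal shape and its parameters are invented for illustration, not drawn from real citation data:

```python
import numpy as np

# Simulate a hypothetical skewed citation distribution; the log-normal
# shape and parameters here are illustrative assumptions, not real data.
rng = np.random.default_rng(seed=1)
citations = rng.lognormal(mean=1.0, sigma=1.2, size=2000).astype(int)

# A JIF-style mean is inflated by a handful of highly cited papers...
print(f"mean (JIF-style): {citations.mean():.1f}")
# ...while the median reflects what a typical paper actually receives.
print(f"median: {np.median(citations):.0f}")
# Most papers fall below the mean, the hallmark of a skewed distribution.
print(f"papers cited below the mean: {(citations < citations.mean()).mean():.0%}")
```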
A Better Option: Citation Distributions
The authors of the bioRxiv analysis call for publishers to make publicly available the actual citation distributions of their journals' articles, rather than rely on irrelevant and misleading JIFs. Comparison across journals is also problematic because journals use many techniques to artificially increase their impact factor. These include publishing review articles (which are often more highly cited than the original research papers they review) and front matter such as commentary and mini-review articles (which generate citations for the JIF's numerator but are not counted as "citable" content in its denominator); a worked example of this asymmetry follows below. It is hoped that public disclosure of
article citation distributions will lead to more granular comparisons
and better informed decisions by authors on where to submit their work.
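As a hypothetical illustration of that numerator/denominator asymmetry (all numbers invented), here is the arithmetic that makes front matter attractive to a journal:

```python
# Hypothetical numbers, purely for illustration: how front matter inflates a JIF.
# Citations to ALL content count in the numerator, but only "citable items"
# (research articles and reviews) count in the denominator.
citations_to_research = 400      # citations to the journal's 200 research articles
citations_to_front_matter = 100  # citations to editorials, commentary, news
citable_items = 200              # front matter is excluded from this count

jif_research_only = citations_to_research / citable_items
jif_as_calculated = (citations_to_research + citations_to_front_matter) / citable_items

print(f"without front-matter citations: {jif_research_only:.1f}")  # 2.0
print(f"as actually calculated:         {jif_as_calculated:.1f}")  # 2.5
```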
Then and Now
From the perspective of a senior investigator with a long-established career and a history of publishing quality work at all tiers of influence, what has changed for Schmid is the basis for deciding where to publish. In the past, "journals had different purposes and different scopes," and that was good. Before there was the JIF, there
was an understanding of what journal went with what type of data. “We
sent our best biochemistry to Journal of Biological Chemistry; our best cell biology to Journal of Cell Biology.
If we happened upon a new and potentially important discovery, even
before we understood mechanism, we’d communicate it rapidly in Science and Nature
because they were three-figure papers." Before the advent of supplemental materials, meatier, in-depth studies were published in subject-specific journals without page limits.
When asked how researchers decide where to publish in the post-print era, Schmid replies, "That is the unfortunate part." Many of the decisions are driven by postdocs telling her about impact factors, although she cautions that "publishing in high impact factor journals doesn't mean it's high quality work." Early career researchers are using these numbers to distinguish between journals, says Schmid, so her efforts are focused on getting them to think more broadly. Her response and recommendation? First and foremost is to
choose the journal where the work will get in front of the audience that
matters the most. Schmid is crystal clear when outlining her main
considerations for deciding where to publish her work and the work from
her lab:
- Are the people who handle my paper able to identify qualified referees?
- Are the editors going to understand the discussion and criticisms and be helpful in handling my papers; do they understand my field?
- Do my peers read and respect the content in this journal?
Unintended Consequences
The real question for Schmid is how to get the "impact factor genie," as she calls it, "back in the box." Why is this so important? Scientists
and publishers often focus on the limitations of JIFs and the benefits
of evaluating work at the article rather than the journal level. However, the problem goes beyond the metric's statistical limitations. According to Schmid,
there are very “specific and unintended consequences of the abuse of JIF
as a tool for individual and institutional assessment.” Many of these,
she notes, are direct; others are subtle, downstream ramifications:
- Deferred communication of discoveries that might launch new fields, as reviewers and editors demand more information per paper
- Discouraged follow-up or augmentative studies to verify results, due to over-interpretation of findings for the purpose of artificially inflating a work's value
- Misguided evaluation of graduate students, postdoctoral fellows and junior faculty by their individual papers rather than the cumulative impact of their work in context
- Wasted time and resources spent satisfying unnecessary demands of reviewers and editors at high-impact journals
- Demoralized early career researchers forced to package an entire thesis or postdoctoral project into one comprehensive paper
A Better Option: Article-Level Metrics
Perhaps wanting to get that impact factor genie back in the box was more than a mistaken mixing of two idioms. The difficulty of reverting
to a situation that formerly existed (putting the genie back in the
bottle) combined with the repercussions of doing something that causes
unexpected and unintended negative consequences (opening a Pandora’s
box) does describe the situation the scientific community has with JIFs.
Fortunately, this is not an impossible situation to remedy. Article-Level Metrics
were developed by PLOS as a better means to assess research value in an
electronically networked world. They are gaining acceptance across a
broad swath of the scientific community, from scientists to funders, since they provide granularity, breadth and timeliness (PLOS ALMs are updated daily to monthly, depending on the source and the age of the individual article). ALMs also allow different scholarly research
outputs to be tracked, such as policy impact, datasets, software and
code. Schmid also recommends simply using PubMed as a portal for
assessing the influence of an article, stating, “from title to abstract
to download is a good metric,” although not as complete as a suite of
ALMs.
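As a concrete sketch of what article-level assessment can look like in practice, the snippet below queries a Lagotto-style ALM endpoint of the kind PLOS operated at the time. The URL, parameters, placeholder DOI and response fields are illustrative assumptions, not a verified API contract; consult the current documentation before relying on them.

```python
import requests

# Assumed Lagotto-style ALM endpoint and response shape; both are
# illustrative placeholders, not a documented, current PLOS API.
ALM_URL = "https://alm.plos.org/api/v5/articles"
doi = "10.1371/journal.pone.0000000"  # placeholder DOI for illustration

resp = requests.get(ALM_URL, params={"ids": doi, "info": "summary"})
resp.raise_for_status()

for article in resp.json().get("data", []):
    # Assumed summary fields: viewed, saved, discussed, cited.
    print(article.get("title"))
    print("viewed:", article.get("viewed"),
          "saved:", article.get("saved"),
          "discussed:", article.get("discussed"),
          "cited:", article.get("cited"))
```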
Leadership in Practice
In 2013, as Schmid took up the position of Chair of the Department of Cell Biology at UTSW, she offered an employer's manifesto (published as a Science Careers column) on the approach her department would take in hiring new
assistant professors. This manifesto promised “a better job of
screening applicants—and to avoid inappropriate criteria such as journal
impact factors.” The idea was to encourage applications from qualified
candidates who “might feel sidelined because their paper has yet to be,
or perhaps won’t be, published in a high-impact journal.” Schmid closed
her column with an enthusiastic “Let’s run this experiment!” Four years
later, she shared some of the results with PLOS. Using the department's Academic Jobs portal, the entire faculty is engaged in reviewing applicants, and every candidate who has piqued the interest of even one faculty member is interviewed via Skype, removing the need to reach a 'consensus' that might rely more on JIFs. The few candidates whose programs are most likely to thrive in the department's specific environment are then invited for a campus visit. Since taking this approach, "our new faculty are indeed thriving," says Schmid.
This approach suggests that reducing the emphasis on JIFs in favor of more constructive and meaningful measures of evaluation, both quantitative and qualitative, fosters an assessment program that is both fair and thoughtful. This is how science should be; if it works for people, it can work for research outputs as well.
*************
Sandra Schmid is Cecil H. Green Distinguished Chair in Cellular and
Molecular Biology, Professor and Chairman, Department of Cell Biology at
the University of Texas Southwestern Medical Center. She was
co-founding editor of Traffic, Editor-in-Chief of Molecular Biology of the Cell
and president of the American Society for Cell Biology. Schmid was
elected fellow of the American Association for the Advancement of
Science and Vice-Chair of the European Molecular Biology Laboratory
Scientific Advisory Committee.