Exhibition Prohibition — Why Shouldn’t Publishers Celebrate an Improved Impact Factor?

Source: https://scholarlykitchen.sspnet.org/2014/09/11/exhibition-prohibition-why-shouldnt-publishers-celebrate-an-improved-impact-factor/

[Image: Fireworks (Photo credit: Wikipedia)]

Recently, it seems some keepers of the zeitgeist are suggesting that
publishers should avoid promoting their impact factors. From the Declaration on Research Assessment (DORA)
to certain voices in academia, various monitors of journal etiquette
believe editors, journals, and publishers that promote their impact
factors are in some way participating in a ruse or are doing something
illegitimate. Some new journals, notably eLife, have publicly pledged not to promote their impact factors.


Such self-imposed restrictions seem akin to putting your head in the
sand — a form of avoiding reality, which for journals includes being
measured by things like the impact factor, circulation size, editorial
reputation, turnaround times, peer review standards, disclosure rules,
and more.


For me, journals wanting to promote their impact factors have no
reason to apologize and are actually serving the academic community by
making their impact factors known and easily obtained. After all, the
impact factor is a journal metric. It is not an author metric or an
article metric, but a journal metric. It simply states an average number
of citations per scholarly article over the past two years. So, if a
journal has an impact factor of 10, that means that, on average (with
some disputes around the edges over what’s included and what’s counted),
articles in that journal received 10 citations.
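
To make that arithmetic concrete, here is a minimal Python sketch of the two-year calculation using entirely made-up numbers; the official calculation has additional rules about which items count as "citable" and which citations are included, as noted above:

    # Hypothetical figures for a journal's 2014 impact factor (illustration only)
    citations_2014_to_2012_articles = 1200  # citations received in 2014 by items published in 2012
    citations_2014_to_2013_articles = 1050  # citations received in 2014 by items published in 2013
    citable_items_2012 = 110                # citable items (articles, reviews) published in 2012
    citable_items_2013 = 115                # citable items (articles, reviews) published in 2013

    impact_factor_2014 = (
        (citations_2014_to_2012_articles + citations_2014_to_2013_articles)
        / (citable_items_2012 + citable_items_2013)
    )
    print(impact_factor_2014)  # 10.0 -- on average, ten citations per article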


Just because some in academia can’t stop misusing it doesn’t change
the fact that it’s entirely appropriate for a journal to use this
measure.


So when Elsevier or the American College of Chest Physicians or Springer or Taylor & Francis or SAGE or Oxford University Press
promote their impact factors, they are doing something very appropriate
and helpful. And when Google — which has the impact factor calculation
at the heart of its PageRank algorithm, thank you very much — promotes
journal impact factors as a way of differentiating journals in search
listings, they are also adding a helpful and proper signal to their
search results.


Interest in the impact factor remains intense for authors. Last year, David
Crotty documented how posts that touch on the impact factor on this
blog routinely receive inordinate and sustained traffic, signifying a
persistent interest in the topic. As publishers are essentially
providing a service to academia, promoting this useful differentiator is
simply part of the service. Authors want to know it, so we make it
obvious.


Part of what makes impact factor promotion problematic for some seems
to stem from a misunderstanding of what the impact factor connotes —
and this misunderstanding fuels other problems I’ll discuss later. At
its base, the impact factor is just an average. Like other averages,
it’s easy to fall into the trap of thinking that every article received
the average number. But a batting average of .250 doesn’t mean that a
batter will get exactly one hit in every four at-bats. The batter may go
two or three games without a hit, and then get on a hot streak. An
average temperature of
80ºF on June 23rd doesn’t mean that the temperature will be 80ºF
annually on June 23rd. It may be 67ºF one year and 83ºF another, with a
bunch of seemingly aberrant temperatures in between. And because impact
factors are averages rather than medians, they are pulled upward by a few
highly cited papers, while most articles in the journal trail well behind.
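
To see why the mean and the median tell such different stories, here is a small Python sketch with invented citation counts for ten articles in a single journal; a couple of highly cited papers pull the average far above what the typical article receives:

    from statistics import mean, median

    # Invented citation counts for ten articles (illustration only)
    citations = [0, 1, 1, 2, 2, 3, 4, 5, 30, 52]

    print(mean(citations))    # 10  -- the impact-factor-style average
    print(median(citations))  # 2.5 -- what the typical article actually received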


Because of this, assigning impact to articles or authors is a
misapplication of the impact factor. Yet, it continues to be used
inappropriately in some settings, as a journal metric is quietly turned
into a metric for individual academics.


For editors, reviewers, authors, and publishers, a higher impact
factor is nearly always something to celebrate. While there are
illegitimate ways to achieve a higher impact factor — self-citation,
citation rings, and denominator manipulations — most impact factors
increase thanks to the hard work and careful choices of editors,
reviewers, and publishers. In some cases, a higher impact factor is
achieved after years of dedicated work, new resources, and careful
strategic choices. In short, a higher impact factor — one that increases
by more than the background rate of citation inflation — is typically well-earned.


The impact factor for our flagship journal recently increased more
than 30%. This increase is attributable to multiple improvements,
including greater editorial selectivity and focus, better brand
management, new product development, and stronger social media efforts
driving awareness of our content. This is a legitimate increase achieved
after years of coordinated editorial, publishing, and marketing
efforts. It feels like something to celebrate.


Journals with impact factors that rank well within their disciplines
are also the most desirable places to publish. They usually have
achieved a virtuous cycle of editorial reputation, important
submissions, strong review, careful selection, and consistently high
standards for publication. Publishing in a journal with these features is
a positive sign for a researcher.


Where things diverge from rationality is when the impact factor of
the journal is then assigned to the researcher in some way — an average
is used as a proxy. This cuts both ways. For authors of a highly cited
paper, the impact factor of the journal may under-represent the actual
citations for the paper. For authors of a more typical paper, the impact
factor will overstate the citations.


The majority of DORA is absolutely correct, including the point above, which DORA states as:


Do not use journal-based metrics, such as Journal Impact
Factors, as a surrogate measure of the quality of individual research
articles, to assess an individual scientist’s contributions, or in
hiring, promotion, or funding decisions.

But when DORA calls for publishers to cease promoting their impact
factors as the “ideal” way to solve the problems with academia’s misuse
of the metric, that seems an overreach. Publishers who buy into this
line of thinking aren’t doing anyone any favors, as their tacit
acceptance of a conflated use of the metric only muddies the waters
further and seems to confirm DORA’s overreach. If you’re running a
journal, you should know and make others aware of your impact factor.


Journal impact factors are useful, but like any tool, they should be
used correctly. They represent an average for a journal. They change.
They can be trended. There are other measures that can be used to
complement or contextualize them. Impact factors are journal-level
metrics, not article-level or researcher-level metrics.


So, if you have a good impact factor, promote it. But if you don’t
edit or publish a journal, don’t borrow the impact factor and use it to
imply something it wasn’t designed to measure.



