The Journal Impact Factor (JIF) is the brainchild of Eugene Garfield,
the founder of the Institute for Scientific Information, who devised
this citation metric in 1955 to help librarians prioritize their
purchases of the most important journals. The idea of quantifying the
‘impact’ by counting citations led to the creation of the prestigious
journal rankings, which have been recorded annually in the Science
Citation Index since 1961 (
1). The JIFs are currently calculated annually by Thomson Reuters and published in the Journal Citation Reports (JCR).
The original formula of the JIF measures the average citation impact of articles published in a journal: the numerator counts citations received during a one-year window, while the denominator counts the ‘citable’ articles published during the 2 preceding years.
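In schematic form, the formula for a given JCR year y can be written as follows (a standard rendering of the definition above):

\[
\mathrm{JIF}_{y} = \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{citable items published in years } y-1 \text{ and } y-2}
\]

For instance, the 2015 JIF of a journal equals its 2015 citations to 2013–2014 content, divided by the number of citable items it published in 2013–2014.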
To get the JIF, a journal should be accepted for coverage by citation
databases of Thomson Reuters, such as the Science Citation Index
Expanded, and remain in the system for at least three years. Although
there are no publicized criteria, influential new journals occasionally
get their first (partial) JIF for a shorter period of indexing by
Thomson Reuters databases (
2).
Thomson Reuters' citation databases were initially designed to serve the regional interests of their users in the U.S. Sources in English were preferentially accepted for coverage, and the JIFs were published to compare the ‘importance’ of journals within a scientific discipline.
Nonetheless, the JIFs have gradually become yardsticks for ranking
scholarly journals worldwide, and their use has expanded far beyond the
initial regional and disciplinary limits (
3).
The issue of uses and misuses of the JIFs is a hot topic in itself. The dynamics and patterns of global interest in the issue can be explored by
a snapshot analysis of searches through Scopus, which is the most
comprehensive multidisciplinary database. As of November 6, 2016, there are 4,003 indexed items tagged with the term “Journal Impact Factor (JIF)” in their titles, abstracts, or keywords, with a date range of 1983 to 2016. A steady increase of the indexed items starts from 2000 (n = 10) and reaches its peak in 2013 (n = 645) (Fig. 1). The top 5 periodicals that actively publish relevant articles are
PLOS One (n = 111),
Scientometrics (n = 105),
Nature (n = 50),
J Informetrics (n = 41), and
J Am Soc Inform Sci Technol
(n = 26). The top 3 most prolific authors in the field are renowned experts in research evaluation and scientometrics: Bornmann L (n = 22), Smith DR (n = 17), and Leydesdorff L (n = 14). Among the most prolific countries, the U.S.A. is the absolute leader with 904 published documents. Importantly, the overwhelming majority of the articles cover issues in the medical sciences (n = 2,968, 74%). A large proportion of the items are editorials (n = 1,477, 37%). Most of the documents are in English (n = 3,595), followed by those in Spanish (n = 167), German (n = 110), Portuguese (n = 79), and French (n = 39).
Finally, the 2 most-cited articles on the JIFs (cited 893 and 391 times, respectively) are authored by its creator, Eugene Garfield (1, 4).
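A comparable snapshot can be reproduced through the Scopus advanced search interface. For example, a query of the form TITLE-ABS-KEY("journal impact factor") AND PUBYEAR > 1982 AND PUBYEAR < 2017 restricts the search to titles, abstracts, and keywords over the same date range; this query is an illustrative reconstruction, not necessarily the exact search string behind the figures above.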
The JIFs and related journal rankings in the JCR have enormously influenced editorial policies across academic disciplines over the past few decades. The growing importance of journals published in the U.S.A. and Western Europe has marked a shift toward the prioritization of articles in English (5, 6), sending a strong message to non-English periodicals: change the language, cover issues of global interest, or perish. A large number of articles from non-Anglophone countries across scientific disciplines, and particularly those with a country name in the title, abstract, or keywords, unduly end up in low-impact periodicals and do not appeal to citing authors, who preferentially reference articles in high-impact journals (
7, 8).
Editors and publishers, facing harsh competition in the publishing market, are forced to align their priorities with the citation chances of scholarly articles and ‘hot’ topics (
9). Several quantitative analyses have demonstrated that randomized controlled trials (
10) and methodological articles are highly cited (
11), and that systematic reviews receive more citations than narrative ones (
12).
Relying on these analyses, most journal editors have embarked on
rejecting ‘unattractive’ scientific topics and certain types of
articles. High-impact journals, and particularly those from the U.S., have boosted their JIFs by preferentially accepting submissions from ‘big names’ in science, systematic reviews and meta-analyses, reports on large cohorts and multicenter trials, and practice guidelines.
Some established publishers have also decided to limit or ban
entirely items that receive few citations (e.g., short communications,
preliminary scientific reports, case studies) (
13).
Clinical case reports with enormous educational value for medical students and physicians but low citation records have fallen out of favor and disappeared from most high-impact medical journals. Many young researchers and students have likewise been ousted from the mainstream high-impact periodicals. All these subjective factors and the ‘obsession’ with impact factors have created a citation-related publication bias, with the discontinuation of journals lacking a JIF as an extreme measure.
The ‘obsession’ with articles attracting abundant citations may also be the trigger of the current unprecedented proliferation of systematic reviews (14), most of which are of low quality and even harmful to the accumulation of scientific evidence (
15, 16, 17).
Academic promotion, grant funding, and rewarding schemes across most developed countries and emerging scientific powers currently rely heavily on where, rather than what, authors publish. Fallaciously, getting an article published in a high-impact journal is viewed as a prerequisite for academic promotion and research grant funding. Many researchers list their articles on individual profiles covering a certain period of academic activity, along with the JIFs of the publishing journals, even though these metrics change dynamically over time (18).
Likewise, ResearchGate™, the global scholarly networking platform, calculates its scores of publication activity in connection with the JIFs. The JIFs of the target journals are still inappropriately employed by research evaluators as proxies of quality. In China, for example, bonuses paid to academics depend on the category of the target journal, which is calculated as an average of its JIFs over the last three years (19). In the leading Chinese universities, distinctive monetary reward schemes push authors to submit to and publish more in Nature, Science, and other high-impact journals (20).
An analysis of more than 130,000 research projects funded by the U.S. National Institutes of Health revealed that reviewers gave higher scores to proposals promising influential output in terms of high JIFs and citation counts, but not necessarily innovative ideas (
21).
The decades-long overemphasis placed on the JIFs has evolved into a grossly incorrect use of the term “impact factor” by bogus agencies that have sprung up. These ‘predatory’ agencies claim to assess the impact of journals and calculate metrics that often mimic those of Thomson Reuters but do not take into account indexing in established databases or citations from indexed journals (
22).
Predatory journals often display misleading or fake metrics on their
websites to influence inexperienced authors' choices of the target
journals (
23).
GLOBAL INITIATIVES AGAINST MISUSES
To a certain degree, the decades-long global competition for obtaining and increasing the JIFs has helped improve the quality of indexed periodicals and, subsequently, attract professional interest and citations (
24).
However, the long absence of alternative metrics has led to a monopoly and to misuses of the JIFs. Journals publishing a single or a few highly cited articles and thereby boosting their JIFs in the two succeeding years have gained an advantage over competing periodicals (
25).
Deplorably, some journal editors have also embarked on coercive citation practices that unethically boosted their JIFs and adversely affected the whole field of scientometrics (
26, 27).
Additionally, a thorough analysis of impressive increases of the JIFs of a cohort of journals in 2013–2014 (increase > 3, n = 49) revealed manipulation through shrinking publication output, which decreased the article counts in the denominator of the JIF formula (
28).
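A hypothetical example illustrates the arithmetic of such manipulation. Suppose a journal's items from the 2 preceding years attracted 300 citations across 200 citable articles, giving a JIF of 300/200 = 1.5. If the journal cuts its output to 120 citable articles by dropping the least-cited article types, and its citation count falls only to 270, the JIF rises to 270/120 = 2.25 without any real gain in impact. All numbers here are invented for illustration.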
Curiously, despite the seemingly simple methodology of calculating
the JIF, values of metrics presented in the JCR often differ from those
calculated by editors and publishers themselves (
29).
All these and many other deficiencies of the JIF have prompted several campaigns against its monopoly and misuses. The San Francisco Declaration on Research Assessment (DORA), which was developed by a group of editors and publishers at the Annual Meeting of the American Society for Cell Biology in 2012, encouraged interested parties across all scientific disciplines to improve the evaluation of research output and to avoid relying on the JIFs as proxies of quality (
30).
The Declaration highlighted the importance of crediting research works based on their scientific merits rather than the JIFs of the publishing journals. It also called for discontinuing the practice of tying grant funding and academic promotion to JIFs. The organizations that issue journal metrics were urged to publicize their data transparently, allowing unrestricted reuse and recalculation by others.
A series of opinion pieces and comments on journal metrics, which were recently published in
Nature, heralded a new powerful campaign against misuses of the JIFs (
31).
First of all, it was announced that several influential journals of the
American Society for Microbiology would remove the JIFs from their
websites (
32). An analysis of the distributions of citations contributing to the JIFs of Nature, Science, and PLOS One emphasized that average citation values do not reveal the real impact of most articles published in these journals. For example, 78% of Nature articles were cited fewer times than its latest impact factor of 38.1. Displaying citation distributions and drawing readers' attention to highly cited articles were considered more appropriate for assessing a journal's standing than simply publicizing the JIFs (33).
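A minimal numerical sketch, using entirely hypothetical citation counts, shows how a few highly cited papers pull a JIF-style average far above what a typical article achieves:

```python
# Hypothetical citation counts for 10 articles in one journal's 2-year window.
# Two 'blockbuster' papers dominate; the rest are cited only a few times.
citations = [2, 3, 3, 4, 5, 6, 8, 10, 150, 400]

mean = sum(citations) / len(citations)            # the JIF-style average
median = sorted(citations)[len(citations) // 2]   # what a 'typical' article gets

below_mean = sum(1 for c in citations if c < mean)
print(f"mean = {mean:.1f}, median = {median}")                      # mean = 59.1, median = 6
print(f"{below_mean} of {len(citations)} articles cited below the mean")  # 8 of 10
```

This skewness is exactly why most Nature articles fall below the journal's average, and why plotting the distribution is more informative than a single JIF value.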
Editors of
Nature strongly advised against replacing the opinions of peer reviewers with citations and related quantitative metrics when evaluating grant applications and publications (
34).
Paying more attention to what is new and important for public health, rather than relying on surrogate metrics and the prestige of target journals, was considered a more justified approach to the academic promotion of authors (
35).
Finally, ten principles of research evaluation (The Leiden Manifesto) were published in
Nature to guide research managers on how to use a combination of quantitative and qualitative tools (
36).
The Leiden Manifesto called to protect locally relevant research, which
can be published in non-English and low-impact media, particularly in
the fields of social sciences and humanities. It pointed to the
differences in publication and citation practices across disciplines
that should not confound crediting and promotion systems; books,
national-language literature, and conference papers can be counted as
highly influential sources in some fields.
EMERGING ALTERNATIVE FACTORS OF THE IMPACT
The digitization of scholarly publishing has offered numerous ways
for increasing the discoverability of individual articles and improving
knowledge transfer (
Box 1).
The systematization of searches through digital platforms and databases has emerged as the main factor of scholarly influence. Authors and editors alike are currently advised to carefully edit the titles, abstracts, and keywords of their articles to increase their discoverability and related impact (
37).
Importantly, a recent analysis of 500 highly-cited articles in the
field of knowledge management revealed a positive correlation between
the number of keywords and citations (
38). The same study pointed to the value of reference counts and page numbers for predicting citations.
Box 1. Factors of the journal impact and importance
- Discoverability of journal articles by search engines through properly structured titles, abstracts, and keywords
- Citations received by journal articles over a certain period of time, as tracked by the Scopus or Web of Science databases
- Downloads of journal articles within a certain period of time
- Attention to the journal by social media (e.g., Twitter, Facebook), blogs, newspapers, and magazines
- Journal endorsements and support by professional societies
- Completeness and adherence to ethical standards in the journal instructions
Experts advocate shifting from traditional JIF-based evaluations to schemes combining qualitative and quantitative metrics for scholarly sources (
39).
Citation counts from prestigious citation databases, such as Web of
Science and Scopus, and related arithmetic metrics will remain the
strongholds of journal ranking in the years to come (
40).
Following a recent debate over the distribution of citations
contributing to the JIFs, it is likely that citation metrics will be
accompanied by plots depicting most and least cited items (
41).
An argument in favor of a combined approach to impact particularly concerns individual articles that are published in journals with low or declining JIFs but are still actively downloaded and distributed among professionals, most of whom read but never publish papers (
42, 43).
The combined approach has already been embraced by Elsevier, which displays the 25 most downloaded articles along with citation metrics from Web of Science and Scopus on its journal websites. Although there is no linear correlation, downloads reveal the interest of the professional community and may predict citations (
44, 45).
Some established publishers, such as Nature Publishing Group and
Elsevier, have gone further and started providing their readers with
more inclusive information about the use of individual articles by
combining citation metrics and downloads with altmetric scores (
46).
The altmetric score is a relatively new multidimensional metric, which was proposed in 2010 to capture broad online attention to research output from social media, blogs, and scholarly networking platforms (
47).
Essentially, the enhanced online visibility of articles may attract
views, downloads, bookmarks, likes, and comments on various networking
platforms. Pilot studies of Facebook “likes” and Twitter mentions have
pointed to an association between social media attention and traditional
impact metrics, such as citations and downloads, in the fields of
psychology and psychiatry (
48, 49) and emergency medicine (
50).
Although no such association has been reported across many other fields
of science, wider distribution of journal information through social
media holds promise for distinguishing popular and scientifically
important research output (
51, 52, 53).
With the rapid growth of numerous online publication outlets,
reaching out to relevant readers and evaluators is becoming a critical
factor of impact. Emerging evidence suggests that periodicals affiliated with and endorsed by relevant professional societies gain an advantage and attract more citations (
54).
Affiliation with a professional society is advantageous in terms of maintaining a flow of relevant submissions from the membership and the continuous support of the scientific community, both valued by prestigious indexing services. There are even suggestions to preferentially submit articles to journals supported by professional societies, regardless of their JIFs. Such an approach can be strategically important for circumventing substandard open-access periodicals (
55, 56).
Finally, several studies have examined the relationship between JIF
and completeness of the journal instructions with regard to research and
publication ethics (
57, 58, 59). In a landmark comparative analysis of the instructions of 60 medical journals with JIFs above 10 (e.g.,
Nature, Science, Lancet) and below 10 for the year 2009 (e.g.,
Gut, Archives of Internal Medicine, Pain), ethical considerations scored significantly better in periodicals with higher JIFs (
57).
The results of the study pointed to the importance of mentioning research reporting guidelines, such as STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) and Consolidated Standards of Reporting Trials (CONSORT), conflicts of interest, local ethics committee approval, and patient consent for increasing the impact and attractiveness of journals for authors. Similar results
were obtained in a subsequent analysis of the instructions of
radiological (
58), but not medical laboratory journals (
59).
Despite the differences across the journals, it can be concluded that upgrading ethical instructions in line with the examples of flagship multidisciplinary and specialist journals is rewarding in terms of attracting the best possible and most complete research reports (
60).
The lasting debates over the JIF, its uses, and misuses highlight several points of interest to all stakeholders of science communication. First of all, authors are currently offered numerous options for choosing the best target journals for their research. The JIFs may influence their choices along with other journal metrics and emerging alternative factors of impact. They should realize that not all journals with JIFs are up to high ethical standards, and that some periodicals without JIFs but with the support of professional societies can be better platforms for relevant research. Journals accepting locally important articles in English or national languages can still be influential and useful (
61).
Journal editors have an obligation toward their authors to widely
distribute relevant information to increase the use of the articles and
attract citations. Social media and scholarly networking platforms can
be instrumental in this regard. Regularly revising and upgrading journal instructions may also improve the structure and ethical soundness of the publications and translate into better discoverability and attractiveness for indexing services (60). Editors who aim to boost the JIFs should not underestimate the importance of publishing different types of articles, regardless of their citation chances. Manipulating the number of articles counted in the denominator of the JIF formula cannot be considered the best service to the authors.
Indexers of Thomson Reuters databases should respond to arguments
that point to the need for revising the original formula of the JIF (
62, 63).
Remarkably, editorials and letters, the so-called noncitable items, which have long been excluded from the denominator of the JIF, have changed their influence over the past decades. These items, particularly in modern biomedicine, contain long lists of references, affecting the JIF calculations in many ways. It should also be stressed that the lack of transparency of the JIF calculations, which is partly due to the lack of open access to citations tracked by Thomson Reuters databases (64), damages the reputation of the JIF as a reliable and reproducible scientometric tool.
Finally, research evaluators should consider the true impact of scholarly articles, which is determined by their novelty, methodological quality, ethical soundness, and relevance to the global and local scientific communities.