Sunday, 14 June 2015

10 simple strategies to increase the impact factor of your publication

 Source: http://www.smartsciencecareer.com/increase-the-impact-factor-of-your-publication/


Impact factors are heavily criticized as measures of scientific quality.
However, they still dominate every discussion about scientific
excellence. They are still used to select candidates for positions as
PhD student, postdoc and academic staff, to promote professors and to
select grant proposals for funding. As a consequence, researchers tend
to adapt their publication strategy to avoid negative impact on their
careers. Until alternative methods to measure excellence are
established, young researchers have to learn the “rules of the game”.
However, young scientists often need advice on how to achieve higher impact factors with their publications.

The importance of ‘excellent’ publications for the career of scientists

Young researchers are well-advised to
strive for publications in journals with high impact factors  –
especially if they are not sure yet whether they want to pursue a career
in academia or in the non-academic job market. Read more here: Do I need Nature or Science papers for a successful career in science? and What is the best publication strategy in science?

What are impact factors?

The impact factor of a scientific
journal is a measure reflecting the average number of citations to
recent articles published in that journal. The impact factor is
frequently used as a proxy for the relative importance of a journal
within its field. See this summary for details.
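As a concrete illustration (with invented numbers): the widely used two-year impact factor of a journal for a given year is the number of citations received in that year to items published in the two preceding years, divided by the number of citable items published in those two years.

    # Minimal sketch of the two-year impact factor arithmetic.
    # All numbers below are invented purely for illustration.
    citations_2015_to_2013_2014_items = 600   # citations received in 2015
    citable_items_2013_2014 = 200             # articles and reviews published in 2013-2014

    impact_factor_2015 = citations_2015_to_2013_2014_items / citable_items_2013_2014
    print(impact_factor_2015)  # 3.0

An impact factor of 3.0 therefore simply means that, on average, the journal's recent articles were each cited three times in that year.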

Are impact factors a good proxy for scientific quality?

There is considerable discussion in the scientific world about whether impact factors are a reliable instrument for measuring scientific quality.

Several funding organizations worldwide have started to reduce the influence of this parameter on their strategies to fund excellent science.
One of the many critical points is that impact factors describe the average quality of a journal and should not be used to judge single publications. In everyday lab talk we always speak of “the impact factor of a publication”, although the correct terminology would be “the impact factor of the journal where the paper has been published”. But we are lazy. I even used this misleading terminology in the title of this post.
:)

Are there better metrics to measure scientific quality?

Many alternatives to the impact factor have been suggested, for example the h-index (or h-factor), which is based primarily on citations rather than on the impact factor of the journal where a paper is published. However, most of these alternative metrics have their own disadvantages – especially for young researchers (see below).
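For readers unfamiliar with it, the h-index is the largest number h such that a researcher has at least h papers with h or more citations each. A minimal sketch in Python, with invented citation counts:

    def h_index(citations):
        # Largest h such that at least h papers have h or more citations.
        cited = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(cited, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3, 1]))  # 4 -> four papers have at least 4 citations each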
It is also important to note that there are many ways for journals to manipulate their impact factors. See this interesting article about easy strategies to increase the impact factor of a journal from the perspective of an editor.
Does this sound like a good way to measure scientific quality? Probably not.

Can we ignore impact factors?

Impact factors are easy to determine. Therefore, administrators love to use them to evaluate scientific output.
Most scientists are exposed to discussions about impact factors – even when they do *not* work in scientific domains that are dominated by these bibliometric measures, such as the life sciences. Thus, we cannot ignore impact factors because they are still broadly used to evaluate the performance of single scientists, departments and institutions.

How are impact factors used?

Impact factors are still used during many procedures:
  • to select excellent candidates for positions as PhD student, postdoc and academic staff
  • to select recipients of grants
  • to promote professors
  • to distribute internal grants, resources and infrastructures in universities
  • to establish scientific collaborations in the context of international networks
  • to select reviewers and editors for journals
  • to select speakers at scientific conferences
  • to select members of scientific commissions, e.g. to evaluate grant proposals or to select new staff members
  • to determine the scientific output in university rankings
  • … and many others
As a consequence, researchers tend to adapt their publication strategy to avoid negative impact on their careers. Unfortunately, until alternative methods to measure excellence are established, young researchers have to learn the “rules of the game”.

Do you want higher impact factors or more citations?

Young researchers often wonder whether the impact factor or the number of citations is more relevant. This question is difficult to answer. My very personal view is that citations become increasingly important as a scientist's career matures. The older scientists get, the more they will be judged on the consistency of their output (how many papers per year during the last 5 or 10 years – but also how many ‘excellent’ papers per year based on the impact factor and/or citations). Young researchers often have only one or two publications which are quite new; thus, the number of citations is limited.
Therefore, for pragmatic reasons, funding institutions and universities will use the impact factor of the journal as a proxy for their scientific excellence. To evaluate the output of more mature scientists, the h-index or the m-index may be used, both of which are based exclusively on citations and not on impact factors.
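The m-index mentioned here (also called the m-quotient) is commonly defined as the h-index divided by the number of years since the researcher's first publication – a rough correction for career length. A minimal sketch, assuming that definition and invented numbers:

    def m_index(h, years_since_first_publication):
        # h-index divided by the length of the scientific career in years.
        return h / years_since_first_publication

    print(m_index(h=12, years_since_first_publication=15))  # 0.8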
Thus, young researchers are confronted with the problem that their scientific quality will be judged based on the impact factors of their publications – especially in contexts that are highly relevant for their early careers, such as selection committees (to get hired) and grant committees (to get funding).

A systematic approach is needed

The most important first step is to make a plan and discuss it with your co-researchers and your supervisor. The following simple strategies require more planning, more time, more money and more effort. However, there are good reasons to go for higher impact factors – read more here: What is the best publication strategy in science?

Simple strategies to publish in a better journal

The following strategies are well known among senior scientists and will primarily help young researchers to look for feasible ways to improve their studies within the limits of their contract and budget.

1.      Look for a mechanism, not for a phenomenon

A very common mistake young researchers make is to fall in love with descriptive analyses. You can spend many years just precisely describing correlations, showing fancy images of receptor expression or dramatic morphological or biochemical changes in test and control tissues. However, whenever you find a causal link between two effects, the quality of your study will increase.
Thus, look for a functional test which
demonstrates that the effect you describe can be significantly increased
or reduced by a well-defined intervention. Typical examples are the use
of agonists versus antagonists or genetic knockout versus transgene
expression.
Add one or more well-designed functional experiments to increase the quality of a study.

2.      Address the same question with additional methods

A typical characteristic of studies published in high-impact journals is that they use a multitude of different methods to address the same question with at least three different approaches. For example, instead of showing only a Western blot you can combine it with qPCR, immunohistology and a FACS bead analysis. Showing the same result with several different methods is much more convincing (for example, the upregulation of a specific receptor on a specific cell type but not on others). Sophisticated labs may use a number of different genetically modified mouse lines in one publication to address the same question.
Use at least two other
techniques in your study to corroborate your results. Ideally, you
include two more *functional* tests (see first point).

3.      Re-analyze your samples with a different or more complex method

Similar to the last point, you can use existing samples from previous experiments to run additional analyses.
Often you can buy kits which are not substantially more expensive but
give you more results (such as FACS bead kits that let you determine the
levels of several factors in one sample). Thus, just by obtaining more
data from your existing samples you may improve the quality of the
study. However, you may also end up with a lot of unrelated or
contradictory findings. Critically analyze whether the new analysis
really adds new information.
Get more information from each experiment and a broader perspective by performing more analyses on the same samples.

4.      Add fancy techniques

A very well-known method to improve a study is to use fancy techniques. It always helps to include new and exciting technologies which corroborate your findings. Good examples are new imaging techniques to show labelled cells or factors in vivo, or inhibitors which work via a new mechanism. But there is a big caveat: unfortunately, scientists often thoughtlessly include the newest techniques in their grant proposals and publications without them really adding value to the studies. As a result, there is an inflationary use of the most exciting new techniques (typical examples during the last decade were iPSCs and optogenetics).
Include a new and exciting technique but make sure that there is convincing added value.

5.      Develop a fancy technology

One of the most effective strategies to increase the quality of your publications is to include a new technique you have developed yourself. If the technique is later used by many others, your publication will also be cited multiple times. In addition, there is a good chance that many colleagues will want to collaborate and give you co-authorships on their publications, which increases the number of your publications. A disadvantage may be that conservative reviewers do not believe in the value of the new technique and either give you a hard time proving its value or reject the paper.
Developing a new and exciting technology will bring you many citations and co-authorships.

6.      Collaborate with a statistician

In order to increase the quality of your findings it is in principle obligatory to work together with one or more statisticians – especially when you work with big datasets or with small sample numbers that are not independent of each other. The choice of the right test and the correct argumentation in the materials & methods section is a typical challenge for many young researchers.
Always collaborate with a statistician if possible.

7.      Fuse smaller studies

A classical saying in science is “one message per paper”, which often leads to “salami tactics”: a big study is divided into several smaller publications. The opposite strategy may be useful to increase the quality of two smaller studies, provided they are complementary. A typical disadvantage may be discussions about authorship if the smaller studies have different first authors. However, being an equally contributing second author on a high-impact paper may be better than being first author on a much smaller paper. Unfortunately, the value of such an equally contributing co-authorship differs dramatically between domains.
Fuse smaller studies into a big publication – provided the findings are complementary.

8.      Collaborate with experts in the field

Young researchers often think that collaborating with experts in the field may help to publish in journals with a higher impact factor. This may or may not be true. The advantage is that experts in the field may help to improve the design of the study, point out weaknesses early and help to find relevant literature. In addition, they may provide access to expensive instruments, exotic transgenic animals, high-class models or excellent infrastructure. The disadvantages are that experts may have only limited time or motivation to contribute substantially to a study from another lab, and they may have political enemies or competitors who kill the paper with exaggerated reviewer requests. In some domains such as genetics it is a big advantage to become part of huge networks that consistently publish in very high-impact journals and include most network members in the author list.
Collaborate with experts in the field who provide intellectual input, additional techniques or better models.

9.      Look for a journal with the perfect scope and check where your competitors publish

This is simple advice which may substantially improve your publication output. Many researchers have a tendency to publish again and again in the same journal. It may make sense to look outside your niche because there may be journal editors in other domains who might be excited to publish your study. For example, we study the neuroimmunology of CNS repair. Instead of only submitting to neuroimmunology journals, we have published in the following domains: neuroscience, immunology, cytokine research, neuropathology and pharmacology. Simply take the keywords in your abstract and look for journals that have these words in the title or in the scope description on their website.
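As a toy illustration of this keyword approach, the sketch below filters a list of journals by the keywords of an abstract; the journal names, scope texts and keywords are invented placeholders, not recommendations:

    # Hypothetical journal list as (name, scope description) pairs.
    journals = [
        ("Journal A", "neuroscience and neural repair"),
        ("Journal B", "cytokine biology and inflammation"),
        ("Journal C", "plant genomics"),
    ]
    abstract_keywords = ["neuroimmunology", "cytokine", "repair"]

    # Keep journals whose name or scope mentions at least one keyword.
    matches = [
        name for name, scope in journals
        if any(kw.lower() in (name + " " + scope).lower() for kw in abstract_keywords)
    ]
    print(matches)  # ['Journal A', 'Journal B']

In practice you would of course use a literature database or journal-finder tool rather than a hard-coded list, but the principle – matching your abstract keywords against journal titles and scope descriptions – stays the same.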
It is useful to check where scientists with similar interests – and especially your competitors – publish their papers. This may give you a hint about which journals have the right scope to get their editors interested in your studies. There is a good chance that they publish in high-impact journals outside their classical domain. Be careful to understand the relationship of your competitor with the journal. If he/she is the corresponding editor for your paper, it might be wise to submit elsewhere. :)
Find a high-impact journal outside your domain to publish your work. Maybe submit your paper where your competitors publish (if they are not the editors).

10. Submit to a journal with a much higher impact factor to get reviewer comments

Finally, always try to submit to a journal with a substantially higher impact factor than the average of your group. If you aim too high, the chances are high that the paper gets immediately rejected and you lose some valuable time and maybe a submission fee. If you made a good choice and the paper gets sent out to the reviewers, you may receive very valuable reviewer comments – even when the paper gets rejected. Some comments may be exaggerated and not feasible, some may be plain wrong, but some may help you to substantially improve the study by performing the requested experiments. In the best case you can deliver the requested additional data and get published. If not, you can perform additional experiments, improve the text and submit a substantially better manuscript to another journal.
Do you know other useful strategies? Please tell us what you think and add a comment below.
If you like this post please join the community!

