10 simple strategies to increase the impact factor of your publication
Impact factors are heavily criticized as measures of scientific quality.
However, they still dominate every discussion about scientific
excellence. They are still used to select candidates for positions as
PhD student, postdoc and academic staff, to promote professors and to
select grant proposals for funding. As a consequence, researchers tend
to adapt their publication strategy to avoid negative impact on their
careers. Until alternative methods to measure excellence are
established, young researchers have to learn the “rules of the game”.
Because of the importance of ‘excellent’ publications for their careers, scientists
strive for publications in journals with high impact factors –
especially if they are not yet sure whether they want to pursue a career
in academia or in the non-academic job market. Read more here: Do I need Nature or Science papers for a successful career in science? and What is the best publication strategy in science?
What are impact factors?
The impact factor of a journal is a measure reflecting the average number of citations to
recent articles published in that journal. The impact factor is
frequently used as a proxy for the relative importance of a journal
within its field. See this summary for details.
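To make the definition concrete, here is a minimal sketch of the standard two-year impact factor calculation; the journal numbers used below are purely hypothetical:

```python
# Sketch of the standard two-year journal impact factor:
# IF(Y) = citations received in year Y to items published in years Y-1 and Y-2,
#         divided by the number of citable items published in Y-1 and Y-2.
# All numbers below are hypothetical.

def impact_factor(citations_to_recent_items: int, citable_items: int) -> float:
    """Two-year impact factor of a journal for a given year."""
    return citations_to_recent_items / citable_items

# Hypothetical journal: 240 citable items published in 2022 and 2023,
# cited 1200 times during 2024 -> IF(2024) = 5.0
print(impact_factor(1200, 240))  # 5.0
```

Note that the ratio averages over all articles in the journal, which is exactly why it says little about the citation count of any single paper.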
Are impact factors a good proxy for scientific quality?
There is considerable discussion in the scientific world about whether impact
factors are a reliable instrument to measure scientific quality.
Several funding organizations worldwide have started to reduce the influence
of this parameter on their strategies for funding excellent science.
In everyday lab talk we always talk about “the impact factor of a
publication” although the correct terminology would be “the impact
factor of the journal where the paper has been published”. But we are
lazy. I even used this misleading terminology in the title of this post.
Are there better metrics to measure scientific quality?
There are alternative metrics such as the h-index (or h-factor), which are
primarily based on citations and not on the impact factor of the journal
where a paper is published. However, most of
these alternative metrics have their own disadvantages – especially for
young researchers (see below).
Can we ignore impact factors?
Most scientists get exposed to discussions about impact factors – even
when they are *not* working in scientific domains that are dominated by these
bibliometric measures, such as the life sciences. Thus, we cannot
ignore impact factors because they are still broadly used to evaluate
the performance of single scientists, departments and institutions.
How are impact factors used?
- to select excellent candidates for positions as PhD student, postdoc and academic staff
- to select recipients of grants
- to promote professors
- to distribute internal grants, resources and infrastructures in universities
- to establish scientific collaborations in the context of international networks
- to select reviewers and editors for journals
- to select speakers at scientific conferences
- to select members of scientific commissions, e.g. to evaluate grant proposals or select new staff members
- to determine the scientific output in university rankings
- … and many others
Thus, until alternative methods to measure excellence are established, young
researchers have to learn the “rules of the game”.
Do you want higher impact factors or more citations?
A frequent question is whether the impact factor or the number of citations is more relevant. This
question is difficult to answer. My very personal view is that citations
become increasingly important with the increasing maturity of a scientist's
career. The older scientists get, the more they will be judged on
the consistency of their output (how many papers per year during the
last 5 or 10 years – but also how many ‘excellent’ papers per year
based on the impact factor and/or citations). Young researchers often have only one or two publications, which are quite new; thus, the number of citations is limited.
Therefore, for pragmatic reasons, funding institutions and universities
will use the impact factor of the journal as a proxy for their
scientific excellence. To evaluate the output of more mature scientists,
the h-index or the m-index may be used, both of which are based exclusively on citations and not on impact factors.
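As a rough sketch, both metrics can be computed from a list of per-paper citation counts: the h-index is the largest h such that h papers have at least h citations each, and the m-index (Hirsch's m-quotient) divides the h-index by the number of years since the first publication. The citation numbers below are hypothetical:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_index(citations, years_since_first_paper):
    """Hirsch's m-quotient: h-index divided by career length in years."""
    return h_index(citations) / years_since_first_paper

# Hypothetical citation counts of a young researcher's papers:
papers = [25, 8, 5, 3, 3, 1, 0]
print(h_index(papers))       # 3 papers have >= 3 citations, but not 4 with >= 4
print(m_index(papers, 6))    # h-index spread over a hypothetical 6-year career
```

The example illustrates the disadvantage mentioned above: with only a handful of recent papers, a young researcher's h-index is capped at the number of papers, regardless of their quality.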
Young researchers are therefore confronted with the problem that their scientific quality will be judged
based on the impact factors of their publications – especially in
contexts which are highly relevant for their early careers, such as
selection committees (to get hired) and grant committees (to get funded).
A systematic approach is needed
Simple strategies to publish in a better journal
1. Look for a mechanism not for a phenomenon
A very common mistake young researchers make is to fall in love with
descriptive analyses. You can spend many years just precisely
describing correlations, showing fancy images of receptor expression
or dramatic morphological or biochemical changes in test and control
tissues. However, whenever you find a causal link between two effects,
the quality of your study will increase.
The best case is an experiment which demonstrates that the effect you describe can be significantly increased
or reduced by a well-defined intervention. Typical examples are the use
of agonists versus antagonists or genetic knockout versus transgene overexpression.
2. Address the same question with additional methods
A typical feature of highly ranked published studies is that they use a multitude of different methods to
address the same question, often with at least three different approaches. For
example, instead of showing only a Western blot you can combine it with
qPCR, immunohistology and a FACS bead analysis. Showing the same
result with several different methods is much more convincing (for
example, the upregulation of a specific receptor on a specific cell type
but not on others). Sophisticated labs may use a number of different
genetically modified mouse lines in one publication to address the same
question. Try to include several different techniques in your study to
corroborate your results. Ideally, you include two or more *functional*
tests (see the first point).
3. Re-analyze your samples with a different or more complex method
You can use existing samples from previous experiments to run additional analyses.
Often you can buy kits which are not substantially more expensive but
give you more results (such as FACS bead kits that let you determine the
levels of several factors in one sample). Thus, just by obtaining more
data from your existing samples you may improve the quality of the
study. However, you may also end up with a lot of unrelated or
contradictory findings. Critically analyze whether the new analysis
really adds new information.
4. Add fancy techniques
A very well-known method to improve a study is to use fancy techniques.
It always helps to include new and exciting technologies which
corroborate your findings. Good examples are new imaging techniques to
show labelled cells or factors in vivo or inhibitors which work
via a new mechanism. But there is a big caveat: unfortunately,
scientists often thoughtlessly add the newest techniques to their
grant proposals and publications without really adding value to the
studies. As a result, there is an inflationary use of the most exciting new
techniques (a typical example during the last decade being iPSCs).
5. Develop a fancy technology
Another way to increase the quality of your publications is to include a new technique
you have developed yourself. If the technique is later used by many
others, your publication will also be cited multiple times. In addition,
there is a good chance that many colleagues will want to collaborate
and give you co-authorships on their publications, which increases the
number of your publications. A disadvantage may be that conservative
reviewers do not believe in the value of the new technique and give you a
hard time proving its value – or reject the paper.
6. Collaborate with a statistician
To substantiate your findings, it is in principle obligatory to work together with one or
more statisticians – especially when you work with big datasets or small
sample numbers which are not independent of each other. The choice of
the right test and the correct argumentation in the materials &
methods section is a typical challenge for many young researchers.
7. Fuse smaller studies
Journals often ask for “one message per paper”, which often leads to “salami tactics”: a big study
is divided into several smaller publications. The opposite strategy may
be useful to increase the quality of two smaller studies provided they
are complementary. A typical disadvantage may be discussions about
authorships if the smaller studies have different first authors.
However, being equally contributing second author on a high impact paper
may be better than being first author on a much smaller paper.
Unfortunately, the value of such an equally contributing co-authorship
differs dramatically in different domains.
8. Collaborate with experts in the field
Young researchers often think that collaborating with experts in the field
may help them publish in journals with a higher impact factor. This
may or may not be true. The advantage is that experts in the
field may help to improve the design of the study, may point early to
weaknesses in the study, and may help to find relevant literature. In addition,
they may provide access to expensive instruments, exotic transgenic
animals, high class models or excellent infrastructure. The
disadvantages are that experts may have only limited time or motivation
to contribute substantially to a study from another lab and they may
have political enemies or competitors who may kill the paper with
exaggerated reviewer requests. In some domains, such as genetics, it is a
big advantage to become part of huge networks which always publish in
very high-ranking journals and include most network members in the authors’ list.
9. Look for a journal with the perfect scope and check where your competitors publish
Choosing the right journal can substantially improve your publication output. Many researchers have the
tendency to publish again and again in the same journal. It may make
sense to look outside your niche because there may be journal editors in
other domains who might be excited to publish your study. For example,
we study the neuroimmunology of CNS repair. Instead of only submitting
to neuroimmunology journals we have published in the following domains:
neuroscience, immunology, cytokine research, neuropathology and
pharmacology. Simply use the keywords in your abstract and look for
journals which have these words in the title or in the scope description on
their websites. Also check where colleagues with similar interests – and
especially your competitors – publish their papers. This may give you a hint
which journals may have the right scope to get their editors interested in
your studies. There is a good chance that they publish in high-impact
journals outside their classical domain.
Be careful to understand the relationship of your competitor with the
journal. If he/she is the corresponding editor for your paper, it might
be wise to submit elsewhere.
In short: look for journals outside your domain to publish your work. Maybe submit your paper where
your competitors publish (if they are not the editors).
10. Submit to a journal with a much higher impact factor to get reviewers’ comments
Submit your paper to a journal with a substantially higher impact factor than the average of
your group. If you submit too high, the chances are high that the paper
gets immediately rejected and you lose valuable time and maybe a
submission fee. If you have made a good choice and the paper gets sent out to
the reviewers, you may receive very valuable reviewers’ comments – even when the paper gets rejected.
Some comments may be exaggerated and not feasible, some may be plainly
wrong, but some may help you to substantially improve the study by
performing the requested experiments. In the best case, you can deliver
the requested additional data and get published. If not, you can perform
additional experiments, improve the text and submit a substantially
better publication to another journal.
smartsciencecareer blog