Monday 23 March 2015

High-impact-factor Syndrome

 Source: http://www.aps.org/publications/apsnews/201411/backpage.cfm


By Carlton M. Caves



You are surprised to find that you have been tasked with evaluating
minor-league pitchers eager to get into major-league baseball. You
interview applicants, collect information, and observe their
performance. But, being a physicist, you know next to nothing about
evaluating pitching skill, so to make your life easier, you fix on a
single figure of merit, the pitcher’s heat (fastball speed). Although
you have access to each applicant’s fastball speed, you elect to rank
the candidates in terms of the average speed of all the pitchers on an
applicant’s current minor-league team. Using this as a proxy for
individual pitching ability, you assemble a pitching staff. As the
season wears on, your pitchers are drubbed in game after game. You see
the general manager approaching with a frown on his face, and...the
alarm goes off.



Shaking off the nightmare, you chuckle to yourself that no pitching
scout would use a single measure of performance when many skills enter
into effective pitching, and even if he did, it would never occur to him
to evaluate an individual pitcher in terms of the average strength of
the pitching staff the pitcher belongs to.



Later that day, you participate in a meeting to discuss applicants
for a position at your institution. You find that much weight is given
to the number of citations accumulated by an applicant’s publications,
and that extra weight is assigned to publications in high-impact-factor
(HIF) journals, mainly Nature, the Nature suite of specialty research journals, and Science.
You comment that heavy reliance on citation numbers strikes you as a
peculiarly one-dimensional way to evaluate candidates. Moreover, taking a
measure, the impact factor (IF), that was designed to rate journals,
and applying it instead to individual papers within that journal, i.e.,
judging a research paper by the company it keeps — this, you point out,
is an elementary category error. Some good-natured ribbing ensues — how
long have you been asleep? — and you are informed that publication in
HIF journals is prima facie evidence of research prowess and, in any case, is what your higher-ups want to see.



This is a caricature, to be sure, but if you think it’s only a
nightmare, like the pitching-scout dream, you need to wake up.
Increasingly, scientists, especially junior scientists, are being
evaluated in terms of the number of publications they have in HIF
journals, a practice I call high-impact-factor syndrome (HIFS). Take a look at a recently posted widget [1]
that an early-career scientist can use to calculate a probability of
his/her becoming a “principal investigator.” The four most important
factors entering into that probability? Be male. Be selfish (insist on being first author). Be elite (be from one of the top 10 institutions in the Academic Ranking of World Universities [2]). Publish in journals with high impact factors. Though each of these deserves an article, here I consider only the last.



I’ve talked to enough people to learn that HIFS is less prevalent in
physics and the other hard sciences than in biology and the biomedical
sciences and also is less prevalent in North America than in Europe,
East Asia, and Australia. For many readers, therefore, this article
might be a wake-up call; if so, keep in mind that your colleagues
elsewhere and in other disciplines might already have severe cases.
Moreover, most physicists I talk to have at least a mild form of the
disease.



What is journal impact factor?



Suppose you want the 2013 IF for Physical Review Letters:
Take all the papers published in PRL in 2011 and 2012; the standard
(two-year) 2013 IF is the average number of citations accumulated by
these papers during 2013 in the "indexed journals" maintained by
Thomson Reuters. Its Web of Science indexes over 8,000 science and
technology journals and issues an annual report, called the Journal
Citation Reports (JCR), which lists IFs and other measures of journal
impact. In particular, you will also see five-year IFs, which are
computed using a time horizon of five years instead of the two years of the
standard IF. For a given journal, the IF (five-year IF) is the average
annual citation rate for papers that are on average 1.5 (3) years old.
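
To make this arithmetic concrete, here is a minimal sketch in Python. The paper labels and citation counts are hypothetical placeholders rather than JCR data, and the function name is invented for illustration:

    # Two-year impact factor: average number of citations received in the
    # census year by papers published in the two preceding years.
    def two_year_impact_factor(citations_in_census_year, papers_prior_two_years):
        total = sum(citations_in_census_year[p] for p in papers_prior_two_years)
        return total / len(papers_prior_two_years)

    # Toy journal that published three papers in 2011-2012 (hypothetical data).
    papers_2011_2012 = ["paper_A", "paper_B", "paper_C"]
    citations_2013 = {"paper_A": 10, "paper_B": 2, "paper_C": 0}

    print(two_year_impact_factor(citations_2013, papers_2011_2012))  # prints 4.0

The five-year IF follows the same recipe, with the papers of the five preceding years in place of two.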



Journal                      2-year IF   5-year IF
Nature                          42.351      40.783
Nature Physics                  20.603      20.059
Nature Photonics                29.958      32.342
Nature Medicine                 28.054      26.501
Nature Geoscience               11.668      13.930
Nature Communications           10.742      11.023
Science                         31.477      34.463
Cell                            33.116      35.020
Reviews of Modern Physics       42.860      52.577
Physical Review Letters          7.728       7.411
Physical Review A                2.991       2.729
Physical Review B                3.664       3.564
Physical Review C                3.881       3.551
Physical Review D                4.864       4.046
Physical Review E                2.326       2.302
Physical Review X                8.385           -
New Journal of Physics           3.673       3.678




The Table, taken from the 2013 Journal Citation Reports, gives 2013
IFs and five-year IFs for several journals of interest to physicists,
along with a few other journals for comparison. Even this limited set
illustrates several points. Journals of record, which seek to publish
all significant research in a discipline, are quite different from
magazines that cherry-pick what their editors consider to be the most
important or most significant articles in all of science or in a
particular discipline. Papers in different disciplines, with varying
numbers of researchers, accumulate systematically different numbers of
citations. Different kinds of articles garner different numbers of
citations — if you want to jack up your own citation count, write a good
review article for Reviews of Modern Physics. Some journals
publish a mix of article types, including primary research articles,
reviews, and semi-technical summaries. Comparing a physics journal to
one that publishes in all disciplines or comparing a journal that
publishes primary research articles with one that publishes a mix of
article types is the proverbial apples and oranges. Invidious
comparisons based on IF are a source of concern for the health of the
APS journals, which are rightly a pride of our discipline [3].



What is HIFS?



HIFS is the practice of using number of publications in HIF journals
as a proxy for assessing research accomplishment or potential. This is
often done for institutions or for units within institutions, and it is
also increasingly used for evaluating individuals, in decisions on
hiring, promotion, funding, and prizes and awards. I concentrate here on
its application to individuals, although some of its consequences are
driven as strongly or more strongly by the practice of applying it to
units such as physics departments.



Do you have HIFS? Here is a simple test. You are given a list of
publications, rank-ordered by number of citations, for two physicists
working in the same sub-discipline. All of the first physicist’s
publications are in PRL and PRA, and all of the second’s are in Nature and Nature Physics.
In terms of the citation numbers and publication dates, the two
publication records are identical. You are asked which physicist has had
more impact. You cannot decline to participate by saying you need more
information. Any reasonable assessor would indeed insist on gathering
additional information, for example, by reading some of the papers, but
by excluding additional information, we isolate the effect of IF on your
judgment. If you have even the slightest inclination to give the nod to
the second physicist, you are suffering from HIFS. Given just the
specified information, I would come to the opposite conclusion about the
two physicists: The
first physicist’s record is more impressive because the citation record
has not received the artificial boost of publishing in the
high-visibility
Nature suite.



Where did HIFS come from?



I think HIFS can be traced to the rise of formal assessments of the
collective research impact of institutions, departments, and other units
within institutions. These assessments strive for objectivity, partly
because objectivity seems like something to be strived for and partly
because the scope of the assessment is large enough both to make
objective measures informative and to make subjective evaluations
difficult to assemble and to interpret uniformly across institutions or
units. The number of published papers seems an obvious objective metric,
but not all papers are created equal. Citations might be brought in to
measure the impact of a paper, but since these assessments are meant to
be snapshots, the citation record is generally too recent to be very
informative. Publications in HIF journals are then weighted more heavily
than other papers because these papers have more potential for
substantial impact, as measured, for example, by future citations. HIF
thus emerges as a mildly informative tool for assessment of departments
and larger entities, although those in charge of these assessments often
misread "mildly informative" as "wildly informative."



With HIF accepted as an objective component of unit-wide assessments,
it is only a short step to applying it to individuals. Surely, it is
said, if the department needs publications in HIF journals for its own
assessment, it should hire and value most highly those people who have
demonstrated the capacity to produce those publications. As science
becomes broader and researchers more specialized, we all become less
equipped to assess the contributions of our colleagues, and this
increases the temptation to adopt a shorthand proxy like HIFS.
Administrators, even more distant from particular research areas and
thus weaker on the technical expertise needed to assess individuals,
welcome the convenient and objective HIFS proxy, especially since it is
free of the explicit and implicit biases that plague subjective
evaluations.



Middle-career and senior scientists, sensing a need to secure their
reputations, opt to aim their research at what they think can be
published in HIF journals. Junior scientists, highly attuned to the
direction the wind is blowing, get the message that their job and
funding prospects are tied to publication in HIF journals. Students and
postdocs ask their mentors, “Don’t you think we can get this paper into Nature?”
Some mentors lead the charge, and others acquiesce; motives range from
personal advancement to the desire to help mentees get a job. And so it
goes: A structure of incentives and rewards entrenches itself.



What are the consequences?



Suppose you are evaluating a middle-career or senior scientist for a
promotion or for a prize or award. Focusing only on the citation record
is very narrow indeed, since it ignores many factors that enter into a
scientist’s impact, yet it is also true that research articles are an
important part of a scientist’s record. For middle-career and senior
scientists, with dozens to hundreds of publications, citation counts,
readily available from Web of Science or Google Scholar, are a
rough-and-ready measure of the influence of a scientist’s research, when
the citation record is calibrated to the scientist’s particular field
of research. Giving extra credit for publications in HIF journals is,
however, precisely the category error alluded to above: The paper
citation counts are all the information available from citation data;
giving extra credit for publications in HIF journals, i.e., for the
company a paper kept, makes no sense.
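
The author does not prescribe a formula for this calibration, but one way to make it concrete is to compare a scientist’s mean citations per paper with a field-wide mean. The sketch below is only an illustration of that idea; the citation counts, the field baseline, and the normalization scheme are assumptions, not values or methods taken from the article:

    # Hypothetical calibration of a citation record against a field baseline.
    # All numbers are invented; none come from Web of Science or Google Scholar.
    def field_normalized_citation_rate(citations_per_paper, field_mean_per_paper):
        # A ratio above 1 indicates a record above the field average.
        own_mean = sum(citations_per_paper) / len(citations_per_paper)
        return own_mean / field_mean_per_paper

    paper_citations = [120, 45, 30, 8, 3, 0]  # hypothetical per-paper counts
    field_mean = 25.0                         # hypothetical field-wide mean

    print(round(field_normalized_citation_rate(paper_citations, field_mean), 2))  # ~1.37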



In the case of junior scientists, the situation is more complicated.
Their publication records are thinner and more recent. The focus shifts
from evaluating accomplishment to trying to extract from the record some
measure of potential. It is probably true that there is a correlation
between publication in HIF journals and potential, but it is a weak
correlation that is confounded with questions of multiple co-authors and
influential supervisors and their style of publication. Yet, even if
you think publication in HIF journals is informative, it is not remotely
as instructive as evaluation of the full record, which includes the
actual research papers and the research they report, plus letters of
recommendation, research presentations, and interviews. When HIFS
intrudes into this evaluation, it amounts to devaluing a difficult,
time-consuming, admittedly imperfect process in favor of an easy,
marginally informative proxy whose only claim on our attention is that
it is objective.



At some scale between unit-wide and individual assessments, HIF goes
from being mildly informative to being marginally informative or
useless. Relying on HIF leads to poor decisions, and the worse and more
frequent such decisions are, the more they reinforce the HIFS-induced
incentive structure. As physicists, we should know better. We know data
must be treated with respect and not be pushed to disclose information
it doesn’t have, and we know that just because a number is objective
doesn’t mean it is meaningful or informative.



Even more pernicious than applying HIFS to individuals is the
influence it exerts on the way we practice physics. Social scientists
call this Campbell’s law: “The more any quantitative social indicator is
used for social decision-making, the more subject it will be to
corruption pressures and the more apt it will be to distort and corrupt
the social processes it is intended to monitor.” [4]
This social-science law is nearly as ironclad as a physical law. In the
case of HIFS, there will be gaming of the system. Moreover, our
research agenda will change: If rewards flow to those who publish in HIF
journals, we will move toward doing the research favored by those
journals. No matter how highly you think of the editors of the HIF
journals, they are independent of and unaccountable to the research
community, they do not represent the entire range of research in the
sciences or in physics, and their decisions are inevitably colored by
what sells their magazines.



What to do?



It is far easier to describe and diagnose HIFS than to come up with
effective measures for dealing with it. I give a list below, but the
list consists mainly of appeals to conform to best practices for
conducting and evaluating research. Though I believe that scientists
have better-than-average ability to recognize and adhere to best
practices, I appreciate that high-minded admonitions have little effect
unless they are aligned with incentive and reward structures. When
departments are being assessed on the basis of number of publications in
HIF journals and junior scientists think their job prospects are tied
to such publication, HIFS is not going to go away by asking everybody to
play nice. We need ideas for changing the incentive structure. My one
idea in this regard is the last item in the list.

  • Renew your commitment to effective scientific communication.
    When writing a research paper, first decide on the style and format you
    think most effective for communicating to the audience you want to
    reach, and only then think about a journal that publishes the style you
    have adopted and reaches your desired audience. If you are a mentor,
    teach this approach to your students and postdocs. When they ask, “How
    can we get this paper into Nature Physics or PRL?” your reply should be,
    “How can we most effectively communicate our results to the research
    community?”
  • When evaluating candidates for positions, promotions, and prizes
    or awards, commit to a technically informed evaluation of each
    candidate’s entire record. Object when HIFS is introduced as a
    proxy. Should you lack the technical background to judge research
    accomplishments, say so and find ways to obtain expert opinion — letters
    of recommendation are, of course, a traditional way of doing
    that — rather than falling back on HIFS as a proxy.
  • When writing letters of recommendation, write a technically
    informed evaluation of a candidate’s capabilities and impact, including a
    description and evaluation of important research contributions. Do
    not fall back on HIFS as a proxy for research potential or impact. If
    you are a mentor, assure your students and postdocs that your letter for
    them will focus on accomplishments and contributions, not on the
    journals they have published in.
  • Educate administrators that the HIF shortcut, though not devoid of information, is only marginally useful.
    For any scientist, junior or senior, an evaluation of research
    potential and accomplishment requires a careful consideration of the
    scientist’s entire record. A good administrator doesn’t need to be
    taught this, so this might be a mechanism for identifying and weeding
    out defective administrators.
  • If you are a senior or mid-career scientist who advertises
    yourself by categorizing your publications in terms of HIF journals,
    stop doing that. This only invites others to value and use HIFS. If
    you want to draw attention to the citation record of your publications,
    set up a Web of Science Researcher ID and/or a Google Scholar profile,
    and let the record speak for itself.
  • Help the public-relations people at your institution to identify and
    publicize important research contributions, independent of where they
    are published. Object if your institution uses publication in HIF
    journals as a filter to determine which research contributions are
    important enough to be publicized.
  • Take a look at the San Francisco Declaration on Research Assessment (DORA) [5], which is aimed directly at combating HIFS.
    Consider adopting its principles and signing the declaration yourself.
    DORA comes out of the biosciences; signing might help bioscientists put
    out the fire that is raging through their disciplines and could help to
    prevent the smoldering in physics from bursting into flame.
  • Include in ads for positions at your institution a standard statement along the following lines:
    “Number of publications in high-impact-factor journals will not be a
    factor in assessing research accomplishments or potential.”
Adopting this final recommendation would send an unambiguous message
to everybody concerned: applicants, letter writers, evaluators, and
administrators. Making it a commonplace could, I believe, actually
change things.



References



  1. J. Austin, "What it takes," Science 344, 1422 (2014). For the online widget, see http://scim.ag/1pwIaAF
  2. http://www.shanghairanking.com/
  3. P. Meystre, “A marketplace for physics,” Physical Review Letters 113, 170001 (2014). (http://journals.aps.org/prl/edannounce/10.1103/PhysRevLett.113.170001)
  4. D. T. Campbell, “Assessing the impact of planned social change,” Journal of MultiDisciplinary Evaluation
    7(15), 3 (2011); originally published as Paper #8, Occasional Paper
    Series, Public Policy Center, Dartmouth College, December 1976.
  5. http://am.ascb.org/dora/



©1995 - 2015, AMERICAN PHYSICAL SOCIETY

APS encourages the redistribution of the materials included in this
newspaper provided that attribution to the source is noted and the
materials are not truncated or changed.



