Cassidy Sugimoto
Assistant Professor, Indiana University
The altmetrics manifesto, published in 2010 by Jason Priem and colleagues, argued for a new type of metric to capture the diversity of the contemporary scholarly system—making manifest both the heterogeneity of scholarly output and the impact of scholarship on science and society alike. As stated: “Altmetrics expand our view of what impact looks like, but also of what’s making the impact.” Priem defines altmetrics as the “study and use of scholarly impact measures based on activity in online tools and environments” and lists blogs, microblogs, reference management systems, and data repositories as potential sources.




There has been a proliferation of activity around altmetrics since this introduction, spurred in no small part by the growing emphasis of funding agencies on the demonstration of impact beyond academia. Scholarly metrics have never been without criticism; however, the expansion of data and sources and the increased use for evaluation have brought renewed concern around the ethical principles of research metrics.


Here are some of the particular challenges facing altmetrics: the misappropriation of the term impact, the narrow scope of focus, and the potential goal displacement of scientific activity.



The term impact has been readily and regularly adopted into the altmetrics discourse. Altmetric.com—a start-up measuring the online activity around scholarly journal articles—claims to measure attention, but asserts that publishers can “showcase research impact”, institutions can have a “richer picture of their online research impact”, and researchers can monitor “personal research impact.” Similarly, the tagline of Impactstory—a non-profit focused on aggregating individual-level altmetric data—urges you to “Discover your impact today.” Scientometric studies have followed suit, regularly employing the term impact when discussing altmetric measures. This is in keeping with hallowed traditions in science evaluation: the simultaneously reviled and revered Journal Impact Factor (JIF) continues to reign over scientific publishing and, in doing so, makes metonymic the relationship between citation counts and impact. It is no great stretch, then, to equate new metrics of scholarly attention with impact.



This seems, however, a great distortion of the meaning of impact. Does the act of tweeting evoke an image of forcible contact? Does a save on Mendeley represent the strong effect of an article on the user? The term impact connotes far greater engagement and transformative effect than is currently justifiable with altmetric data. A more persuasive claim is that what is captured are metrics of attention to a scholarly object—the nature of this attention is something much more complex and far less understood. One can easily find examples of extremely high Altmetric.com scores that are the result of a viral joke, a proofreading error, or a scientific hoax. Behind these outliers are undoubtedly scores of articles whose recognition in policy documents, in the popular press, and on social media is a legitimate sign that the work is relevant and interesting to a broader public. How to identify the underlying mechanism of altmetric attention remains a critical challenge.



Understanding the mechanisms and motivations of altmetric attention is hampered by the inability to accurately identify the public upon which altmetrics is effecting change. The notion of a mythical science-tweeting lay public is persistent in the narrative, yet absent from empirical studies. In a recent study of tweets to journal articles from PLoS ONE, PNAS, Science, and Nature, we identified more than a third of those tweeting links to scientific articles as holders of doctoral degrees—far exceeding the proportion of doctoral degree holders in the general population. Knowing who is generating these traces of attention is a necessary factor in establishing the credibility of altmetrics.



Another challenge is in the realization of the expansive goals of the altmetric movement. The promise of altmetrics was a broadening of measures of impact that took into account all the various ways in which scholarship is produced and disseminated. However, most altmetric studies have focused exclusively on journal article metrics from relatively few platforms. While it is laudable to demonstrate the attention an article is receiving from these sources, it is a far cry from the foundational message—that is, that scholarship is no longer confined to “slow rigid formal communication systems.” Where are the metrics that capture the cacophony of scholarly activity? Researchers and practitioners must think creatively about the types of scholarship that remain hidden, despite the valiant efforts of the altmetric movement.



Last, but perhaps most important, is the concern of goal displacement—an inevitable byproduct of the promotion of altmetrics. Campbell’s Law states the issue most precisely: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” Libraries have begun to incorporate altmetrics into institutional repositories and to provide guidance to researchers on how to document altmetrics on their CVs for the purposes of promotion, tenure, and merit evaluation. The underlying assumption is that an article has more worth if it has a higher altmetric score and, by extension, that a scholar’s worth increases as more articles receive mention. One might argue that the review of scholars is more nuanced and that reviewers would not fall prey to such gross misinterpretation of data. However, as those who have studied citation analysis or sought publication in a high-JIF journal can attest, numbers are persuasive to evaluators examining dozens of dossiers.



The scientific community, administrators, and policy makers should take care lest the tweet become an end in itself: topics should not be chosen for their potential to go viral, nor should scholars spend inordinate time managing their reputations online. Altmetrics should be harnessed not to replace any existing metrics, but rather to expand the tools available to demonstrate the diffusion of science. Responsible use of altmetrics requires that we diligently seek to understand the underlying mechanisms of measures of attention, expand our ability to capture the diversity of traces of scholarly activity, and realize that attention is not impact.