When Do Citations Reflect “Impact?”
Citation behaviors vary widely between and within STM and HSS; within particular disciplines citations can also play sharply different kinds
of roles. The complexity of adducing scholarly significance from
citation metrics is further increased as scholars may use citations
differently from one publication to the next. The Chicago Manual of Style (16th Ed.)
tells us in Chapter 14.1 that the basic “purpose of source citations”
is, as a requirement of “ethics, copyright laws, and courtesy to
readers” to “identify the source of direct quotations or paraphrase and
of any facts or opinions not generally known or easily checked.”
“Conventions vary according to discipline,” the CMS cautions, “the preferences of publishers and authors, and the needs of a particular work.” The CMS doesn’t have much to say about citation indices and impact.
So much recent attention to the trouble with creating and assessing the “impact” of scholarship (from the New York Times hand-wringing about the effect of big media coverage on scholarly rigor to a very recent report from the UK on the “Rising Tide of Metrics”)
has me thinking about what we do when we are citing a source. In the
assessment ambit, scholarly impact is assessed at least three ways:
anecdote, altmetrics, and citation indices, with the last being the most
influential—and surely the most institutionalized. We hear that
discovery tools will make our scholarship more available, and thus it
will have more impact (as measured by citations). We are told that
social media promotion will help our work to have more impact (as
measured by citations).
I’ve read plenty about the algorithms for citation analyses, about
how and whether we use those data to assess scholars and their
scholarship, and even the circular argument that scholars ought to cite one another more in order to increase their colleagues’…impact. Academia.edu’s claims to be able to increase citations has made news, as have Elsevier’s plans to make Scopus more competitive with the Journal Citation Report from Thomson Reuters.
What I don’t hear much about, though, is what a citation represents
for the scholar writing a research article. When we cite a source, we
could be doing any one of a number of things, and many of them have
little to do with acknowledging, say, impact—with crediting
another scholar’s work with having provided the intellectual foundation,
the inspiration, or vital information for the argument that’s being
made in the article in question.
For historians like me, as well as for other scholars, often the most
important citation is to the primary research material. A lot of
scholarly production goes into preserving and disseminating these
materials, on site and online. This work has an enormous impact on the
kind of scholarship we do, but it is sometimes not visible in our
citations. When I’m writing about an eighteenth-century printed almanac,
for example, I may be using the online version made available through
the Evans Early American Imprint Collection of the American Antiquarian
Society, the premier collection of pre-1800 printed materials available
in microfiche and now digitized through the Evans-Text Creation Partnership. I don’t cite either the fiche or the online access, but rather the text itself, and its publication date.
Of course, our reading of primary sources is also influenced by the
analyses and interpretations of other scholars who have examined the
same or similar sources. We acknowledge them in our footnotes, but these
citations rarely do justice to the complexities of scholarly exchange.
In fact I know a lot more about those eighteenth-century almanacs
because by chance years ago I shared fellowship time at the AAS with a
scholar working on the history of the daily diary,
a researcher who explained the way that early almanacs were regularly
annotated as a form of record keeping. This scholar some years later
wrote a terrific book on the subject, and along the way published some
reflections on her research in various venues. So now I can cite her
book, but in years past I cited conference papers, or an essay in an
online publication — not one of which would turn up in JCR.
So, is the answer to shoo all book texts and conference papers online,
too, so someone can figure out how to carefully curate that material
and assess the metrics with the care that JCR takes? Working
backwards from the citation rather than from the scholarship in question
(what work really did influence this work?), we miss a lot, maybe most,
of the impact.
Blind counting of citations, too, utterly ignores the fact that titles
often appear in footnotes for reasons having nothing to do with the
previous work’s contribution to the current discussion. An easy example
of a citation that reflects something other than impact is the citation
naming a work that the author is criticizing. The citation could
represent criticism in any number of registers, from devastating (the
primary source is faked or misdated, the quantitative work is off) to
modest (interpretive disagreement) to mild (the argument needs
updating). Such critical citations do not just come in review essays;
they often come within discursive footnotes.
The vast majority of citations that are not critical in any of the
ways noted above still may not suggest that the cited scholarship has
been central to developing the work in question. Let’s look at another
common citation practice, which we might call situating the work. This
is the citation that is usually embedded in a textual footnote with a
large group of other citations. These citation groupings come in
different registers, too. Some might come at the beginning of an
article, describing previous work on the general topic (“For previous
treatments of…” or “A brief review of the literature on this topic would
include…”). They can come mid-argument on a particular point (“For
examples of this phenomenon in other historical contexts, see…” or
“Similar patterns have been found in the following cases: …”).
Here’s another complication. Journals have highly varied policies about citations. Three examples from my field. The Journal of the Early Republic will not review manuscripts more than 9,000 words in length, including notes. The Journal of American History requires that submitted manuscripts be not less than 10,000 or more than 14,000 words, including notes. The generous William and Mary Quarterly
tells authors that their manuscripts may not exceed 10,000 words,
excluding notes, and notes may not exceed 5,000 words. I’m guessing that
authors of WMQ articles include more of those “situating” citations than in other journals.
Another regular citation type is the exemplar. For a long time it was pretty much de rigueur
when discussing the emergence of nations in the wake of democratic
revolutions to reference Jurgen Habermas on the “public sphere,” and
Benedict Anderson on “imagined communities.” You don’t need citation
indices to know that these works were incredibly influential. But the
half-life of influence and the volume of citations, I would argue, were
inversely proportional: the more citations these works collected, the
less intense the engagement with the arguments they made and the more
casual the references.
And there is also the purposely brief reference. Sure, we all think
that every article on a subject brushing up against our own published
work ought to cite us; obviously, we can discount some of this angst as
an inflated assessment of one’s own impact. But sometimes the
no-cite is powerful evidence of impact. Here’s how this goes. Scholar A
is working on topic Z, and publishes just as Scholar B is about to
publish on a similar topic (we’ll call it Z-1). Given the close circles
in which research kin move, it is unlikely that Scholars A and B have
not been aware of one another’s work. So Scholar B might offer this:
“for treatment of a related issue, see Scholar A, citation.” This is a
civil nod, not an acknowledgment of impact.
For an estimate of the proportion of each of these kinds of
references, I did a back-of-the-envelope accounting of citations in a
recent article by a senior scholar in the latest William and Mary Quarterly. Alison Games wrote about “Cohabitation, Surinam-Style: English Inhabitants in Dutch Suriname after 1667,”
which looks at an unusual period in the era of global colonization when
the Dutch conquered an English colony but the two colonial powers
decided to “cohabit.” The breakdown in my admittedly hasty count shows a
preponderance of primary source citations, and only a marginally larger
number of direct as opposed to indirect secondary source references. In
104 footnotes (the WMQ style is to cluster citations at the
end of each paragraph) Games references seventeenth-century Dutch and
English sources 168 times. Fifty-eight times she directly cited a
secondary source, a work of historical scholarship, that provided key
information or insight for her argument about the consequences of the
Dutch-English attempt to live and work side by side. But forty-five
times she offered an indirect reference to a work of scholarship that
offered a comparative example or another perspective on a similar issue.
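The rough proportions behind that hasty count can be sketched in a few lines of Python. The category labels and totals are simply the figures reported above (168 primary, 58 direct secondary, 45 indirect secondary citations); the script is an illustrative tally, not anything from Games’s article itself:

```python
# Back-of-the-envelope tally of citation types in the article discussed
# above. Counts are taken from the text; the breakdown is illustrative.
counts = {
    "primary": 168,            # seventeenth-century Dutch and English sources
    "direct_secondary": 58,    # scholarship providing key information or insight
    "indirect_secondary": 45,  # comparative examples and situating references
}

total = sum(counts.values())   # 271 citations across 104 footnotes
for kind, n in counts.items():
    print(f"{kind}: {n} ({n / total:.1%})")
```

On these numbers, primary sources account for roughly 62% of the citations, direct secondary references about 21%, and indirect ones about 17%.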
Because Games is working in a relatively unexplored field, I thought
that she would be less likely than others to have a high proportion of
indirect (situating or exemplar-style) citations. A quick glance at
the article that follows Games’s, on a much more heavily studied
subject, suggests that other articles indeed may include a much higher
proportion of those indirect references.
I think citation is critical; it is the foundation on which
scholarship is built. In my discipline citations serve any number of
purposes, most of which involve educating the reader of a particular
essay about scholarly context rather than intellectual influence. When
we assume, though, that the volume of citations is ipso facto equivalent to impact, we have likely misapprehended the purpose of many citations, and we have surely missed which citations reflect genuine impact, that is, scholarly influence.