Source: https://scientistseessquirrel.wordpress.com/2015/08/18/does-mega-authorship-matter/

Does mega-authorship matter?




Image: Aad et al. 2015, Phys Rev Letters 114:191803 (short excerpt from author list)


Perhaps you’ve noticed that authorship lists are getting longer. If
you haven’t, Aad et al. (2015, Phys Rev Letters 114:191803) is an
interesting read – especially the last 25 pages, which are taken up by a
list of its 5,154 coauthors. This is “mega-authorship”, and it’s
attracted a lot of attention. Last week, even the Wall Street Journal noticed
Aad et al., suggesting all kinds of reasons that mega-authorship is a
problem for science. For example, the WSJ assures us, “scientists say
that mass authorship makes it harder to tell who did what and who
deserves the real credit for a breakthrough—or blame for misconduct”. As
mega-authored papers have become more common (in the last few years,
dozens of 1,000-author papers have appeared, mostly in particle
physics), there’s been similar handwringing from other outlets, both lay
and scientific.


Actually, neither the trend to increasing coauthorship nor the
handwringing over it is particularly new. Coauthorship was rare in
science for a long time: between 1655 (the birth of the first scientific
journal) and 1800, coauthorship rates were less than 1% in biology (a
little higher in astronomy, even lower in mathematics)*. Coauthorship
began to increase gradually through the 19th century and then more quickly through the 20th;
by 2000, 90% of all scientific papers were coauthored**. Average
numbers of coauthors have risen too, and in parallel, so have
expressions of concern over individual authors’ contributions to the
work. Still: 5,154 authors! The Aad et al. paper (along with its 1,000+
author brethren) seems like a beast of an entirely new colour.


Does mega-authorship threaten our concept of authorship in science?
It would be easy, and fun, to write with a scandalized tone about how
mega-authorship corrupts all that is good and decent about scientific
publishing. But does it really matter? I think both yes and (mostly) no.


It’s certainly clear that mega-authorship involves a very different concept of authorship than most of us are used to. For example, here are the International Committee of Medical Journal Editors’ criteria for authorship, and I think these would sit comfortably with most of us:


  • Substantial contributions to the conception or design
    of the work; or the acquisition, analysis, or interpretation of data for
    the work; AND
  • Drafting the work or revising it critically for important intellectual content; AND
  • Final approval of the version to be published; AND
  • Agreement to be accountable for all aspects of the work in
    ensuring that questions related to the accuracy or integrity of any part
    of the work are appropriately investigated and resolved.

It’s pretty clear that the 5,154 authors of Aad et al. can’t possibly
meet these criteria. Even having each author approve the final version
would be so cumbersome it surely didn’t happen. As for drafting the
paper: even if there were some logistically feasible way to have so many
authors actually write together, the paper has more authors than it has
words.


Actually, I didn’t need Holmesian deductions to conclude that Aad et
al. aren’t using a conventional definition of authorship. It’s widely
known*** that at least two groups in experimental particle physics
operate under the policy that every scientist or engineer working on a
particular detector is an author on every paper arising from that
detector’s data. (Two such detectors at the Large Hadron Collider were
used in the Aad et al. paper, so the author list is the union of the
“ATLAS collaboration” and the “CMS collaboration”.) The result of this
authorship policy, of course, is lots of “authorships” for everyone: for
the easily searchable George Aad, for instance, over 400 since 2008.


It’s clear, then, that authorship practices in experimental particle
physics bear little resemblance to those in most other fields of
science. That would matter a great deal, if it caught people by
surprise. After all, we use authorship to assess each other all the
time: for hiring, for tenure, for grant adjudication. Might we get the
idea that scientists in ecology (say) are unproductive dilettantes next
to particle physicists or genome sequencers? I hope not. The risk is
that uncritical paper-counters leafing through stacks of CVs will make
unfair assessments. This risk seems low when authorship variation is so
blindingly obvious: nobody is going to be fooled by mega-authorship (and
mega-authors aren’t trying to fool anyone). The real
authorship problems are the small sins (like PIs who insist on being on
every paper coming out of their labs), and the more subtle variation
between subdisciplines (like the different connotations of first and
last authorship in ecology vs. molecular biology). Assessment committees
worth their salt work hard to discover and consider this sort of
fine-grained variation, but it’s true that not every assessment
committee is worth its salt. Good Chairs and Deans, with broad
perspective on science, have an important role to play here.


And what of the WSJ’s concern for “who deserves the real credit for a
breakthrough—or blame for misconduct”? This seems straightforward to
me: participants in mega-authorship are agreeing to dilution of credit;
but if misconduct occurs, all are likely to be tarred with the same brush. This
certainly has costs to authors (which they presumably weigh against the
benefits of appearing on so many mega-authored papers), but it’s not
clear to me how it damages the progress of science.


I’m sure I’ll never be a member of a 5,154-author team (and I’ll
never publish 400 papers in 7 years, either). I’ll have to be content
with my smaller number of fewer-authored papers (even the occasional
solo-authored one). I may even think of mega-authorship, from where I
sit as an ecologist, as vaguely silly. But I won’t panic when,
inevitably, the first 10,000-author paper appears.


I may chuckle a little bit, though.


© Stephen Heard (sheard@unb.ca) August 18, 2015


UPDATE: Thanks to Chris Buddle for drawing my attention to this PeerJ paper
suggesting a link between mega-authorship and fraud (NOT, I think, with
any specific reference to Aad et al. or the LHC groups).  It's a
worthwhile read for a different perspective, and I’m going to think more
about the fraud angle.


UPDATE (2): Hat tip to Gregor Kalinkat for pointing out this conference paper
(International Conference on Scientometrics and Informetrics). The
authors provide some evidence that per-author rates of publication have
grown, but average numbers of coauthors have grown somewhat faster, such
that “fractional productivity” (counting papers divided by coauthors)
has actually dropped.  This could suggest that large-scale
coauthorship (and especially mega-authorship) actually has costs in
overall progress, perhaps due to the logistical overhead of maintaining
huge collaborations.  However, there are many other possibilities,
including decreasing rates of basic-science funding in much of the West
and shifting criteria for awarding coauthorship.  It’s an intriguing
result, though.
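
For readers who want the metric spelled out, here is a minimal sketch (in Python, using an invented toy publication list; none of these records come from the conference paper) of how "fractional productivity" can be tallied, with each paper contributing 1/(number of coauthors) to each of its authors:

    # Hypothetical publication records, for illustration only.
    from collections import defaultdict

    papers = [
        {"title": "Solo paper", "authors": ["Heard"]},
        {"title": "Three-author paper", "authors": ["Heard", "Colleague A", "Colleague B"]},
        {"title": "Mega-authored paper", "authors": [f"Author {i}" for i in range(5154)]},
    ]

    raw = defaultdict(int)           # whole papers credited to each author
    fractional = defaultdict(float)  # papers divided by coauthors, per author

    for paper in papers:
        n = len(paper["authors"])
        for author in paper["authors"]:
            raw[author] += 1
            fractional[author] += 1.0 / n

    # "Heard" has 2 raw papers but only 1 + 1/3 ≈ 1.33 fractional papers;
    # each mega-author gets a whole authorship but only ~0.0002 fractionally.
    print(raw["Heard"], round(fractional["Heard"], 2))

The only point of the toy numbers is that raw authorship counts and fractional counts can diverge sharply once author lists get very long.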


*The very first “coauthored” paper seems to be this: “An Extract of a
Letter Containing Some Observations, Made in the Ordering of
Silk-Worms, Communicated by That Known Vertuoso, Mr. Dudley Palmer, from
the Ingenuous Mr. Edward Digges” (Philosophical Transactions of the
Royal Society 1:26-27, and isn’t that a great title?). But it’s a little
hard to tell, because in the 1600s conventions for authorship – and
even for listing authorship in print – hadn’t yet coalesced. Palmer’s
role seems to have only been to forward to the Royal Society (as a
member) a letter from his cousin Digges (a non-member). This wouldn't, of
course, merit coauthorship today. One hopes.


**Statistics from Glänzel and Schubert (2005) Analyzing scientific
networks through co-authorship, in Moed et al. (eds), Handbook of
quantitative science and technology research: the use of publication and
patent statistics in studies of S&T systems. Kluwer, New York, NY,
pp. 257-276. When my writing book becomes available, there’s a longer discussion of coauthorship there.


***By which I mean, it’s even in Wikipedia.


