Thursday, 17 November 2022

Why misconduct could keep scientists from earning Highly Cited Researcher designations, and how our database plays a part

 Source: https://retractionwatch.com/2022/11/15/why-misconduct-could-keep-scientists-from-earning-highly-cited-researcher-designations-and-how-our-database-plays-a-part/

Gali Halevi

Retraction Watch readers are likely familiar with Clarivate’s Highly Cited Researcher (HCR) designation, awarded to researchers “who have demonstrated a disproportionate level of significant and broad influence in their field or fields of research.” And they might also recall that researchers whose work has come under significant scrutiny, or has even been retracted, can sometimes show up on that list.

As of this year, that is less likely to happen, thanks to a change Clarivate announced today along with the list of nearly 7,000 HCRs:

This year Clarivate partnered with Retraction Watch and extended the qualitative analysis of the Highly Cited Researchers list, addressing increasing concerns over potential misconduct (such as plagiarism, image manipulation, fake peer review).  With the assistance of Retraction Watch and its unparalleled database of retractions, Clarivate analysts searched for evidence of misconduct in all publications of those on the preliminary list of Highly Cited Researchers. Researchers found to have committed scientific misconduct in formal proceedings conducted by a researcher’s institution, a government agency, a funder or a publisher are excluded from the list of Highly Cited Researchers. 

We asked Gali Halevi, director of the Institute for Scientific Information at Clarivate, to answer a few questions about the change.

What prompted Clarivate to add a check for potential misconduct among Highly Cited Researchers this year?

In recent years we have felt the need to deepen our qualitative analysis behind the creation of the annual Highly Cited Researchers list, in an effort to navigate what appears to us to be increasing levels of research misconduct in the academic community as a whole.

In some nations and research systems, the incentives to achieve Highly Cited Researcher status are quite high. This status often brings rewards for a researcher: respect, promotion, recruitment and financial bonuses are all commonplace. Not only are the personal rewards high, but there is also strong institutional pressure to enter or remain on the list. Unfortunately, this leads a very small number of researchers to use ever more ingenious gaming methods each year in order to be included.

In 2019 we began to exclude authors whose collection of highly cited papers revealed unusually high levels of self-citation. Inordinate self-citation and unusual collaborative group citation (citation circles or cabals) can seriously undermine the validity of the data analyzed for Highly Cited Researchers. These activities may represent efforts to game the system and create self-generated status.
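As a rough, hypothetical illustration of the kind of self-citation screen described above: of all citations to a candidate’s highly cited papers, how many come from papers on which the candidate is also an author? The sketch below is not Clarivate’s code; the data layout, the `self_citation_rate` helper and the 30% cutoff are assumptions made purely for the example.

```python
def self_citation_rate(author_id: str, papers: list[dict]) -> float:
    """Fraction of citations to an author's highly cited papers that come
    from papers on which the same author also appears.

    `papers` is a hypothetical list of records, one per highly cited paper,
    each carrying "citing_author_ids": the author IDs of every citing paper.
    """
    total = 0
    self_cites = 0
    for paper in papers:
        for citing_authors in paper["citing_author_ids"]:
            total += 1
            if author_id in citing_authors:
                self_cites += 1
    return self_cites / total if total else 0.0


# Illustrative check; the 30% cutoff is an assumption, not Clarivate's threshold.
candidate_papers = [
    {"citing_author_ids": [["A1", "A7"], ["A1"], ["B2", "C3"]]},
    {"citing_author_ids": [["A1"], ["D4"]]},
]
rate = self_citation_rate("A1", candidate_papers)
if rate > 0.30:
    print(f"Flag for manual review: self-citation rate {rate:.0%}")
```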

Unfortunately, it appears to us that such activity is increasing, which warrants increased vigilance on our side in creating a list which accurately reflects genuine, community-wide research influence.

Can you estimate how many potential Highly Cited Researchers were flagged by the checks?

With the implementation of more filters this year, the number of potential Highly Cited Researcher candidates excluded from our final list increased from some 300 in 2021 to about 550 in 2022.

It is worrying to think that in a few years perhaps up to 10% of those we are identifying through our algorithms may be engaged in publication and citation gaming or misconduct – which is all the more reason to set up proper methods for identifying such behavior now, and to raise awareness of these issues so others within the community can take necessary action as well.  

Were there particular kinds of misconduct that showed up more often in the checks?

Common types of misconduct include plagiarism, fabrications of data or findings, data or image manipulation, false reporting of results, and extreme self-citation.

We have always excluded retracted highly cited papers from our analysis. This year we also used the Retraction Watch database to look for retractions for reasons of misconduct among an author’s publications that were not highly cited. (It is important to note that some retractions are the result of publishing errors or corrections, so we did not consider these as evidence for exclusion in our deeper level of analysis.) This proved useful for identifying researchers to exclude from our list, and we will continue with this procedure in the future.
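The cross-check described above can be pictured as a simple filter over an author’s publication list: keep any retraction whose stated reason suggests misconduct, and set aside retractions that stem from honest errors or publisher corrections. The sketch below is a hypothetical illustration only; the export file, column names and reason categories are assumptions, and the real Retraction Watch database records reasons in a richer, multi-valued form.

```python
import csv

# For this illustration, these retraction reasons do NOT count as misconduct
# (honest errors and publisher corrections are set aside, per the answer above).
NON_MISCONDUCT_REASONS = {"error", "correction", "publisher error"}


def misconduct_retractions(author_dois: set[str], retractions_csv: str) -> list[dict]:
    """Return retraction records for an author's papers whose stated reason
    suggests misconduct rather than honest error.

    `retractions_csv` is a hypothetical export with "doi" and "reason" columns;
    the real Retraction Watch database records reasons in far more detail.
    """
    flagged = []
    with open(retractions_csv, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if (row["doi"].lower() in author_dois
                    and row["reason"].lower() not in NON_MISCONDUCT_REASONS):
                flagged.append(row)
    return flagged


# Usage (file name and DOI are placeholders): any hit would prompt manual review.
# hits = misconduct_retractions({"10.1234/example.doi"}, "retractions_export.csv")
```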

We have lately noted more ingenious gaming methods that require greater scrutiny of the publication and citation records of putative Highly Cited Researchers. For example, outsized output, in which an individual publishes two or three papers per week over long periods by relying on international networks of co-authors, raises the possibility that the individual’s high citation counts derive largely from those co-authors citing the work in papers published without the individual in question. If more than half of a researcher’s citations derive from co-authors, for example, we consider this narrow rather than community-wide influence, and that is not the type of evidence we look for in naming Highly Cited Researchers. Any author publishing two or three papers per week strains our understanding of the normative standards of authorship and credit.
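The “more than half of a researcher’s citations derive from co-authors” test lends itself to a back-of-the-envelope check: take every paper that cites the candidate and count how many share at least one author with the candidate’s co-author network. The sketch below is a hypothetical illustration of that arithmetic, not Clarivate’s method; the names and data structures are assumptions.

```python
def coauthor_citation_share(coauthors: set[str], citing_papers: list[set[str]]) -> float:
    """Fraction of a candidate's citing papers that include at least one of
    the candidate's own co-authors.

    `citing_papers` is a hypothetical list holding, for each paper that cites
    the candidate, the set of that paper's author IDs.
    """
    if not citing_papers:
        return 0.0
    from_network = sum(1 for authors in citing_papers if authors & coauthors)
    return from_network / len(citing_papers)


# Illustrative check: more than half of the citing papers coming from the
# co-author network is read here as narrow rather than community-wide influence.
citing = [{"C1", "X9"}, {"C2"}, {"Y5"}, {"C1", "C3"}]
share = coauthor_citation_share({"C1", "C2", "C3"}, citing)
if share > 0.5:
    print(f"Narrow-influence signal: {share:.0%} of citing papers from co-author network")
```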

Do you think that knowing Clarivate will be checking for evidence of misconduct could deter researchers from certain kinds of behavior that might increase their citations artificially?

Our analysts use many different filters to identify and exclude researchers whose publication and citation activity is unusual and suspect. We will not enumerate all the checks and filters being deployed in the interest of staying ahead of those attempting to game our identification of Highly Cited Researchers.

We hope this increased vigilance will deter some people, but in reality the issue is systemic and growing, from our perspective. Retraction Watch itself has lately reported how some publishers have uncovered schemes that require hundreds of retractions for specific titles. This then is an explicit call for the research community to police itself through more thorough peer review and other internationally recognized procedures to ensure integrity in research and its publication. 

