Wednesday, 5 October 2022

Guiding Principles (Do-s and Don't-s)

Source: https://www.liverpool.ac.uk/open-research/responsible-metrics/guiding-principles/

Metrics are limited and should be used only as an addition to a thorough expert assessment. Carefully selected metrics can provide supporting evidence in decision making as long as they are utilized in the right context and not in isolation.

When we use metrics, we should:

  • Use metrics related to publications (article-based metrics, e.g. the field-weighted citation impact) rather than the venue of publication (journal-based metrics, e.g. the Journal Impact Factor™, SJR or SNIP) or the author (e.g. the h-index).
  • Be clear and transparent about the methodology behind any metric we use. If a source does not give information about the origins of its dataset (as is the case with Google Scholar, for example), it should not be treated as reliable.
  • Be explicit about any criteria or metrics being used and make it clear that the content of the paper is more important than where it has been published.
  • Use metrics consistently - don't mix and match the same metric from different products in the same statement.

For example: don't use article metrics from Scopus for one set of researchers and article metrics from Web of Science for another set of researchers.

  • Compare Like with Like - an early career researcher's output profile will not be the same as that of an established professor, so raw citation numbers are not comparable.

For example: the h-index does not compare like-for-like as it favours researchers who have been working in their field for a long time with no career breaks.

  • Consider the value and impact of all research outputs, such as datasets, rather than focussing solely on research publications, and consider a broad range of impact, such as influencing policy.

Which metrics should I use and why?

Field-Weighted Citation Impact (FWCI, Scopus)

Can be sourced from SciVal, using data from Scopus.

Pros:

  • It measures the citation impact of the output itself, not the journal in which it is published.
  • It attempts to compare like with like by comparing an output's citations with those of other outputs of the same age and type, classed by Scopus as being in the same subject area. This side-steps the problems inherent in using one measure to compare articles across disciplines - an FWCI of 1.44 is just as good in History as it is in Oncology (see the sketch below).

Cons:

  • It could be seen as disadvantaging work that is purposefully multi- and cross-disciplinary.
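
The calculation behind an FWCI-style figure is simple enough to sketch. The benchmark value below is an invented placeholder rather than real Scopus data, so treat this as an illustration of the idea, not Elsevier's exact method:

    # Minimal sketch of a field-weighted citation impact calculation.
    # The benchmark average is a made-up figure, not Scopus data.
    def fwci(citations, expected_citations):
        """Ratio of an output's citations to the average citations of outputs
        of the same type, publication year and subject field."""
        if expected_citations == 0:
            return None  # undefined when the benchmark set has no citations
        return citations / expected_citations

    # Hypothetical benchmark: comparable outputs average 9 citations; this one has 13.
    print(fwci(13, 9))  # ~1.44, i.e. cited 44% above the world average for its peers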

Publications in top percentiles of cited publications - field-weighted (Scopus)

Can be sourced from SciVal, using data from Scopus.

Pros:

  • Measures include the top 1%, 5%, 10% or 25% of most cited documents worldwide.
  • Should be field-weighted from within SciVal to benchmark groups of researchers.
  • Should use 'percentage of papers in top percentile(s)' rather than 'total value of papers in top percentile(s)' when benchmarking entities of different sizes (see the sketch below).
  • Counts and ranks citations for all outputs worldwide covered by the Scopus dataset.
  • Percentile boundaries are calculated for each year, meaning an output is compared to the percentile boundaries for its publication year.
  • Can be used to distinguish between entities where other metrics such as no. of outputs or citations per output are similar.

Cons:

  • Less robust for small samples - the data only become reliable as the sample size increases, so comparing a unit with another of a similar size is more meaningful than comparing one researcher with another.
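
As a sketch of how the percentile approach works (omitting the field-weighting step for brevity), the citation counts below are invented; the point is that thresholds are set per publication year and the result is reported as a percentage so that units of different sizes can be compared:

    # Illustrative sketch: per-year top-10% thresholds, then the percentage of a
    # unit's outputs above them. Citation counts are invented, and the
    # field-weighting SciVal applies is omitted for brevity.
    import numpy as np

    world = {  # hypothetical worldwide citation counts by publication year
        2020: np.array([0, 1, 1, 2, 3, 5, 8, 13, 40, 120]),
        2021: np.array([0, 0, 1, 1, 2, 2, 4, 6, 15, 60]),
    }
    thresholds = {year: np.percentile(counts, 90) for year, counts in world.items()}

    unit = [(2020, 55), (2020, 3), (2021, 20), (2021, 1)]  # (year, citations) for one unit
    in_top = [cites >= thresholds[year] for year, cites in unit]
    print(f"{100 * sum(in_top) / len(unit):.0f}% of this unit's outputs are in the top 10%")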

Altmetrics

Can be sourced from altmetric.com's Explorer for Institutions; any UoL user can access this resource. These metrics are also displayed in Liverpool Elements and the Institutional Repository for a publication with a DOI.

Pros:

  • Can give an indication of the wider impact of outputs, tracking their use in policy documents, news items, and so on.
  • Can provide an early indication of a paper's likely impact, before it has had time to accumulate citations - there is a correlation between the number of Mendeley readers saving a paper (which can be tracked via Altmetric) and its eventual number of citations.

Cons:

  • Open to being artificially influenced. Altmetric Explorer will, for example, discard repeated tweets about the same research from a single account, but it may not be sophisticated enough to detect multiple accounts tweeting a DOI purely to inflate an Altmetric score.
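
For a single publication, attention data can also be pulled programmatically. The sketch below uses Altmetric's free public details endpoint; the DOI is only an example and the response field names are assumptions for illustration, so they may differ from what the API returns for a given record:

    # Hedged sketch: query Altmetric's public details endpoint for one DOI.
    # The DOI is just an example and the field names are assumptions; the
    # endpoint is rate-limited and returns 404 for untracked DOIs.
    import json
    import urllib.request

    doi = "10.1038/nature12373"  # replace with a DOI of interest
    url = f"https://api.altmetric.com/v1/doi/{doi}"

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    print("Altmetric score: ", data.get("score"))
    print("Policy mentions: ", data.get("cited_by_policies_count", 0))
    print("Mendeley readers:", data.get("readers", {}).get("mendeley", 0))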

Which metrics should I avoid and why?

H-index

For external material it can be sourced from Scopus, which covers the full span of a career, rather than from SciVal, which only covers publications from 1996 onwards. Other sources of h-indices are Web of Science and Google Scholar.

Pros:

  • Is focused on the impact of an individual researcher rather than on the venue of publication.
  • Is not skewed by a single highly cited paper, nor by a large number of poorly cited documents.

Cons:

  • Not recommended as an indicator of research performance because of its bias against early career researchers and those who have had career breaks.
  • The h-index is meaningless without context from the author's discipline.
  • There is too much temptation to pick and choose h-indices from different sources to select the highest one. h-indices can differ significantly between sources because of their different datasets - there is no such thing as a definitive h-index.
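
The calculation itself is trivial - which is part of the problem, because the result depends entirely on which documents the source indexes. A minimal sketch:

    # Sketch of the h-index: the largest h such that h papers have at least
    # h citations each. Feed it citation counts from Scopus, Web of Science or
    # Google Scholar and you will usually get different answers, because each
    # source indexes a different set of documents.
    def h_index(citation_counts):
        h = 0
        for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4

Note that the index can only stay level or rise as more papers accumulate, which is why it favours long, unbroken careers.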

JIF = Journal Impact Factor 

Only available from Clarivate Analytics. Dataset is those journals indexed by the Web of Science citation indices (Science Citation Index, Social Science Citation Index, Arts and Humanities Citation Index) and output types are only articles and reviews.

Cons:

  • Citation distributions within journals are extremely skewed - the mean number of citations an article in a specific journal receives can be very different from the number a typical article in that journal receives.
  • The JIF is nothing more than the mean average number of citations to articles in a journal, and thus highly susceptible to outliers.
  • Journal metrics do not reflect new/emerging fields of research well.
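
A made-up distribution shows why a mean-based journal metric can misrepresent the typical article:

    # Invented per-article citation counts for one journal: a single highly
    # cited paper drags the mean (the JIF-style figure) far above the median.
    from statistics import mean, median

    citations = [0, 0, 1, 1, 2, 2, 3, 4, 6, 181]
    print("mean (JIF-style):", mean(citations))    # pulled up to 20 by the outlier
    print("median (typical):", median(citations))  # the typical article gets about 2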

CiteScore

Only available from Elsevier. Dataset is those journals indexed by the Scopus citation indices and covers all output types. It covers a wider range of item types than the Impact Factor.

Cons:

  • Citation distributions within journals are extremely skewed - the mean number of citations an article in a specific journal receives can be very different from the number a typical article in that journal receives.
  • As with JIF, the CiteScore is nothing more than the mean average number of citations to articles in a journal, and thus highly susceptible to outliers.
  • Journal metrics do not reflect new/emerging fields of research well.
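
The shape of the calculation, sketched below with invented numbers, is the same kind of mean as the JIF, just over a wider window and a wider set of item types, so the same skew problem applies:

    # Rough CiteScore-style calculation with invented figures: citations
    # received in a four-year window to items published in that window,
    # divided by the number of items published. Still a mean, so still
    # sensitive to a few very highly cited papers.
    citations_in_window = 1200   # citations in 2020-2023 to items published 2020-2023
    documents_in_window = 400    # items of any indexed type published 2020-2023
    print(citations_in_window / documents_in_window)  # 3.0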

SNIP = Source normalised impact per paper 

Owned by CWTS and sourced from SciVal based on Scopus data.  Covers articles, conference papers and reviews.

Pros:

  • SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact.
  • SNIP comes with a 'stability interval' which reflects the reliability of the indicator - the wider the stability interval, the less reliable the indicator.

Cons:

  • Although consideration is taken to correct for differences in fields, the SNIP is still a journal-based metric and thus the metric applies to the place that an output is published rather than the merits of the output itself.
  • Journal metrics do not reflect new/emerging fields of research well.
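
The idea behind the normalisation can be sketched with invented figures: the journal's citations per paper are divided by a measure of how heavily papers in its field tend to cite, so citation-dense fields do not look artificially strong. This is only the general shape of the calculation, not CWTS's exact method:

    # Hedged sketch of the SNIP idea, with invented numbers.
    raw_impact_per_paper = 4.0           # average citations per paper in the journal
    database_citation_potential = 2.5    # how heavily papers in the journal's field cite
    print(raw_impact_per_paper / database_citation_potential)  # 1.6, above par for its field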

SJR = Scimago Journal Rank

Owned by Scimago Institutions Rankings and based on Scopus data.  Covers articles, conference papers and reviews.

Cons:

  • Citations are weighted based on the source that they come from. The subject field, quality and reputation of the journal directly affect the value of a citation.
  • The SJR is a journal-based metric and thus the metric applies to the place that an output is published rather than the merits of the output itself.
  • Journal metrics do not reflect new/emerging fields of research well.
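
The source-weighting works along the lines of PageRank: a citation from a prestigious journal is worth more than one from an obscure journal, and prestige is computed iteratively. The toy below illustrates that idea only; it is not the actual SJR algorithm and the citation fractions are invented:

    # Toy prestige-weighted citation counting (PageRank-like), not real SJR.
    import numpy as np

    # transfer[i][j] = fraction of journal i's outgoing citations that go to journal j
    transfer = np.array([[0.0, 0.7, 0.3],
                         [0.5, 0.0, 0.5],
                         [0.9, 0.1, 0.0]])

    prestige = np.full(3, 1 / 3)          # start with equal prestige
    for _ in range(50):                   # iterate until prestige stabilises
        prestige = 0.15 / 3 + 0.85 * transfer.T @ prestige

    print(prestige.round(3))  # a citation from a high-prestige journal counts for more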

Raw citation count - unless comparing like-for-like

Can be sourced from SciVal/Scopus, Web of Science, PubMed, etc. Most publishers' websites will display a citation count, either sourced from a provider such as Scopus or from their own databases.

Pros:

  • A simple-to-read measure of attention when comparing outputs of the same type and age within the same field.

Cons:

  • Citation practice varies across fields; the same number of citations could be considered low in one field (e.g. immunology) but high in another (e.g. maths).
  • Certain output types such as Review Articles will frequently be more highly cited than other types.
  • As an example of how citation counts can be artificially inflated, the paper "Effective Strategies for Increasing Citation Frequency" lists 33 different ways to increase citations.
