Saturday 6 June 2015

Getting the mix right for assessing research impact



 Source: http://theconversation.com/getting-the-mix-right-for-assessing-research-impact-18469


Perhaps the best way to measure research impact is a judicious mix-and-match approach. eepingtime_ca/Flickr


Outside professional sport, few industries measure the performance of their workforces more intensively than academia.



There’s particular scrutiny on how much difference academic research
has made to the world. But objectively measuring research impact is fraught with difficulty.



As the vast majority of research output takes the form of published
reports, there’s been considerable effort over the past few decades to
develop ways of measuring the impact of those publications – and of
researchers' work more generally – by counting publications and their
characteristics.



These bibliometric indices have long been used to guide decisions
about which individuals or projects should receive funding – but they
have started to fall into disrepute.



No more impact factor or publication counts

Increasing awareness of the limitations of bibliometrics has led to
major shifts in assessment practice. Most prominent has been the
rejection of the “journal impact factor” by many funding bodies,
including Australia’s National Health and Medical Research Council
(NHMRC).



Crudely, journal impact factor is the average number of times papers
in a journal are cited within two years of publication. Researchers were
once encouraged to quote the impact factor of the journal they
published in, as a means of demonstrating the worth of their research.
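
To make that arithmetic concrete, here is a rough sketch of the
two-year calculation in Python; the journal figures below are invented
purely for illustration.

    # Sketch of the two-year journal impact factor: citations received
    # this year to papers the journal published in the previous two
    # years, divided by the number of citable papers it published in
    # those two years. All numbers are invented for illustration.

    def impact_factor(citations_this_year, citable_items_prev_two_years):
        return citations_this_year / citable_items_prev_two_years

    # A hypothetical journal that published 400 citable papers in
    # 2013-2014 and saw them cited 1,200 times during 2015:
    print(impact_factor(1200, 400))  # 3.0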



The main – and irrefutable – argument against the journal impact
factor is that it’s a composite; it shows how often all the work
published in the journal is referenced, so cannot demonstrate the impact
of any single piece of research.



It has also been a target for manipulation by journals seeking a higher profile.



Another way researchers have been assessed in the past is by counting
their publications – the number of individual papers they have written.



Publication counts are now also officially rejected by many funding
organisations. They’re seen as potentially reflecting verbosity or
fragmentation in output, as much as real contribution to advancing
knowledge.



Do citation counts solve the problem?

One bibliometric index of individual impact that has retained
standing in assessment guidelines is the citation count, which may be
calculated for a single paper, or a group of papers. Citation count is
the number of times a work is referenced in subsequent scientific
publications.
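
As a rough illustration of that counting (with invented paper names
rather than real ones):

    # Sketch: a citation count is the number of later papers whose
    # reference lists include a given work. All data below are invented.

    reference_lists = {
        "Smith 2014": ["Jones 2010", "Lee 2012"],
        "Chen 2015": ["Jones 2010"],
        "Park 2016": ["Jones 2010", "Lee 2012", "Kaur 2013"],
    }

    def citation_count(paper):
        return sum(paper in refs for refs in reference_lists.values())

    print(citation_count("Jones 2010"))  # 3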



Citation counts are seen as a purer measure of impact because they’re
based entirely on the individual’s work (as opposed to the journals it
appeared in), but they too have their limitations.



Counts will most likely be higher in very popular fields of research,
where there are many authors currently active, than in more esoteric
fields. And the most highly referenced paper on a topic may be a recent
update of findings from other authors, rather than an original piece of
research.






Objectively measuring research impact is fraught with difficulty. Shutterstock



But perhaps their biggest limitation is that it can take several years
for the impact of a paper to be translated into citation counts.



For someone trying to get a job after finishing her doctoral thesis,
or returning to work after several years of full-time parenting, or even
a well-established researcher needing to demonstrate recent
achievement, an important new publication may be simply too recent to
have accrued more than a handful of citations.



So, what’s the best way to predict the impact of research? Well, if
you had to put money on it, you’d best go with the journal impact
factor.



Journals have a high impact factor because they have developed
strategies for attracting and selecting work that draws the attention of
people in the field.



In my own career, I’ve had many papers in both high- and low-impact
journals that did not garner many citations. But virtually all of my
well-cited papers were in high-impact-factor journals.



As for the humble publication count, it comes into its own when analysing the potential of early-career researchers.



The chance of being awarded a post-doctoral fellowship by the NHMRC
is low if you have fewer than half a dozen first-author publications,
and rises quickly once you’re past ten papers.



This is an entirely reasonable way to judge a researcher embarking on
the academic road. Few will have made any measurable contribution to
their discipline in terms of external impact, but demonstrating the
ability to write and publish consistently is almost certainly a
prerequisite for success.



Make the best use of available tools

All bibliometric indices have their limitations, and are hopelessly
inadequate for assessing “real” impact – the changes research makes to
people’s lives. So we need to continue looking for other approaches,
such as measuring impact on policy and practice.



But we will keep calculating and using bibliometrics, so rather than
selecting or rejecting one or another, perhaps the best way to measure
impact is a judicious mix-and-match approach.



Here are some suggestions:



  1. Those applying for research funding should be allowed to cite
    journal impact factors for any publication less than two years old.
  2. When asked to comment on their most impactful work, researchers
    should be explicitly reminded to report both the citation counts and
    the influence each paper has had on practice or policy.
  3. Leave early-career applicants in no doubt that, as much as their
    other contributions to academic life may be valued, they will be judged
    largely on how many papers they have published as first author – no
    matter where they were published or how many times they were cited.

There’s no simple solution to the problem of objectively assessing
research impact. And researchers will keep talking about high-impact-factor
journals and publication counts, so it’s probably useful to acknowledge
that they still have a legitimate role to play, as long as they are not
the whole story.


