Research metrics
Performance metrics

Research is already assessed on a number of criteria, and with the Web now providing the opportunity to develop new tools and techniques for measuring 'things to do with research', the list of possible assessment criteria is growing. Of course, assessment of an individual's performance for, say, tenure will take into account things such as grants awarded, prizes and medals, patents, student mentoring, teaching duties, offices held, and other measures of contribution to institutional life. Qualitative measures may include collaborations, some form of peer review, responsibilities and, nowadays, some of the so-called Web 2.0-enabled activities (social network indicators).

Bibliometrics

As well as these, there will be attention on bibliometric measures that reflect the individual's publication record - the evidence of his or her research output trail.

The Journal Impact Factor (JIF)

Until recently, only one such measure was used ubiquitously: the Journal Impact Factor. The name alone signals how absurd it is as a metric for the performance of an individual: it measures the impact of journals, not people, yet its significance in shaping publishing practices, directions of research, funding and academic careers cannot be overstated. Its use has been widespread, and it has been employed in the most misguided of ways, even at national research policy level.

The JIF is a metric developed by the Institute for Scientific Information (ISI, now part of Thomson Reuters). A journal's JIF for a given year is calculated from citations to the articles it published in the preceding two years: the number of citations those articles received during the year in question is divided by the number of articles published in the two-year window (a worked sketch of this calculation follows below). The JIF is published for around 9000 journals every year in the Journal Citation Reports, eagerly awaited by publishers, journal editors and editorial boards because a culture of competing on JIFs has grown up in this community. A good JIF is considered a measure of success, despite all the flaws of the system and the opportunities for manipulation that exist. And a good JIF - there is no such thing as a straightforwardly good JIF, only a relatively good one, since the measure is a measure of relativity - may be a fine thing for a journal and its publisher and editors. It is not a good thing for authors, since authors cannot have a JIF: it is a journal-based metric. Yet authors are measured, commonly and seriously, on the JIF of the journals in which they publish their work. The absurdity of this can be likened to awarding a candidate a university place on the basis of how many other students from his high school have been awarded places: it ignores the performance of the individual and rewards him or her on the basis of a collective measure.

New bibliometrics

The JIF was developed in an era when journals existed only in print and there was only one database large and comprehensive enough to support such a calculation - the one assembled by ISI. ISI still produces the Journal Citation Reports each year, but other, new bibliometrics are emerging now that substantial bodies of literature are held in collections elsewhere. The growing Open Access literature provides huge opportunities in this respect.
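The JIF calculation described above is simple enough to sketch in a few lines of code. The following is a minimal illustration, not ISI's implementation, and all the figures in it are hypothetical.

def journal_impact_factor(citations, articles):
    # JIF for year Y: citations received in Y by articles published
    # in years Y-1 and Y-2, divided by the number of articles
    # published in Y-1 and Y-2.
    return citations / articles

# Hypothetical journal: 120 articles published in 2007 and 130 in 2008;
# those 250 articles were cited 500 times during 2009.
jif_2009 = journal_impact_factor(citations=500, articles=250)
print(jif_2009)  # 2.0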
If all research outputs are open to analysis, useful new measures can be developed encompassing not only research papers but also datasets and other types of output from research activity. This is an exciting area for future development. But even with a focus just on research articles, there is much that can be done to assess impact now that the Open Access literature is growing. There are two main bases for developing measures for the research literature (a toy comparison of the two follows the list below):
Usage metrics, based on data about how often articles are downloaded and viewed
Citation metrics, based on citation-analysis systems that track how often articles are cited
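To make the distinction between the two bases concrete, here is a toy comparison computing one article-level measure of each kind; all the records and numbers are hypothetical.

# A simple usage metric: total downloads of an article,
# summed from (hypothetical) monthly access logs.
monthly_downloads = [310, 275, 198]
usage_score = sum(monthly_downloads)  # 783

# A simple citation metric: the number of citing papers recorded
# for the same article by a citation-analysis system.
citing_papers = ["paper-a", "paper-b", "paper-c"]  # hypothetical records
citation_score = len(citing_papers)  # 3

print(usage_score, citation_score)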
The Open Access literature provides opportunities for the development of a much richer array of bibliometrics, too. Indices of citation latency (how long citations continue to be made to an article), immediacy (how soon citations occur), decay (the pattern of citations to an article over time), cited-by and co-citation measures and so forth will be tools that enable bibliometricians to explore the literature in new ways and gain a greater understanding of how research is communicated, especially when coupled with semantic analysis technologies (a sketch of such a computation appears at the end of this section). This understanding will help to improve research communication in the future.

Research indicators in development

A number of large-scale projects are underway to study the potential for developing new indicators in different areas of research. Some examples include:

The Humanities Indicators Project, run by the National Humanities Alliance in the US
The European Educational Research Quality Indicators Project, funded by the European Union
The European Reference Index for the Humanities, funded by the European Science Foundation
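As a closing illustration, here is the sketch promised above of how immediacy, latency and the decay pattern might be computed for a single article drawn from an open corpus. The citation records are hypothetical, and the definitions follow the informal ones given earlier.

pub_year = 2005
citing_years = [2005, 2006, 2006, 2007, 2009, 2012]  # hypothetical citation dates

# Immediacy: how soon the first citation occurs after publication.
immediacy = min(citing_years) - pub_year  # 0 years

# Latency: how long citations continue to be made to the article.
latency = max(citing_years) - pub_year  # 7 years

# Decay pattern: citations received in each year since publication.
decay = {year: citing_years.count(year)
         for year in range(pub_year, max(citing_years) + 1)}

print(immediacy, latency, decay)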