To evaluate the work of scholars
objectively, funding agencies and tenure committees may attempt to
quantify both its quality and impact. Quantifying scholarly work is
fraught with danger, but the current emphasis on assessment in academe
suggests that such measures can only become more important. There are a
number of descriptive statistics associated with scholarly productivity.
These fall broadly into two categories: those that describe individual
researchers and those that describe journals.
Rating Researchers
Raw Citation Counts
One way to measure the impact of a paper is to simply count how many times it has been cited by others. This can be accomplished by finding
the paper in Google Scholar and
noting the "Cited by" value beneath the citation. Such numbers may be
added together, or perhaps averaged over a period of years, to provide
an informal assessment of scholarly productivity. Better yet, use Google Scholar Citations
to keep a running list of your publications and their "cited by"
numbers. For more information on determining where, by whom, and how
often an article has been cited, see IC Library's guide on Cited Reference Searching.
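As a small illustration, the Python sketch below totals and averages a handful of hand-collected "Cited by" values. The paper labels and numbers are invented placeholders, not real data; in practice you would read these figures off Google Scholar yourself.

```python
# A minimal sketch: totaling and averaging "Cited by" counts collected
# by hand from Google Scholar. The labels and numbers are invented.
cited_by = {
    "Paper A (2018)": 42,
    "Paper B (2020)": 17,
    "Paper C (2021)": 5,
}

total = sum(cited_by.values())
average_per_paper = total / len(cited_by)

print(f"Total citations: {total}")                    # 64
print(f"Average per paper: {average_per_paper:.1f}")  # 21.3
```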
H-index
The h-index, created by Jorge E. Hirsch of the University of California, San Diego, is described by its creator as follows: "A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np - h) papers have no more than h citations each."1

In other words, if I have an h-index of 5, that means that my five
most-cited papers each have been cited five or more times. This can be
visualized by a graph, on which each point represents a paper. The
scholar's papers are ranked along the x-axis by decreasing
number of citing papers, while the actual number of citing papers is
shown by the point's position along the y-axis. The grey line
represents equality between paper rank and number of citing articles.
The h-index is equal to the number of points on or above the grey line.
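Hirsch's definition translates directly into a short calculation. The Python sketch below, using made-up citation counts, ranks papers from most to least cited and returns the largest rank at which the citation count still meets or exceeds the rank:

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank papers from most to least cited (the x-axis of the graph above).
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        # A paper counts toward h only while its citation count is at
        # least as large as its rank in the ordering.
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times yield an h-index of 4:
# four papers have at least four citations each, but there are not
# five papers with at least five citations each.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```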
The value of h will depend on the database used to calculate it.2
Thomson Reuters' Web of Science and Elsevier's Scopus (neither is
available at IC) offer automated tools for calculating this value. In
November 2011, Google Scholar Citations became generally available; it calculates h based on the Google Scholar database. An add-on for Firefox called the Scholar H-Index Calculator also draws on Google Scholar data.
Comparisons of h are only valid within a discipline, since
standards of productivity vary widely between fields. Researchers in the
life sciences, for instance, will generally have higher h values than those in physics.1
A large number of modifications to the h-index have been proposed, many attempting to correct for factors such as length of career and co-authorship.
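One well-known variant, offered here purely as an illustration, is the g-index: the largest number g such that a scholar's g most-cited papers have received at least g² citations among them, which gives more weight to very highly cited papers. A minimal sketch, again with invented numbers:

```python
def g_index(citation_counts):
    """Return the g-index: the largest g such that the g most-cited
    papers together have received at least g**2 citations."""
    ranked = sorted(citation_counts, reverse=True)
    running_total = 0
    g = 0
    for rank, citations in enumerate(ranked, start=1):
        running_total += citations
        if running_total >= rank ** 2:
            g = rank
    return g

# The same five papers as above (10, 8, 5, 4, 3 citations) give a
# g-index of 5, since all five together have 30 >= 5**2 = 25 citations.
print(g_index([10, 8, 5, 4, 3]))  # -> 5
```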
ImpactStory (currently in beta)
is a service that attempts to show the impact of research not only
through citations but also through social media (e.g., how often an article
has been tweeted about, saved to social bookmarking services, etc.).
Rating Journals
Rightly or wrongly, the quality of a paper is sometimes judged by the reputation of the journal in which it is published. Various metrics
have been devised to describe the importance of a journal.
Impact Factor
The Impact Factor (IF) is a proprietary measure calculated annually by Thomson Reuters (formerly by ISI). This figure is based on how often papers published
in a given journal in the preceding two years are cited during the
current year. This number is divided by the number of "citable items"
published by that journal during the preceding two years to arrive at
the IF. Weaknesses of this metric include sensitivity to inflation
caused by extensive self-citation within a journal and by single,
highly-cited articles. For more information about the IF, see the essays of Dr. Eugene Garfield,
founder of ISI. Determining a journal's IF requires access to Thomson
Reuters Journal Citation Reports, not available at IC Library.
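To make the arithmetic concrete, here is a small Python sketch of the calculation described above, using invented counts for a hypothetical journal's 2012 Impact Factor:

```python
# A worked sketch of the Impact Factor arithmetic described above, using
# invented counts for a hypothetical journal's 2012 IF.
citations_in_2012_to_2010_papers = 300
citations_in_2012_to_2011_papers = 150
citable_items_2010 = 120
citable_items_2011 = 130

impact_factor_2012 = (
    (citations_in_2012_to_2010_papers + citations_in_2012_to_2011_papers)
    / (citable_items_2010 + citable_items_2011)
)

print(f"2012 Impact Factor: {impact_factor_2012:.2f}")  # 450 / 250 = 1.80
```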
Eigenfactor
The eigenfactor is a more recent and freely available metric, devised at the University of Washington by Jevin West and Carl
Bergstrom.3
Where the IF counts all citations to a given article as being equal,
the eigenfactor weights citations based on the impact of the citing
journal. Its creators assert that it can be viewed as "a rough estimate
of how often a journal will be used by scholars." Eigenfactor values are
freely available at eigenfactor.org.
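The toy Python sketch below, built on an invented three-journal citation matrix, only illustrates this "weighted citation" idea; the published eigenfactor computation is more elaborate (for example, it excludes journal self-citations and involves further normalization), so treat this strictly as a sketch of the principle.

```python
# A toy illustration of weighting citations by the score of the citing
# journal. This is NOT the published eigenfactor algorithm; it is only
# an eigenvector-style iteration over an invented citation matrix.
#
# cites[i][j] = number of citations from journal i to journal j
cites = [
    [0, 10, 2],
    [4, 0, 1],
    [6, 3, 0],
]
n = len(cites)
scores = [1.0 / n] * n  # start every journal with an equal score

for _ in range(100):  # iterate until the scores stop changing much
    new_scores = [0.0] * n
    for i in range(n):
        outgoing = sum(cites[i])
        for j in range(n):
            # Journal i passes its current score to the journals it cites,
            # in proportion to where its citations go, so a citation from
            # a high-scoring journal is worth more.
            new_scores[j] += scores[i] * cites[i][j] / outgoing
    scores = new_scores

print([round(s, 3) for s in scores])
```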
SCImago Journal Rank Indicator
The SCImago Journal Rank indicator (SJR) is another freely available metric.4 It uses an algorithm similar to Google's PageRank. Currently, this
metric is only available for those journals covered in Elsevier's Scopus
database. Values may be found at scimagojr.com.
References
1.
Hirsch, J.E. An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102, 16569-16572 (2005).
2.
Meho, L.I. & Yang, K. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology 58, 2105-2125 (2007).
3.
Bergstrom, C. Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News 68, (2007).
4.
González-Pereira, B., Guerrero-Bote, V.P. & Moya-Anegón, F. The SJR indicator: A new indicator of journals' scientific prestige.