
Source: https://mamidala.wordpress.com/2011/07/10/25/

Evaluating Scientists: Citations, Impact Factor, h-Index, Online Page Hits and What Else?

M Jagadesh Kumar
Editor-in-Chief, IETE Technical Review, Department of Electrical Engineering, IIT, Hauz Khas, New Delhi-110 016, India
How to cite this article:

M. J. Kumar, “Evaluating Scientists: Citations, Impact Factor, h-Index, Online Page Hits and What Else?” IETE Technical Review, vol. 26, pp. 165-168, 2009.


Identifying the key performance parameters for active scientists has always been a problematic issue. Evaluating and comparing researchers working in a given area has become a necessity, since these scientists compete for the same limited resources, promotions, awards and fellowships of scientific academies. Whatever method we choose for evaluating the worth of a scientist’s individual research contribution, it should be simple, fair and transparent. A tall order indeed!


One common approach that has been used for a long time is to count the citations of a scientist’s publications and to examine the impact factor of the journals in which those publications have appeared. This approach, though universally used as a decision-making tool, does have its limitations.


1. Citation Count


The number of citations for each publication of a scientist is
readily available from different sources, e.g., Web of Science, Google
Scholar and Scopus. It is generally believed that the impact of a
researcher’s work is significant on a given field if his or her papers
are frequently cited by other researchers. Usually self-citations are
not included in such citation counts. However, using citation count
alone to judge the quality of research contributions can be unfair to
some researchers. It is quite likely that a researcher will have poor citation metrics (i) if he or she is working in a very narrow area (and therefore attracts fewer citations) or (ii) if he or she publishes mostly in a language other than English, or mainly in books or book chapters (since most citation tools do not capture such citations).
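To make the counting rule concrete, here is a minimal sketch in Python of a citation count that excludes self-citations by checking for author overlap; the author lists below are invented, and real databases such as Web of Science, Google Scholar and Scopus apply their own (differing) rules.

```python
# Minimal sketch: counting citations while excluding self-citations.
# A citing paper counts only if it shares no author with the cited paper.

def non_self_citations(paper_authors, citing_author_lists):
    """Count citing papers that share no author with the cited paper."""
    cited = set(paper_authors)
    return sum(1 for citing in citing_author_lists if cited.isdisjoint(citing))

# Two of the three citing papers are independent; one is a self-citation.
authors = ["A. Rao", "B. Sen"]
citers = [["C. Das"], ["A. Rao", "D. Iyer"], ["E. Paul", "F. Khan"]]
print(non_self_citations(authors, citers))  # -> 2
```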


2. Impact Factor


Publishing in a journal, such as Nature or Science, which
has a high impact factor is considered very prestigious. In our
profession, which deals with electronics and communications, it is a
dream for many to publish in IEEE journals because some of the IEEE
journals do have a high impact factor and their reviewing procedure is
very tough. The impact factor is a measure of how frequently the papers published in a journal are cited in the scientific literature. Impact factors are released each year in the Journal Citation Reports by the Institute for Scientific Information (ISI) [1]. Since its first publication in 1972, the impact factor has acquired wide acceptability in the absence of any other metric for evaluating the worth of a journal.
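For concreteness, the standard two-year impact factor used in the Journal Citation Reports can be sketched as follows; the figures in this example are invented for illustration.

```python
# Two-year impact factor: citations received in year Y by the items a
# journal published in years Y-1 and Y-2, divided by the number of
# citable items it published in those two years.

def impact_factor(citations_in_year, citable_items_prev_two_years):
    return citations_in_year / citable_items_prev_two_years

# A journal that published 200 citable items in 2007-2008, whose
# 2007-2008 items were cited 500 times during 2009, has a 2009
# impact factor of 2.5.
print(impact_factor(500, 200))  # -> 2.5
```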


However, there are limitations in using the impact factor as a
measure of the quality of a journal, and hence the quality of research
of a scientist who publishes in a high-impact-factor journal. For example, many people may read and use the research findings appearing in a given paper but may not cite it, because they do not publish work of their own. In other words, the impact factor measures the usefulness of a journal only to those who read and cite its papers in their publications, leaving out a large number of other practitioners of the profession who have not published but have nevertheless benefited from the research findings of a paper published in that journal [2].


There are more than 100,000 journals published around the world. However, the ISI database includes only a small percentage of these journals. Therefore, if you publish in a journal that is not part of the ISI database, or if your papers are cited in journals not listed in the ISI database, those citations will not count toward the impact factor calculation. Impact factors can also be manipulated. For example, in some journals, authors are forced in a subtle way to cite other papers published in the same journal. Therefore, blind usage of citation and impact factor indicators may not result in a correct evaluation of the scientific merit of a researcher.


3. The h-index


To overcome the problems associated with the citation metric and
impact factor, in 2005, Jorge Hirsch of the University of California at
San Diego suggested a simple method to quantify the impact of a
scientist’s research output in a given area [3], [4]. The measure he
suggested is called the h-index. In the last few years, it has quickly
become a widely used measure of a researcher’s scientific output.
Without getting into the mathematical rigor of this approach, the
meaning of the h-index can be explained as follows. Suppose a researcher
has 15 publications. If 10 of these publications are cited at least 10
times by other researchers, the h-index of the scientist is 10,
indicating that the other 5 publications may have less than 10
citations. If one of these 10 publications receives, say, 100 citations, the h-index still remains 10. If each of the 15 papers receives 10 citations, the h-index is again only 10. The h-index will reach 15 only if each of the 15 papers receives a minimum of 15 citations. Therefore, to calculate the h-index of a scientist, find the citations of each publication, rank the publications according to the number of citations received, and identify the first ‘h’ publications having at least ‘h’ citations each. To have a reasonably good h-index, it is not sufficient to have a few publications with hundreds of citations; the h-index aims at identifying researchers with more papers of relevant impact over a period of time.
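For readers who prefer code to prose, here is a minimal sketch of the calculation in Python, checked against the worked examples above:

```python
# Sketch of the h-index calculation described above: rank the per-paper
# citation counts in descending order and find the largest h such that
# the h-th ranked paper has at least h citations.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# The worked examples from the paragraph above (15 papers in each case):
print(h_index([10] * 10 + [0] * 5))         # ten papers with 10 citations -> 10
print(h_index([100] + [10] * 9 + [0] * 5))  # one paper with 100 citations -> still 10
print(h_index([10] * 15))                   # all 15 papers with 10 citations -> 10
print(h_index([15] * 15))                   # all 15 papers with 15 citations -> 15
```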


3.1 Limitations of the h-index


Caution needs to be exercised while calculating the h-index. The
value of the h-index you get depends on the database used for
calculating the number of citations. If you are using the ISI database, the same limitations that we saw for the impact factor calculation will also apply here, since the ISI database considers only citations appearing in journals listed in that database. In general, Google Scholar gives a higher h-index for the same scientist than Scopus or the Web of Science. The scientific impact of any researcher can also be calculated using Harzing’s freely downloadable tool called “Publish or Perish” [5].


There are several studies in the literature that attempt to make the h-index more universally valid, but there is no consensus on using these corrections. For example, the g-index was introduced in an effort to give some weightage to the highly cited papers [6], [7], [8]. In a recent study, Liu has pointed out the case of two Nobel prize winners, each of whose h-index is less than that expected of a “successful scientist” [9]. They nevertheless won the Nobel prize. Young researchers, whose
research time span is short, are bound to have lower h-index values. A
further limitation of the h-index is that it does not diminish with time
and therefore cannot detect the declining research output of a
scientist. Sometimes, the h-index may give rise to misleading
information about a scientist’s contribution. For example, a researcher
with 10,000 citations may have an h-index of 10 because only 10 of
his/her papers have received a minimum of 10 citations; while another
researcher with 650 citations may have an h-index of 25 because each of
his/her 25 publications has received a minimum of 25 citations. In spite
of all these limitations, there is now enough evidence to show that the
use of the h-index has become popular and acceptable.
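To illustrate how a correction such as Egghe’s g-index [6], [7], [8] gives weightage to highly cited papers where the h-index stays flat, here is a small sketch; the g-index is the largest g such that the g most-cited papers together have at least g² citations, and the citation profile below is invented.

```python
# Sketch of Egghe's g-index: the largest g such that the g most highly
# cited papers together have at least g*g citations. In this variant g
# cannot exceed the number of papers. A few heavily cited papers can
# raise g well above h.

def g_index(citations):
    ranked = sorted(citations, reverse=True)
    running, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        running += count
        if running >= rank * rank:
            g = rank
    return g

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return max([r for r, c in enumerate(ranked, 1) if c >= r], default=0)

# An invented profile with a few heavily cited papers among ten:
profile = [100, 80, 60, 10, 5, 4, 3, 2, 1, 0]
print(h_index(profile))  # -> 5
print(g_index(profile))  # -> 10 (the top papers' high counts keep counting)
```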


3.2 Finding Your h-index


One way of overcoming the limitations of the databases used by the Web of Science, Google Scholar and Scopus is to develop a habit of periodically collecting all the citations of your papers from different sources, including the above three. You can then rank your publications and pick the top ‘h’ publications with a minimum of ‘h’ citations each. This will give you the h-index of your scientific output. You will, however, have to maintain a list of all your citations together with complete bibliographic information on each citing source, whether it is a book, conference paper, journal paper, PhD thesis, patent or non-English source. This carefully maintained bibliographic data will serve as proof of the reliability and authenticity of your h-index calculation. Just to give you an idea, the peak h-index of many Nobel prize winners in physics during the last two decades is around 35 to 40 [4].
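A minimal sketch of this bookkeeping, assuming a hypothetical record format in which each citing work carries a unique identifier (a DOI, for instance) so that duplicates across sources are counted only once:

```python
# Merge citing records collected from several sources, de-duplicate
# them per paper, and compute the h-index from the merged counts.

def merged_h_index(citations_by_source):
    """citations_by_source: list of {paper_id: set_of_citing_work_ids}."""
    merged = {}
    for source in citations_by_source:
        for paper, citers in source.items():
            merged.setdefault(paper, set()).update(citers)
    counts = sorted((len(c) for c in merged.values()), reverse=True)
    return max([r for r, c in enumerate(counts, 1) if c >= r], default=0)

# Two overlapping sources; the shared citing work "doi:10/x1" counts once.
scholar = {"paper-A": {"doi:10/x1", "doi:10/x2"}, "paper-B": {"doi:10/x3"}}
scopus  = {"paper-A": {"doi:10/x1"}, "paper-B": {"doi:10/x3", "doi:10/x4"}}
print(merged_h_index([scholar, scopus]))  # -> 2
```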


4. Mentoring Abilities


Recently, Jeang has argued that in addition to the above performance
metrics, we should also measure the mentoring abilities of a scientist
[10]. If the coauthors of a scientist are his or her own trainees or students, and if they continue to make a scientific impact after leaving their supervisor, it points to the quality of the scientist’s mentoring and to the impact the scientist has made, through that mentoring, in a given area during a given period. This is a very important but totally neglected aspect of the contribution made by a scientist or an academic. However, we do not yet have a well worked out formula for measuring such mentoring abilities.


5. Online Page Hits


In recent times, most journals have gone online, many of them with open access, and it is very easy to keep track of the number of visitors to a journal’s website. For example, in IETE Technical Review, you can see how many times an article has been viewed, emailed or printed. A recent study shows that highly cited articles do have high viewership, but high viewership does not necessarily lead to high citations [10]. The online viewership data includes (i) those who simply read a paper and (ii) those who read it and also publish work citing it. The citation data includes only the latter group, while the viewership data includes both. Therefore, it may be appropriate to use the number of views of a paper as a measure of its impact and popularity, provided the website avoids counting repeat page hits from the same computer within a given period.
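A sketch of such deduplicated view counting follows; the visitor keys, article ids and the 24-hour window are assumptions for illustration, since the article does not specify how repeat hits should be identified.

```python
from datetime import datetime, timedelta

# Count views while ignoring repeat page hits from the same visitor:
# a view is counted only if the same (visitor, article) pair has not
# been counted within the cooldown window.

COOLDOWN = timedelta(hours=24)

def count_unique_views(hits):
    """hits: iterable of (visitor_id, article_id, timestamp) tuples."""
    last_counted = {}
    views = 0
    for visitor, article, ts in sorted(hits, key=lambda h: h[2]):
        key = (visitor, article)
        if key not in last_counted or ts - last_counted[key] >= COOLDOWN:
            views += 1
            last_counted[key] = ts
    return views

hits = [
    ("10.0.0.1", "art-1", datetime(2009, 5, 1, 9, 0)),
    ("10.0.0.1", "art-1", datetime(2009, 5, 1, 9, 5)),  # repeat hit, ignored
    ("10.0.0.1", "art-1", datetime(2009, 5, 3, 9, 0)),  # outside the window
    ("10.0.0.2", "art-1", datetime(2009, 5, 1, 9, 0)),  # a different visitor
]
print(count_unique_views(hits))  # -> 3
```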


6. Skewed Performance Metrics


Whatever performance metrics we may use, it appears that authors from
developing countries do face certain constraints in terms of achieving
higher performance indices and therefore recognition for themselves and
their country. It is quite possible that authors from advanced countries tend to cite publications from organizations located in their own countries, putting authors who work in difficult situations, with fewer funding opportunities, at a disadvantage [11]. This is bound to
affect the h-index of scientists working in developing countries. Since
there is a limited page budget and increased competition in many
“high-profile” journals, it is not always possible to publish in these
journals. One way to overcome this problem is to encourage and give
value to papers published in national journals. There are many
scientists from developing countries such as India working in highly
developed countries with advanced scientific infrastructure and huge
funding. These scientists should seriously consider publishing their
work in journals originating from their native countries. This will
bring an international flavor to the national journals, attracting more
international authors and ultimately making them mainstream
international journals. When these journals become more visible and easily accessible through their online versions, there is a chance that papers published in them will be cited more often. In this way, the skewed calculation of the h-index and other performance metrics for scientists from developing countries may be minimized.


7. Conclusion


Excessive dependence on single numbers to quantify a scientist’s contribution and to make administrative decisions can affect career progression, or may push people to somehow enhance their h-index instead of focusing on their more legitimate activity, i.e., doing good science. Considering the complex issues associated with the calculation
of scientific performance metrics, it is clear that a comprehensive
approach should be used to evaluate the research worth of a scientist.
We should not rely excessively on a single metric. Since the h-index is
now becoming more popular and is simple to calculate, we should use it
judiciously by combining it with other metrics discussed here.


As always, please do not hesitate to contact me and let me know your views.


REFERENCES


1. Available from: http://science.thomsonreuters.com/index.html


2. O. Yoshiko, and A. Makoto, “Pitfalls of citation and journal impact factor devices in research evaluation,” Journal of Science Policy and Research Management, vol. 20, pp. 239-58, 2005.


3. Available from: http://arxiv.org/abs/physics/0508025


4. J.E. Hirsch, “An index to quantify an individual’s scientific research output,” Proceedings of the National Academy of Sciences of the USA, vol. 102 (46), pp. 16569-72, 2005.


5. Available from: http://www.harzing.com/pop.htm


6. L. Egghe, “How to improve the h-index,” The Scientist, vol. 20 (3), p. 14, 2006.


7. L. Egghe, “An improvement of the h-index: The g-index,” ISSI Newsletter, vol. 2 (1), pp. 8-9, 2006.


8. L. Egghe, “Theory and practice of the g-index,” Scientometrics, vol. 69 (1), pp. 131-52, 2006.


9. S.V. Liu, “Real discrepancy between h-Index and Nobel prize-winning,” Logical Biology, vol. 5 (4), pp. 320-1, 2005.


10. K.T. Jeang, “H-index, mentoring-index, highly-cited and highly-accessed: how to evaluate scientists?” Retrovirology, vol. 5, Article Number: 106, Nov 2008.


11. A.W.A. Kellner, and L.C.M.O. Ponciano, “H-index in the Brazilian Academy of Sciences – comments and concerns,” Anais da Academia Brasileira de Ciências (Annals of the Brazilian Academy of Sciences), vol. 80 (4), pp. 771-81, Dec 2008.


