The Impact of Print Media and Wikipedia on Citation Rates of Academic Articles
Guest post by Daniel Price. Daniel lives
in Israel, has an MA in Library and Information Science from Bar Ilan
University, and works as a librarian at Shalem College in Jerusalem.
There is a clear incentive to publish papers that have a strong scholarly impact, both for the personal satisfaction of knowing that one’s research has been read and built upon, and for professional reasons, since the number of citations a paper receives can correlate with promotion and tenure under the ubiquitous “publish or perish” regime (Miller, Taylor and Bedeian, 2011), which has become an international phenomenon (De Meis et al., 2003; De Rond and Miller, 2005; Min, Abdullah and Mohamed, 2013; Osuna, Cruz-Castro and Sanz-Menéndez, 2011; Qiu, 2010; Rotich and Muskali, 2013), with increased salary and external funding (Browman and Stergiou, 2008; Diamond, 1986; Gomez-Mejia and Balkin, 1992; Monastersky, 2005; Schoonbaert and Roelants, 1996), and even with the chance of winning professional prizes such as a Nobel Prize (Pendlebury, 2013).
Understandably, then, many studies have been carried out to discover the characteristics of highly cited papers (Aksnes, 2003) and the factors that influence citation counts. It is widely accepted that it is not just the quality of the science that affects the citation rate, but also bibliometric parameters of the papers themselves, such as their length (Abt, 1998; Ball, 2008; Falagas et al., 2013; Hamrick, Fricker and Brown, 2010), number of references (Corbyn, 2010; Kostoff, 2007; Vieira and Gomes, 2010; Webster et al., 2009), number of authors (Aksnes, 2003; Borsuk et al., 2009; Gazni and Didegah, 2011; Wuchty et al., 2007), length of titles (Habibzadeh and Yadollahie, 2010; Jacques and Sebire, 2010), and the use of colons in titles (Jamali and Nikzad, 2011; van Wesel, Wyatt and ten Haaf, 2014; Rostami, Mohammadpoorasl and Hajizadeh, 2014).
A variety of external factors is also known to influence the citation rate of academic papers. Intuitively, a paper that has been publicised in the popular print media will be cited more, as the publicity makes researchers more aware of it; however, it can be argued that quality newspapers only cover valuable articles that would garner a significant number of citations in any case. The first assumption was shown to be true in 1991 by comparing the citations received by articles published in the New England Journal of Medicine that were covered in the New York Times during a 12-week period of 1978, when copies of the newspaper were printed but not distributed because of a strike, with those covered during the equivalent period of 1979, when the paper was distributed as usual. Articles covered by the Times received 72.8% more citations during the first year after their publication, but only when the newspaper was actually distributed; articles covered during the strike period received no more citations than articles not referenced by the Times. This shows that exposure in the Times is a cause of citation (“the publicity hypothesis”) and not merely a forecast of future trends (the “earmark hypothesis”) (Phillips et al., 1991).
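To make the logic of that natural experiment concrete, here is a minimal sketch in Python of the comparison it rests on. The citation counts are entirely hypothetical placeholders, not data from Phillips et al. (1991); only the structure of the calculation is the point.

```python
# Illustrative sketch of the comparison behind Phillips et al. (1991).
# All citation counts below are hypothetical placeholders, not data from the study.

def mean(values):
    return sum(values) / len(values)

def citation_boost(covered, not_covered):
    """Percentage by which Times-covered articles out-cite non-covered ones."""
    return (mean(covered) / mean(not_covered) - 1) * 100

# Normal period (newspaper distributed): coverage is associated with more citations.
normal_covered = [22, 18, 27, 31]        # hypothetical first-year citation counts
normal_not_covered = [12, 11, 16, 14]

# Strike period (newspaper printed but not distributed): no advantage expected
# under the publicity hypothesis.
strike_covered = [13, 12, 15, 11]
strike_not_covered = [12, 14, 11, 13]

print(f"Normal period boost: {citation_boost(normal_covered, normal_not_covered):.1f}%")
print(f"Strike period boost: {citation_boost(strike_covered, strike_not_covered):.1f}%")
# A large boost only when the paper was distributed points to publicity as a cause
# of citations; a similar boost in both periods would instead support the earmark
# hypothesis (the Times merely flagging articles destined to be highly cited).
```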
Phillips’ finding that articles covered in the New York Times receive more citations was confirmed in a study conducted over a decade later, which also found that exposure in less “elite” daily newspapers (though not in evening broadcasts of mainstream US television networks) during a twelve-month period from mid-1997 to mid-1998 correlated with higher citation rates across a wider range of scientific papers, showing that scientific communication is not carried out through elite channels alone. Importantly, though, the author notes that his study does not prove the “publicity hypothesis”, as the articles that were publicised could have been intrinsically more important and cited for that reason alone; it does, however, cast doubt on the “earmark hypothesis”, since many articles that were not mentioned in the press were nevertheless cited (Kiernan, 2003).
Today, much scholarly communication takes place through Web 2.0 tools, and in the emerging field of “altmetrics” (Konkiel, 2013; Priem, 2014; Thelwall et al., 2013) studies focus on parameters such as whether an article has been cited and discussed on academic blogs (Shema, Bar-Ilan and Thelwall, 2014), tweeted (Eysenbach, 2011), or uploaded to a social media platform such as Mendeley (Li and Thelwall, 2012).
Research has also investigated whether articles are cited on the decidedly non-elitist Wikipedia. A study conducted at the beginning of 2010 found that 0.54% of approximately nineteen million Wikipedia pages cited a PubMed journal article, corresponding to about 0.08% of all PubMed articles. The researchers showed that journal articles cited in Wikipedia were themselves cited more often and had higher F1000 scores than a random subset of non-cited articles, a phenomenon they explained by hypothesising that Wikipedia users only cite important articles that present novel and ground-breaking research (Evans and Krauthammer, 2011).
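For a rough sense of scale, the snippet below turns those percentages into approximate absolute numbers. The Wikipedia page count is the figure quoted above, while the total number of PubMed records is a ballpark assumption of mine, not a figure taken from Evans and Krauthammer (2011).

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
wikipedia_pages = 19_000_000       # approximate total, as stated in the text
share_citing_pubmed = 0.0054       # 0.54% of Wikipedia pages cite a PubMed article

pages_citing_pubmed = wikipedia_pages * share_citing_pubmed
print(f"Wikipedia pages citing a PubMed article: ~{pages_citing_pubmed:,.0f}")  # ~102,600

# The ~0.08% figure refers to the share of all PubMed articles that are cited on
# Wikipedia; the total below is an assumed ballpark (roughly 20 million records
# circa 2010), not a number from the study.
pubmed_records = 20_000_000
cited_pubmed_articles = pubmed_records * 0.0008
print(f"PubMed articles cited on Wikipedia: ~{cited_pubmed_articles:,.0f}")  # ~16,000 under this assumption
```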
A larger study carried out two and a half years later came to the same conclusion: academic papers in the field of computer science that are cited on Wikipedia are more likely to be cited elsewhere, because Wikipedia entries are written by capable authors who are careful to cite reputable researchers and trending research topics (Shuai, Jiang, Liu and Bollen, 2013).
These conclusions support the “earmark hypothesis” that Phillips
rejected and Kiernan doubted. Wikipedians are credited with identifying
high impact journal articles soon after they are published and
recommending them to other users.
To preserve a careful dialectic between the two sides of the publicity/earmark debate, however, the possibility should be entertained that the large number of Wikipedia users includes researchers who, flooded with thousands of articles, are motivated to read and cite certain articles precisely because they saw them cited on Wikipedia. Future research could investigate the information behavior of a large number of researchers, and specifically their use of Wikipedia.
Bibliography:
Abt, H. A. (1998). Why some papers have long citation lifetimes. Nature, 395, 756-757.
Aksnes, D. W. (2003). Characteristics of highly cited papers. Research Evaluation, 12(3), 159-170.
Ale Ebrahim, N., Salehi, H., Embi, M. A., Habibi Tanha, F., Gholizadeh, H., Seyed Mohammad, M., & Ordi, A. (2013). Effective strategies for increasing citation frequency. International Education Studies, 6(11), 93-99.
Ball, P. (2008). A longer paper gathers more citations. Nature, 455(7211), 274-275.
Borsuk, R. M., Budden, A. E., Leimu, R., Aarssen, L. W., & Lortie,
C. J. (2009). The influence of author gender, national language and
number of authors on citation rate in ecology. Open Ecology Journal, 2,
25-28.
Browman, H. I., & Stergiou, K. I. (2008). Factors and indices are
one thing, deciding who is scholarly, why they are scholarly, and the
relative value of their scholarship is something else entirely. Ethics
in Science and Environmental Politics, 8(1), 1-3.
Corbyn, Z. (2010). An easy way to boost a paper's citations. Nature. Available at http://dx.doi.org/10.1038/news.2010.406
Evans, P., & Krauthammer, M. (2011). Exploring the use of social
media to measure journal article impact. In AMIA Annual Symposium
Proceedings (Vol. 2011, p. 374). American Medical Informatics
Association.
Eysenbach, G. (2011). Can tweets predict citations? Metrics of social
impact based on twitter and correlation with traditional metrics of
scientific impact. Journal of Medical Internet Research, 13(4).
Falagas, M. E., Zarkali, A., Karageorgopoulos, D. E., Bardakas, V.,
& Mavros, M. N. (2013). The impact of article length on the number
of future citations: a bibliometric analysis of general medicine
journals. PLoS ONE, 8(2), e49476.
Gazni, A., & Didegah, F. (2011). Investigating different types of
research collaboration and citation impact: a case study of Harvard
University’s publications. Scientometrics, 87(2), 251-265.
Gomez-Mejia, L. R., & Balkin, D. B. (1992). Determinants of faculty
pay: an agency theory perspective. Academy of Management Journal, 35(5),
921-955.
Habibzadeh, F., & Yadollahie, M. (2010). Are shorter article titles more attractive for citations? Cross-sectional study of 22 scientific journals. Croatian Medical Journal, 51(2), 165-170.
Hamrick, T. A., Fricker, R. D., & Brown, G. G. (2010). Assessing
what distinguishes highly cited from less-cited papers published in
interfaces. Interfaces, 40(6), 454-464.
Jacques, T. S., & Sebire, N. J. (2010). The impact of article titles
on citation hits: an analysis of general and specialist medical
journals. JRSM Short Reports, 1(1).
Jamali, H. R., & Nikzad, M. (2011). Article title type and its
relation with the number of downloads and citations. Scientometrics,
88(2), 653-661.
Kiernan, V. (2003). Diffusion of news about research. Science Communication, 25(1), 3-13.
Konkiel, S. (2013). Altmetrics: A 21st‐century solution to determining research quality. Online Searcher, 37(4), 10‐15.
Kostoff, R. N. (2007). The difference between highly and poorly cited
medical articles in the journal Lancet. Scientometrics, 72(3), 513-520.
Li, X., & Thelwall, M. (2012). F1000, Mendeley and traditional
bibliometric indicators. In Proceedings of the 17th International
Conference on Science and Technology Indicators (Vol. 2, pp. 451-551).
Monastersky, R. (2005). The number that’s devouring science. The Chronicle of Higher Education, 52(8), A12.
Osuna, C., Cruz-Castro, L., & Sanz-Menéndez, L. (2011). Overturning
some assumptions about the effects of evaluation systems on publication
performance. Scientometrics, 86(3), 575-592.
Phillips, D. P., Kanter, E. J., Bednarczyk, B., & Tastad, P. L.
(1991). Importance of the lay press in the transmission of medical
knowledge to the scientific community. The New England Journal of
Medicine, 325(16), 1180-1183.
Price, D. (2014). A bibliographic study of articles published in twelve
humanities journals. Available at
https://www.academia.edu/7820799/A_Bibliographic_Study_of_Articles_Published_in_Twelve_Humanities_Journals
Priem, J. (2014). Altmetrics. In B. Cronin and C. R. Sugimoto (Eds.)
Beyond bibliometrics: harnessing multidimensional indicators of
scholarly impact (pp. 263-287).
Rostami, F., Mohammadpoorasl, A., & Hajizadeh, M. (2014). The effect
of characteristics of title on citation rates of articles.
Scientometrics, 98(3), 2007-2010.
Schloegl, C., & Gorraiz, J. (2011). Global usage versus global
citation metrics: the case of pharmacology journals. Journal of the
American Society for Information Science and Technology, 62(1), 161-170.
Schoonbaert, D., & Roelants, G. (1996). Citation analysis for
measuring the value of scientific publications: quality assessment tool
or comedy of errors? Tropical Medicine & International Health, 1(6),
739-752.
Shema, H., Bar‐Ilan, J., & Thelwall, M. (2014). Do blog citations
correlate with a higher number of future citations? Research blogs as a
potential source for alternative metrics. Journal of the Association for
Information Science and Technology.
Shuai, X., Jiang, Z., Liu, X., & Bollen, J. (2013). A comparative
study of academic and Wikipedia ranking. In Proceedings of the 13th
ACM/IEEE-CS joint conference on Digital libraries (pp. 25-28).
Thelwall, M., Haustein, S., Larivière, V., & Sugimoto, C. R. (2013).
Do Altmetrics Work? Twitter and Ten Other Social Web Services. PLoS ONE, 8(5), e64841.
van Wesel, M., Wyatt, S., & ten Haaf, J. (2014). What a difference a
colon makes: how superficial factors influence subsequent citation.
Scientometrics, 98(3), 1601-1615.
Vieira, E.S., & Gomes, J.A.N.F. (2010). Citation to scientific
articles: Its distribution and dependence on the article features.
Journal of Informetrics, 4 (1), 1-13.
Webster, G. D., Jonason, P. K., & Schember, T. O. (2009). Hot topics
and popular papers in evolutionary psychology: analyses of title words
and citation counts in evolution and human behavior, 1979–2008.
Evolutionary Psychology, 7(3), 348-362.
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing
dominance of teams in production of knowledge. Science, 316(5827),
1036-1039.