Using Content Marketing Metrics for Academic Impact
Academic contributions start from concepts and ideas. When their content is relevant and of high quality, they can be published in
renowned, peer-reviewed journals. Researchers are increasingly using
online full-text databases in institutional repositories and online
open-access journals to disseminate their findings. The web has surely
helped to foster fruitful collaborative relationships within academia,
and the internet has brought increased engagement among peers over email
and video. In addition, researchers share their knowledge with colleagues
as they present their papers at seminars and conferences. After
publication, their contributions may be cited by other scholars.
The researchers’ visibility does not rely solely on the number of their
publications. Both academic researchers and their institutions are
continuously being rated and classified. Their citations may come from
highly reputable journals or from well-linked homepages providing
scientific content. Publications are usually ranked through metrics that
assess individual researchers and their organisational performance.
Bibliometrics and citations may be considered part of the academic
reward system, and highly cited authors are usually endorsed by their
peers for their significant contribution to knowledge. As a matter
of fact, citations are at the core of scientometric methods, as they have
been used to measure the visibility and impact of scholarly work (Moed,
2006; Borgman, 2000). This contribution explores the extant literature
that explains how the visibility of individual researchers’ content may
be related to their academic clout. Therefore, it examines the
communication structures and processes of scholarly communications
(Kousha and Thelwall, 2007; Borgman and Furner, 2002). It presents
relevant theoretical underpinnings on bibliometric studies and considers
different methods that can analyse the impact of individual researchers
or their academic publications (Wilson, 1999; Tague-Sutcliffe, 1992).
Citation Analysis
The symbolic role of citation in representing the content of a document
is an extensive dimension of information retrieval. Citation analysis
expands the scope of information seeking by retrieving publications that
have been cited in previous works. This methodology offers enormous
possibilities for tracing trends and developments in different research
areas. Citation analysis has become the de facto standard in the
evaluation of research. In fact, publications can be evaluated simply on
the number of citations they receive, and citation data are relatively
readily available for such purposes (Knoth and Herrmannova,
2014). However, citations are merely one of the attributes of
publications. By themselves, they do not provide adequate and sufficient
evidence of impact, quality and research contribution. This may be due
to the wide range of characteristics they exhibit, including the semantics
of the citation (Knoth and Herrmannova, 2014), the motives for citing
(Nicolaisen, 2007), the variations in sentiment (Athar, 2014), the
context of the citation (He, Pei, Kifer, Mitra and Giles, 2010), the
popularity of topics, the size of research communities (Brumback, 2009;
Seglen, 1997), the time delay for citations to appear (Priem and
Hemminger, 2010), the skewness of their distribution (Seglen, 1992), the
differences between types of research papers (Seglen, 1997) and, finally,
the ability to game or manipulate citations (Arnold and Fowler, 2011).
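At its simplest, a citation count is a paper's in-degree in the citation graph: the number of other publications that reference it. The following minimal sketch, using an invented set of papers and reference lists purely for illustration, shows how such raw counts are tallied:

```python
# Minimal sketch of citation counting over a toy citation graph.
# The paper identifiers and reference lists are invented examples.
from collections import Counter

# Each key is a paper; its list holds the papers it cites.
references = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_a", "paper_c"],
    "paper_c": [],
    "paper_d": ["paper_a", "paper_b", "paper_c"],
}

# A paper's citation count is its in-degree: how many papers cite it.
citation_counts = Counter(
    cited for refs in references.values() for cited in refs
)

for paper, count in citation_counts.most_common():
    print(f"{paper}: cited {count} times")
```

The attributes listed above (motives, sentiment, context and so on) are precisely what such a raw tally ignores, which is why citation counts alone remain an incomplete measure.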
Impact Factors (IFs)
Scholarly impact, as captured by the impact factor, is a measure of the
frequency with which an “average article” in a journal has been cited
over a defined time period (Glänzel and Moed, 2002). Journal Citation
Reports are published every June by Thomson Reuters’ Institute for
Scientific Information (ISI). These reports also feature data for
ranking journals by their Immediacy Index, which measures how frequently
a journal’s articles are cited in the year of their publication (Harter,
1996). Publishers of core scientific journals consider IF indicators in
their evaluations of prospective contributions. Despite the severe
limitations of the IF’s methodology, it is still the most common
instrument for ranking international journals in any given field of
study. Yet, impact factors have often been subject to ongoing criticism
by researchers for their methodological and procedural imperfections.
Commentators often debate how IFs should be used. Whilst a higher impact
factor may indicate journals that are considered more prestigious, it
does not necessarily reflect the quality or impact of an individual
article or researcher. This may be attributable to the large number of
journals, the volume of research contributions, the rapidly changing
nature of certain research fields and the increasing representation of
researchers. Hence, other metrics have been developed to provide
alternative measures to impact factors.
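To make the definition concrete: the classic two-year impact factor for a given year divides the citations received in that year by the journal’s items from the two preceding years by the number of citable items published in those two years. A minimal sketch, using invented figures purely for illustration:

```python
def two_year_impact_factor(citations, citable_items_y1, citable_items_y2):
    """Classic two-year impact factor: citations received in year Y
    to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations / (citable_items_y1 + citable_items_y2)

# Invented figures: 240 citations in 2015 to articles published in
# 2013 and 2014, with 60 citable items in each of those two years.
print(two_year_impact_factor(240, 60, 60))  # 2.0
```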
h-index
The h-index attempts to calculate the citation impact of researchers’
academic publications. This index measures scholars’ productivity by
taking into account their most cited papers and the number of citations
those papers have received in other publications. It can also be applied
to measure the impact and productivity of a scholarly journal, as well
as of a group of scientists, such as a department, university or country
(Jones, Huggett and Kamalski, 2011). The (Hirsch) h-index was originally
developed in 2005 to estimate the importance, significance and broad
impact of an academic researcher’s cumulative research contributions.
Initially, the h-index was designed to overcome the limitations of other
measures of researchers’ quality and productivity. It consists of a
single number reporting how many of an author’s contributions have at
least the equivalent number of citations. For instance, an h-index of 3
would indicate that the author has published at least three papers that
have each been cited three times or more. Therefore, the most productive
researchers may possibly obtain a high h-index, and the best papers in
terms of quality will be the most cited. Interestingly, this
issue is driving more researchers to publish in open access journals.
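Formally, an author’s h-index is the largest number h such that h of their papers have each received at least h citations. A minimal sketch, assuming a hypothetical list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # highest-cited first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Six hypothetical papers: only three have four or more citations,
# so the h-index is 3.
print(h_index([10, 8, 5, 3, 3, 1]))  # 3
```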
Webometrics
The science of webometrics (also known as cybermetrics) is still in an
experimental phase. Björneborn and Ingwersen (2004) indicated that
webometrics involves an assessment of different types of hyperlinks, and
argued that relevant links may help to improve the impact of academic
publications. Webometrics therefore refers to the quantitative analysis
of activity on the world wide web, such as downloads (Davidson, Newton,
Ferguson, Daly, Elliott, Homer, Duffield and Jackson, 2014). Webometrics
recognises that the internet is a repository for a massive number of
documents and that it disseminates knowledge to wide audiences. The
webometric ranking involves measuring the volume, visibility and impact
of web pages. Webometrics emphasises scientific output, including
peer-reviewed papers, conference presentations, preprints, monographs,
theses and reports. However, these kinds of electronic metrics also
analyse other academic material (including courseware, seminar
documentation, digital libraries, databases, multimedia, and personal
pages and blogs, among others). Moreover, webometrics considers online
information on the educational institution, its departments, research
groups, supporting services, and the students attending its courses.
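By way of illustration, the raw material for such link analysis can be collected with standard tooling. The following minimal sketch uses only Python’s standard library and an invented HTML snippet to extract a page’s hyperlinks and tally them by target domain:

```python
# Minimal sketch of hyperlink extraction for link analysis.
# The HTML snippet and domain names are invented examples.
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = """
<html><body>
  <a href="https://repository.example.edu/paper1">Paper 1</a>
  <a href="https://repository.example.edu/paper2">Paper 2</a>
  <a href="https://preprints.example.org/abs/123">Preprint</a>
</body></html>
"""

collector = LinkCollector()
collector.feed(page)

# Tally outbound links by the domain they point to.
print(Counter(urlparse(link).netloc for link in collector.links))
```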
Web 2.0 and Social Media
Internet users are increasingly creating and publishing their
content online. Never before has it been so easy for academics to engage
with their peers on both current affairs and scientific findings. The
influence of social media has changed the academic publishing landscape.
As a matter of fact, there has recently been increasing recognition that
measures of scholarly impact can be drawn from Web 2.0 data (Priem and
Hemminger, 2010).
The web has not only revolutionised how data is gathered, stored and
shared but also provided a mechanism of measuring access to information.
Moreover, academics are also using personal websites and blogs to
enhance the visibility of their publications. These media support their
content marketing and complement traditional bibliometrics. Social media
networks are providing blogging platforms that allow users to
communicate with anyone who has online access. For instance, Twitter is
rapidly being adopted for work-related purposes, particularly scholarly
communication, as a method of sharing and disseminating information,
which is central to the work of an academic (Java, Song, Finin and
Tseng, 2007). Recently, there has been rapid growth in the uptake of
Twitter by academics to network, share ideas and common interests, and
promote their scientific findings (Davidson et al., 2014).
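As a simple illustration of how such Web 2.0 signals can be aggregated alongside citations, the following sketch tallies social-media mentions per publication; the identifiers and mention records are invented for illustration:

```python
# Minimal sketch of aggregating social-media mentions per publication.
# The DOIs and mention records below are invented examples.
from collections import Counter

mentions = [
    {"doi": "10.1000/example.001", "source": "twitter"},
    {"doi": "10.1000/example.001", "source": "blog"},
    {"doi": "10.1000/example.002", "source": "twitter"},
    {"doi": "10.1000/example.001", "source": "twitter"},
]

per_paper = Counter(record["doi"] for record in mentions)
per_source = Counter(record["source"] for record in mentions)

print(per_paper.most_common())  # which papers attract most attention
print(per_source)               # which channels drive that attention
```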
Conclusions and Implications
There are various sources of bibliometric data, each with its own
strengths and limitations. Evidently, no single bibliometric measure is
perfect. Multiple approaches to evaluation
are highly recommended. Moreover, bibliometric approaches should not be
the only measures upon which academic and scholarly performance ought to
be evaluated. Sometimes, it may appear that bibliometrics can reduce
the publications’ impact to a quantitative, numerical score. Many
commentators have argued that when viewed in isolation these metrics may
not necessarily be representative of a researcher’s performance or
capacity. In taking this view, one would consider bibliometric measures
as only one aspect of performance upon which research can be judged.
Nonetheless, this chapter has indicated that bibliometrics still have
high utility in academia. It is very likely that metrics will continue
to be used because they represent a relatively simple and accurate data
source. For the time being, bibliometrics are an essential
aspect of measuring academic clout and organisational performance. A
number of systematic ways of assessment have been identified in this
regard, including citation analysis, impact factors, the h-index and
webometrics, among others. Notwithstanding, changes in academic
behaviours and academics’ use of content marketing on the internet have
challenged traditional metrics. Evidently, the measurement of impact
beyond citation metrics is an increasing focus among researchers, with
social media networks representing the most contemporary way of
establishing performance and impact. In conclusion, this contribution
suggests that these bibliometrics, as well as recognition by peers, can
help to boost the productivity and research quality of researchers,
research groups and universities.
References
Arnold, D. N., & Fowler, K. K. (2011). Nefarious numbers. Notices of the AMS, 58(3), 434-437.
Athar, A. (2014). Sentiment analysis of scientific citations.
University of Cambridge, Computer Laboratory, Technical Report,
(UCAM-CL-TR-856).
Borgman, C. L. (2000). Digital libraries and the continuum of scholarly communication. Journal of Documentation, 56(4), 412-430.
Borgman, C. L., & Furner, J. (2002). Scholarly communication and bibliometrics. Annual Review of Information Science and Technology, 36(1), 2-72.
Bornmann, L., & Daniel, H. D. (2005). Does the h-index for
ranking of scientists really work?. Scientometrics, 65(3), 391-392.
Bornmann, L., & Daniel, H. D. (2007). What do we know about the h index?. Journal of the American Society for Information Science and Technology, 58(9), 1381-1385.
Björneborn, L., & Ingwersen, P. (2004). Toward a basic framework
for webometrics. Journal of the American Society for Information Science
and Technology, 55(14), 1216-1227.
Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171-193.
Harter, S. (1996). Historical roots of contemporary issues involving self-concept.
He, Q., Pei, J., Kifer, D., Mitra, P., & Giles, L. (2010, April). Context-aware citation recommendation. In Proceedings of the 19th International Conference on World Wide Web (pp. 421-430). ACM.
Java, A., Song, X., Finin, T., & Tseng, B. (2007, August). Why we
twitter: understanding microblogging usage and communities. In
Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web
mining and social network analysis (pp. 56-65). ACM.
Knoth, P., & Herrmannova, D. (2014). Towards Semantometrics: A
New Semantic Similarity Based Measure for Assessing a Research
Publication’s Contribution. D-Lib Magazine, 20(11), 8.
Kousha, K., & Thelwall, M. (2007). Google Scholar citations and
Google Web/URL citations: A multi‐discipline exploratory analysis.
Journal of the American Society for Information Science and Technology,
58(7), 1055-1065.
Moed, H. F. (2006). Citation analysis in research evaluation (Vol. 9). Springer Science & Business Media.
Nicolaisen, J. (2007). Citation analysis. Annual Review of Information Science and Technology, 41(1), 609-641.
Priem, J., & Hemminger, B. H. (2010). Scientometrics 2.0: New
metrics of scholarly impact on the social Web. First Monday, 15(7).
Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628-638.
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314(7079), 497.
Tague-Sutcliffe, J. (1992). An introduction to informetrics. Information Processing & Management, 28(1), 1-3.
Wilson, C. S. (1999). Informetrics. Annual Review of Information Science and Technology (ARIST), 34, 107-247.