Tuesday 10 May 2016


 Source: https://drmarkcamilleri.com/2016/05/09/measuring-the-academic-impact-of-higher-education-institutions-and-research-centres/

Measuring the Academic Impact of Higher Education Institutions and Research Centres



Although research impact metrics can be used to evaluate individual
academics, other measures can be used to rank and compare academic
institutions. Several international ranking schemes for universities use
citations to estimate the institutions’ impact. Nevertheless, there is
an ongoing debate about whether bibliometric methods should be used for
the ranking of academic institutions.


The most productive universities are increasingly publishing links to
their papers online. Yet, many commentators argue that hyperlinks can be
unreliable indicators of journal impact (Kenekayoro, Buckley &
Thelwall, 2014; Vaughan & Hysen, 2002). Nevertheless, the web helps to
promote research funding initiatives and to advertise academic jobs.
Webometrics can also be used to monitor the extent of mutual awareness
in particular research areas (Thelwall, Klitkou, Verbeek, Stuart &
Vincent, 2010).


Moreover, there are other uses of webometric indicators in
policy-relevant contexts within the European Union (Thelwall et al.,
2010; Hoekman, Frenken & Tijssen, 2010). Webometrics refers to the
quantitative analysis of web activity, including profile views and
downloads (Davidson, Newton, Ferguson, Daly, Elliott, Homer, Duffield
& Jackson, 2014). Webometric ranking therefore involves measuring the
volume, visibility and impact of web pages. These metrics emphasise
scientific output, including peer-reviewed papers, conference
presentations, preprints, monographs, theses and reports. They also
cover other academic material, including courseware, seminar
documentation, digital libraries, databases, multimedia, personal pages
and blogs, among others (Thelwall, 2009; Kousha & Thelwall, 2015;
Mas-Bleda, Thelwall, Kousha & Aguillo, 2014a; Mas-Bleda, Thelwall,
Kousha & Aguillo, 2014b; Orduna-Malea & Ontalba-Ruipérez, 2013).
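
As a rough illustration of these three dimensions, here is a minimal
Python sketch that combines hypothetical volume, visibility and impact
counts into a single score; the WebPresence fields, the log-scaling and
the equal weighting are assumptions made for illustration, not any
published webometric methodology.

    import math
    from dataclasses import dataclass

    @dataclass
    class WebPresence:
        """Hypothetical web-activity counts for one institution."""
        pages: int      # volume: pages indexed under the institution's domain
        inlinks: int    # visibility: hyperlinks arriving from external sites
        downloads: int  # impact proxy: downloads and profile views

    def webometric_score(w: WebPresence) -> float:
        """Toy composite of volume, visibility and impact (equal weights).

        Counts are log-scaled because web indicators are heavily skewed.
        """
        volume = math.log10(1 + w.pages)
        visibility = math.log10(1 + w.inlinks)
        impact = math.log10(1 + w.downloads)
        return (volume + visibility + impact) / 3

    print(webometric_score(WebPresence(pages=120_000, inlinks=45_000, downloads=300_000)))
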
Thelwall and Kousha (2013) have identified and explained the
methodologies of five well-known institutional ranking schemes; a sketch
of the weighted-composite arithmetic used by such schemes follows the
list:


  • QS World University Rankings aims to rank universities based upon
    academic reputation (40%, from a global survey), employer reputation
    (10%, from a global survey), faculty-student ratio (20%), citations per
    faculty (20%, from Scopus), the proportion of international students
    (5%), and the proportion of international faculty (5%).
  • The Times Higher Education World University Rankings aims to judge
    world-class universities across all of their core missions – teaching,
    research, knowledge transfer and international outlook – by using the
    Web of Science, an international survey of senior academics and
    self-reported data. The results are based on field-normalised citations
    for five years of publications (30%), research reputation from a survey
    (18%), teaching reputation (15%), various indicators of the quality of
    the learning environment (15%), field-normalised publications per
    faculty (8%), field-normalised income per faculty (8%), income from
    industry per faculty (2.5%); and indicators for the proportion of
    international staff (2.5%), students (2.5%), and internationally
    co-authored publications (2.5%, field-normalised).
  • The Academic Ranking of World Universities (ARWU) aims to rank the
    “world top 500 universities” based upon the number of alumni and staff
    winning Nobel Prizes and Fields Medals, the number of highly cited
    researchers selected by Thomson Scientific, the number of articles
    published in the journals Nature and Science, the number of articles
    indexed in the Science Citation Index – Expanded and the Social
    Sciences Citation Index, and per capita performance with respect to
    the size of an institution.
  • The CWTS Leiden Ranking aims to measure “the scientific performance”
    of universities using bibliometric indicators based upon Web of Science
    data through a series of separate size- and field-normalised indicators
    for different aspects of performance rather than a combined overall
    ranking. For example, one is “the proportion of the publications of a
    university that, compared with other publications in the same field and
    in the same year, belong to the top 10% most frequently cited” and
    another is “the average number of citations of the publications of a
    university, normalised for field differences and publication year.”
  • The Webometrics Ranking of World Universities aims to show “the
    commitment of the institutions to [open access publishing] through
    carefully selected web indicators”: hyperlinks from the rest of the web
    (1/2), web site size according to Google (1/6), the number of files on
    the website in “rich file formats” according to Google Scholar (1/6),
    and the field-normalised number of articles in the most highly cited
    10% of Scopus publications (1/6) (Thelwall & Kousha, 2013).
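
Most of these schemes reduce their indicators to a single weighted
composite. The minimal Python sketch below illustrates that arithmetic
using the QS weights quoted above; the composite_score helper and the
0-100 indicator values are hypothetical, not real QS data or code.

    QS_WEIGHTS = {
        "academic_reputation": 0.40,
        "employer_reputation": 0.10,
        "faculty_student_ratio": 0.20,
        "citations_per_faculty": 0.20,
        "international_students": 0.05,
        "international_faculty": 0.05,
    }

    def composite_score(indicators: dict, weights: dict) -> float:
        """Weighted sum of indicator scores already normalised to 0-100."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(weights[k] * indicators[k] for k in weights)

    # Hypothetical 0-100 indicator scores for one university.
    example = {
        "academic_reputation": 82.0,
        "employer_reputation": 74.5,
        "faculty_student_ratio": 61.0,
        "citations_per_faculty": 90.2,
        "international_students": 55.0,
        "international_faculty": 48.0,
    }
    print(composite_score(example, QS_WEIGHTS))  # about 75.64
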
Evidently, the university ranking systems use a variety of factors in
their calculations, including web presence, the number of publications,
the citations to those publications and peer judgements (Thelwall &
Kousha, 2013; Aguillo, Bar-Ilan, Levene & Ortega, 2010). These metrics
typically combine several such factors, as shown above. Although the
schemes have different objectives, they tend to produce similar
rankings. It appears that universities that produce good research also
tend to have an extensive web presence, perform well on teaching-related
indicators, and attract many citations (Matson et al., 2003).
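
One way to make this similarity concrete is to compare two schemes’
orderings with Spearman’s rank correlation, as in the minimal Python
sketch below; the two five-university rankings are invented for
illustration and do not reflect any published results.

    def spearman_rho(rank_a: list, rank_b: list) -> float:
        """Spearman's rho for two rankings of the same n items (no ties)."""
        n = len(rank_a)
        d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
        return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

    # Positions of the same five universities in two hypothetical schemes.
    scheme_1 = [1, 2, 3, 4, 5]
    scheme_2 = [2, 1, 3, 5, 4]
    print(spearman_rho(scheme_1, scheme_2))  # 0.8: strongly similar orderings
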


On the other hand, webometrics may not necessarily provide robust
indicators of knowledge flows or research impact. In contrast to
citation analysis, the quality of webometric indicators is low unless
irrelevant content is filtered out manually. Moreover, certain
webometric indicators can be hard to interpret, as they may reflect a
range of phenomena from spam to post-publication material. Webometric
analyses can support science policy decisions on individual fields.
However, for the time being, it is difficult to tackle the issue of web
heterogeneity at lower field levels (Thelwall & Harries, 2004;
Wilkinson, Harries, Thelwall & Price, 2003). Moreover, Thelwall et al.
(2010) held that webometrics would not have the same relevance for every
field of study. Fast-moving or new research fields may not be adequately
covered by citation-based indicators because of publication time lags:
Thelwall et al. (2010) argued that it can take up to two years from the
start of a research project to its publication. Such lags increase the
relative value of webometrics, since research groups can publish general
information about their research online well before formal publication.
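
As a rough illustration of the manual filtering mentioned above, the
following Python sketch discards pages matching simple spam and
relevance heuristics before counting them towards an indicator; the cue
lists, the is_relevant helper and the sample pages are assumptions for
illustration only, not a published filtering methodology.

    SPAM_MARKERS = ("casino", "viagra", "loan")      # assumed spam cues
    ACADEMIC_HINTS = (".edu", ".ac.", "university")  # assumed relevance cues

    def is_relevant(url: str, page_text: str) -> bool:
        """Keep a page only if it looks academic and not like spam."""
        if any(marker in page_text.lower() for marker in SPAM_MARKERS):
            return False
        return any(hint in url.lower() for hint in ACADEMIC_HINTS)

    pages = [
        ("https://www.example.edu/research/paper1", "Citing our recent study..."),
        ("https://cheap-casino.example.com/p", "casino bonus mentions the paper"),
        ("https://blog.example.org/post", "general commentary"),
    ]
    mention_count = sum(is_relevant(url, text) for url, text in pages)
    print(mention_count)  # 1: only the first page survives the filter
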


This is an excerpt from: Camilleri, M.A. (2016) Utilising
Content Marketing and Social Networks for Academic Visibility. In
Cabrera, M. & Lloret, N. (Eds.), Digital Tools for Academic Branding
and Self-Promotion. IGI Global (Forthcoming).



