Source: https://blog.scielo.org/en/2013/08/15/indicators-of-academic-productivity-in-university-rankings-criteria-and-methodologies/
Indicators of academic productivity in University rankings: criteria and methodologies

These days, students, academics, and researchers often seek opportunities in institutes of higher education in countries other than their own. They are in search of educational excellence, career progression, or the chance to specialize in a specific subject area. In this process, indicators of the quality of universities and research centers serve as points of reference for a suitable choice. Universities, on the other hand, are affected by having their reputations on public display, and may even be questioned about the ranking they have received.
The first ranking of North American universities dates from 1983, but it owes its origins to studies begun in 1870, when bodies connected to that country's university system began to evaluate its institutes of higher education. The first international ranking of institutes of higher education was produced by Shanghai Jiao Tong University in Shanghai, China, and became known as the Academic Ranking of World Universities (ARWU). Its publication caused a certain amount of disquiet, especially in Europe, because institutions in the United States and the United Kingdom dominated the listings of both the 20 and the 100 best universities. 2004 saw the creation of the European response to the ARWU in the form of the Times Higher Education Supplement World University Ranking, known thereafter simply as Times Higher Education (THE).
Since then, new international rankings have appeared, launched by private companies, major media outlets, or institutes of higher education and research, and differing in the methodologies and indicators used as well as in the way results are presented. People are particularly drawn to results in the form of tables that order institutions according to “indicators of excellence”. These are known as league tables, by analogy with the classification of teams in sporting championships. There are other ways of presenting the results gleaned from the various indicators which do not, however, rank institutions in order of excellence. Results can instead be derived from an overall score based on factors such as the quality of the teaching body, the number of publications in high-end journals, the institution's infrastructure, and the presence of foreign students.
The following is a presentation and discussion of the indicators used to evaluate the academic output of institutions appearing in the major international rankings of universities.
Academic Ranking of World Universities
The first international ranking of universities was created in 2003 at the initiative of Shanghai Jiao Tong University in Shanghai, China. It is known as the Academic Ranking of World Universities (ARWU) and is updated annually. The indicators used to measure academic output include the number of articles published in the high-end journals Nature and Science (representing 20% of the total), the number of articles indexed in the Science Citation Index-Expanded and the Social Sciences Citation Index (SSCI) of Thomson Reuters (20%), and the number of highly cited researchers according to Thomson Scientific (also 20%). In this ranking, academic output thus accounts for around 60% of the weighting of the indicators used in the evaluation.
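The weighting scheme described above amounts to a weighted sum of indicator scores. The sketch below illustrates the idea using the three 20% output weights mentioned in the text; the remaining 40% bucket, the indicator names, and all score values are hypothetical, not ARWU's actual data.

```python
# Illustrative sketch of a composite ranking score built from weighted
# indicators, using the ARWU-style output weights from the text (20% Nature
# & Science papers, 20% indexed papers, 20% highly cited researchers).
# The indicator values and the residual 40% bucket are hypothetical.

def composite_score(indicators: dict, weights: dict) -> float:
    """Weighted sum of indicator scores (each already scaled to 0-100)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(indicators[name] * w for name, w in weights.items())

weights = {
    "nature_science": 0.20,  # papers in Nature and Science
    "indexed_papers": 0.20,  # papers indexed in citation databases
    "highly_cited": 0.20,    # highly cited researchers
    "other": 0.40,           # remaining indicators (alumni, awards, size)
}

# A hypothetical institution with indicator scores on a 0-100 scale.
scores = {"nature_science": 80.0, "indexed_papers": 90.0,
          "highly_cited": 70.0, "other": 60.0}

print(composite_score(scores, weights))  # 72.0
```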
In addition to world ranking statistics, ARWU also publishes evaluations arranged by country and area of knowledge.
Times Higher Education
The second international ranking, Times Higher Education (THE), was published in 2004 as a counterpart to the ARWU created the previous year. Between 2004 and 2009 it used Quacquarelli Symonds (QS) to harvest and process the data; after 2009, THE began to use data from Thomson Reuters. The number of articles published in journals indexed by this database is normalized by the number of researchers and by subject, and indicates how proficient an institution is at getting its research results published in high-end peer-reviewed journals. This indicator represents 6% of the total.
Citations measure the impact of an institution's research, and in the THE ranking they represent 30% of the evaluation points, making them the single most significant indicator of all. Citations are assessed across the roughly 12,000 journals in the Thomson Reuters database over a period of five years, so as to take into account subject areas with longer citation half-lives, such as the social sciences and humanities. Adjustments are also made so as not to favor institutions that specialize in subject areas known to generate high numbers of citations, such as the health sciences, physics and mathematics.
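Field normalization of the kind described above is usually done by dividing an article's citation count by the world average for its field and publication year. A minimal sketch, with entirely hypothetical reference averages:

```python
# Sketch of field normalization: an article's citations are divided by the
# world average for its field and publication year, so fields with slower
# citation half-lives (e.g. the humanities) are not penalized.
# The reference averages below are hypothetical.

world_average = {("humanities", 2010): 2.0, ("physics", 2010): 20.0}

def normalized_impact(citations: int, field: str, year: int) -> float:
    return citations / world_average[(field, year)]

# Four citations in the humanities "count" as much as forty in physics:
assert normalized_impact(4, "humanities", 2010) == normalized_impact(40, "physics", 2010) == 2.0
```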
QS World University Rankings
The multinational company Quacquarelli Symonds, headquartered in London, England, which originally provided the data for the THE ranking, has since 2004 published the TopUniversities guide listing the best institutions worldwide. The indicators of academic output include article-level citations (with adjustments made for disciplines that attract few citations), worth 4% of the available points, and the number of articles published per researcher, also worth 4%. Both sets of statistics are drawn from the Scopus database, owned by the multinational publisher Elsevier.
The ranking also provides lists arranged by region and the category QS Stars, in which institutions are evaluated not only on their proficiency in research and teaching, but also on their facilities, innovation and engagement with the region in which they are situated. This allows newer universities, or those in developing countries, which under the criteria used by most rankings would probably not appear among the top 500 institutions, to be highlighted.
Leiden Ranking
The Centre for Science and Technology Studies (CWTS) of Leiden University, the Netherlands, has since 2008 applied its own methodology for measuring academic impact and other indicators, with the objective of selecting the 500 best institutions in the world.
The bibliometric data are provided by the Web of Science database, which gathers the number of publications produced by a particular institution over the previous five years. Citations are calculated using an algorithm that takes into consideration the citations received over the previous five-year period, normalized across fields of knowledge and numbers of journals. Author self-citations are excluded.
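Excluding self-citations, as described above, can be sketched as dropping any citation where the citing and cited papers share an author. The paper records and author names below are hypothetical:

```python
# Sketch of a Leiden-style citation count that excludes author
# self-citations: a citation is dropped when the citing and cited papers
# share at least one author. All records below are hypothetical.

papers = {
    "p1": {"authors": {"alice", "bob"}},
    "p2": {"authors": {"alice"}},
    "p3": {"authors": {"carol"}},
}
citations = [("p2", "p1"), ("p3", "p1")]  # (citing, cited) pairs

def external_citations(cited: str) -> int:
    return sum(
        1 for citing, target in citations
        if target == cited
        and not (papers[citing]["authors"] & papers[cited]["authors"])
    )

print(external_citations("p1"))  # 1: p2 -> p1 is a self-citation (alice)
```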
The CWTS also provides information on cooperation between universities and industry and makes available maps showing the collaboration between universities which form part of the ranking.
U-Map
This initiative grew out of a project of the European Classification of Higher Education Institutions, conceived in 2005 as an alternative to rankings based on research productivity. It offers a “multidimensional” classification of European institutions and universities (excluding, however, the United Kingdom), grounded in a wide range of indicators.
The principal products of the ranking, which provides a panorama of the diversity of European institutions, include ProfileFinder, a list of institutes of higher education that can be compared according to predetermined characteristics, and ProfileViewer, which provides an institutional activity profile that can be used to compare institutions.
The indicators of academic productivity are the annual number of peer-reviewed academic publications relative to the number of researchers working at the institution in question, plus other types of publications that are products of research. There is also an indicator relating to academic staff not covered by the previous category.
U-Multirank
This new university ranking, created with financing from the European Union, was launched in January 2013, with its first ranking list due at the beginning of 2014. The project initially focuses on evaluating institutions in Europe, the United Kingdom, the United States, Asia and Australia.
Its approach, which differs from other rankings that are focused primarily on research excellence, includes indicators such as the reputation in research, quality of education and learning, international perspective, knowledge transfer, and contribution to regional growth.
The European Commission and those responsible for the project have yet to finalize the sources for the indicators of research productivity, but state that they will draw on the Thomson Reuters (Web of Science) and Elsevier (Scopus) databases.
Webometrics
The Webometrics Ranking of World Universities was launched in 2004 as an initiative of the Cybermetrics Lab of the National Research Council of Spain (Consejo Superior de Investigaciones Científicas, CSIC). The project was conceived to promote dissemination through the open access publication on the Web of articles and other documents.
Web indicators are commonly used as evaluation tools; Webometrics, however, does not use the number of accesses or the navigability of sites as indicators of institutional performance and visibility. Instead, it measures the volume, visibility and impact of institutions on the Web, with an emphasis on research results.
Like other rankings, this one focuses primarily on the impact of institutions' research production. What differentiates it is its recognition of other forms of publication available on the Web, such as repositories and online-only journals, as well as informal media of scholarly communication such as blogs and wikis. Ultimately, the ranking seeks to motivate academics to establish a presence on the Web, attracting the attention of the research community and of society as a whole.
The ranking includes institutions of higher education, hospitals, and research centers on all continents, as well as the BRIC and CIVETS country groupings, in addition to analyses by knowledge area and a world ranking of repositories.
Since 2005, the data have been updated online every six months. Institutions are ranked by their Web Impact Factor, based on log-normalization of two groups of indicators, activity/presence and visibility/impact on the Web, combined in a one-to-one ratio.
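The combination described above can be sketched as scaling each raw count logarithmically and averaging the two groups with equal weight. The raw counts and maximum values below are hypothetical, not Webometrics' actual data:

```python
import math

# Sketch of combining indicator groups after log-normalization, with the
# activity/presence and visibility/impact groups weighted one-to-one, as
# described for Webometrics. All raw counts below are hypothetical.

def log_norm(value: float, max_value: float) -> float:
    """Scale a raw count to [0, 1] on a logarithmic scale."""
    return math.log(1 + value) / math.log(1 + max_value)

activity = log_norm(5_000, 50_000)      # e.g. web pages and documents
visibility = log_norm(20_000, 200_000)  # e.g. inbound links

combined = 0.5 * activity + 0.5 * visibility  # one-to-one weighting
print(round(combined, 3))
```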
SCImago Institutions Ranking
In 2012, SCImago created the SCImago Institutions Rankings (SIR) using the Scopus database, which belongs to the multinational publisher Elsevier. The SIR publishes two reports per year, one dealing with Ibero-American institutions and the other global in scope.
The SIR differs in character from other university rankings. It does not produce lists of institutions ordered by prestige, the league tables, but instead a comprehensive compendium analyzing research results in Ibero-America and the world. Results are presented in tables rich in information, including an institution's position according to the established criteria, the total number of documents published over a five-year period, normalized citation indicators, the number of articles in high-impact journals, and the excellence rate, the percentage of an institution's articles that fall within the 10% most cited articles in their respective fields.
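The excellence rate described above is a simple proportion: the share of an institution's articles at or above the field's top-10% citation threshold. A minimal sketch, with hypothetical citation counts and threshold:

```python
# Sketch of the SIR excellence rate: the share of an institution's articles
# that fall within the 10% most cited in their field. The citation counts
# and field threshold below are hypothetical.

def excellence_rate(article_citations: list, top10_threshold: int) -> float:
    """Percentage of articles at or above the field's top-10% citation threshold."""
    excellent = sum(1 for c in article_citations if c >= top10_threshold)
    return 100.0 * excellent / len(article_citations)

# 2 of 8 articles reach the (hypothetical) top-10% threshold of 50 citations:
print(excellence_rate([3, 12, 55, 7, 80, 21, 9, 14], top10_threshold=50))  # 25.0
```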
The SIR thus offers an innovative methodology for ranking universities located outside the USA-UK axis, which would not appear in the league table rankings, allowing a fair and appropriate analysis of these institutions' profiles.
University Ranking of Folha
As a result of the large increase over the past few years in the number of institutions of higher education in Brazil, a demand emerged for a national ranking of universities appropriate to the realities of the Brazilian context.
At the initiative of the newspaper Folha de São Paulo, its research institute Datafolha, under the supervision of the SciELO researcher and expert in the analysis of academic output Rogério Meneghini, developed the Folha University Ranking (RUF). The first edition was published in 2012.
The academic output indicators used in the ranking, which account for 55% of the total points, were extracted from the Web of Science (Thomson Reuters) and include the total number of publications, citations received, and articles with international cooperation. These data are normalized by the number of lecturers at the institution. Articles in the SciELO database are also tabulated, which gives the RUF a broader reach in the context of Brazilian academic output.
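Normalizing by the number of lecturers, as described above, simply divides raw output counts by institution size, so larger institutions are not favored merely for being large. The figures below are hypothetical:

```python
# Sketch of size normalization as described for the RUF: raw output counts
# are divided by the number of lecturers. The figures are hypothetical.

def per_lecturer(total: int, lecturers: int) -> float:
    return total / lecturers

big = per_lecturer(total=4_000, lecturers=2_000)  # large university
small = per_lecturer(total=600, lecturers=250)    # smaller university

# The smaller institution is more productive per lecturer:
assert small > big
print(big, small)  # 2.0 2.4
```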
Final Considerations
University rankings, which had their beginnings in the 2000s, came to fill an existing gap, guiding the choices of students and academics in search of quality teaching and research around the world. Quantitative assessments tend to be more easily understood and used than qualitative ones: just as research impact indicators rank journals, university rankings list institutions. This parallel, however, carries a warning about the trustworthiness of such indicators, echoed in recent controversies over the indiscriminate use of the Impact Factor¹.
Countless problems with the academic output indicators used in rankings have been pointed out, such as: articles disadvantaged in citations because they are published in a language other than English; the a priori reputation of institutions in North America, the UK and Europe, which earns them better evaluations; the inherent differences between results in the life sciences and the social sciences; the use of the Impact Factor of the journals in which an institution's academic output is disseminated; the different forms of peer review used by different journals; and so on.
The researcher Ellen Hazelkorn of the Dublin Institute of Technology, in her book Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence, sharply criticizes the frequent use of rankings by decision makers and research funding agencies, a critique she also presented at a conference organized by UNESCO in 2011 titled Rankings and Accountability in Higher Education: Uses and Misuses. Hazelkorn states that rankings take into account less than 1% of the institutions existing in the world, giving the false impression that cultural, economic and health development depends on the universities at the head of the list.
On the same occasion, the Vice-Rector of Malaysia's National University, Sharifah Shahabudin, declared that more important than a university's position in a ranking is its principal function: “to constantly anticipate and lead through innovation, creating new values, as well as a new social, environmental and financial order for the university, the nation and the region.” In her vision, indicators, which have yet to be created and perfected, should measure the impact of the university on the society in which it finds itself.
Notes
¹ Declaração recomenda eliminar o uso do Fator de Impacto na Avaliação de Pesquisa [Declaration recommends eliminating the use of the Impact Factor in research evaluation]. SciELO em Perspectiva. [viewed 16 August 2013]. Available from: http://blog.scielo.org/blog/2013/07/16/declaracao-recomenda-eliminar-o-uso-do-fator-de-impacto-na-avaliacao-de-pesquisa/

References
HAZELKORN, E. Rankings and the Reshaping of Higher Education: the battle for World-Class Excellence. London: MacMillan Publishers Ltd., 2011.

RAUHVARGERS, A. Global University Rankings and their Impact. Brussels: European University Association, 2011. [viewed 16 August 2013]. Available from: http://www.eua.be/pubs/Global_University_Rankings_and_Their_Impact.pdf
UNESCO Global Forum: Rankings and Accountability in Higher Education: Uses and Misuses, Paris, 16-17 May 2011. UNESCO. [viewed 16 August 2013]. Available from: http://www.unesco.org/new/en/education/themes/strengthening-education-systems/higher-education/quality-assurance/rankings-forum/
WALTMAN, L., et al. The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. 2012. [viewed 16 August 2013]. Available from: arXiv:1202.3941. http://arxiv.org/abs/1202.3941
About Lilian Nassi-Calò
Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution, and completed a post-doctorate as an Alexander von Humboldt fellow in Würzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She has also worked as an industrial chemist and is currently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.

Translated from the original in Portuguese by Nicholas Cop Consulting.
How to cite this post [ISO 690/2010]:
NASSI-CALÒ, L. Indicators of academic productivity in University rankings: criteria and methodologies [online]. SciELO in Perspective, 2013 [viewed 02 November 2019]. Available from: https://blog.scielo.org/en/2013/08/15/indicators-of-academic-productivity-in-university-rankings-criteria-and-methodologies/