
Visibility and Impact of Research Publications, Research World: Volume 10, Article S10.1 (2013)

Source: http://www1.ximb.ac.in/RW.nsf/pages/S10.1

Article S10.1



Visibility and Impact of Research Publications



Seminar Leader: D. P. Dash

Faculty of Business and Design

Swinburne University of Technology Sarawak Campus, Malaysia

ddash[at]swinburne.edu.my



Published Online: April 24, 2013



Note. This is a reflective report on a seminar led by Professor D. P. Dash at Xavier Institute of Management on December 26, 2012.



The discussions began with bibliometric measures of research impact and
gradually moved towards visualising alternative notions of impact,
highlighting the need for research and innovation in this domain. The need
for assessing research quality and impact arises in several institutional
contexts, where decisions need to be made on research funding,
recruitment/tenure/promotion of research staff, strategies for enhancing
research quality, and so forth. Traditionally, assessment by knowledgeable
peers has been the common way to evaluate the quality or impact of research.
However, academic and research institutions need a more objective and
transparent way to assess research. This need has been served by the
so-called Journal Impact Factor (or simply Impact Factor), developed by the
Institute for Scientific Information (now part of Thomson Reuters); it uses
citation data from the journals listed in the Web of Science database, and
the data are presented annually in the Journal Citation Reports. The Impact
Factor is calculated for every journal listed in the database by dividing
the number of citations received in the current year to items the journal
published during the previous two years by the number of peer-reviewed items
it published during those two years.
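To make the ratio concrete, here is a minimal sketch in Python. The figures
are invented for illustration; the actual inputs come from the Journal
Citation Reports.

    # Two-year impact factor: citations this year to items published in
    # the previous two years, divided by the peer-reviewed (citable)
    # items published in those two years. All numbers are hypothetical.
    citations_to_prev_two_years = 210   # e.g., 2014 citations to 2012-2013 items
    citable_items_prev_two_years = 120  # peer-reviewed items published 2012-2013

    impact_factor = citations_to_prev_two_years / citable_items_prev_two_years
    print(round(impact_factor, 2))  # 1.75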




Over the years, some alternatives have emerged to the citation-based impact
factor computed by Thomson Reuters. Nevertheless, citation-based assessment
of the quality and impact of research publications has become the norm in
academic and research institutions. Journals with high impact factors are
perceived as high-quality journals, and getting published in such journals
is considered a matter of prestige. However, a number of doubts have been
raised about the validity and usefulness of this measure, especially when
this journal-level measure is applied at the author level, such as in
recruitment or promotion decisions (for an overview of this issue, see
Dash & Ulrich, 2012, Section 4, “Formal Methods of Journal Assessment,”
and Section 5, “Beyond Bibliometrics”).




First of all, the excessive importance given by academic institutions to the
impact factor has led to the phenomenon of “impact factor engineering,” a
fact acknowledged by Thomson Reuters. This covers a variety of legitimate
and illegitimate methods of inflating citation counts: adding more
editorials, letters, and responses (items which attract citations but are
not counted in the denominator of the impact factor ratio); publishing more
review articles (which tend to receive more citations than other types of
article); and other alarming practices, such as forming “citation cartels”
and requiring authors to cite specific articles, a practice known as
“coercive citation.”
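A hypothetical arithmetic sketch (all figures invented) shows why such items
matter: citations to editorials and letters enter the numerator, while only
peer-reviewed items enter the denominator.

    citable_items = 100            # peer-reviewed items (the denominator)
    citations_to_citable = 150     # citations received by those items
    citations_to_editorials = 30   # editorials/letters attract citations
                                   # but add nothing to the denominator

    print(citations_to_citable / citable_items)                              # 1.5
    print((citations_to_citable + citations_to_editorials) / citable_items)  # 1.8

Adding a handful of well-cited editorials thus lifts the reported impact
factor without any change in the quality of the peer-reviewed content.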




Due to such drawbacks, one needs to be
careful while using impact factors. Assessing the quality and impact of a
research outcome--whether a peer-reviewed journal article or a curated
artefact, which is considered a legitimate research outcome in the art
and design field--need not be restricted to counting citations alone.
The relevance and impact of research can also be visualised in many other
ways, such as (a) relevance in research education and research practice,
(b) relevance in professional development and professional practice,
and (c) relevance in public policy and administration.




Increasing the visibility, relevance, and impact of our research can be
facilitated by developing constructive engagement with potential
users/beneficiaries of our research from an early stage (rather than
developing research outcomes independently and hoping that others will
notice and use them, although even this may not be ruled out). One way to do
this is to involve knowledgeable peers (including peers from industry, the
professions, and government) as reviewers of our work-in-progress. This or
any other form of peer engagement can help enhance our research.



Publishing our research in digital repositories (such as open-access
journals) is likely to increase both its visibility and its use. Thanks to
the initiative of Google Scholar Citations, more and more author-level
citation data are gradually becoming available, along with author-level
quality indicators such as the h-index. In this environment, it is
strategically important to ensure that our research outcomes are visible and
accessible through the Web.
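For readers unfamiliar with it: an author has h-index h if h of his or her
papers have received at least h citations each. A minimal sketch of the
computation, with invented citation counts:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one author's papers
    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3: only three papers have >= 4 citations,
                                            # so h stops at 3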




Comments From the Audience



Concept Hierarchy.
Different research articles relate to different levels of
understanding. Some articles may relate to fundamental and root-level
concepts (e.g., articles contributing to the main conceptual categories
in a field of research), whereas some may relate to concepts at a
logically narrower level (e.g., subcategories). Citations to these
different groups of articles should not be counted at par.
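One hypothetical way to operationalise this suggestion (the levels and
weights below are invented for illustration, not proposed at the seminar)
would be to weight each citation by the conceptual level of the cited
article:

    level_weight = {"root": 2.0, "subcategory": 1.0}    # assumed weights
    cited = [("root", 12), ("subcategory", 30)]         # (concept level, citations)

    weighted_citations = sum(level_weight[level] * n for level, n in cited)
    print(weighted_citations)  # 54.0: root-level citations count double here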




Need to Verify Data and Assumptions.
Sometimes interesting and useful articles are found in average journals
(i.e., journals with no or low impact factor). The converse is also
true: Sometimes highly ranked journals publish articles of doubtful
merit. There is a need to verify the quality of the data and the
correctness of the assumptions underlying citation-based ranking of
journals (MacRoberts & MacRoberts, 1989). This could even be a topic
for doctoral research.




Direct Impact on Society.
Rather than counting the number of citations, can we think of research
articles as vehicles for direct impact on society and the environment? Can
we create new criteria for measuring impact, relating research output to its
direct impact on the ground? Thousands of journals publish more than a
million articles every year (see, e.g., Jinha, 2010; Larsen & von Ins,
2010). How do we measure the impact of this activity on our lives? This,
too, could be a topic for doctoral research.




Politics of Knowledge.
It is possible to interpret the topic from a postcolonial perspective.
What do the geographical origins of the high-impact-factor journals
tell us about the politics of knowledge? Can the impact factor be seen
as a force of imperialism that serves to keep the less-developed
countries from developing their scholarship? Can these countries come up
with alternative standards of research quality which are more aligned
with their specific needs at the current stage of their historical
evolution?




Impact Through Discussion Forums and Social Media.
While discussing research impact, we should not ignore the numerous
other ways researchers influence each other, for example, through
discussion forums, peer networks, and collaborative projects, nowadays
facilitated by social media. For research thinking and research results
to be conveyed more successfully, the language of research communication
has to be made more accessible to a wider range of interested persons.




Publishing in Highly-Ranked Journals.
The experience of publishing in highly ranked journals is worth discussing.
On the one hand, it demands focused hard work and perseverance, thus
building important research capabilities. On the other hand, it requires
institutions to provide their research-oriented staff with time, resources,
and incentives--a high investment with uncertain returns.




Impact of Consulting.
When the results of consulting influence our thinking about a
discipline or its theories, we may recognise it as a form of research.
Of course, since consulting is paid work, giving it credit as research
too frequently could lead academics away from the non-commercial pursuit
of knowledge.




References



Dash, D. P., & Ulrich, W. (2012). Introducing new editorial roles and measures: Making the Journal of Research Practice relevant to researchers. Journal of Research Practice, 8(1), Article E1. Retrieved from http://jrp.icaap.org/index.php/jrp/article/view/314/249

Jinha, A. E. (2010). Article 50 million: An estimate of the number of scholarly articles in existence. Learned Publishing, 23(3), 258-263.

Larsen, P. O., & von Ins, M. (2010). The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics, 84(3), 575-603. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909426/

MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342-349.





This version is based on the original report by Sandip Anand and D. P. Dash,
with inputs from Krishna Priya, Biresh Sahoo, Banikanta Mishra, C. Shambu
Prasad, Satyendra C. Pandey, and Mahendra K. Shukla; edited by D. P. Dash.
[April 24, 2013]






Copyleft: The article may be used freely, for a noncommercial purpose, as long as the original source is properly acknowledged.




Xavier Institute of Management, Xavier Square, Bhubaneswar 751013, India

Research World (ISSN 0974-2379) http://www1.ximb.ac.in/RW.nsf/pages/Home


