Source: https://blogs.unimelb.edu.au/23researchthings/2020/07/22/thing-12-research-engagement-and-impact/

Thing 12: Research Engagement and Impact

In recent years there has been a shift away from ‘traditional’ impact metrics (such as citation counts), in favour of an increased focus on identifying and assessing the real-world impact of research. This is often referred to as the ‘impact agenda’. In this post Kristijan Causovski, Justin Shearer, and Joann Cattlin investigate different types of impact, and how to integrate them into your research workflows.

Getting started

The Australian Research Council (ARC) defines research impact as ‘the contribution that research makes to the economy, society, environment or culture, beyond the contribution to academic research’. While the nature of impact varies across disciplines and research methodologies, there are a number of broad categories through which to achieve impact:

  • Instrumental impact, which is direct impact on policy and practice decisions.
  • Conceptual impact, which is impact on knowledge, understanding, and attitudes of policymakers and practitioners, and wider stakeholders.
  • Capacity-building impact, such as education, training, and skills.
  • Connectivity impact, where researchers and users stay in touch after a funded piece of work, visiting and perhaps working together, which increases the likelihood that research findings are internalised and lead to impact.
  • Changes in attitudes towards knowledge, knowledge exchange and research impact more broadly.

In Australia, research impact has become a requirement of some funding bodies. The National Health and Medical Research Council (NHMRC) and Medical Research Future Fund grants are introducing requirements for research translation and impact plans, while ARC grant applications now require a national interest statement. The ARC also assesses universities periodically through the Engagement and Impact assessment exercise, and engagement is part of the mission of many universities (e.g. Advancing Melbourne 2030). Researchers should consider how their research project or program can inform, benefit or provide value beyond academia and address this in their project plan.

That Thing you do: integration into practice

Engaging your stakeholders

A good, healthy stakeholder relationship is one of the most effective ways of creating impact. You can achieve this by planning engagement activities as part of a research project, involving those who could potentially be affected by, use, or be interested in the research outcomes. Engagement can take the form of:

  • Collaboration and co-produced research through partnerships with community members, business, industry or government. Collaboration is a two-way process where the exchange of information between researchers and stakeholders generates new knowledge.
  • Dissemination and communication of research, using a variety of tools and mediums to reach different types of audiences. This can include social media and websites, briefings, podcasts, videos, meetings, and plain language material.

In the research planning stage, it is a good idea to do a stakeholder analysis in order to map out the individuals or organisations with a potential interest in your research. Useful resources to help get you started include the Knowledge Translation Planning Primer and the UK’s National Co-ordinating Centre for Public Engagement.
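
To make stakeholder mapping concrete, here is a minimal Python sketch of one common technique, the interest/influence grid; the stakeholders, scores and suggested strategies below are hypothetical illustrations, not content from the resources above.

```python
# A minimal sketch of a stakeholder analysis using an interest/influence
# grid. All names and scores are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    interest: int   # 1 (low) to 5 (high): how much the research matters to them
    influence: int  # 1 (low) to 5 (high): how much they can affect uptake

def engagement_strategy(s: Stakeholder) -> str:
    """Map a stakeholder onto a standard interest/influence quadrant."""
    if s.influence >= 3 and s.interest >= 3:
        return "engage closely (collaborate, co-produce)"
    if s.influence >= 3:
        return "keep satisfied (briefings, consultation)"
    if s.interest >= 3:
        return "keep informed (newsletters, plain-language summaries)"
    return "monitor (occasional updates)"

stakeholders = [
    Stakeholder("State health department", interest=4, influence=5),
    Stakeholder("Community advocacy group", interest=5, influence=2),
    Stakeholder("Industry peak body", interest=2, influence=4),
]

for s in stakeholders:
    print(f"{s.name}: {engagement_strategy(s)}")
```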

Evidencing your impact

It is important to build evaluation mechanisms into your engagement activities so you can determine whether you are having an impact, and what the impact is. Evaluation can involve surveys, interviews, data collected from website hits and social media activity (altmetrics), emails and communications from stakeholders, as well as academic citations. Creating a plan for evaluation means that you are well prepared to collect and analyse this information, and you can adapt your approach as you conduct the research.
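
As a small, hedged illustration of gathering one such piece of evidence programmatically, the sketch below (Python, using the third-party requests library) looks up how often a publication has been cited, via Crossref’s public REST API at api.crossref.org. The DOI shown is a placeholder, and, as the Bibliometrics section below notes, different sources report different counts.

```python
# A minimal sketch: fetch a citation count for one DOI from Crossref's
# public REST API (https://api.crossref.org). The DOI below is a
# placeholder; substitute one of your own outputs.

import requests

def crossref_citation_count(doi: str) -> int:
    """Return the number of citations Crossref has recorded for a DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["message"]["is-referenced-by-count"]

if __name__ == "__main__":
    doi = "10.1000/example-doi"  # placeholder DOI, for illustration only
    print(f"Crossref citation count for {doi}: {crossref_citation_count(doi)}")
```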

Bibliometrics

Bibliometrics applies statistical methods to authors, publications, and topics, often to measure their impact within a portfolio or subject area. Evaluation of research impact and engagement should be a balanced judgement, combining interpretation by peers with research metrics, rather than relying predominantly or exclusively on the latter. The metrics below are commonly used as supporting evidence in grant or promotion applications:

  • Article-level metrics refer to a variety of measures that provide insight into the reach of individual articles. Citation counts and alternative metrics such as downloads, views and media/social media mentions are examples of article-level metrics. Web of Science, Scopus and Google Scholar are commonly used sources for citation counts, but the results may differ between databases, as each count depends on the publications that source indexes.
  • Journal-level metrics aim to measure the influence of a journal. There are many journal-level metrics based on different methods of calculation and datasets. The Web of Science impact factors and Scopus journal metrics (CiteScore, SJR and SNIP) are commonly used measures or rankings of purported journal quality, which rely on the number of citations the journal has received in Web of Science and Scopus respectively.
  • Author-level metrics such as the h-index try to measure the academic impact of individual researchers. A researcher’s h-index can be calculated manually by locating citation counts for all published papers and ranking them by times cited: a researcher has an index of h if h of their papers have been cited at least h times each (see the sketch after this list).
  • Alternative article-level metrics (altmetrics) are complementary to the “traditional” metrics provided by citation counts. They are based on online activity, mined or gathered from online tools and social media (e.g. tweets, mentions, shares or links).
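
The manual h-index calculation described in the author-level bullet above takes only a few lines of code; here is a minimal Python sketch using hypothetical citation counts.

```python
# A minimal sketch of the manual h-index calculation: rank citation
# counts in descending order, then find the largest h such that h papers
# have at least h citations each. The counts below are hypothetical.

def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers have been cited at least h times each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Prints 3: at least 3 papers have >= 3 citations, but not 4 with >= 4.
print(h_index([25, 8, 5, 3, 3, 1]))
```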

It is important to note that not all indicators are suitable for all disciplines. Differences in scholarly disciplines and individuals’ career pathways need to be factored into assessment. For example, the aforementioned h-index is well known to disadvantage early-career researchers (as the h-index can only grow over time), those with non-traditional career pathways, and those in low-citation disciplines. Similarly, the journal impact factor is often used as an ersatz article-level metric – a purpose for which it was never intended. Best practice is to use a variety of indicators, and only as one input informing an evaluation discussion; no indicator is perfect.

In recognition of the challenges presented by the impact evaluation landscape, the University of Melbourne is a signatory to the San Francisco Declaration on Research Assessment (DORA) – an initiative created with the aim of improving the ways in which the outputs of scholarly research are evaluated.

About the Authors

Kristijan Causovski is Digital Preservation Coordinator and Liaison Librarian, Business and Economics, Scholarly Services, at the University of Melbourne.

Justin Shearer is Associate Director, Research Information and Engagement, Scholarly Services, at the University of Melbourne.

Joann Cattlin is Manager, Research Engagement & Impact, Melbourne Law School.

Want more from 23 Research Things? Sign up to our mailing list to never miss a post.

Image: Arek Socha from Pixabay
