Thursday, 19 January 2017

Citations Per Dollar as a Measure of Productivity | NIH Extramural Nexus

 Source: https://archives.nih.gov/asites/nexus/06-23-2016/all/2016/04/28/citations-per-dollar/index.html


Citations Per Dollar as a Measure of Productivity



NIH grants reflect research investments that we hope will lead to
advancement of fundamental knowledge and/or application of that
knowledge to efforts to improve health and well-being. In February, we
published a blog post on the publication impact of NIH-funded research.
We were gratified to hear your many thoughtful comments and questions.
Some of you suggested that we should not only focus on output (e.g.
highly cited papers), but also on cost – or as one of you mentioned
“citations per dollar.”  Indeed, my colleagues and I have previously
taken a preliminary look
at this question in the world of cardiovascular research.  Today I’d
like to share our exploration of citations per dollar using a sample of
R01 grants across NIH’s research portfolio. What we found has an
interesting policy implication for maximizing NIH’s return on investment
in research.


To think about impact per dollar across the NIH research portfolio,
let’s look at a sample of 37,909 NIH R01 grants that were first funded
between 2000 and 2010. When thinking about citations per dollar, one
important consideration is whether we are largely looking at human or
non-human studies.


Table 1 shows some of the characteristics of these grants according
to whether or not they included human subjects. Continuous variables
are shown in the table in the format “a / b / c,” where a is the lower
quartile, b is the median, and c is the upper quartile. The total award
amount includes direct and indirect costs across all awarded years and
is shown in 2015 constant dollars (with inflation adjustment by the Biomedical Research and Development Price Index, BRDPI). “Prior NIH funding” refers to total NIH funding the PI received prior to this specific award.


For this data in a readable (Excel spreadsheet) format, visit: https://RePORT.nih.gov/FileLink.aspx?rid=930
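
As a hedged illustration, the quartile summaries in Table 1 could be reproduced from the linked spreadsheet with a few lines of Python; the file name and column name below are assumptions about the spreadsheet’s layout, not its actual structure:

    import pandas as pd

    # Hypothetical sketch: compute the lower quartile / median / upper quartile
    # summary used in Table 1. File and column names are assumed, not actual.
    df = pd.read_excel("citations_per_dollar_data.xlsx")  # downloaded from the link above
    print(df["total_award_2015_dollars"].quantile([0.25, 0.50, 0.75]))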


As might be expected, grants supporting research on human subjects
were more expensive and more likely to involve multiple PIs. Human
studies were also less likely to be renewed at least once.


Next, let’s look at publishing and citation outcomes for the same
group of grants, broken out by whether or not the study involved
humans. Similar to what I showed in my prior blog, I show a
“normalized citation impact,” a citation impact measure that
accounts for varying citation behavior across different scientific
disciplines, but now divided by total dollars spent. We’ll use box and
violin plots to show the distribution of normalized citation impact per
million dollars according to whether or not the grant included human
subjects.


[Figure 1: Box and violin plots of normalized citation impact per $M, for grants with and without human subjects.]


The shaded area shows the distribution of NIH-supported papers
ranging from the most highly cited (100th percentile) to the least
cited. Note that the Y-axis is displayed on a logarithmic scale. This
is an important point – scientific productivity follows a highly
skewed, “heavy-tailed” log-normal distribution, not a simple normal
distribution like human height. The log-normal distribution of grant
productivity is evident, though with “tails” of grants that yielded
minimal productivity. The log-normal distribution also reflects that
there are a small – but not very small – number of grants with
extraordinarily high productivity (e.g. those that produced the
equivalent of 10 or more highly cited papers). We also see that, by
this measure, grants that focus on human studies – in aggregate – have
less normalized citation impact per dollar than other grants.


Another approach to describing the association of citation impact
with budget is to produce a “production plot,” in which we examine how
changes in inputs (in this case dollars) are associated with changes in
output (in this case, citation impact).  Figure 2 below shows such a
production plot in which both axes (total award on the X-axis and
citation impact on the Y-axis) are logarithmically scaled.  This kind of
plot allows us to ask the question, “does a 10% increase in input
(here, total grant award funding) predict a 10% increase in output
(citations, normalized as described earlier)?”  If there is a 1:1
relationship between the input and the output, and a 10% increase in
funding yields a 10% increase in citations, we’d expect a plot with a
slope of exactly 1. The trendlines/curves are based on loess smoothers,
with shaded areas representing 95% confidence intervals. We see that
the association between the logarithm of grant citation impact and the
logarithm of grant total costs is nearly linear. We also see that over
95% of the projects have total costs between $1 million and $10 million
over the lifetime of the grant, and in this range the association is
linear with a slope of less than 1 (the dotted line, for comparison,
has a slope of exactly 1). Not only is this pattern consistent with
prior literature, it illustrates an important point: research
productivity follows (to some extent) a “power law,” meaning that
productivity scales as a power of funding.


Figure 2: Normalized citation impact versus total award (adjusted by the BRDPI) in $M, on logarithmic axes. For an Excel spreadsheet of values for this graph, visit https://report.nih.gov/FileLink.aspx?rid=930

The association between impact and funding follows the relationship I = a·F^b, where I is impact, a is a constant, F is funding, and b is an exponent (in this case < 1).
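
Since log(I) = log(a) + b·log(F), the exponent b can be estimated as the slope of an ordinary least-squares fit on log-log data. A minimal sketch with synthetic data (the real values are in the spreadsheet linked above):

    import numpy as np

    # Synthetic illustration of estimating the exponent b in I = a * F**b.
    rng = np.random.default_rng(0)
    funding = rng.lognormal(mean=1.0, sigma=0.5, size=1000)      # total award, $M (made up)
    impact = 2.0 * funding**0.7 * rng.lognormal(0.0, 0.3, 1000)  # true b = 0.7 < 1

    # The slope of the log-log regression line is the estimate of b.
    b_hat, log_a_hat = np.polyfit(np.log(funding), np.log(impact), deg=1)
    print(f"estimated exponent b = {b_hat:.2f}")  # close to 0.7: diminishing returns
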
There are important policy implications of the power law as it applies to research. In cases in which power laws apply, extreme observations are not as infrequent as one might think.
In other words, extreme events may be uncommon, but they are not rare.
Extreme events in biomedical science certainly happen – from the
discoveries of evolution and the genetic code to the development of
vaccines that have saved millions (if not billions) of lives to the findings of the transformative Women’s Health Initiative Trial to the more recent developments in targeted treatments and immunotherapy for cancer.
Because extreme events happen more often than we might think, the best
way to maximize the chance of such extreme transformative discoveries
is, as some thought leaders
have argued, to do all we can to fund as many scientists as possible. 
We cannot predict where or when the next great discovery will happen,
but we can predict that if we fund more scientists or more projects we
increase our ability to maximize the number of great discoveries as a
function of the dollars we invest.




Citations Per Dollar as a Measure of Productivity | NIH Extramural Nexus

Maximizing your Research Impact | IESE Library

Source: http://blog.iese.edu/newsletter-library/65-research-guide/





Maximizing your Research Impact




Measuring and describing the impact of research is becoming
increasingly important in academia. For both career development and
grant funding, researchers need an indication of the quality and
impact of their research, as well as to be able to show its return on
investment. Furthermore, as research becomes more international, the use
of metrics to compare researchers’ output across different institutions
and countries has gained importance. The Library has selected some
resources that will help you maximize your research impact.



  1. Publish Strategically


  • Choose the right journal

Journal Citation Reports – WoS

(“Select your group or region” > “Federation of Spain by FECYT”. Please log out after use!)

JCR helps to measure research influence and impact at the journal level,
and shows the relationships between citing and cited journals:


– Impact Factor


– Immediacy Index


– Eigenfactor Metrics


– Quartile Score

This compares the impact factors of journals in the same field.
To view a journal’s quartile score from a journal record, click on
the “Journal Rank in Categories”, “Journal Ranking” option in the
“Journal Information” section table.


Go to Web of Science > Journal Citation Reports


SCOPUS Journal Metrics

Elsevier provides three views of the citation impact a journal makes:


– SCImago Journal Rank (SJR)


– Source Normalized Impact per Paper (SNIP)


– The Impact per Publication (IPP)


FT Journal List

This list details the 45 journals used by the Financial Times in compiling the Business School research rank, included in both the Global MBA and EMBA rankings. 


ABS Academic Journal Quality Guide 2015

A comprehensive guide to the range, subject matter
and relative quality of journals in which business and management
academics publish their research.


Journal Quality List

Compiled by Prof. Anne-Wil Harzing, this is a
collation of journal rankings from a variety of sources. The list
comprises academic journals in Economics, Finance, Accounting,
Management, and Marketing.


CARHUS Plus+

CARHUS Plus+ is a system for the classification of
scientific journals in the areas of Social Sciences and Humanities
published on a local, national and international level.



  • Standardization of authors’ names

ORCID

ORCID (Open Researcher and Contributor ID) provides a persistent
digital identifier that distinguishes you from every other researcher
through a single registry.


ResearcherID – WoS

Web of Science’s ResearcherID assigns each member a unique
identifier to enable them to manage their publication lists, track their
times cited counts and h-index, identify potential collaborators and
avoid author misidentification.


Scopus Author ID

The Scopus author identifier distinguishes you from other
authors by assigning you a unique number and then grouping all your
documents together.


My Citations Google Scholar

Google Scholar Citations provides a simple way for authors to keep track of citations to their articles.

Important! Remember to sign up for a Google Scholar Citations profile.



  • Publishing your research

Elsevier Guide to publication

Information including guides and/or workshops to help authors to get work published in Elsevier scholarly journals.


Emerald’s Insider Guide to Getting Published

Information including guides and/or workshops to help authors to get work published in Emerald scholarly journals.


SAGE: How to get published

Information including guides and/or workshops to help authors to get work published in Sage scholarly journals.


Taylor and Francis Instructions for Authors

Information including guides to help authors to get work published in Taylor and Francis scholarly journals.


Sherpa/Romeo

Open Access publishing. Publisher copyright policies & self-archiving.


Dulcinea

Open Access publishing. Copyright policies and conditions of self-archiving in Spanish scientific journals.



  2. Measure Your Research Impact: Publications and Authors

Google Scholar Citations

This is a simple way to broadly search for scholarly literature.

Important! You must sign up first for a Google Scholar Citations profile to track citations of your articles.


Scopus – Citation Tools

Abstract and citation database of peer-reviewed
literature, including scientific journals, books and conference
proceedings. Scopus retrieves and analyzes academic citations.


Web of Science – Citation Tools

Search for journal articles, conference proceedings
and citations across a very broad range of subjects relating to science,
technology, social sciences and medicine. WoS retrieves and analyzes
academic citations.


Publish or Perish

Publish or Perish is a software program that retrieves and analyzes academic citations. It uses Google Scholar and Microsoft Academic Search to obtain the raw citations, then analyzes these and presents some metrics.


H-Index

Web of Science uses the H-Index to quantify research
output by measuring author productivity and impact. H-Index = number of
papers (h) with a citation number ≥ h.


G-Index

The G-index was proposed by Leo Egghe in his 2006 paper “Theory and practise of the g-index” as an improvement on the H-Index.


i10-Index

Created by Google Scholar and used in Google’s My Citations
feature.  i10-Index = the number of publications with at least 10
citations.
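
All three indices are easy to compute from a list of per-paper citation counts. A minimal sketch in Python (the citation counts are made up for illustration):

    def h_index(citations):
        """Largest h such that h papers each have at least h citations."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def g_index(citations):
        """Largest g such that the top g papers have at least g**2 citations in total."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, c in enumerate(ranked, start=1):
            total += c
            if total >= rank * rank:
                g = rank
        return g

    def i10_index(citations):
        """Number of papers with at least 10 citations."""
        return sum(1 for c in citations if c >= 10)

    papers = [42, 18, 12, 10, 7, 5, 3, 1, 0]
    print(h_index(papers), g_index(papers), i10_index(papers))  # 5 9 4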



  3. Organizing: Content Management Tools and Citation Styles

Create your own database of bibliographic references. Import
references from web pages, databases, catalogues or other sources, share
them with other users, add them to your work and/or create
bibliographies.


Mendeley

Elsevier’s desktop and web program for managing and
sharing research papers, discovering research data and collaborating
online. Access available to the IESE Community.


Refworks

Online research management, writing and collaboration tool. Access available to the IESE Community.


Zotero

A free, open-source research tool that helps you collect, organize and analyze research and share it in a variety of ways.


Referencias bibliográficas: cómo citar correctamente

University of Navarra Library Services Citation Guide (in Spanish).


Harvard Business School Citation Guide 2015-2016

Citation conventions for HBS students to use when writing research papers.


UB CRAI Citations and Reference Management

University of Barcelona Library Services Guide contains
information and examples of how to cite documents in a bibliography
according to different referencing standards.


Bibme Citations Guide

An automatic citation creator that supports MLA, APA, and Chicago formatting.



  4. Networking: Finding Other Colleagues to Collaborate With

Academia.edu

Platform for academics to share research papers. The company’s primary mission is to accelerate global research.


Emerald Research Connections

Emerald’s online meeting place for the academic and corporate research communities.


Methodspace

Home of the research methods community. SAGE provides a
space for collaboration and discussion as well as information about
jobs, funding and awards.


Piirus

Quick and easy tool to help you make contacts and find
collaborators, managed by the University of Warwick. You’ll need to sign
up using your academic email address.


ResearchGate

Free professional networking platform designed to connect
researchers and make it easy for them to share and access scientific
output, knowledge, and expertise.



  5. Research Funding

EC Funding Opportunities

Find the European Union funding opportunities and search for new or closed calls of EC programs.


EC Calls – Research Participant Portal

EC Calls for Proposals searcher.


Spain. MINECO Ayudas a la Investigación

Ministerio de Economía y Competitividad


University of Navarra- Researcher Services – All the Calls

Funding Opportunities Announcements


Catalan Agency for Management of University and Research Grants

Generalitat de Catalunya


Fons x Recerca

Guide on funding programs for research. Xarxa Vives d’Universitats (in Catalan).






Maximizing your Research Impact | IESE Library

The Benefits of Twitter for Scientists » American Scientist

 Source: http://www.americanscientist.org/blog/pub/the-benefits-of-twitter-for-scientists

The Benefits of Twitter for Scientists

David Shiffman | January 13, 2017
Despite frequent claims to the contrary, social media tools such as
Twitter can be incredibly valuable for scholars. My own research (and
years of personal experience) has shown that if properly used, Twitter
makes it possible for scholars to follow along with cutting-edge
research in their discipline as it is presented at conferences on the
other side of the world, to directly share their expertise with policy
makers and journalists, and to get feedback from expert peers as they
work on their own research projects.




New research from the writers of the Fisheries Blog has
revealed another professional benefit of social media usage for scholars.
“We found that the number of tweets about a primary ecology research
article was significantly correlated to the number of citations that the
paper received,” said Brandon Peoples, assistant professor of fisheries
ecology at Clemson University and the paper’s lead author. This new
analysis notes that Twitter activity related to a paper predicted citations
more than the 5-year impact factor of the journal where that paper was
published, at least for ecology-focused journals.




This PLoS One paper, “Twitter predicts citation rates of ecological research,” is not the
first to address this question, and past studies have found mixed results.
However, Peoples noted that his new study took a different and more complex
approach. “Several studies have looked at the relationship between
various altmetrics (measures of activity on, for instance, Facebook,
Twitter, or blog posts)
and citations. Most of them have used simple bivariate correlations and
have found weak relationships,” Peoples said. “What we did differently was
account for other important sources of variation in the same model: time
since publication, journal impact factor, and random variation among
journals. This allowed us to identify the ‘signal’ of Twitter over the
‘noise’ produced by the other variables. You can’t do that with simple
correlation analyses.”
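
In statistical terms, the model Peoples describes is a mixed-effects regression: fixed effects for tweets, time since publication, and journal impact factor, plus a random effect for journal. A hedged sketch of that kind of model, assuming a hypothetical CSV whose column names are mine, not the authors’:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per article, with columns for tweet count,
    # citation count, years since publication, the journal's 5-year impact
    # factor, and the journal name. This is not the authors' actual dataset.
    df = pd.read_csv("ecology_articles.csv")

    # Fixed effects for tweets, article age, and impact factor; a random
    # intercept per journal captures the "random variation among journals."
    model = smf.mixedlm("citations ~ tweets + years_since_pub + jif5",
                        data=df, groups=df["journal"])
    print(model.fit().summary())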




Other researchers caution against reading too much into these results. “Tweets can
be manipulated too easily by a coauthor with a high number of Twitter
followers,” said Trevor Branch, associate
professor of aquatic and fishery sciences at the University of Washington.
“I regularly tweet about my papers. This has a big influence on the
altmetric score, since my tweets are often retweeted and each retweet
counts as another tweet.” He notes that he has 4 of the 10 papers with the
highest altmetric scores for the journal Fish and Fisheries, but
that none of those papers are among the most cited. (As a scientist with
lots of Twitter followers, I agree; many of my papers are among the most
tweeted in the history of those journals, but they are certainly not the
most cited).




Peoples points out, “Our research suggests that Twitter and citations are
related, not that tweets cause citations. I wouldn’t advise researchers to
tweet about their research simply to increase citations. Tweeting about
your paper will help to introduce it to the online community, but it
probably won’t be well-discussed on Twitter if it’s not interesting.” While
Twitter doesn’t automatically increase citations, it is changing the way
that scholars communicate with one another, with journalists, and with
nonscientists of all kinds.




Social media tools have changed how science is communicated, and so Peoples
believes they can be incredibly useful for the scholars who learn how to
use them. “Twitter provides a global forum where scientists from all career
levels can meet and discuss—a kind of conversation that is hard to find in
a traditional conference setting,” he said. “As a scientist, you should
always be prepared to publicly defend your work against critiques. But on
Twitter, it happens in real time. If you tweet your paper, be ready to
discuss it instantaneously in front of a global audience.”




While Twitter is not a shortcut to increasing the number of citations for
your paper, researchers who effectively use this communications tool will
experience numerous other personal and professional benefits. Twitter has
made me a better scientist, a better communicator, and a better educator.
If you have any questions about how to use Twitter more effectively, you
can find me in the Twitterverse at @WhySharksMatter.



[UPDATE on 1/16/17: An earlier version of the
graphic gave an incorrect number for the relationship between the median
number of Twitter followers and the median size of a university
department. It has been corrected to 7.3.
]


This post is published in Macroscope



The Benefits of Twitter for Scientists » American Scientist

Wednesday, 18 January 2017

Simple Different ways to Maximize Your Citation Count: A Guideline / My New WordPress Site

 Source: http://www.companyman.ca/2016/07/12/simple-different-ways-to-maximize-your-citation/


Simple Different ways to Maximize Your Citation Count: A Guideline



The sheer number of papers you publish matters to your career.
“Publish early and often” is advice heard frequently in research. All
the same, the number of times your work is cited is just as important,
because citations can reflect the impact that your research has had on
the field. Increasing your citation count can also have a positive
influence on your career, because funding agencies often consider a
combination of the number of papers and the number of citations when
making grant decisions. To raise your citation count and maximize your
impact, consider these 10 simple practices:

1. Cite your previous work when it is relevant to a new manuscript.
However, do not reference every paper you have written simply to
inflate your citation count.

2. Carefully choose your keywords. Select keywords that researchers in
your field will be searching for, so that your paper appears in
database searches.

3. Use your keywords in your title and repeatedly in your abstract.
Repeating keywords increases the likelihood that your paper will appear
at the top of search engine results, making it more likely to be read.

4. Use a consistent form of your name on all of your papers. Using the
same name on all of your publications makes it easier for others to
find all of your published work. If your name is very common, consider
obtaining a research identifier, such as an ORCID or a ResearcherID.
You can include your ResearcherID in your email signature and link that
ID to your publication list so that anyone you email has access to your
publications.

5. Make sure that your information is correct. Check that your name and
affiliation are correct on the final proofs of your manuscript, and
verify that your paper’s details are consistent in database searches.

6. Make your manuscript easily accessible. If your paper is not
published in an open-access journal, post your pre- or post-publication
versions to a repository. Check SHERPA/RoMEO to learn your publisher’s
copyright and self-archiving policies for sharing your published
manuscript.

7. Share your data. There is some evidence that sharing your data can
increase your citations. Consider posting to data-sharing websites such
as figshare or SlideShare, or contributing to Wikipedia and providing
links to your published manuscripts.

8. Present your work at conferences. Even though conference
presentations are rarely cited by others, they make your research more
visible to the academic and research communities.

10. Actively promote your work. Talk with other researchers about your
paper, even ones outside your discipline, and email copies of your
paper to researchers who may be interested. Create a blog or a website
devoted to your research and share it.

Dr. Ebbs supports AJE customers by providing answers to questions about
writing and publishing manuscripts. She is a member of the Customer
Experience staff. Before joining AJE, Dr. Ebbs earned a PhD in
Biochemistry and Molecular Biology from Johns Hopkins University. She
also conducted research at Duke University and taught biology at Elon
University. AJE is dedicated to improving the way new research and
discoveries are shared. We are a growing team of specialists,
researchers, language experts, software developers, and publishing
industry veterans working together to find new ways to help researchers
succeed.




Simple Different ways to Maximize Your Citation Count: A Guideline / My New WordPress Site

Tuesday, 17 January 2017

Impact of Social Sciences – Mendeley reader counts offer early evidence of the scholarly impact of academic articles

 Source: http://blogs.lse.ac.uk/impactofsocialsciences/2017/01/17/mendeley-reader-counts-offer-early-evidence-of-the-scholarly-impact-of-academic-articles/







Mendeley reader counts offer early evidence of the scholarly impact of academic articles

Although
the use of citation counts as indicators of scholarly impact has
well-documented limitations, it does offer insight into what articles
are read and valued. However, one major disadvantage of citation counts
is that they are slow to accumulate. Mike Thelwall has
examined reader counts from Mendeley, the academic reference manager,
and found them to be a useful source of early impact information.
Mendeley reader data can be recorded from the moment an article appears
online, avoiding the publication-cycle delays that slow the visibility
of citations.
Counts of citations to academic articles
are widely used as evidence to inform estimates of the impact of
academic publications. This is based on the belief that scientists often
cite works that have influenced their thinking and therefore that
citation counts are indicators of influence on future scholarship. In
the UK’s REF2014 research assessment exercise,
11 of the 36 subject panels drew upon citation counts to inform their
judgements of the quality of academic publications, for example by
arbitrating when two expert reviewers gave conflicting judgements.
Citation counts are also widely used internationally for hiring,
promotion, and grant applications and aggregated citation-based
statistics are used to assess the impact of the work of large groups of
scholars in departments, universities and even entire countries. On top
of this, there are many informal uses of citation counts by individual
scholars looking to assess whether their work is having an impact or to
decide which of their outputs is having the most impact.
Image credit: Mendeley Desktop and iOS by Team Mendeley. This work is licensed under a CC BY 2.0 license.
Despite their many limitations, such as
obvious cases where they are misleading and entire fields for which they
are almost meaningless, citation counts can support the onerous task of
peer review and even substitute for it in certain cases where the
volume of outputs is such that peer review judgements are impractical.
At the level of the individual scholar, citation counts can be useful to
indicate whether papers are read and valued. This gives outputs a
visible afterlife once they have been published and helps to identify
avenues of research that have been unexpectedly successful, motivating
future similar work. It also gives scholars a sometimes-needed incentive
to look outwards at the wider community when writing an article and
consider how it might attract an audience that might cite it. Of course,
uncited does not equate to irrelevant and James Hartley has recently listed his rarely cited articles that he values,
which is a useful reminder of this. Nevertheless, even though I have
little idea why my most cited article has attracted interest, the
knowledge that it has found an audience has motivated me to conduct
follow-up studies and to fund PhDs on the subject, whilst dropping lines
of research that have disappointingly flown under the radar and (so
far) avoided notice.
One major disadvantage of citation counts
is that they are slow to accumulate. Once an article has been published,
even if someone reads it on the first day that it appears and
immediately uses it to inform a new study, it is likely to be 18 months
(depending on the discipline) before that study is complete, written up,
submitted to a journal, peer reviewed, revised, accepted and published
so that its citations appear in Google Scholar, Web of Science or
Scopus. Uses of citation counts in formal or informal research
evaluations may therefore lag by several years. This delay is a major
disadvantage for most applications of citation counts. There is a simple
solution that is effective in some contexts: Mendeley reader counts
(Figure 1).
Figure
1: Mendeley readers typically appear at least a year before citations
due to delays between other researchers reading a paper and their new
study being published.
Mendeley
is a social reference sharing website that is free to join and acts as a
reference manager and sharer for academics and students. Those using it
can enter reference information for articles that they are reading or
intend to read (and this is what most users do, as shown by Ehsan Mohammadi,
whose PhD focused on Mendeley) and then Mendeley will help them to
build reference lists for their papers. As spotted by York University
(Toronto) librarian Xuemei Li, it is then possible to count the number of registered Mendeley readers for any given article and use it as impact evidence for that article. This reader count acts like a citation count in that it gives evidence of (primarily academic) interest in articles but readers accrue about a year in advance of citation counts, as shown by a recent article (Figure 2 – see also: Maflahi and Thelwall, 2016; Thelwall and Sud, 2016).
Mendeley data is available earlier as scholars can register details of
an article they are reading in Mendeley whilst they are reading it, and
so this information bypasses the publication cycle delays (Figure 1). An
article may even start to accumulate evidence of interest in Mendeley
in the week it is published if people recognise it as important and
immediately record it in Mendeley for current or future use.
Figure
2: A comparison between average Scopus citations and Mendeley readers
for articles from journals in the Scopus Transportation category, as
recorded in November/December 2014. Mendeley reader counts are much
higher than Scopus citations for more recent articles, with Scopus
citations lagging by at least 18 months. Citation counts are higher than
reader counts for older articles, probably due to citations from older
articles that were written before Mendeley was widely used. Geometric
means are used because citation counts are highly skewed (data from Maflahi and Thelwall, 2016).
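
For readers unfamiliar with the convention: a plain geometric mean is undefined when any count is zero, so skewed count data are often summarized as exp(mean(log(1 + x))) - 1. A small sketch (the +1 offset is my assumption about the exact variant used here):

    import numpy as np

    def geometric_mean(counts):
        # exp(mean(log(1 + x))) - 1: tolerates zero counts and damps extremes
        return np.expm1(np.mean(np.log1p(np.asarray(counts, dtype=float))))

    print(geometric_mean([0, 1, 2, 40]))  # ~2.96, vs. an arithmetic mean of 10.75
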
Mendeley is by far the best general source of early scholarly impact information. Download counts are not widely available, counts of Tweets are very unreliable as an impact indicator and other early impact indicators are much scarcer.
The main drawback is that, at present, anyone can set up multiple
accounts and register as a reader of selected articles, making it
possible to spam Mendeley. For this reason, Mendeley reader counts
cannot be used in the UK REF or any other research evaluation that
includes stakeholders with time to manipulate the outcomes. An
additional limitation is that Mendeley reader counts are biased towards
articles that attract the Mendeley user demographic, which has
international and seniority/age imbalances. It is therefore tricky to use Mendeley for international impact comparisons.
It is not hard to obtain evidence of
Mendeley readers for an article – just search for it by title in
Mendeley (e.g. try the query ‘Mendeley readership altmetrics for the
social sciences and humanities: Research evaluation and knowledge
flows’) or look for the Mendeley segment within the Altmetric.com donut
for the article (as in this example;
to find a page like this, Google the article and add
‘site:altmetric.com’ to the end of your query). For large groups of
articles, the free Mendeley API can also be used to automatically
download reader counts for large sets of articles via the (also free)
software Webometric Analyst.
If you already have a set of articles with citation counts, then it is
simple to add Mendeley reader count data to it using this software.
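
For a programmatic route, here is a hedged sketch of querying the Mendeley API for a DOI’s reader count (the endpoint and parameters reflect the public catalog API as documented around this time; the access token is a placeholder you would obtain via OAuth2 after registering an app):

    import requests

    ACCESS_TOKEN = "YOUR_OAUTH2_TOKEN"  # placeholder: obtain via Mendeley OAuth2

    def reader_count(doi):
        resp = requests.get(
            "https://api.mendeley.com/catalog",
            params={"doi": doi, "view": "stats"},
            headers={"Authorization": "Bearer " + ACCESS_TOKEN,
                     "Accept": "application/vnd.mendeley-document.1+json"},
        )
        resp.raise_for_status()
        docs = resp.json()  # list of matching catalog records
        return docs[0].get("reader_count", 0) if docs else 0

    # e.g. the article this post is based on:
    print(reader_count("10.1002/asi.23559"))
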
This blog post is based on the author’s article, co-written with Pardeep Sud, ‘Mendeley readership counts: An investigation of temporal and disciplinary differences’, published in the Journal of the Association for Information Science and Technology (DOI: 10.1002/asi.23559).
Note: This article gives the views of
the author, and not the position of the LSE Impact Blog, nor of the
London School of Economics. Please review our 
comments policy if you have any concerns on posting a comment below.
About the author
Mike Thelwall
is Professor of Information Science at the School of Mathematics and
Computing, University of Wolverhampton. His research interests include
big data: webometrics, social media metrics, and sentiment analysis;
developing quantitative web methods for Twitter, social networks,
YouTube, and various types of link and impact metrics; conducting impact
assessments for organisations, such as the UNDP. His ORCID iD is:
0000-0001-6065-205X.


Impact of Social Sciences – Mendeley reader counts offer early evidence of the scholarly impact of academic articles