Monday, 25 July 2016


Scientific impact

How can I increase the scientific impact of my work?

Publish or perish

Science is a collective, collaborative enterprise, which means that
researchers build on the results of other researchers (cf. "standing on
the shoulders of giants", the slogan adopted by Google Scholar).
That means that you would like any research you do to inform others, so
that they may refer to it, use it, and hopefully develop and improve
it. This will of course depend on the quality of your work: incorrect or vague ideas, or unreliable data, are unlikely to inspire anyone to build something good out of them.

However, it is not sufficient to produce good-quality research; you
must also ensure that others are informed about your results, i.e. you
should "publish" them in the broadest sense of the term. This explains the
"publish or perish" motto that characterizes present-day research: an
unpublished document might as well not exist, since nobody but you can
use it. And scientists who do not produce usable results are
considered non-existent within the academic community. Therefore,
they are very unlikely to get or keep an academic position ("perish"),
or to collect additional funding. Publication has become the primary criterion by which research activity is evaluated.

Still, publishing in the sense of making your work publicly available is far from sufficient: we are presently drowning in an ocean of available information,
and it is very difficult for any particular publication to stand out
enough to guarantee that it will be noticed by the
people who matter, i.e. your peers: the scientific colleagues working in the same domain as you.

Before the advent of the net, publication was a slow, difficult and
expensive process, and therefore only a few documents would reach that
stage. This provided some indication that these documents were
really worthwhile. Therefore being published, especially being published
internationally, was already enough of an indication of the quality or
impact of a scientific work.

Peer review

Nowadays, with publication technologies becoming ever easier and
cheaper to use, another "gold standard" of research quality has emerged:
being peer-reviewed,
i.e. having passed a strict selection procedure where anonymous experts
in the domain have judged that your document is good enough to be
published in the particular journal, book or website to which you have
submitted it.

The principle of using "peers", i.e. people with a similar expertise
as you, is motivated by the fact that in science there are no ultimate
authorities who can judge what is right and what is wrong. The only
people who have enough expertise to judge are the people who are doing
research in the same domain as you. This means that at one time researcher A
may evaluate the work of B, while at another time B may be invited to judge
the work of A. The anonymity of the referees (and more rarely of the
authors being refereed) allows them to express themselves more honestly,
without fear of offending a colleague, friend, or potential collaborator.

Peer review, while being the best existing method for research evaluation, does have problems of its own.

  • One difficulty is to find competent experts: working in the same
    domain does not necessarily mean being skilled in the same methodologies
    or theories.
  • Another one is that even for a good expert, novel research is by
    definition difficult to evaluate, as there are no accepted standards to
    judge the worth of an idea or approach, and as scientific work by its
    nature tends to be very complex and abstract.
Therefore, it is unavoidable that referee reports are to some degree
subjective, and that different experts often disagree about the value of
the same work. To counter this, journal editors normally consult at
least three referees, so that there can be a clear majority for or
against publication of a submitted paper.

But even then, good papers can be rejected, because they do not
match the referees' or editor's view of how problems in that domain
should be approached. Resubmitting to another journal is often
sufficient to turn a rejected paper into an accepted one. But usually
referees provide detailed arguments why they reject a paper, and you are
strongly advised to take them into account when resubmitting. Even if
you disagree with the criticisms made, it is worth addressing them in a
revised version, so as to avoid similar misunderstandings.

Journal impact factors

With the proliferation of publication media, the standards of
acceptability can vary widely from journal to journal, depending on the
number of referees they use, the critical attitude of those referees, and
the fact that journals with many submissions can only publish a fraction
of them. This leads to great variation, with some journals accepting
more than 50% of all submissions, and others less than 5%. This created the
need for some kind of "quality score" to rank journals.

The most widely used standard is the impact factor,
which measures how often, on average, papers in that journal have been
referred to in other journals. This gives an indication of the "impact"
of the journal on the scientific community, i.e. the visibility and
authority that typical papers in that journal have. As could be
expected, more people submit papers to high-impact journals, and
therefore by necessity their acceptance rate is much lower. This means
they are more selective in the papers they publish, and can therefore be
expected to offer higher quality on average.
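The standard two-year impact factor is simply a ratio: citations received this year to papers from the previous two years, divided by the number of papers published in those two years. A minimal sketch, using made-up numbers for a hypothetical journal:

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Two-year impact factor: citations received this year to
    papers from the previous two years, divided by the number of
    papers published in those two years."""
    return citations_this_year / papers_prev_two_years

# Hypothetical journal: 120 papers published over the last two
# years, which collected 480 citations this year.
print(impact_factor(480, 120))  # → 4.0
```

So a journal whose typical paper is cited a handful of times within two years ends up with an impact factor of a few points, which is why even single-digit scores can represent highly selective journals.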

The impact factor, which is computed each year by the Institute for Scientific Information (ISI)
for those journals which it tracks, has become perhaps the most widely used
indicator of the quality of a scientist's publications. Therefore, all
researchers should be motivated to publish their work in journals with
the highest impact factor possible. Among the highest-scoring journals
overall are Nature and Science. Within the cognitive sciences, Behavioral and Brain Sciences and Trends in Cognitive Sciences have the highest impact, so it is worth submitting a paper there. Impact factors for journals can be found in the Journal Citation Reports.

However, the impact factor has a number of intrinsic shortcomings:

  • only a fraction of all journals are tracked by the ISI, and therefore many journals don't have an impact factor.
  • moreover, citations in journals not tracked by the ISI are not
    counted. Therefore even an ISI journal's impact factor may be
    underestimated.
  • an impact factor is only a snapshot over a particular time period (typically 2 years)
  • depending on the discipline, there are large differences in the
    average number of citations (e.g. medical researchers cite much more
    than mathematicians), and in the number of journals that have a high
    impact factor–or any impact factor at all. New domains, such as
    complexity science or memetics, will typically have few or no journals
    with a high impact. Therefore, impact factors can in general not be used
    to compare work across disciplines.
  • an impact factor only measures the average citation rate for all
    papers in a journal. Some papers may score much better, becoming
    "citation classics" that everyone in the field knows, while others don't
    get any citations at all.

Personal citation scores

For these reasons, in the long term it seems better to aim at high
"personal citation scores" rather than high journal citation scores,
i.e. to make sure that your paper/book/document is referred to by many
people, independently of where it has been published. For
example, books are typically cited more often than papers, but don't
have impact factors. Below, we'll discuss some methods to achieve that.
In the short term, though, for young researchers who cannot afford to
wait several years before their work becomes widely known because they
need to renew their funding, the best strategy is to submit first to
high-impact journals, and if this doesn't work, gradually go down the
"pecking order" until you find a journal willing to accept your paper.

Personal citation scores are also tracked by the ISI, in their Web of Science,
including the Science Citation Index, Social Science Citation Index and
Arts and Humanities Citation Index. However, since the ISI database
dates from long before the explosion in cheap computers and networks,
they tend to keep minimal data: merely author's last name and initials,
abbreviated title, journal name or abbreviation, and volume, issue and
page number.

This makes it very difficult to find all your citations, especially
if you have a common name shared with many other authors: you'll have to
go one by one through the list of cited papers to eliminate all those
written by someone with the same family name and initials, and keep
track manually of all the times papers of yours have been cited. Also,
the same name can sometimes be spelled differently, e.g. a paper by "Van
Overwalle, Frank" can be listed under VANOVERWALLE F, VAN OVERWALLE F,
VAN OVERWALLE F P or sometimes even OVERWALLE F V. It is worth checking
all possible alternative spellings, or your citation rate may be
strongly underestimated. Still, it may be worth going through the
exercise to convince potential sponsors of your research that your work
is well-recognized internationally.
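Since such variants have to be checked one by one, a small script can at least enumerate the spellings to search for. This is only an illustrative sketch (the function name and variant rules are my own, not part of any ISI tool):

```python
def name_variants(surname_parts, initials):
    """Enumerate common database spellings of a compound surname,
    e.g. surname_parts=["Van", "Overwalle"], initials="F"."""
    joined = "".join(surname_parts).upper()    # VANOVERWALLE
    spaced = " ".join(surname_parts).upper()   # VAN OVERWALLE
    last = surname_parts[-1].upper()           # OVERWALLE
    particle = surname_parts[0][0].upper()     # V, folded into the initials
    return {
        f"{joined} {initials}",
        f"{spaced} {initials}",
        f"{last} {initials} {particle}",
    }

for variant in sorted(name_variants(["Van", "Overwalle"], "F")):
    print(variant)
```

Each returned string can then be used as a separate author query, so no citation is lost to an unexpected spelling.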

An advantage of this database is that it also includes citations of
books or documents that were not published in one of the journals that
ISI tracks, so you are not limited to traditional publication outlets to
collect citation scores. However, the citations themselves come
exclusively from tracked journals, and thus may underestimate
non-traditional domains for which there are no established journals in
the database.

PageRank: impact on the web

The newest generation of publication outlets makes full use of the
ease and computability of the web, providing "impact" scores that more
flexibly reflect the true influence of your work on others. The most
popular and probably most effective method is the PageRank algorithm that the Google search engine
uses to calculate the importance of a web page. This algorithm does not
calculate importance or impact only by the number of links (equivalent
to citations or references), but by the importance of the pages that link to
you. Thus, importance is calculated recursively: a document cited
only once, but by a very well-known webpage, may still get a high
PageRank score, while a paper cited by many other papers that are
themselves hardly referred to by others may have quite a low PageRank.
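To make the recursion concrete, here is a minimal power-iteration sketch of PageRank (heavily simplified; the real Google algorithm adds many refinements). The toy graph illustrates the point above: paper "a" is cited once, by a hub that many pages link to, while paper "b" is cited twice, but only by obscure pages.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Returns a dict of PageRank scores summing to ~1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # Dangling page (no outlinks): spread its rank evenly.
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

# Toy citation graph: "a" is cited once by a well-linked hub;
# "b" is cited twice, but only by pages that nothing links to.
links = {
    "p1": ["hub"], "p2": ["hub"], "p3": ["hub"], "p4": ["hub"],
    "hub": ["a"], "a": [],
    "q1": ["b"], "q2": ["b"], "b": [],
}
scores = pagerank(links)
print(scores["a"] > scores["b"])  # the single well-placed citation wins
```

Despite receiving half as many "citations", page "a" ends up with the higher score, because the rank of the hub propagates to it.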

This alleviates the problem of the time it takes for novel work to
become widely known: it can be sufficient to convince one respected
authority in the field to link to you in order to immediately become
much more visible (i.e. easy to find through the Google Search engine).

With the recent advent of Google Scholar,
a search engine dedicated to the scientific literature, Google seems to be
moving towards an integration of (web-based) PageRanks and (journal-based)
citation scores, though it is not clear how the algorithm weighs the
different contributions when finding the most "important" scientific
paper on a given topic. An advantage of Google Scholar is that it is not
limited to ISI-tracked journals, as it also counts citations in working
papers published only on the web. But it remains unclear which
journals/papers are in Scholar's database, and which are not...

In the longer term, it seems likely that such recursive, web-based
methods will become increasingly important, as more literature becomes
available on the web and the algorithms are further refined. Therefore,
it seems like a pretty safe bet that having a high (Scholar) PageRank
will be the most effective way to get your scientific work recognized.
Moreover, you don't lose anything by focusing on increasing this web
visibility, as people publishing papers in ISI journals also
increasingly use the web to find relevant literature. Therefore they are
more likely to cite a paper that is visible on the web, even if it was
never published in a journal, or never underwent peer review. But it is
not clear when funding authorities will start using web-based methods to
directly evaluate the worth/impact of your research...

Improving your impact

As has become clear, there are two major methods to make your research more authoritative:

  • publishing in high-impact journals
  • persuading other scientists to refer to your work
The first is the most traditional, but has the disadvantage that for
novel, interdisciplinary research it will be difficult to find a journal
willing to accept a paper that does not fulfil its standard criteria of
subject, methodology, reliability, etc. Still, improving the quality of
your research and writing will definitely increase your chances of acceptance.

For the second too, quality is primary, but here you have more
leeway in getting recognition for unusual approaches. The most
straightforward method is to look around in order to spot who is
interested in similar approaches. You can then introduce your ideas to
these peers, e.g. by:

  • presenting your work at conferences and seminars on the subject
  • participating in email lists that discuss related topics, and referring to your papers when relevant for the on-going discussion
  • directly contacting peers, and pointing them to your papers, e.g. as
    available on a website, preprint archive, or journal, or sending them a
    copy of your paper.
  • having your work linked to by websites that attract a large number
    of like-minded people. If the website is really well-known (e.g. Principia Cybernetica Web
    for ECCO people) it will have a high PageRank, which will in part
    "propagate" to your webpage, and thus increase your visibility in
    search results
  • making sure that your paper is well-structured, containing all the
    appropriate keywords, title, abstract, etc., so that people looking for
    such papers are likely to effectively find them
  • submitting your papers to "preprint archives" on related subjects (e.g. arXiv or CogPrints), which are typically scanned by many researchers looking for something that falls in their domain of interest
  • making sure your paper is listed in the publications or working papers
    page of your research group or university: usually the visibility of
    the institution is much larger than that of a single member
  • making your work known outside the academic community, e.g. by
    contacting science journalists, who typically reach a much wider, albeit
    less specialized audience. This will only work for non-technical work
    with broad implications.
None of these methods implies that you should broadcast your work as
far and wide as possible ("spamming"). People are likely to get
irritated by "off-topic" announcements with little relevance to their
work, and will then be more likely to ignore your announcements later, even
when those are relevant. The point is to be selective, and to find the best
academic environments, where your ideas have the most chance to take root...
For that, you'll need a keen eye for what's going on in your field, and
you should regularly investigate who is using similar keywords, or referring to
the same authors as you.
