Sunday, 19 February 2017

Nader Ale Ebrahim (@aleebrahim) | Twitter


Literature Review from Search to Publication, Part 2: Finding proper articles

Source: https://doi.org/10.6084/m9.figshare.4668241.v1

by Nader Ale Ebrahim

“Research Tools” can be defined as vehicles that broadly facilitate research and related activities. “Research Tools” enable researchers to collect, organize, analyze, visualize, and publicize research outputs. Dr. Nader has collected over 700 tools that help students follow the correct path in research and ultimately produce high-quality research outputs with more accuracy and efficiency. The collection is assembled as an interactive web-based mind map, titled “Research Tools”, which is updated periodically. “Research Tools” consists of a hierarchical set of nodes. It has four main nodes: (1) Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility and impact of the research. This workshop continues the previous one; some further tools from part 1 (Searching the literature) will be described. The e-skills learned from the workshop are useful across various research disciplines and research institutions.


Literature Review from Search to Publication, Part 1: Systematic Review

 Source: https://doi.org/10.6084/m9.figshare.4668232.v1

by Nader Ale Ebrahim

“Research Tools” can be defined as vehicles that broadly facilitate research and related activities. “Research Tools” enable researchers to collect, organize, analyze, visualize, and publicize research outputs. Dr. Nader has collected over 700 tools that help students follow the correct path in research and ultimately produce high-quality research outputs with more accuracy and efficiency. The collection is assembled as an interactive web-based mind map, titled “Research Tools”, which is updated periodically. “Research Tools” consists of a hierarchical set of nodes. It has four main nodes: (1) Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility and impact of the research. In this workshop, some example tools from part 1 (Searching the literature) will be described. The e-skills learned from the workshop are useful across various research disciplines and research institutions.


Using citation analysis to measure research impact | Editage Insights

Source: http://www.editage.com/insights/using-citation-analysis-to-measure-research-impact

Using citation analysis to measure research impact

Measuring research impact
The landscape of science and research is rapidly evolving. Gone are the days when all members of a university department would celebrate the successful publication of a colleague’s paper [1]. Earlier, scientists would simply consider the number of papers they had published as a measure of their academic standing. Today, the focus is increasingly shifting from whether a researcher has published a paper to where he/she has published it and the impact that piece of research has on the scientific community and the world at large [2].

How
can you measure the quality of a research paper? More importantly, how
can you determine whether your research is making an impact and is
considered important? An objective way is through citation analysis. 



Citation analysis

Why count citations in the first place? The list of references directing readers to prior relevant research is considered a fundamental part of any research paper [3]. A reference or citation is a form of acknowledgment that one research paper gives to another. Research is additive: scientists build on past work to discover new knowledge. To identify gaps in existing research and choose a research topic, researchers read the relevant published research and use this existing material as a foundation for arguments made in their own research papers.


11 reasons to cite previous work

  1. To direct readers to an authentic source of relevant information
  2. To help other researchers trace the genealogy of your ideas
  3. To acknowledge pioneers and peers
  4. To direct readers to previously used methods and equipment
  5. To criticize or correct previous work
  6. To substantiate your claims and arguments with evidence
  7. To show that you have considered various opinions in framing your arguments
  8. To highlight the originality of your work in the context of previous work
  9. To guide other researchers in their work
  10. To build your credibility as an author
  11. Finally, because not citing sources can amount to plagiarism [4]
What are the various citation-based metrics?

Citation analyses can be grouped according to some broad types based on who/what is being evaluated.

  1. Ranking journals:
    Journals are ranked by counting the number of times their papers are
    cited in other journals. Journal-level metrics are generally meant to
    serve as an indicator of journal prestige. The most well known of these
    is the journal impact factor, from Journal Citation Reports® (a
    product of Thomson Reuters). The journal impact factor is calculated as
    the average number of citations all articles in a journal receive over a
    specific period of time [5].
  2. Ranking researchers:
    Various citation metrics are now used for this purpose. Researchers are
    ranked by counting the number of times their individual papers are
    cited in other published studies. These metrics are also used to
    evaluate researchers for hiring, tenure, and grant decisions. A
    researcher-level metric that is gaining popularity is the h index [6],
    which is calculated by considering a combination of the number of papers
    published by a researcher and the number of citations these papers have
    received (a small worked sketch follows this list).
  3. Ranking articles:
    Article-level citation counts may provide an accurate evaluation of the
    quality and impact of a specific piece of work, regardless of the
    author. Unfortunately though, such metrics are rarely considered because
    obtaining these data is tedious and time-consuming [7].
  4. Ranking universities and countries:
    There are databases that rank universities and countries by considering
    their overall research output through criteria such as citable
    documents, citations per document, and total citations. These metrics
    help determine which universities and countries have the most and/or
    best scientific output. For example, Scimago Research Group (
    http://www.scimago.es/ ) releases annual reports of institution- and country-wise rankings.
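
To make the h index concrete, here is a minimal Python sketch; the function name and the sample citation counts are illustrative only, not drawn from any real dataset:

    def h_index(citations):
        # h-index: the largest h such that the researcher has
        # h papers with at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each cited at least 4 times
    print(h_index([25, 8, 5, 3, 3]))  # 3: one hugely cited paper does not raise h
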
How can citation analysis help you?

Researchers
today are faced with increasing pressure to get published. Academic
departments are expected to meet specific levels of publication output.
Clearly, there is a lot at stake in the assessment of research quality
for both individuals and institutions. Given this, governments, funding
agencies, and tenure and promotion committees are looking toward simple
and objective methods to assess increasing research volumes in the least
possible time. To this end, they are turning more and more to citation
analysis for objective parameters of impact assessment. 




Pitfalls of citation analysis

When using citation analysis, it is important to bear in mind some of its limitations [3,7]:

  • It
    overlooks the disparity in discipline-wise citation rates, that is, the
    fact that citation patterns differ among disciplines and over time.
  • It
    ignores the fact that certain manuscript types such as letters and case
    reports offer inadequate scope for citation and typically have short
    reference lists.
  • The
    sentiment of the citation is not considered; that is, a negative
    citation (one used to refute a prior claim) is given as much merit as a
    positive citation (one used to further the claim being made). So even a
    paper that has been cited simply to discredit it can work to the
    author’s advantage in citation analysis.
  • It
    does not account for author contribution on papers with multiple
    authors: such citations are as meritorious as those to single-author
    papers. Citation analysis attributes equal importance to all authors of a
    paper, regardless of their individual contribution.
Thus, sole reliance on citation data provides an incomplete understanding of research. Although citation analysis may be simple to apply, it should be used with caution so that uncritical use does not bring it into disrepute [3]. Ideally, citation analysis should be performed to supplement, not replace, a robust system of expert review to determine the actual quality and impact of published research [8].

Future of citation analysis

Given
the shift to online interfaces by more and more journals and
repositories, digital information is now available at a few clicks. With
the advent of linking tools and digital archives of research papers,
scientific literature is more easily retrievable than ever before.
Therefore, it is only to be expected that the population of researchers
turning to citation data will continue to grow. In such a scenario,
researchers cannot afford to underestimate the importance of citation
analysis.


So
next time you are preparing for a promotion or applying for a new
position, consider using citation analysis as a means to bolster your
eligibility. Use the citation count feature offered by online databases
like Web of Science to compile your citation data and employ multiple
citation metrics to highlight your research output.


Bibliography
  1. Dodson MV (2008). Research paper citation record keeping: It is not for wimps. Journal of Animal Science, 86: 2795-2796.
  2. Thomson Reuters. History of citation indexing. Essay in Free Scientific Resources. [http://thomsonreuters.com/products_services/science/free/essays/history_of_citation_indexing/]
  3. Smith L (1981). Citation analysis. Library Trends, 30: 83-106.
  4. Garfield E (1979). Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. New York: Wiley.
  5. Garfield E (2006). The history and meaning of the journal impact factor. The Journal of the American Medical Association, 295: 90-93.
  6. Hirsch JE (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences USA, 102: 16569-16573.
  7. Neylon C and Wu S (2009). Article-level metrics and the evolution of scientific impact. PLoS Biology, 7: 1-6.
  8. Moed HF (2007). The future of research evaluation rests with an intelligent combination of advanced metrics and transparent peer review. Science and Public Policy, 34: 575-583.




Tuesday, 14 February 2017

Impact of Social Sciences – Tracking the digital footprints to scholarly articles: the fast accumulation and rapid decay of social media referrals

Source: http://blogs.lse.ac.uk/impactofsocialsciences/2017/02/14/tracking-the-digital-footprints-to-scholarly-articles-the-fast-accumulation-and-rapid-decay-of-social-media-referrals






Tracking the digital footprints to scholarly articles: the fast accumulation and rapid decay of social media referrals

Academics are increasingly encouraged to share their scholarly articles via social media, as part of a wider drive to maximize their dissemination and engagement. But what effect does this have? Xianwen Wang has studied the referral data of academic papers, with particular focus on social media referrals and how these change over time. Referrals from social media do indeed account for a significant number of visits to articles, especially in the days immediately following publication. But this fast initial accumulation soon gives way to a rapid decay.
PeerJ, an open access, peer-reviewed scholarly journal, provides data on the referral source of visitors to all of its article pages; such data are not available on other publisher or journal websites. These metrics are updated on a daily basis following an article’s publication, meaning for the first time we are able to track the digital footprints to scholarly articles and explore people’s visiting patterns.
In our previous study examining referral data collected from PeerJ,
social network platforms were proven to be among the top referral
sources. Social media directs many visitors to scholarly articles. In
our more recent study, we used the daily updated referral data of 110 PeerJ articles collected over 90 days (22 January – 20 April 2016) to track the temporal trend of visits directed by social media.
Image credit: 20070912-16 by Matt Binns. This work is licensed under a CC BY 2.0 license.
Twitter and Facebook account for most social media referrals
During our observation period, 19 February
was the first day on which all 110 sample articles had visiting data,
with 20 April being the last day of the research period and the point at
which all papers in our sample had been published for at least 60 days.
According to the findings of our study, article visits directed by
social referrals account for more than 12% of all visits (as shown in
Figure 1). Twitter and Facebook are the two most important social
referrals directing people to scholarly articles; between them
accounting for more than 95% of all social referrals. Individually
Twitter and Facebook were roughly equivalent to one another, each
falling within the 42-54% range.
Figure 1: The proportion of article visits from social referrals on two specific days. Source: Wang et al. (2016). Tracking the digital footprints to scholarly articles from social media. Scientometrics. © Akadémiai Kiadó, republished here with permission.
Attention from social media: “easy come, easy go”
To track temporal trends in what
percentages of total visits to articles could be accounted for by social
media referrals, the daily visiting data of each article were grouped
according to the publish–harvest interval days (the number of days from
publication to data being recorded). The visiting dynamics analysis
(Figure 2) shows an obvious overall downward temporal trend in the
proportion of all visits originating from social media. Where papers had
been published for just one day, social referrals accounted for 20% of
all visits. After 90 days, this percentage had fallen to only 9%.
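
As a rough illustration of this grouping step, here is a pandas sketch; the file name and column names (article_id, publication_date, date, social_visits, total_visits) are hypothetical stand-ins for the PeerJ referral data, not the authors' actual pipeline:

    import pandas as pd

    # Hypothetical daily referral data: one row per article per day.
    visits = pd.read_csv("peerj_referrals.csv",
                         parse_dates=["publication_date", "date"])

    # Publish-harvest interval: days from publication to the day the data were recorded.
    visits["interval_days"] = (visits["date"] - visits["publication_date"]).dt.days

    # Share of all visits that came from social referrals, by days since publication.
    by_interval = visits.groupby("interval_days")[["social_visits", "total_visits"]].sum()
    by_interval["social_share"] = by_interval["social_visits"] / by_interval["total_visits"]
    print(by_interval["social_share"].head(10))
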
Overall, during the initial period
following a scholarly article’s publication, social attention comes very
quickly. In most cases, visits from social media are much faster to
accumulate than visits from other referrals, with most of those visits
directed by social referrals being concentrated in the few days
immediately following publication. About 77% of the visits from social
media are generated in the first week after publication. However – “easy
come, easy go” – social buzz around scholarly articles doesn’t last
long, leading to a rapid decay in the article visits from social
referrals.
Figure 2: Temporal trend of the proportion of visits from social media in the total visits. Source: Wang et al. (2016). Tracking the digital footprints to scholarly articles from social media. Scientometrics. © Akadémiai Kiadó, republished here with permission.
The role of social buzz in directing
people to scholarly articles can be illustrated by a specific example.
As shown in Figure 2, a small but noticeable increase occurs at the
middle part of the curve. We reviewed the data and discovered that this
small burst is attributable to a jump in visits from Twitter to paper 1605.
Paper 1605 was published on 2 February 2016. By 6 March, the number of
article visitors directed by Twitter had reached 381. On 7 March, a
particularly influential Twitter account
(with 1.97 million followers) tweeted about the paper. That tweet was
retweeted 11 times on the same day and is the reason the number of
article visitors from Twitter rose dramatically from 381 to 751 in only a
few days.
The fluctuation visible towards the end of
the curve is caused by the vast decrease in the number of samples with
sufficiently long time windows (in number of days since publication).
Synchronism between the number of tweets and article visitors from Twitter
Figure 3: Synchronism of the temporal trends of tweets and the visits they procured for paper 1605. Source: Wang et al. (2016). Tracking the digital footprints to scholarly articles from social media. Scientometrics. © Akadémiai Kiadó, republished here with permission.
The synchronism of the growth in the number of tweets and that in article visitors from Twitter partially confirms that social mentions do direct people to read scholarly articles, although we don’t know who is directed by which tweet. Article visitors from social referrals may be researchers, students, or even the general public. It does show, however, that public attention on social media can be converted into real clicks on scholarly articles.
This blog post is based on the author’s co-written article, ‘Tracking the digital footprints to scholarly articles from social media’, published in Scientometrics (DOI: 10.1007/s11192-016-2086-z).
Note: This article gives the views of
the author, and not the position of the LSE Impact Blog, nor of the
London School of Economics. Please review our 
comments policy if you have any concerns on posting a comment below.
About the author
Xianwen Wang is a Professor at WISE Lab, Dalian University of Technology in China and an Associate Editor of Frontiers in Research Metrics and Analytics. His ORCID iD is 0000-0002-7236-9267.


Monday, 13 February 2017

Nuts and Bolts: The Super Long List of Things to Do When Starting a New Journal - The Scholarly Kitchen

 Source: https://scholarlykitchen.sspnet.org/2016/08/04/nuts-and-bolts-the-super-long-list-of-things-to-do-when-starting-a-new-journal/




Nuts and Bolts: The Super Long List of Things to Do When Starting a New Journal


Launch of the USS New Jersey in 1942. Image courtesy of the US Government.

This past May, I participated in a session at the Council of Science Editors Annual Meeting about starting a new journal. My role was to discuss the logistics and technical issues, or better titled, the Super Long List of Things to Do. There were two very good presentations that went along with mine. Cara Kaufman of Kaufman, Wills, Fusting, & Co. discussed when and how to decide whether to start a new journal. Katherine Bennett presented a case study for the launch of a new open access journal at the American Society of Radiation Oncology.


The idea of launching a new journal may seem easy with today’s
technology. Some may argue that all you need is a website with a content
management system. This may work for some communities but for a journal
that wants to meet the expectations of the typical journal user and/or
subscriber, there are many, many things that need to be done.


I have launched three journals in the last four years, none of which
are open access (OA) journals. I will try to differentiate between a
subscription journal and an OA journal where necessary but I honestly
think the process is pretty much the same, regardless of the business
model.


So let’s assume that the business case for starting a new journal has
been met and you already have an editor in place. Now you are tasked
with all of the details needed to actually launch a new journal. Here
are some things I have learned along the way.


In order to keep track of everything, I keep an Excel spreadsheet.
This was originally created by an über-organized coworker. The
spreadsheet has been refined and now serves two purposes: first, to
record and keep track of deadlines and responsibilities; and second, to
share critical information with everyone who needs the information.


In order to maintain the integrity of the data, all questions that
come my way are answered with the spreadsheet — I literally send them
the sheet rather than cutting and pasting information. I have seen too many instances
where retyping information results in errors. Of course this means that
your spreadsheet needs to be correct and kept up to date.


Identifiers

The first part of the sheet contains what I call “identifiers.” These
are basic metadata elements that need to be correct and decided
relatively early.


Title — What to call a journal can change as more
people get involved with reviewing information; but, it’s important to
make the decision and stick with it. I did have a journal title change
halfway through a launch once and it required that I get new ISSNs, which
was another unnecessary delay. You should also include an abbreviated
title on your spreadsheet. Again, you want the same abbreviation to
appear everywhere. For my program at the American Society of Civil
Engineers (ASCE), we use the abbreviated title in our references and the
same abbreviations everywhere else.


Internal acronyms and codes — All of our journals
have a two-letter acronym. This acronym is part of our manuscript
numbering system and the URL for our manuscript submission sites. You
may also need a code for internal accounting purposes. Remember that you
probably need accounting codes for outgoing payments but also incoming
payments.


ISSN — Serial publications should have an International Standard Serial Number or ISSN.
Every format of the journal requires an ISSN. If you have a print and
an online format, you need to request two ISSNs. For forthcoming print
titles, an ISSN can be requested prior to the first issue being
published if you provide a journal masthead page. Once the first issue
is published, you will need to mail a copy to the Library of Congress in
order for your ISSN to move from provisional to final.


For online-only publications, you cannot request an ISSN until 5
papers have been published. A URL will be required in lieu of the print
masthead page. Note that many of the library holdings systems require
ISSNs so even OA journals should consider having an ISSN for the
libraries.


In the U.S., ISSNs are assigned by the Library of Congress. There are other ISSN granting institutions outside the U.S. An important note — an ISSN must be registered with the International ISSN Registry
in order for Scopus (and possibly others) to index the journal. ISSNs
from the Library of Congress are covered but some international ISSN
granting groups are not so careful about this.


CODEN — A CODEN
is a combination of six letters and numbers assigned by the Chemical
Abstracts Service for cataloging serials. At ASCE, we have always had
CODENS, partly because our first online platform required them. We still
use CODENS as a unique journal identifier in places like the URL for
journals and in the DOI. A CODEN is not required and many journals
outside of the physical sciences do not use them.


DOI — Our Digital Object Identifiers,
or DOIs, have evolved over time. Because we have 36 journals, we like
to at least be able to identify the journal by just glancing at the DOI.
In the beginning, we had loads of information in the DOI, then we
switched to including ISSNs in the DOI string. With the delay in getting
an ISSN for online only journals, we were forced into another change
and now use the CODEN followed by the sequential number string. There
are no requirements to include identifying information in a DOI string
and, I would venture to guess that Crossref would probably rather you not do that anyway!
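
A toy Python sketch of this DOI pattern; the prefix and CODEN below are invented placeholders, not ASCE's actual values:

    # Hypothetical Crossref prefix and six-character journal CODEN.
    PREFIX = "10.12345"
    CODEN = "JABCDE"

    def make_doi(sequence_number: int) -> str:
        # Build a DOI from the journal CODEN plus a zero-padded
        # sequential number, so the journal is identifiable at a glance.
        return f"{PREFIX}/{CODEN}.{sequence_number:07d}"

    print(make_doi(1234))  # 10.12345/JABCDE.0001234
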


Format and Design

Frequency and schedule — If you intend to have
“issues,” which is still advantageous for journals that will be indexed
by Abstract and Indexing (A&I) services and others, you will need a
frequency. This information will also be needed if you are selling
subscriptions to the journal. Even if you intend to employ some form of
continuous publication (eFirst, Just Accepted, etc.), you will need to
set a frequency if issues are involved. The schedule for issues may be
fluid for some publications but with 36 journals, we attempt to balance
the number of issues coming out in any given month so as to not
overwhelm the production department.


Cover and interior — “Cover” may not be the correct
word if you have an online-only journal, but you will need some branding
and likely something shaped like a cover. Have you ever wondered why
eBooks or online-only journals have a graphic that looks like a regular
cover? It’s because that’s what people expect to see in marketing
pieces. If it doesn’t have a cover, it’s not real. Also, many of the
“spaces” provided on off-the-shelf online platforms for a publication
image are the shape of a cover thumbnail. The spreadsheet should note
any color considerations for branding, additional logos that need to be
included, and notes about interior design.


Submission and Production Set-Up

Submission site — Note the URL for submissions when
available. This will be important for marketing the journal and the
call-for-papers campaign. This portion of the spreadsheet also includes
information about the review style (EIC, Associate Editors, Editorial
Board, Advisory Board, Single-blind, double-blind, open review, etc.). I
also note on this section whether we can pull information from an
existing site, such as a reviewer pool from another one of ASCE’s
journals that has related content.


Classifications and taxonomy — If you have a
taxonomy, it is important to review the taxonomy against the Aims and
Scope of the new journal to ensure that you have appropriate terms. We
use classifications for people and papers in our submission site so
identifying where those will come from and who will review them (likely
the editor) is important.


Article types and production issues — This section
could be quite extensive and perhaps warrant a whole other worksheet
depending on the journal. At ASCE, we try to keep the journals
standardized so I simply note whether there are any additional article
types that production needs to build into the XML metadata.


Metadata

Crossref and other indexing services — Depositing
DOIs with Crossref is an important step for discoverability. You should
inform Crossref and any other indices that a new title is forthcoming.
In order to deposit a DOI for an article, an ISSN is needed and as
mentioned earlier, you cannot apply for one for online only content
until at least 5 papers have been published. You are permitted to
deposit DOIs with a journal title level DOI
but those will need to be replaced when an ISSN is added. Either way,
it’s important to note that your DOIs will need to be deposited off
cycle and that getting the ISSN as soon as possible is important.


Web of Science — You should be sending Thomson
Reuters (or their apparent successor) a frequency chart each year with
any changes to frequency. New journals should be added even if you
haven’t applied for coverage yet. There is an application for getting a new journal indexed and you can apply immediately once you start publishing content.


Thomson Reuters takes timeliness of issues very seriously. Once you
have applied and have published three issues, you are encouraged to ask
for a status update. This will ensure that someone is actually
evaluating your content. You will need to provide access to Thomson
Reuters for evaluation. If your content is behind a paywall, you will
need to provide them with subscriber access. You can read more about the
evaluation criteria and process here.
Generally speaking, you will be informed if and when your journal is
indexed. This could take years. A journal will not be assigned an Impact
Factor until it is accepted into the appropriate database.


Scopus/Compendex — It is important to note that you
cannot apply for coverage in Elsevier’s databases until the journal has
been published for three years. Once the time has passed, there is an online application and evaluation process. The Scopus database is separate from the other Elsevier databases and as such two separate applications are required. More information can be found here. You will be informed if your journal has been accepted or denied. It can take more than a year to find out.


PubMed/Medline — For print journals, you must supply copies to Medline
for evaluation and you can start as soon as the first issue is out. For
online journals, you cannot apply for coverage until you have published
for 12 consecutive months and you have published 40 articles. Medline
requires access to content for evaluation purposes.


Google Scholar — While it may not be entirely necessary to inform Google Scholar
of a new journal, it certainly doesn’t hurt. Google Scholar is quite
accessible and appreciates it when publishers are proactive about their
plans.


Feed and crawler management — The spreadsheet should
indicate if there are any metadata feeds or crawlers that the new
journal should be excluded from. If not, then you may actually need to
add this new title to the feeds you are managing (see next section on
Website).


Website Set-Up

Landing page — A new journal needs to be added to
the publication platform. All of the information needed in the
administrative tools for set up should be included in the spreadsheet.
You may need to decide when to make a journal landing page live and
whether having a “coming soon” page makes sense. For us, we include
cover art, editor, Aims and Scope, Submission information, and the
ability to sign up for Tables of Content Alerts. Whether on the platform
or not, potential authors will need access to the Aims and Scope as
well as editor information as early as possible.


In house web ads — Identify which other web pages within the platform would be most appropriate for Call for Papers ads and announcements.


Turn feeds on or off — Depending on your platform,
you may need to manually include the journal in routine feeds of
metadata. Sometimes, you may need to suppress a feed until a later date
(like if you don’t have an ISSN yet for Crossref deposits).


Subject categories — If the journal platform has title level subject categories, these should be assigned at set up.


Contract and Notifications

You know you have them, you probably have lots of them. If your
contracts or agreements list the journal titles, you may need to reach
out to those partners with an addendum. You may need to adjust the
contracted number of papers being hosted or typeset depending on the
volume of the new journal. Don’t forget to review any agreements with
A&I services as well as archive services like CLOCKSS and Portico.


Marketing

New journals require a serious amount of marketing support. We cover
this in separate meetings between marketing and journals. It is
important for the journals and production teams to know the schedule for
things like annual catalogs and maybe member journal renewals. Annual
meetings or conferences may also be the platform for announcing a new
journal. The marketing schedule should run parallel to the journal
launch schedule to maximize opportunities for promotion. Promotions we
have done for new journals include:


  • Call for Papers PDF flier (can print for conference booths and send to the editors for email distribution)
  • E-mail campaigns to authors or members that may be interested in the new title
  • Editor interview posted to organization website
  • Conference promotions (fliers, posters, etc.)
  • Editor solicitation cards (pocket-sized cards that members of the
    editorial board can use at conferences to solicit submissions from
    presenters)
  • Social media — post early, post often

Internal Communication

There are lots of people within your organization who need to know about new journals. Here is a list that I use:


  • Customer Service — make sure they can answer any questions that come
    in about the new title. You don’t want someone to call with a question
    and the customer service rep says that you don’t have a journal with
    that title.
  • Membership — the new journals should be included on things like a member renewal or services brochure.
  • Website Team — Our corporate website is separate from our
    publication website. It’s important to include the new journal on any
    corporate website pages that focus on publication titles.
  • IT and Accounting — If you pull sales reports on journals or track
    APCs paid per journal, then likely there is a report that needs to have
    the new journal added.
Without a doubt, the hardest part of launching a new journal is
getting the editorial staff or volunteers on board and then soliciting
content. For a subscription journal, constant and steady solicitation is
vitally important to ensure that quality peer-reviewed content is
served to subscribers in a timely fashion. For an OA journal, the
pressure for subscriptions is null but you still want to have a nice
showing of content for the marketing blitz.


There is a ton of competition with new journals being born all the
time. Starting a new journal is not to be taken lightly. Gone are the
days — if they ever existed — when you could “build it and they will come.” It’s a
lot of work.


In this post, I have tried to outline the more routine details — my “to do” list for starting a new journal. I hope you find the spreadsheet template and PowerPoint slides helpful and I look forward to your comments on how you manage the process.




Promote your Research to General Audience through Online Magazine




by Nader Ale Ebrahim

In the long run, research findings are disseminated through publications. However, researchers may have created local content that should be circulated immediately. Online magazines, built through content curation, are one way to circulate research findings right away. Content curation is not just sharing every piece of content you stumble upon or source; above all, it means curating in a smart, audience-centric way, focused on the specific topics you want to cover. This workshop introduces various tools for publishing an online magazine (content curation) to increase the visibility and enhance the impact of research work.


Thursday, 9 February 2017

Web-application development projects by online communities: Which practices favour innovation?

 Source: https://www.scopus.com/

Volume 117, Issue 1, 2017, Pages 166-197

Web-application development projects by online communities: Which practices favour innovation? (Article)


Faculty of Science and Technology, Free University of Bolzano-Bozen, Bolzano, Italy

Department of Computer, Control and Management Engineering, Sapienza University of Rome, Rome, Italy

Abstract

Purpose - The purpose
of this paper is to propose an in-depth analysis of online communities
of practice that support the innovative development of web applications.
The analysis is aimed at understanding the preeminent characteristics
of communities of practice that can favour the process of innovation
(conceptualisation and realization of a web application) and if these
characteristics differ in the diverse phases of a software development
project (requirement specification, design, implementation and
verification). Design/methodology/approach - The authors adopted a
multiple case study research design, selected 29 communities of practice
related to the development of web applications and classified them
recognizing the different practices that refer to the different phases
of the innovation process of web-applications software development.
Finally, the authors focussed on seven communities comparing five
important dimensions for each one. Findings - The results of the
empirical analysis show that the best practices are different,
considering the different phases of the project, and that these
practices can be strategies directed at members to attract them and
also strategies directed at the community to permit collaboration.
Originality/value - The paper proposes an important and new insight into
the management of virtual communities of practice (VCoP). The authors
supposed that the ways to manage a VCoP could depend on project phases.
In particular, the management practices of community should differ
according to the different project phases, i.e. requirements
specification, design, implementation and verification of the software.
Literature in this sense presented only research focussed on the
different effects of virtualness on teams depending on the length of
team duration and on communication efforts. © Emerald Publishing
Limited.

Author keywords

Innovation; Online communities of practice; Project management; Web-application projects

Indexed keywords

Engineering controlled terms: Application programs;
Innovation; Online systems; Project management; Social networking
(online); Specifications; Verification; Virtual reality
Design/methodology/approach; On-line communities;
Requirement specification; Requirements specifications; Software
development projects; Virtual communities of practices; WEB application;
Web application development
Engineering main heading: Software design


ISSN: 0263-5577

CODEN: IMDSD
Source Type: Journal
Original language: English


DOI: 10.1108/IMDS-10-2015-0440
Document Type: Article
Publisher: Emerald Group Publishing Ltd.


Tuesday, 7 February 2017

KoreaMed Synapse

 Source: https://synapse.koreamed.org/search.php?where=aview&id=10.3346/jkms.2017.32.2.173



Opinion | Open Access
J Korean Med Sci. 2017 Feb;32(2):173-179. English.
Published online December 12, 2016.  https://doi.org/10.3346/jkms.2017.32.2.173


© 2017 The Korean Academy of Medical Sciences.



The Journal Impact Factor: Moving Toward an Alternative and Combined Scientometric Approach

Armen Yuri Gasparyan,1
Bekaidar Nurmashev,2
Marlen Yessirkepov,3
Elena E. Udovik,4
Aleksandr A. Baryshnikov,5
and George D. Kitas1,6

1Departments of Rheumatology and Research and Development,
Dudley Group NHS Foundation Trust (Teaching Trust of the University of
Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK.


2South Kazakhstan State Pharmaceutical Academy, Shymkent, Kazakhstan.


3Department of Biochemistry, Biology and Microbiology, South Kazakhstan State Pharmaceutical Academy, Shymkent, Kazakhstan.


4Department of Economy and Financial Management, Kuban State Technological University, Krasnodar, Russian Federation.


5Department of Development and Exploitation of Oil and Gas Fields, Industrial University of Tyumen, Tyumen, Russian Federation.


6Arthritis Research UK Epidemiology Unit, University of Manchester, Manchester, UK.


Address
for Correspondence: Armen Yuri Gasparyan, MD. Departments of
Rheumatology and Research and Development, Dudley Group NHS Foundation
Trust (Teaching Trust of the University of Birmingham, UK), Russells
Hall Hospital, Pensnett Road, Dudley DY1 2HQ, West Midlands, UK. Email: a.gasparyan@gmail.com




Received November 14, 2016; Accepted November 27, 2016.



This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/)
which permits unrestricted non-commercial use, distribution, and
reproduction in any medium, provided the original work is properly
cited.






Abstract


The Journal Impact Factor (JIF) is a single citation metric,
which is widely employed for ranking journals and choosing target
journals, but is also misused as the proxy of the quality of individual
articles and academic achievements of authors. This article analyzes
Scopus-based publication activity on the JIF and overviews some of the
numerous misuses of the JIF, global initiatives to overcome the
‘obsession’ with impact factors, and emerging strategies to revise the
concept of the scholarly impact. The growing number of articles on the
JIF, most of which are in English, reflects interest of experts in
journal editing and scientometrics toward its uses, misuses, and options
to overcome related problems. Solely displaying values of the JIFs on
the journal websites is criticized by experts as these average metrics
do not reflect skewness of citation distribution of individual articles.
Emerging strategies suggest complementing the JIFs with citation plots
and alternative metrics, reflecting uses of individual articles in terms
of downloads and distribution of related information through social
media and networking platforms. It is also proposed to revise the
original formula of the JIF calculation and embrace the concept of the
impact and importance of individual articles. The latter is largely
dependent on ethical soundness of the journal instructions, proper
editing and structuring of articles, efforts to promote related
information through social media, and endorsements of professional
societies.




Keywords: Journal Impact Factor; Periodicals as Topic; Editorial Policies; Publishing; Publication Ethics; Science Communication


INTRODUCTION
The Journal Impact Factor (JIF) is the brainchild of Eugene Garfield,
the founder of the Institute for Scientific Information, who devised
this citation metric in 1955 to help librarians prioritize their
purchases of the most important journals. The idea of quantifying the
‘impact’ by counting citations led to the creation of the prestigious
journal rankings, which have been recorded annually in the Science
Citation Index since 1961 (1). The JIFs are currently calculated by Thomson Reuters annually and published in the Journal Citation Reports (JCR).


The original formula of the JIF measures the average impact of
articles published in a journal with a citation window of one year
(numerator). The ‘citable’ articles, which are counted in the
denominator of the formula, are published during the 2 preceding years.
To get the JIF, a journal should be accepted for coverage by citation
databases of Thomson Reuters, such as the Science Citation Index
Expanded, and remain in the system for at least three years. Although
there are no publicized criteria, influential new journals occasionally
get their first (partial) JIF for a shorter period of indexing by
Thomson Reuters databases (2).
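
Written out (in LaTeX, following the standard two-year definition the paragraph describes), the JIF of a journal for year y is:

    \mathrm{JIF}_{y} = \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{citable items published in years } y-1 \text{ and } y-2}

For example, a journal whose articles from the two preceding years received 500 citations in year y across 200 citable items would have a JIF of 2.5.
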


Thomson Reuters' citation databases were initially designed to serve
regional interests of their users from the U.S. English sources were
preferentially accepted for coverage, and the JIFs were published to
compare the ‘importance’ of journals within a scientific discipline.
Nonetheless, the JIFs have gradually become yardsticks for ranking
scholarly journals worldwide, and their use has expanded far beyond the
initial regional and disciplinary limits (3).



PUBLICATION ACTIVITY
The issue of uses and misuses of the JIFs is a hot topic itself. The
dynamics and patterns of global interest to the issue can be explored by
a snapshot analysis of searches through Scopus, which is the most
comprehensive multidisciplinary database. As of November 6, 2016, there
are 4,003 indexed items, which are tagged with the term “Journal Impact
Factor (JIF)” in their titles, abstracts, or keywords, with a date range
of 1983 to 2016. A steady increase of the indexed items starts from 2000
(n = 10) and reaches its peak in 2013 (n = 645) (Fig. 1). Top 5 periodicals that actively publish relevant articles are PLOS One (n = 111), Scientometrics (n = 105), Nature (n = 50), J Informetrics (n = 41), and J Am Soc Inform Sci Technol
(n = 26). Top 3 prolific authors in the field are the following
renowned experts in research evaluation and scientometrics: Bornmann L
(n = 22), Smith DR (n = 17), and Leydesdorff L (n = 14). Among the most
prolific countries, the U.S.A. is the absolute leader with 904 published
documents. Importantly, the absolute majority of the articles covers
issues in the medical sciences (n = 2,968, 74%). A large proportion of
the items are editorials (n = 1,477, 37%). The absolute majority of the
documents are in English (n = 3,595), followed by those in Spanish (n =
167), German (n = 110), Portuguese (n = 79), and French (n = 39).
Finally, 2 top-cited articles on the JIFs (893 and 391 times) are
authored by its creator, Eugene Garfield (1, 4).

Fig. 1. Number of Scopus-indexed items tagged with the term “Journal Impact Factor (JIF)” (as of November 6, 2016).


MISUSES
The JIFs and related journal rankings in the JCR have enormously
influenced editorial policies across academic disciplines over the past
few decades. The growing importance of journals published from the
U.S.A. and Western Europe has marked a shift in the prioritization of
English articles (5, 6),
sending a strong message to non-English periodicals — change the
language, cover issues of global interest, or perish. A large number of
articles across scientific disciplines from non-Anglophone countries,
and particularly those with a country name in the title, abstract, or
keywords, unduly end up in low-impact periodicals and do not appeal to
the authors, who cite references in high-impact journals (7, 8).


Editors and publishers, who encounter the harsh competition in the
publishing market, are forced to change their priorities in line with
the citation chances of scholarly articles and ‘hot’ topics (9). Several quantitative analyses have demonstrated that randomized controlled trials (10) and methodological articles are highly cited (11), and that systematic reviews receive more citations than narrative ones (12).
Relying on these analyses, most journal editors have embarked on
rejecting ‘unattractive’ scientific topics and certain types of
articles. High-impact journals, and particularly those from the U.S.,
have boosted their JIFs by preferentially accepting authoritative
submissions of ‘big names’ in science, systematic reviews and
meta-analyses, reports on large cohorts and multicenter trials, and
practice guidelines.


Some established publishers have also decided to limit or ban
entirely items that receive few citations (e.g., short communications,
preliminary scientific reports, case studies) (13).
Clinical case reports with enormous educational value for medical
students and physicians but low citation records have been fallen out of
favor and disappeared in most high-impact medical journals. And many
young researchers and students have been ousted from the mainstream
high-impact periodicals. All these subjective factors and the
‘obsession’ with impact factors have created a citation-related
publication bias, with discontinuing publication of a journal without
JIF as an extreme measure.


The ‘obsession’ with articles attracting abundant citations may be
also the trigger of the current unprecedented proliferation of
systematic reviews (14), most of which are of low quality and even harmful for the scientific evidence accumulation (15, 16, 17).


Academic promotion, grant funding, and rewarding schemes across most
developed countries and emerging scientific powers currently rely
heavily on where, but not necessarily what the authors publish.
Fallaciously, getting an article published in a high-impact journal is
viewed as a premise for academic promotion and research grant funding.
Many researchers list their articles on their individual profiles,
covering a certain period of academic activities, along with the JIFs
that tend to dynamically change (18).
Likewise, ResearchGate™, the global scholarly networking platform,
calculates scores of publication activity in connection with the JIFs.


The JIFs of the target journals are still inappropriately employed by
research evaluators as the proxies of the quality. In China, for
example, bonuses paid to academics depend on a category of the target
journals, which is calculated as an average of the JIFs in the last
three years (19). In the leading Chinese universities, distinctive monetary reward schemes push authors to submit to and publish more in Nature, Science, and other high-impact journals (20).
An analysis of more than 130,000 research projects, which were funded
by the U.S. National Institutes of Health, revealed that higher scores
were given by reviewers to proposals with potentially influential output
in terms of high JIFs and more citations, but not necessarily
innovative ideas (21).


The decades-long overemphasis placed on the JIFs has evolved into a
grossly incorrect use of the term “impact factor” by bogus agencies
that have sprung up. These ‘predatory’ agencies claim to assess the impact of
journals and calculate metrics, which often mimic those by Thomson
Reuters, but do not take into account indexing in established databases
and citations from indexed journals (22).
Predatory journals often display misleading or fake metrics on their
websites to influence inexperienced authors' choices of the target
journals (23).



GLOBAL INITIATIVES AGAINST MISUSES
To a certain degree, the decades-long global competition for getting
and increasing the JIFs has enabled improving the quality of the indexed
periodicals and subsequently attracting professional interest and
citations (24).
However, the absence of alternative metrics for a long time has led to
monopoly and misuses of the JIFs. Journals publishing a single or a few
highly-cited articles and boosting their JIFs in the two succeeding
years have achieved an advantage over the competing periodicals (25).
Disparagingly, some journal editors have also embarked on coercive
citation practices that unethically boosted their JIFs and adversely
affected the whole field of scientometrics (26, 27).


Additionally, a thorough analysis of impressive increases of the JIFs
of a cohort of journals in 2013–2014 (> 3, n = 49) revealed
manipulations with shrinking of publication output and decreasing
article numbers in the denominator of the JIF formula (28).


Curiously, despite the seemingly simple methodology of calculating
the JIF, values of metrics presented in the JCR often differ from those
calculated by editors and publishers themselves (29).


All these and many other deficiencies of the JIF have prompted
several campaigns against its monopoly and misuses. The San Francisco
Declaration on Research Assessment (DORA), which was developed by a
group of editors and publishers at the Annual Meeting of the American
Society for Cell Biology in 2012, encouraged interested parties across
all scientific disciplines to improve the evaluation of research output
and avoid relying on the JIFs as the proxies of the quality (30).
The Declaration highlighted the importance of crediting research works
based on scientific merits but not values of related JIFs. It also
called to discontinue practices of grant funding and academic promotion
in connection with JIFs. The organizations that issue journal metrics
were called to transparently publicize their data, allowing unrestricted
reuse and calculations by others.


A series of opinion pieces and comments on journal metrics, which were recently published in Nature, heralded a new powerful campaign against misuses of the JIFs (31).
First of all, it was announced that several influential journals of the
American Society for Microbiology would remove the JIFs from their
websites (32). By analyzing distribution of citations, which contributed to JIFs of Nature, Science, and PLOS One,
it was emphasized that the average citation values did not reveal the
real impact of most articles published in these journals. For example,
78% of Nature articles were cited below its latest impact factor
of 38.1. Displaying distribution of citations and drawing attention of
readers to highly-cited articles were considered as more appropriate for
assessing journal stance than simply publicizing the JIFs (33).
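
A tiny Python illustration of how a skewed citation distribution inflates the average (the citation counts are invented for illustration):

    # Invented citation counts for 10 articles in a hypothetical journal:
    # one highly cited paper dominates the mean.
    citations = [310, 12, 9, 7, 5, 4, 3, 2, 1, 0]

    mean = sum(citations) / len(citations)
    below_mean = sum(c < mean for c in citations)

    print(f"mean citations (JIF-style average): {mean:.1f}")  # 35.3
    print(f"articles cited below the mean: {below_mean}/10")  # 9/10
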


Editors of Nature strongly advised against replacing opinion
of peer reviewers with citations and related quantitative metrics for
evaluating grant applications and publications (34).
Paying more attention to what is new and important for public health
rather than relying on surrogate metrics and prestige of target journals
was considered as a more justified approach to academic promotion of
authors (35).


Finally, ten principles of research evaluation (The Leiden Manifesto) were published in Nature to guide research managers on how to use a combination of quantitative and qualitative tools (36).
The Leiden Manifesto called to protect locally relevant research, which
can be published in non-English and low-impact media, particularly in
the fields of social sciences and humanities. It pointed to the
differences in publication and citation practices across disciplines
that should not confound crediting and promotion systems; books,
national-language literature, and conference papers can be counted as
highly influential sources in some fields.



EMERGING ALTERNATIVE FACTORS OF THE IMPACT
The digitization of scholarly publishing has offered numerous ways
for increasing the discoverability of individual articles and improving
knowledge transfer (Box 1).
The systematization of searches through digital platforms and databases
has emerged as the main factor of scholarly influence. Authors and
editors alike are currently advised to carefully edit their article's
titles, abstracts, and keywords to increase the discoverability and
related impact (37).
Importantly, a recent analysis of 500 highly-cited articles in the
field of knowledge management revealed a positive correlation between
the number of keywords and citations (38). The same study pointed to the value of article references and page numbers for predicting citations.



Box 1. Factors of the journal impact and importance



  1. Discoverability of journal articles by search engines by properly structuring titles, abstracts, and keywords
  2. Citations received by journal articles over a certain period of time from Scopus or Web of Science databases
  3. Downloads of journal articles within a certain period of time
  4. Attention to the journal by social media (e.g., Twitter, Facebook), blogs, newspapers, and magazines
  5. Journal endorsements and support by professional societies
  6. Completeness and adherence to ethical standards in the journal instructions
Experts advocate shifting from traditional JIF-based evaluations to
combined qualitative and quantitative metrics schemes for scholarly
sources (39).
Citation counts from prestigious citation databases, such as Web of
Science and Scopus, and related arithmetic metrics will remain the
strongholds of the journal ranking in the years to come (40).
Following a recent debate over the distribution of citations
contributing to the JIFs, it is likely that citation metrics will be
accompanied by plots depicting most and least cited items (41).


An argument in favor of a combined approach to the impact
particularly concerns the use of individual articles, which are
published in journals with low or declining JIFs, but are still actively
dowloaded and distributed among professionals, most of whom read but
never publish papers (42, 43).
The combined approach has been already embraced by Elsevier, displaying
top 25 most downloaded articles along with citation metrics from Web of
Science and Scopus on their journal websites. Although there is no
linear correlation, downloads reveal interest of the professional
community and may predict citations (44, 45).


Some established publishers, such as Nature Publishing Group and
Elsevier, have gone further and started providing their readers with
more inclusive information about the use of individual articles by
combining citation metrics and downloads with altmetric scores (46).
The altmetric score is a relatively new multidimensional metric, which
was proposed in 2010 to capture broad online attention of social
media, blogs, and scholarly networking platforms to research output (47).
Essentially, the enhanced online visibility of articles may attract
views, downloads, bookmarks, likes, and comments on various networking
platforms. Pilot studies of Facebook “likes” and Twitter mentions have
pointed to an association between social media attention and traditional
impact metrics, such as citations and downloads, in the field of
psychology and psychiatry (48, 49) and emergency medicine (50).
Although no such association has been reported across many other fields
of science, wider distribution of journal information through social
media holds promise for distinguishing popular and scientifically
important research output (51, 52, 53).
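
To make the idea of a multidimensional attention metric concrete, here
is a toy Python sketch of a weighted sum over mention counts. The
weights, function name, and example figures are invented for
illustration and do not reproduce Altmetric's proprietary algorithm.

  # Invented weights for illustration only (not Altmetric's actual scheme)
  WEIGHTS = {"news": 8.0, "blogs": 5.0, "twitter": 1.0, "facebook": 0.25}

  def attention_score(mentions: dict) -> float:
      """Weighted sum of per-source mention counts (hypothetical weights)."""
      return sum(WEIGHTS.get(source, 0.0) * count
                 for source, count in mentions.items())

  # Example: 2 news stories, 30 tweets, 12 Facebook posts -> 49.0
  print(attention_score({"news": 2, "twitter": 30, "facebook": 12}))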


With the rapid growth of numerous online publication outlets,
reaching relevant readers and evaluators is becoming a critical
factor of impact. Emerging evidence suggests that periodicals
affiliated with and endorsed by relevant professional societies gain
an advantage and attract more citations (54).
Affiliation to a professional society helps a journal maintain a flow
of relevant submissions from the membership and the continuous support
of the scientific community, both valued by prestigious indexing
services. There are even suggestions to preferentially submit articles
to journals supported by professional societies, regardless of their
JIFs. Such an approach can be strategically important for
circumventing substandard open-access periodicals (55, 56).


Finally, several studies have examined the relationship between the
JIF and the completeness of journal instructions with regard to
research and publication ethics (57, 58, 59). In a landmark
comparative analysis of the instructions of 60 medical journals with
JIFs above 10 (e.g., Nature, Science, Lancet) and below 10 (e.g., Gut,
Archives of Internal Medicine, Pain) for the year 2009, ethical
considerations scored significantly better in periodicals with higher
JIFs (57).
The results of the study pointed to the importance of mentioning
research reporting guidelines, such as STrengthening the Reporting of
OBservational studies in Epidemiology (STROBE) and Consolidated
Standards of Reporting Trials (CONSORT), conflicts of interest, local
ethics committee approval, and patient consent for increasing the
impact and attractiveness of journals for authors. Similar results
were obtained in a subsequent analysis of the instructions of
radiological (58), but not medical laboratory, journals (59).
Despite the differences across journals, it can be concluded that
upgrading ethical instructions in line with the examples of the
flagship multidisciplinary and specialist journals is rewarding in
terms of attracting the best possible and complete research reports (60).



CONCLUSION
The lasting debates over the JIF, its uses, and its misuses highlight
several points of interest to all stakeholders of science communication.
First of all, authors are currently offered numerous options for
choosing the best target journals for their research. The JIFs may
influence their choices along with other journal metrics and emerging
alternative factors of impact. Authors should realize that not all
journals with JIFs adhere to high ethical standards, and that some
periodicals without JIFs but with the support of professional
societies can be better platforms for relevant research. Journals
accepting locally important articles in English or national languages
can still be influential and useful (61).
Journal editors have an obligation toward their authors to widely
distribute relevant information to increase the use of the articles and
attract citations. Social media and scholarly networking platforms can
be instrumental in this regard. Regularly revising and upgrading journal
instructions may also improve the structure and ethical soundness of
the publications, and translate into discoverability and
attractiveness for indexing services (60).
Editors who aim to boost their JIFs should not underestimate the
importance of publishing different types of articles, regardless of
their citation chances. Manipulating the number of articles counted in
the denominator of the JIF formula cannot be considered the best
service to authors.


Indexers of Thomson Reuters databases should respond to arguments
that point to the need for revising the original formula of the JIF (62, 63).
Remarkably, editorials and letters, the so-called noncitable items,
which have long been excluded from the denominator of the JIF, have
changed their influence over the past decades. These items,
particularly in modern biomedicine, contain long lists of references,
affecting the JIF calculations in many ways. It should also be
stressed that the lack of transparency of the JIF calculations, which
is partly due to the lack of open access to the citations tracked by
Thomson Reuters databases (64), damages the reputation of the JIF as a
reliable and reproducible scientometric tool.
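
A hypothetical worked example illustrates this asymmetry. Suppose a
journal publishes 100 citable items over the two-year window and they
attract 200 citations, giving JIF = 200 / 100 = 2.0. If the journal
also publishes 40 editorials and letters that attract 60 citations,
those citations enter the numerator while the items stay out of the
denominator, so JIF = (200 + 60) / 100 = 2.6.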


Finally, research evaluators should consider the true impact of
scholarly articles, which is shaped by their novelty, methodological
quality, ethical soundness, and relevance to the global and local
scientific communities.




Notes
DISCLOSURE: Armen
Yuri Gasparyan is an expert of the Scopus Content Selection & Advisory
Board (since 2015), a former council member of the European Association
of Science Editors, and chief editor of European Science Editing
(2011–2014). All other authors have no potential conflicts of interest
to disclose.
AUTHOR CONTRIBUTION: Conceptualization:
Gasparyan AY, Nurmashev B, Kitas GD. Data curation: Yessirkepov M,
Udovik EE, Kitas GD. Visualization: Gasparyan AY, Yessirkepov M. Writing
- original draft: Gasparyan AY. Writing - review & editing:
Gasparyan AY, Nurmashev B, Yessirkepov M, Udovik EE, Baryshnikov AA,
Kitas GD.

References


1. Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90–93.


2. Gurnhill G. PeerJ receives its first (partial) impact factor [Internet]. [accessed 6 November 2016].


3. Libkind AN, Markusova VA, Mindeli LE. Bibliometric
indicators of Russian journals by JCR-science edition, 1995-2010. Acta
Naturae 2013;5:6–12.


4. Garfield E. Journal impact factor: a brief review. CMAJ 1999;161:979–980.


5. Chen M, Zhao MH, Kallenberg CG. The impact factor of
rheumatology journals: an analysis of 2008 and the recent 10 years.
Rheumatol Int 2011;31:1611–1615.


6. Bredan A, Benamer HT, Bakoush O. Why are journals from
less-developed countries constrained to low impact factors? Libyan J Med
2014;9:25774.


7. Abramo G, D’Angelo CA, Di Costa F. The effect of a
country’s name in the title of a publication on its visibility and
citability. Scientometrics 2016;109:1895–1909.


8. Tahamtan I, Safipour Afshar A, Ahamdzadeh K. Factors
affecting number of citations: a comprehensive review of the literature.
Scientometrics 2016;107:1195–1225.


9. Nielsen MB, Seitz K. Impact factors and prediction of popular topics in a journal. Ultraschall Med 2016;37:343–345.


10. Zhao X, Guo L, Lin Y, Wang H, Gu C, Zhao L, Tong X. The
top 100 most cited scientific reports focused on diabetes research.
Acta Diabetol 2016;53:13–26.


11. Van Noorden R, Maher B, Nuzzo R. The top 100 papers. Nature 2014;514:550–553.


12. Bhandari M, Montori VM, Devereaux PJ, Wilczynski NL,
Morgan D, Haynes RB. Hedges Team. Doubling the impact: publication of
systematic review articles in orthopaedic journals. J Bone Joint Surg Am
2004;86-A:1012–1016.


13. Howard L, Wilkinson G. Impact factors of psychiatric journals. Br J Psychiatry 1997;170:109–112.


14. Uthman OA, Okwundu CI, Wiysonge CS, Young T, Clarke A.
Citation classics in systematic reviews and meta-analyses: who wrote the
top 100 most cited articles? PLoS One 2013;8:e78517.


15. Page MJ, McKenzie JE, Kirkham J, Dwan K, Kramer S,
Green S, Forbes A. Bias due to selective inclusion and reporting of
outcomes and analyses in systematic reviews of randomised trials of
healthcare interventions. Cochrane Database Syst Rev 2014:MR000035.


16. Zhang J, Wang J, Han L, Zhang F, Cao J, Ma Y.
Epidemiology, quality, and reporting characteristics of systematic
reviews and meta-analyses of nursing interventions published in Chinese
journals. Nurs Outlook 2015;63:446–455.e4.


17. Roush GC, Amante B, Singh T, Ayele H, Araoye M, Yang D,
Kostis WJ, Elliott WJ, Kostis JB, Berlin JA. Quality of meta-analyses
for randomized trials in the field of hypertension: a systematic review.
J Hypertens 2016;34:2305–2317.


18. Bavdekar SB, Save S. Choosing the right journal for a scientific paper. J Assoc Physicians India 2015;63:56–58.


19. Suo Q. Chinese academic assessment and incentive system. Sci Eng Ethics 2016;22:297–299.


20. Shao JF, Shen HY. Research assessment and monetary rewards: the overemphasized impact factor in China. Res Eval 2012;21:199–203.


21. Li D, Agha L. Research funding. Big names or big ideas:
do peer-review panels select the best science proposals? Science
2015;348:434–438.


22. Sohail S. Of predatory publishers and spurious impact factors. J Coll Physicians Surg Pak 2014;24:537–538.


23. Beall J. Dangerous predatory publishers threaten medical research. J Korean Med Sci 2016;31:1511–1513.


24. Abdullgaffar B. Impact factor in cytopathology
journals: what does it reflect and how much does it matter?
Cytopathology 2012;23:320–324.


25. Dimitrov JD, Kaveri SV, Bayry J. Metrics: journal’s impact factor skewed by a single paper. Nature 2010;466:179.


26. Wilhite AW, Fong EA. Scientific publications. Coercive citation in academic publishing. Science 2012;335:542–543.


27. Chorus C, Waltman L. A large-scale analysis of impact factor biased journal self-citations. PLoS One 2016;11:e0161021.


28. Kiesslich T, Weineck SB, Koelblinger D. Reasons for
journal impact factor changes: influence of changing source items. PLoS
One 2016;11:e0154199.


29. Undas A. The 2015 impact factor for Pol Arch Med Wewn: comments from the editor‑in‑chief. Pol Arch Med Wewn 2016;126:453–456.


30. San Francisco declaration on research assessment:
putting science into the assessment of research [Internet]. [accessed 6
November 2016].


31. Wilsdon J. We need a measured approach to metrics. Nature 2015;523:129.


32. Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature 2016;535:210–211.


33. Time to remodel the journal impact factor. Nature 2016;535:466.


34. A numbers game. Nature 2015;523:127–128.


35. Benedictus R, Miedema F, Ferguson MW. Fewer numbers, better science. Nature 2016;538:453–455.


36. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I.
Bibliometrics: the Leiden Manifesto for research metrics. Nature
2015;520:429–431.


37. Bekhuis T. Keywords, discoverability, and impact. J Med Libr Assoc 2015;103:119–120.


38. Akhavan P, Ebrahim NA, Fetrati MA, Pezeshkan A. Major
trends in knowledge management research: a bibliometric study.
Scientometrics 2016;107:1249–1264.


39. Scarlat MM, Mavrogenis AF, Pećina M, Niculescu M.
Impact and alternative metrics for medical publishing: our experience
with International Orthopaedics. Int Orthop 2015;39:1459–1464.


40. Bollen J, Van de Sompel H, Hagberg A, Chute R. A
principal component analysis of 39 scientific impact measures. PLoS One
2009;4:e6022.


41. Blanford CF. Impact factors, citation distributions and journal stratification. J Mater Sci 2016;51:10319–10322.


42. Haitjema H. Impact factor or impact? Ground Water 2015;53:825.


43. Gibson R. Considerations on impact factor and publications in molecular imaging and biology. Mol Imaging Biol 2015;17:745–747.


44. Della Sala S, Cubelli R. Downloads as a possible index of impact? Cortex 2013;49:2601–2602.


45. Gregory AT, Denniss AR. Impact by citations and
downloads: what are heart, lung and circulation’s top 25 articles of all
time? Heart Lung Circ 2016;25:743–749.


46. Rhee JS. High-impact articles-citations, downloads, and altmetric score. JAMA Facial Plast Surg 2015;17:323–324.


47. Brigham TJ. An introduction to altmetrics. Med Ref Serv Q 2014;33:438–447.


48. Ringelhan S, Wollersheim J, Welpe IM. I like, I cite?
Do facebook likes predict the impact of scientific work? PLoS One
2015;10:e0134389.


49. Quintana DS, Doan NT. Twitter article mentions and
citations: an exploratory analysis of publications in the American
Journal of Psychiatry. Am J Psychiatry 2016;173:194.


50. Barbic D, Tubman M, Lam H, Barbic S. An analysis of altmetrics in emergency medicine. Acad Emerg Med 2016;23:251–268.


51. Amir M, Sampson BP, Endly D, Tamai JM, Henley J, Brewer
AC, Dunn JH, Dunnick CA, Dellavalle RP. Social networking sites:
emerging and essential tools for communication in dermatology. JAMA
Dermatol 2014;150:56–60.


52. Cosco TD. Medical journals, impact and social media: an ecological study of the Twittersphere. CMAJ 2015;187:1353–1357.


53. Tonia T, Van Oyen H, Berger A, Schindler C, Künzli N.
If I tweet will you cite? The effect of social media exposure of
articles on downloads and citations. Int J Public Health
2016;61:513–520.


54. Karageorgopoulos DE, Lamnatou V, Sardi TA, Gkegkes ID,
Falagas ME. Temporal trends in the impact factor of European versus USA
biomedical journals. PLoS One 2011;6:e16300.


55. Putirka K, Kunz M, Swainson I, Thomson J. Journal
impact factors: their relevance and their influence on society-published
scientific journals. Am Mineral 2013;98:1055–1065.


56. Romesburg HC. How publishing in open access journals
threatens science and what we can do about it. J Wildl Manage
2016;80:1145–1151.


57. Charlier P, Bridoux V, Watier L, Ménétrier M, de la
Grandmaison GL, Hervé C. Ethics requirements and impact factor. J Med
Ethics 2012;38:253–255.


58. Charlier P, Huynh-Charlier I, Hervé C. Ethics requirements and impact factor in radiological journals. Acta Radiol 2016;57:NP3.


59. Horvat M, Mlinaric A, Omazic J, Supak-Smolcic V. An
analysis of medical laboratory technology journals’ instructions for
authors. Sci Eng Ethics 2016;22:1095–1106.


60. Gasparyan AY, Ayvazyan L, Gorin SV, Kitas GD. Upgrading
instructions for authors of scholarly journals. Croat Med J
2014;55:271–280.


61. Gasparyan AY, Hong ST. Celebrating the achievements and
fulfilling the mission of the Korean Association of Medical Journal
Editors. J Korean Med Sci 2016;31:333–335.


62. Sewell JM, Adejoro OO, Fleck JR, Wolfson JA, Konety BR.
Factors associated with the journal impact factor (JIF) for urology and
nephrology journals. Int Braz J Urol 2015;41:1058–1066.


63. Liu XL, Gai SS, Zhou J. Journal impact factor: do the numerator and denominator need correction? PLoS One 2016;11:e0151414.


64. Fernandez-Llimos F. Bradford’s law, the long tail
principle, and transparency in journal impact factor calculations. Pharm
Pract (Granada) 2016;14:842.