Challenges in publishing: producing, assuring and communicating quality
Kangas A., Hujala T. (2015). Challenges in publishing: producing, assuring and communicating quality. Silva Fennica vol. 49 no. 4 article id 1304. http://dx.doi.org/10.14214/sf.1304
Abstract
This paper is based on the session “How to make forest science available for all? Publishers’, editors’, and authors’ challenges” at the IUFRO XXIV World Congress, organized by Pekka Nygren and Eeva Korpilahti from the Finnish Society of Forest Science. The presenters addressed topical problems of publishing scientific knowledge from different perspectives. The talks covered the development of journals, publications and submissions; the benefits and drawbacks of open access and of electronic versus traditional publishing; the possibilities to promote interesting papers from either the journal’s or the author’s perspective; and the problems of disseminating scientific results to the end users. In this paper, a few prevalent viewpoints inspired by the session are raised and discussed, with some suggestions included.
Received 21 January 2015. Accepted 23 June 2015. Published 29 June 2015.
1 Exponential growth of scientific publishing
The first peer-reviewed articles were published in 1665 in the journals Philosophical Transactions and Le Journal des Sçavans; by 2006 there were already 23 750 peer-reviewed titles (Jinha 2010). The number of submissions has grown exponentially during the last 10 years, and the number of journals and published papers has doubled in that time (Dreyer 2014). Jinha (2010) estimated that in 2009 the total number of published peer-reviewed articles exceeded 50 million. The number of forestry journals, however, has not increased that fast. The number of papers dealing with forests during 2002–2011 was about 20 000, but only 21% of them were published in forestry journals. This may mean that the hottest topics are published elsewhere, in series with higher impact factors, which may be a threat to the future of forestry journals in general (Dreyer 2014).
2 Is open access a problem or a solution?
In recent years, open access (OA) publishing has been recommended by many research organizations as well as funding agencies (Tutkimuksen avoimuudella... 2014). Open access publications are considered especially important in developing countries, where they may be the most prevalent means of getting access to scientific articles. When OA articles have been compared with non-OA articles in the same journal, the OA papers have had a clear advantage in citations: Eysenbach (2006) noted that after 4–10 months the probability of not receiving any citations was 49% for non-OA and 37% for OA articles, and the average number of citations was also higher for OA articles (1.5 versus 1.2). There is therefore a strong incentive for researchers to publish in OA journals, if they have funds for paying the publishing fees. It is thus reasonable to reserve funds in project budgets for publishing (also) in OA journals; the drawback is that since no corresponding savings are expected from journal subscriptions, the increasing OA publishing expenses will reduce the money available for actual research.
There are, however, also a number of “predatory” open access journals whose quality is not adequate and which do not even follow reasonable ethical standards. Beall (http://scholarlyoa.com/publishers/) lists 825 OA journals of dubious quality. The predatory journals listed may, for instance, show fake impact factors, republish already published papers, publish non-scientific texts, or use unqualified reviewers or fake reviews (Beall 2012). In the worst case, these journals are published by fake institutions with fake editors (see http://scholarlyoa.com/2014/01/09/questionable-oa-publisher-launches-with-a-clever-website-and-52-new-journals/).
3 Is peer-review still the best quality assurance?
Having a paper peer-reviewed is not necessarily a guarantee of good quality, even though the peer-review process has been shown to improve quality (Armstrong 1997). Severe criticism has sometimes been presented. For instance, Richard Horton, editor of The Lancet, has written (2000): “We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.” Yet peer review has been deemed the best quality assurance available.
The increasing number of journals and submissions means that the number of review reports needed has grown exponentially. Rejection rates have also increased, with the result that the same papers are reviewed several times in several journals: with a 50% rejection rate, each published paper may have been reviewed by 2–3 journals and received 5–10 assessments before publication (Hochberg 2010). This is a burden on the whole peer-review system and might reduce the average quality of review reports.
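To make the arithmetic behind this estimate concrete, the following minimal sketch (illustrative assumptions only: a uniform rejection rate and a fixed average number of reviewers per submission round, not figures taken from Hochberg 2010) computes the expected number of journals and review reports consumed per published paper.

```python
# Illustrative sketch: expected review burden per published paper, assuming a
# uniform rejection rate and a fixed average number of reviewers per round.

def expected_review_burden(rejection_rate=0.5, reviewers_per_round=2.5):
    """Return (expected submission rounds, expected review reports)."""
    acceptance_rate = 1.0 - rejection_rate
    # The number of journals tried until acceptance follows a geometric
    # distribution with mean 1 / acceptance_rate.
    expected_rounds = 1.0 / acceptance_rate
    expected_reviews = expected_rounds * reviewers_per_round
    return expected_rounds, expected_reviews

rounds, reviews = expected_review_burden()
print(f"Expected journals per published paper: {rounds:.1f}")         # 2.0
print(f"Expected review reports per published paper: {reviews:.1f}")  # 5.0
```

With a 50% rejection rate this simple model already gives about 2 journals and 5 review reports per published paper, in line with the ranges cited above.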
As a possible solution to this problem, so-called open peer review has been suggested. For instance, all submitted articles could be published immediately and the review process could take place afterwards. Any reviewer wishing to do so could comment, and the reviews could be published alongside the manuscript. The authors could still withdraw the published paper and revise it according to the comments received. The work would thus be public immediately, and papers that the reviewers find good or bad would be easily distinguished. Having the reviews published would remove the possibility of submitting poor-quality manuscripts to different journals until favorable reviewers are found. On the other hand, it would also remove the possibility for reviewers to stop a good-quality manuscript from being published simply because they dislike it. Open peer review could possibly both reduce the number of submissions and improve their quality. It might also improve the quality of the whole review process, as reviewers would gain scientific merit from writing good reviews and would have to publish under their own names. On the other hand, it might reduce the number of researchers willing to review.
Another fairly recently established effort to renew the peer-review concept involves submitting manuscripts to a qualified researchers’ community rather than to a single journal. The author then receives voluntary reviews from non-associated researchers and has the opportunity to submit the manuscript to a journal of his/her choice after favorable review reports have been obtained. This type of activity is organized by Peerage of Science Ltd (http://www.peerageofscience.org/). The promise of the peer-reviewers’ community is to assess, show and increase the value of peer reviews and to speed up the process of matching a paper with a relevant journal. Papers are given credit via quality indices, which measure the quality of the reviews as assessed by other reviewers, the number of reviews received, and the review-quality-weighted average of scores given to seven categories of the article: breadth, impact, originality, methods, data, inference, and literature. Both open peer review and Peerage of Science include features that may be considered when refining the whole peer-reviewing system.
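As an illustration only, the sketch below computes a review-quality-weighted average over the seven categories mentioned above; the score scale, data structures and weighting are assumptions made for the example, not Peerage of Science’s actual algorithm.

```python
# Illustrative sketch only: a review-quality-weighted article score over the
# seven categories named in the text. The 1-5 scale, data structures and
# weighting are assumptions, not the actual Peerage of Science formula.

CATEGORIES = ["breadth", "impact", "originality", "methods",
              "data", "inference", "literature"]

def weighted_article_score(reviews):
    """reviews: list of dicts with 'quality' (the review's peer-assessed
    quality) and 'scores' (a category -> score mapping)."""
    total_weight = sum(r["quality"] for r in reviews)
    if total_weight == 0:
        return None
    # Average each review's category scores, then weight by review quality.
    per_review_means = [
        sum(r["scores"][c] for c in CATEGORIES) / len(CATEGORIES) for r in reviews
    ]
    weighted_sum = sum(r["quality"] * m for r, m in zip(reviews, per_review_means))
    return weighted_sum / total_weight

reviews = [
    {"quality": 4.0,  # a highly rated review counts more
     "scores": dict(zip(CATEGORIES, [4, 3, 5, 4, 3, 4, 4]))},
    {"quality": 2.0,
     "scores": dict(zip(CATEGORIES, [3, 3, 3, 2, 3, 3, 4]))},
]
print(round(weighted_article_score(reviews), 2))  # ~3.57
```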
4 Are the best publications those that are cited most?
As the number of journals and published papers increases exponentially, there is a danger that the availability of publications becomes a more important factor than the quality of the research in determining which publications are actually read and cited (Hairiah 2014). This is especially important if open access journals have (on average) lower quality than journals requiring subscription.
To improve the visibility of good science, it would be important that the best papers are open access (Hairiah 2014), or at least that a summary of such papers, preferably in non-technical language, is available to everyone (Way 2014). It will also be increasingly important to publish short syntheses of publications based on reliable science in order to reach the end users (Stelzer 2014). To improve the impact of science in general, we would need to be able to better separate good papers from poor ones. It could be both a responsibility and an asset for publishers to distinguish the very best articles and provide OA for those papers even without author fees.
In recent years, researchers and journals have been measured with metrics based on impact factors. However, 80% of the impact of a journal is attributable to 20% of its papers (Neylon and Wu 2009). Furthermore, variation in the volume of research between subject fields affects the general level of impact factors. For this reason, publishers have started to provide several modified citation indices that, for example, normalize the impact factor with the total number of citations in the subject area. Even then, the quality of single articles cannot be measured accurately with journal impact factors. In fact, since articles went online, the importance of impact factors has diminished: the proportion of highly cited articles in high-impact journals has decreased and that in less highly ranked journals has grown since the 1990s (Lozano et al. 2012). This may be because authors increasingly rely on the results of search engines. With this development, a good selection of keywords may markedly increase the visibility of an article.
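The field normalization mentioned above can be pictured with a small sketch that divides a journal’s citations per article by the average citation rate of its subject field; the figures are invented for illustration, and real indicators use more elaborate definitions.

```python
# Illustrative sketch: a field-normalized citation score for a journal, i.e.
# citations per article divided by the average citation rate of the subject
# field. All numbers are invented; real indicators use more elaborate rules.

def field_normalized_score(journal_citations, journal_articles,
                           field_citations, field_articles):
    journal_rate = journal_citations / journal_articles
    field_rate = field_citations / field_articles
    return journal_rate / field_rate

# A hypothetical forestry journal with 1.8 citations per article looks weak
# next to a biomedical journal with 4.0, but relative to its own field it
# ranks higher.
forestry = field_normalized_score(900, 500, 30_000, 25_000)        # field mean 1.2
biomedical = field_normalized_score(2_000, 500, 500_000, 100_000)  # field mean 5.0
print(f"forestry: {forestry:.2f}, biomedical: {biomedical:.2f}")    # 1.50 vs 0.80
```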
5 How to better measure the impact of articles?
One obvious metric for measuring the quality of single articles is the number of times the article is cited by others. However, citations accumulate slowly over the years and cannot be used to evaluate the quality of recent papers. It is also noteworthy that different types of articles may have different impact life-cycles, e.g. highly relevant for a shorter time versus moderately relevant for a longer time. Thus, one should be careful not to place unintended valuations on different article types when applying citation indices over time.
For article-level impact measurement, new measures (so-called altmetrics) have been introduced (Thelwall et al. 2013). These include the numbers of views, saves, and mentions on Twitter, Facebook or blogs (Wennström 2014). Altmetrics may also measure the importance of the research from the societal perspective rather than just the scientific perspective. They may help researchers and end users to find the papers that are most relevant to their own work (Neylon and Wu 2009), but they also introduce the risk of reading the most popular rather than the most important papers. Thelwall et al. (2013) found that altmetrics correlated moderately with the citations of papers published in Nature and Science, but the best alternative metric was the recommendation at F1000 (http://f1000.com/prime/about/whatis/how), in which peer-nominated faculty members recommend the best articles they have read in their fields. Such a recommendation system (with the names of the people giving the recommendations) might best describe the quality of single articles.
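The kind of association examined by Thelwall et al. (2013) can be illustrated with a short sketch that computes a Spearman rank correlation between an altmetric count and later citations for a set of articles; the data below are invented for demonstration, and only the method resembles their comparisons.

```python
# Illustrative sketch: Spearman rank correlation between an altmetric count
# (e.g. tweets) and later citations for a set of articles. Data are invented.

def ranks(values):
    """1-based ranks (this toy example has no tied values)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(x, y):
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

tweets = [12, 0, 5, 40, 3, 8, 1, 22]     # invented altmetric counts
citations = [4, 6, 15, 18, 2, 3, 9, 11]  # invented citation counts
print(f"Spearman rank correlation: {spearman(tweets, citations):.2f}")  # ~0.40
```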
Both journals and authors can use these new measures in actively promoting new articles. Authors may promote their work, for instance, in social media such as Facebook and Twitter. It has already been noted that promoted works get more downloads and citations than average papers (Wennström 2014). This increases the visibility of science, but there is also a danger that what gets cited is not the best but the most actively promoted research. Journals may promote the papers that the editors find the most important or of the best quality, for instance by asking reviewers to develop their reviews into commentaries that are also reviewed and published with open access (Way 2014). Such summaries of the important aspects of the papers will help researchers to notice publications of importance to them. Journals could also promote articles by publishing other supporting material, such as audio-supported slide-show presentations, alongside the article (e.g. http://dx.doi.org/10.1016/j.jenvman.2014.05.029).
Reviewers could also provide standardized labels that highlight the particularly interesting, relevant or strong features of an article. Scientific quality goes beyond scientific soundness (which is a threshold level for a proper article) and is a product of several properties: originality, theory, methodology, increment to knowledge, etc. It would be helpful for readers to know for what reason an article has received a high score. Various score decompositions are currently used in journals’ review sheets, but at present they appear to be requested for the purposes of editorial decisions only. In the future, the most relevant relative strengths of articles, as seen by the reviewers and the editorial board, could also be published.
High-ranking journals do not necessarily accept topics that are important at the national level (Hairiah 2014). As researchers strive to publish in high-ranked journals to gain merit, it may also be difficult to keep running journals that concentrate on national-level questions or applied research (Moser 2014). If the quality of research were evaluated not solely on the basis of the journal impact factor but also with article-level measures, it would be easier to persuade researchers to publish their work also in national languages and on national topics, possibly increasing the societal impact of the research.
6 What can single researchers do?
Universities and research institutes, as well as commercial publishers, hold a steering role in developing the operational environment of scientific publishing. In any case, the body of research publications is growing rapidly, predatory journals violate research ethics, open access publishing continuously competes with subscription publishing, peer review faces overhaul challenges, and the quality of single articles receives growing attention.
Individual researchers can manage the situation by paying attention to research ethics. One should only publish in forums that follow good scientific practices. To check this, it is advisable to learn about predatory publishers and to study carefully the processes of the publisher and the journal of interest. An ethically reliable journal has a rigorous and transparent publishing process, and its publications are listed in major indexing services (Web of Knowledge, Scopus). In addition, a researcher is advised to think through his/her publication strategy, i.e. to make deliberate choices about why to publish, and therefore to whom, where, and how to reach the target audiences. Choosing one’s scientific profile, in terms of how to weight the pursuit of scientific merit versus societal impact, informs not only doing research but also selecting journals for submission and promoting the published papers in various channels. Typically, an individual’s publishing strategy means balancing between traditional citation indices and societal effectiveness.
For doctoral students, however, hard scientific competition usually offers limited opportunities to select where to publish. Choosing a low-quality journal may reduce career opportunities, while established high-class journals may easily reject newcomers’ papers. A solution may then be to identify an ethically operating, evolving journal that has either an article-quality orientation or up-to-date promotion features. It is important to remember that, in the end, a good-quality paper will find its readers and citers (nearly) regardless of the journal. Both doctoral students and other researchers will increasingly need clear information on publishing alternatives and their potential impacts on their scientific profiles and careers.
7 Conclusions
In recent years, citation indices have been used more and more in the evaluation of journals, authors and papers, but also in the evaluation of universities and even fields of science. At the same time, all these measures have been highly debated, as they may be misleading for many different reasons. We therefore need to keep in mind that the measures do not define good articles, journals or authors, but they may be helpful in finding them. Open access, too, may have both dubious and good features. Yet as wide access as possible to the best papers is to the benefit of both authors and journals, but most of all of science.
References
Armstrong J.S. (1997). Peer review for journals: evidence on quality control, fairness and innovation. Science and Engineering Ethics 3: 63–84. http://dx.doi.org/10.1007/s11948-997-0017-3.
Beall J. (2012). Criteria for determining predatory open-access publishers (2nd edition). http://scholarlyoa.com/2012/11/30/criteria-for-determining-predatory-open-access-publishers-2nd-edition/. [Cited 20 November 2014].
Beall J. (2012). Predatory publishers are corrupting open access. Nature 489. p. 179. http://dx.doi.org/10.1038/489179a.
Eysenbach G. (2006). Citation advantage of open access articles. PLoS Biology 4(5): e157. http://dx.doi.org/10.1371/journal.pbio.0040157.
Hochberg M.E. (2010). Youth and the tragedy of the reviewer commons. Ideas in Ecology and Evolution 3: 8–10. http://dx.doi.org/10.4033/iee.2010.3.2.c.
Horton R. (2000). Genetically modified food: consternation, confusion, and crack-up. Medical Journal of Australia 172: 148–149.
Jinha A.E. (2010). Article 50 million: an estimate of the number of scholarly articles in existence. Learned Publishing 23: 258–263. http://dx.doi.org/10.1087/20100308.
Lozano G.A., Larivière V., Gingras Y. (2012). The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63: 2140–2145. http://dx.doi.org/10.1002/asi.22731.
Neylon C., Wu S. (2009). Article-level metrics and the evolution of scientific impact. PLoS Biology 7(11): e1000242. http://dx.doi.org/10.1371/journal.pbio.1000242.
Thelwall M., Haustein S., Larivière V., Sugimoto C.R. (2013). Do altmetrics work? Twitter and ten other social web services. PLoS ONE 8(5): e64841. http://dx.doi.org/10.1371/journal.pone.0064841.
Tutkimuksen avoimuudella yllättäviä löytöjä ja luovaa oivaltamista. Avoimen tieteen ja tutkimuksen tiekartta 2014–2017. (2014). Opetus- ja kulttuuriministeriön julkaisuja 2014:20. [In Finnish].
Total of 11 references
Original presentations
The original presentations (available at http://www.iufro.org/download/file/16684/4139/iwc14-abstracts_pdf, p. 108–109):
What future for research journals in forest and wood sciences? | Erwin Dreyer | Inra |
Implications of changing publication formats for public accessibility of developing country forest and agroforestry science | Kurniatun Hairiah | Brawijaya University |
From paper to bits - how to make and keep 100 years of forest science available online? | Pekka Nygren | Finnish Society of Forest Science |
Increasing access to forest science research while improving research impact: a perspective from Tree Physiology | Danielle Way | University of Western Ontario |
Search Engine Optimisation - should we edit or not? | Sofie Wennström | Taylor & Francis |
Progress in Knowledge Dissemination: Combining Fundamental and Applied Research Journals | W. Keith Moser | USDA, Forest Service |
Bringing Forest Science to the End-user: Three Key Challenges | Henry Stelzer | University of Missouri |
Development of boundary organizations to span barriers between fire science and fire managers in the United States | Susan Kocher | University of California Cooperative Extension |