Tuesday, 28 July 2015

Taylor & Francis Author Services - Writing your article - Search engine optimization

 Source: http://journalauthors.tandf.co.uk/preparation/writing.asp

Search engine optimization


It is essential that authors and editors make every effort to ensure their articles are found online,
quickly and accurately, ideally within the top three hits. Search engine optimization (SEO) is
a means of making your article more visible to anyone who might be looking for it.
You need to ensure that search engines index your article, so that it comes up in a good
position in the list of results when a reader enters keywords into a search engine.
This makes it more likely that people will read your article. A strong correlation exists
between online hits and subsequent citations for journal articles.
We know that many readers start their research by using academic search engines such as
Google Scholar™.




How do academic search engines work?

Search engines each have their own algorithms for ranking sites; some rank by the relevance
of a page's content and by the links pointing to it from other websites. Some search engines
use metadata, or "meta-tagging", to assess relevant content. Most search engines, however,
scan a page for keyword phrases, giving extra weight to phrases in headings and to repeated
phrases. The number of other sites that link to a web page also indicates how that page is valued.
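As a toy illustration of that keyword-phrase scanning (a deliberately simplified sketch, not any real engine's ranking algorithm), the Python snippet below scores a page by counting occurrences of a phrase, with an arbitrary assumed weight of 3 for matches in the heading:

    def keyword_score(heading, body, phrase, heading_weight=3):
        # Count phrase occurrences, weighting heading matches more heavily.
        phrase = phrase.lower()
        return (heading_weight * heading.lower().count(phrase)
                + body.lower().count(phrase))

    heading = "Search engine optimization for journal articles"
    body = ("Search engine optimization (SEO) makes an article more visible. "
            "Good keywords improve how search engines rank the article.")
    print(keyword_score(heading, body, "search engine"))  # -> 5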


Please see the detailed guidelines provided by
Google Scholar here.






What can I do as an author or editor?

We know that the use of keywords helps to increase the chances of the article being
located, and therefore cited. Which words in your article are the most important? Put yourself in the
position of a reader. Which words might they type in to a search engine if they were looking for
something on your topic? Authors should know the key phrases for their subject area. Reference to
an established common indexing standard in a particular discipline is a useful starting point -
GeoRef, ERIC Thesaurus, PsycInfo, ChemWeb, and so on. There is further guidance on choosing keywords in the Keywords section below.





The title and abstract you provide are also very important for
search engines. Some search engines will only index these two parts of your article. Your article
title should be concise, accurate, and informative. The title should be specific and it should
contain words that readers might be searching for. This will make it more likely that people will
find and read your article. Remember that you are writing for people as well as search engines!
And do not be tempted to over-optimize your article (as discussed in the first reference below).
The title must reflect the content of your article; if it does not, readers will be confused or
disappointed. The title must also be comprehensible to the general reader outside your field.
Where possible avoid abbreviations, formulae, and numbers. The following should also usually
be omitted: "Investigation of..."; "Study of..."; "More about..."; "...revisited".





Think about how you can increase the number of people reading and citing your article
(see our detailed guidance here),
because the number of citations will influence where
it appears in the rankings. Link to the article once it is published, for example, from
your blog, via social networking sites, and from pages on your university website.
(Tips on promoting your article can be found here).





Further reading




Beel, J. and Gipp, B. (2010) "Academic search engine spam and Google Scholar's resilience against it", The Journal of Electronic Publishing, 13(3).

Beel, J., Gipp, B. and Wilde, E. (2010) "Academic search engine optimization (ASEO): optimizing scholarly literature for Google Scholar and Co.", Journal of Scholarly Publishing, 41(2), pp. 176–190.


Taylor & Francis Author Services - Writing your article

Taylor & Francis Author Services - Writing your article

 Source: http://journalauthors.tandf.co.uk/submission/ScholarOne.asp

Keywords


It is essential that authors, editors, and
publishers make every effort to ensure articles are found online,
quickly and accurately, ideally within the top three hits. The key to
this is the appropriate use of keywords.



Recent evidence suggests that a strong
correlation exists between online hits and subsequent citations for
journal articles. Search engines rank highly as starting points:
students are increasingly likely to start their research with
Google Scholar™ rather than with the traditional starting point of
Abstracting and Indexing resources.



We know that the use of keywords helps to increase the chances of the article being located, and therefore cited.


Search engines each have their own algorithms for
ranking sites; some rank by the relevance of a page's content and by the
links pointing to it from other websites. Some search engines use metadata, or
"meta-tagging", to assess relevant content. Most search engines, however,
scan a page for keyword phrases, giving extra weight to phrases in
headings and to repeated phrases. The number of other sites that link to
a web page also indicates how that page is valued.



Authors should know the key phrases for their
subject area. Reference to an established common indexing standard in a
particular discipline is a useful starting point - GeoRef, ERIC
Thesaurus, PsycInfo, ChemWeb, and so on.



Keyword terms may differ from the actual text
used in the title and abstract, but should accurately reflect what the
article is about. Why not try searching for the keywords you have
chosen, before you submit your article? This will help you see how
useful they are.



At Taylor & Francis, we are continuously
working to improve the search engine rankings for our journals. Our
linking program extends to many Abstracting and Indexing databases and
library sites, and includes participation in CrossRef™.



Taylor & Francis Author Services - Writing your article

Network-based Citation Metrics: Eigenfactor vs. SJR | The Scholarly Kitchen

 Source: http://scholarlykitchen.sspnet.org/2015/07/28/network-based-citation-metrics-eigenfactor-vs-sjr/

Network-based Citation Metrics: Eigenfactor vs. SJR

Image: Journal Citation Network (or just a duck).
Is the influence of a journal best measured by the number of
citations it attracts or by the citations it attracts from other
influential journals?


The purpose of this post is to describe, in plain English, two
network-based citation metrics: Eigenfactor[1] and SCImago Journal Rank
(SJR)[2], compare their differences, and evaluate what they add to our
understanding of the scientific literature.


Both Eigenfactor and SJR are based on the number of citations a
journal receives from other journals, weighted by their importance, such
that citations from important journals like Nature carry
more weight than citations from less important titles. Later in this post, I'll
describe exactly how a journal derives its importance from the network.


In contrast, metrics like the Impact Factor do not weight citations:
one citation is worth one citation, whatever the source. In this sense,
the Eigenfactor and SJR get closer to measuring importance as a social
phenomenon, where influential people hold more sway over the course of
business, politics, entertainment and the arts. For the Impact Factor,
importance is equated with popularity.


Eigenfactor and SJR are both based on calculating something called eigenvector centrality,
a mathematical concept that was developed to understand social networks
and first applied to measuring journal influence in the mid-seventies.
Google’s PageRank is based on the same concept.


Eigenvector centrality is calculated recursively, such
that values are transferred from one journal to another in the network
until a steady-state solution (also known as an equilibrium) is
reached. Often 100 or so iterations are used before values become
stable. Like a hermetically sealed ecosystem, value is neither created
nor destroyed, just moved around.
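To make the recursion concrete, here is a minimal sketch in Python (a toy three-journal network with invented citation counts, not the actual Eigenfactor or SJR implementation):

    import numpy as np

    # Toy citation network: C[i, j] = citations from journal j to journal i
    # (hypothetical counts, for illustration only).
    C = np.array([[ 0., 30., 10.],
                  [20.,  0., 40.],
                  [ 5., 15.,  0.]])

    # Column-normalise so each journal splits its outgoing citation
    # "votes" into fractions that sum to one.
    M = C / C.sum(axis=0)

    # Start from equal influence and iterate; because each column of M
    # sums to one, total influence is conserved -- only moved around.
    v = np.ones(3) / 3
    for _ in range(100):
        v = M @ v

    print(v)  # steady-state share of influence per journal

After enough iterations v stops changing; that fixed point is the leading eigenvector from which these metrics take their name.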


There are two metaphors used to describe this process: The first conceives of the system as a fluid network,
where water drains from one pond (journal) to the next along citation
tributaries. Over time, water starts accumulating in journals of high
influence while others begin to drain. The other metaphor conceives of a
researcher taking a random walk from one journal to the next
by way of citations. Journals visited more frequently by the wandering
researcher are considered more influential.


However, both of these models break down (mathematically and
figuratively) in real life. Using the fluid analogy, some ponds may be
disconnected from most of the network of ponds; if there is just one
stream feeding this largely-disconnected network, water will flow in,
but not out. After time, these ponds may swell to immense lakes, whose
size is staggeringly disproportionate to their starting values. Using
the random walk analogy, a researcher may be trapped wandering among a
highly specialized collection of journals that frequently cite each
other but rarely cite journals outside of their clique.


The eigenvector centrality algorithm can adjust for this problem by
“evaporating” some of the fluid in each iteration and redistributing
these values back to the network as rain. Similarly, the random walk
analogy uses a “teleport” concept, where the researcher may be
transported randomly to another journal in the system–think of Scotty
transporting Captain Kirk back to the Enterprise
before immediately beaming him down to another planet.
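Continuing the toy sketch above, the "evaporation" or "teleport" adjustment amounts to one extra term in the update. The damping value of 0.85 below is the constant popularised by PageRank, assumed here purely for illustration; Eigenfactor and SJR use their own scaling factors:

    import numpy as np

    # Same toy network as in the earlier sketch.
    C = np.array([[ 0., 30., 10.],
                  [20.,  0., 40.],
                  [ 5., 15.,  0.]])
    M = C / C.sum(axis=0)

    d = 0.85  # assumed damping factor, for illustration only
    n = 3
    v = np.ones(n) / n
    for _ in range(100):
        # A fraction (1 - d) "evaporates" each round and falls back
        # uniformly on all journals, so no pond can trap all the flow.
        v = d * (M @ v) + (1 - d) / n

    print(v)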


Before I continue into details and differences, let me summarize thus
far: Eigenfactor and SJR are both metrics that rely on computing,
through iterative weighting, the influence of a journal based on the
entire citation network. They differ from traditional metrics, like the
Impact Factor, that simply compute an unweighted average.


In practice, eigenvector centrality is calculated upon an adjacency matrix
listing all of the journals in the network and the number of citations
that took place between them. Most of the values in this very large
table are zero, but some will contain very high values, representing
large flows of citations between some journals, for instance, between
the NEJM, JAMA, The Lancet, and BMJ.
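A sketch of how such an adjacency matrix might be assembled; the journal names are real, but the citation counts are invented for illustration:

    import numpy as np

    journals = ["NEJM", "JAMA", "The Lancet", "BMJ"]
    idx = {name: i for i, name in enumerate(journals)}

    # (citing journal, cited journal) -> citation count (made-up numbers)
    counts = {("JAMA", "NEJM"): 900, ("The Lancet", "NEJM"): 750,
              ("BMJ", "The Lancet"): 400, ("NEJM", "JAMA"): 520}

    A = np.zeros((len(journals), len(journals)))
    for (citing, cited), n in counts.items():
        A[idx[cited], idx[citing]] = n  # rows = cited, columns = citing

    # In a real network of 10,000+ journals most entries remain zero,
    # which is why the matrix is held in sparse form in practice.
    print(A)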


The result of the computation–a transfer of weighted values from one
journal to the next over one hundred or so iterations–represents the
influence of a journal, which is often expressed as a percentage of the total influence in the network. For example, Nature's
2014 Eigenfactor was 1.50, meaning that this one journal holds 1.5% of
the total influence of the entire citation network. In comparison,
a smaller, specialized journal, AJP-Renal Physiology, received an Eigenfactor of 0.028. PLOS ONE’s Eigenfactor was larger than Nature’s (1.53) as a result of its immense size. Remember that Eigenfactor measures total influence in the citation network.


When the Eigenfactor is adjusted for the number of papers published in each journal, it is called the Article Influence Score. This is similar to SCImago’s SJR. So, while PLOS ONE had an immense Eigenfactor, its Article Influence Score was just 1.2 (close to average performance), compared to 1.1 for AJP-Renal Physiology, and 21.9 for Nature.
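The published definition of the Article Influence Score divides the Eigenfactor by the journal's share of all indexed articles, scaled so the average journal scores about 1. As a rough check, the sketch below back-calculates article shares that reproduce the quoted scores; these shares are assumptions, not reported figures:

    # Article Influence ~= 0.01 * Eigenfactor / (journal's share of articles),
    # scaled so the average journal scores about 1.
    def article_influence(eigenfactor_pct, article_share):
        return 0.01 * eigenfactor_pct / article_share

    # Hypothetical article shares chosen to reproduce the quoted scores:
    print(article_influence(1.50, 0.000685))  # Nature    -> ~21.9
    print(article_influence(1.53, 0.01275))   # PLOS ONE  -> ~1.2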


This year, Thomson Reuters began publishing a Normalized Eigenfactor, which expresses the Eigenfactor as a multiplicative
value rather than a percent. A journal with a value of 2 has twice as
much influence as the average journal in the network, whose value would
be one. Nature's Normalized Eigenfactor was 167, PLOS ONE's was 171, and AJP-Renal Physiology's was 3.
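The arithmetic behind this rescaling is simple: with N journals sharing 100% of the influence, the average journal holds 100/N percent, so multiplying a journal's Eigenfactor by N/100 expresses it as a multiple of the average. Assuming N is roughly the 11,000-plus journals in the Thomson Reuters network, this reproduces the quoted values:

    N = 11150  # assumed network size, for illustration
    for name, ef in [("Nature", 1.50), ("PLOS ONE", 1.53),
                     ("AJP-Renal Physiology", 0.028)]:
        print(name, round(ef * N / 100))  # -> 167, 171, 3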


There are several differences between how the Eigenfactor and SJR are
calculated, meaning they cannot be used interchangeably:


  1. Size of the network. Eigenfactor is based on the
    citation network of just over 11,000 journals indexed by Thomson
    Reuters, whereas the SJR is based on over 21,000 journals indexed in
    the Scopus database. Different citation networks will result in
    different eigenvalues.
  2. Citation window. Eigenfactor is based on citations
    made in a given year to papers published in the prior five years, while
    the SJR uses a three-year window. The creators of Eigenfactor argue that
    five years of data reduces the volatility of their metric from year to
    year, while the creators of the SJR argue that a three-year window
    captures peak citation for most fields and is more sensitive to the
    changing nature of the literature.
  3. Self-citation. Eigenfactor eliminates
    self-citation, while SJR allows self-citation but limits it to no more
    than one-third of all incoming citations (a rough sketch of both
    conventions follows this list). The creators of Eigenfactor
    argue that eliminating self-citation disincentivizes bad referencing
    behavior, while the creators of the SJR argue that self-citation is part
    of normal citation behavior and wish to capture it.
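Here is the rough sketch of the two self-citation conventions promised in point 3 above, applied to a small invented citation-count matrix; the cap mechanics in the published SJR algorithm differ in detail:

    import numpy as np

    # Rows = cited journal, columns = citing journal; diagonal = self-cites.
    A = np.array([[50., 30., 10.],
                  [20., 80., 40.],
                  [ 5., 15., 60.]])

    # Eigenfactor convention: drop self-citation entirely.
    A_ef = A.copy()
    np.fill_diagonal(A_ef, 0)

    # SJR convention: keep self-citation but cap it at one-third of a
    # journal's incoming citations, i.e. at half the citations it
    # receives from other journals.
    A_sjr = A.copy()
    for i in range(len(A)):
        from_others = A[i].sum() - A[i, i]
        A_sjr[i, i] = min(A[i, i], from_others / 2)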
There are other small differences, such as the scaling factor (a
constant that defines how much "evaporation" or "teleporting" takes
place in each iteration). While both groups provide a full description
of their algorithm (Eigenfactor here; SJR here),
it is pretty clear that few of us (publishers, editors, authors) are
going to replicate their work. Indeed, these protocols assume that
you’ve already indexed tens of thousands of journals publishing several
million papers listing tens of millions of citations before you even
begin to assemble your adjacency matrix. And no, Excel doesn’t have a
simple macro for calculating eigenvalues. So while each group is fully
transparent in its methods, the sheer enormity and complexity of the
task prevents all but the two largest indexers from replicating their
results. A journal editor really has no recourse but to accept the
numbers provided to them.


If you scan the performance of journals, you'll notice that journals
with the highest Impact Factor also have the highest Article
Influence and SJR scores, leaving one to question whether popularity in
science really measures the same underlying construct as influence.
 Writing in the Journal of Informetrics, Massimo Franceschet reports
that for the biomedical, social sciences, and geosciences, 5-yr Impact
Factors correlate strongly with Article Influence Scores, but diverge
more for physics, material sciences, computer sciences, and engineering.
For these fields, journals may perform well on one metric but poorly
on the other. In another paper focusing
on the SJR, the authors noted some major changes in the ranking of
journals, and reported that eigenvalues tended to concentrate in fewer
(prestigious) journals. Considering how the metric is calculated,
this should not be surprising.


In conclusion, network-based citation analysis can help us more
closely measure scientific influence. However, the process is complex,
not easily replicable, harder to describe and, for most journals, gives
us the same result as much simpler methods. Even if not widely adopted
for reporting purposes, the Eigenfactor and SJR may be used for
detection purposes, such as identifying citation cartels
and other forms of citation collusion that are very difficult to detect
using traditional counting methods, but may become highly visible using
network-based analysis.


Notes:


1. Eigenfactor (and Article Influence) are terms trademarked by the University of Washington. Eigenfactors and Article Influence scores are published in the Journal Citation Report (Thomson Reuters) each June and are posted freely on the Eigenfactor.org
website after a six-month embargo. To date, the University of
Washington has not received any licensing revenue from Eigenfactor
metrics.


2. The SCImago Journal & Country Rank is based on Scopus data (Elsevier) and made freely available from: http://www.scimagojr.com/


About Phil Davis

I am an independent researcher and publishing consultant
specializing in the statistical analysis of readership and citation
data. I am a former postdoctoral researcher in science communication and
former science librarian. http://phil-davis.org/

Discussion

10 thoughts on “Network-based Citation Metrics: Eigenfactor vs. SJR”




  1. Thanks Phil, this is an interesting post.


    The network-based metrics are intellectually interesting, but as you
    point out, they are effectively black boxes: we have to take it on trust
    that the calculations have been done correctly and the right number of
    citations included or excluded, whereas it is reasonably straightforward
    to estimate an Impact Factor from the Web of Science. These metrics also
    fail in terms of the elevator pitch: I can explain an Impact Factor or
    other simple metrics in 30 seconds; the network-based metrics take
    minutes to explain, and even that glosses over the details.


    My own quick analysis of the Web of Science metrics shows two groups
    that are closely correlated by journal rank: Total Citations and
    Eigenfactor in one, and the article-weighted metrics such as Impact
    Factor and Article Influence Score in the other. The ever-growing range
    of citation metrics doesn't appear to add much extra information, but it
    does give journals another way to claim to be top. I also think that
    these metrics apply less in the social sciences and humanities, where the
    'high prestige journals' often differ less in citation
    profile.









    Posted by James Hardcastle (@JwrHardcastle) | Jul 28, 2015, 6:10 am





  2. The exclusion of self citation seems rather strange. A specialized
    journal, which many are, may well have most of the articles on its
    topic. Later articles will certainly cite numerous prior articles, in
    the normal course of citation. This is a strong measure of the journal’s
    local importance. So excluding self citation appears to penalize that
    specialization which serves a specific research community.









    Posted by David Wojick | Jul 28, 2015, 7:17 am





  3. Thank you for an excellent explanation of this complex but
    important subject. One question I’d like to see explored: How well do
    the three indices (Eigenfactor, SJR, JIF) correlate? That is, how much
    does choosing one or the other affect a journal’s relative ranking with
    competing journals?









    Posted by Ken Lanfear | Jul 28, 2015, 10:12 am





  4. "For the Impact Factor, importance is equated with popularity."
    Given that the IF (like all other citation metrics, as far as I'm
    aware) makes no effort to discriminate between approving and
    disapproving citations, wouldn't it be more accurate to say that for the
    IF, importance is equated with notoriety rather than popularity?









    Posted by Rick Anderson | Jul 28, 2015, 10:26 am





    • My conjecture is that, in the physical sciences and engineering at
      least, negative citations are rare enough to be negligible. HSS may be
      different. It is a good research question, so I wonder if any work has
      been done on it. There is a lot of research on distinguishing positive
      and negative tweets, but maybe not citations.









      Posted by David Wojick | Jul 28, 2015, 12:05 pm





  5. Phil, I like the evaporation metaphor for the flow/voting
    interpretation of eigenvector centrality. It's a nice complement to the
    teleport metaphor for the random walk interpretation. It’s fun to see
    each interpretation of the algorithm in action, so you and your readers
    may enjoy the demo that Martin Rosvall and Daniel Edler put together to
    illustrate both the random walk interpretation and the flow
    interpretation. See http://www.mapequation.org/apps/MapDemo.html


    To play with the demo, click on “rate view” at the top center of the
    screen. Then you can click on “random walker” at the top to look at the
    random walk interpretation, using the “step” or “start” buttons to set
    the random walker into motion. Then reset and click on “Init Votes” to
    restart. Click on “Vote” or “Automatic Voting” to view the flow
    interpretation. In both cases, the bar graph at right shows each process
    converge to the leading eigenvector of the transition matrix.


    By the way, my view is that the most important difference between the
    Eigenfactor algorithm and the SJR approach is that in the Eigenfactor
    algorithm, the random walker takes one final step along the citation
    matrix *from the stationary distribution*, without teleporting. This
    ensures that no journal receives credit for teleportation (or
    evaporation and condensation) — the only credit comes from being cited
    directly. We’ve found this step extremely important in assigning
    appropriate ranks to lower-tier journals, whose ranks are otherwise
    heavily influenced by the teleport process. In the demo linked above, you can
    see how this affects the final rankings by pressing the “Eigenfactor”
    button.


    Carl Bergstrom

    eigenfactor.org / University of Washington









    Posted by Eigenfactor Project (@eigenfactor) | Jul 28, 2015, 5:16 pm


Trackbacks/Pingbacks


  1. Pingback: Network-based Citation Metrics: Eigenfactor vs. SJR | Nader Ale Ebrahim - Jul 28, 2015






Network-based Citation Metrics: Eigenfactor vs. SJR | The Scholarly Kitchen

Monday, 27 July 2015

INVITATION TO WORKSHOP SERIES BY CENTRE FOR RESEARCH SERVICES













Dear Campus Community,


Did you know that "over 43% of ISI papers have never received any citations" (nature.com/top100, 2014)? Publishing a high-quality paper in a scientific journal is only halfway towards receiving future citations; the rest of the journey depends on disseminating the publication through proper use of the "Research Tools". Proper tools allow researchers to increase the research impact of, and citations to, their publications. This workshop series will present various techniques for increasing the visibility and enhancing the impact of one's research work.

The Research Support Unit, Centre for Research Services (IPPP), would like to invite you to participate in our workshop series. Kindly find the brochure of the workshop in the attachment. Admission is FREE. The details of the workshop series are:



Session 4: Optimize your article for search engines to improve your research visibility. 30 July 2015 (Thursday), 9.30 am – 12.00 noon, Computer Lab, Level 2, IPPP.

Session 5: Start blogging and share your blog posts with target researchers. 7 August 2015 (Friday).

Session 6: The role of Twitter in the impact of a scientific publication. 13 August 2015 (Thursday).

Session 7: Create and maintain a ResearcherID profile automatically. 20 August 2015 (Thursday).

Session 8: E-mail marketing procedure and ResearchGate. 28 August 2015 (Friday).


 Registration:

For registration, kindly provide your particulars (full name, affiliation, department, and the workshop session(s) you are interested in joining) by email to uspi@um.edu.my.

For further inquiries, please contact 03-7967 7812 / 6289 (Norhafizah) or email us at uspi@um.edu.my.


Thank you.



Regards,


ASSOC. PROF. DR. NGOH GEK CHENG

Head of Research Support Unit, Centre for Research Services,
Institute of Research Management & Monitoring (IPPP),
Level 2, Kompleks Pengurusan Penyelidikan & Inovasi,University of Malaya.




Research Tools

Visibility and Citation Impact





Visibility and Citation Impact

DOI: 10.5539/ies.v7n4p120, pp. 120–125


Abstract:
The number of publications is the first criterion for assessing a
researcher's output. However, the main measurement of author
productivity is the number of citations, and citations are typically
related to a paper's visibility. In this paper, the relationship between
article visibility and the number of citations is investigated. A case
study of two researchers who use publication marketing tools confirmed
that article visibility greatly improves citation impact. Some
strategies for making publications available to a larger audience are
presented at the end of the paper.





Visibility and Citation Impact jourlib.org

Sunday, 26 July 2015

Upskill Programme: Introduction to the "Research Tools"









Dear All Postgraduate Candidates,

Greetings from the Institute of Graduate Studies!

We are pleased to announce that we are organizing a workshop for the month of July 2015. Details of the workshop are as follows:


Title: Introduction to the "Research Tools"
Speaker: Dr. Nader Ale Ebrahim
Date: 31 July 2015 (Friday)
Time: 9.00 am – 12.00 pm
Venue: Computer Lab, Level 4, Institute of Graduate Studies
Max. No. of Participants: 40

Introduction:

"Research Tools" can be defined as vehicles that broadly facilitate research and related activities. Scientific tools enable researchers to collect, organize, analyze, visualize, and publicize research outputs. Dr. Nader has collected over 700 tools that enable students to follow the correct path in research and ultimately to produce high-quality research outputs with more accuracy and efficiency. The collection is assembled as an interactive web-based mind map, titled "Research Tools", which is updated periodically. "Research Tools" consists of a hierarchical set of nodes. It has four main nodes: (1) Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility and impact of the research, plus six auxiliary nodes. Several free tools can be found in the child nodes, and some paid tools are also included. In this workshop, example tools from the four main nodes will be described. The e-skills learned in the workshop are useful across various research disciplines and institutions.

Problem statements:

The search can be a time-consuming and sometimes tedious task. How can we make it easier? How do we deal with situations such as:

• "I just joined as a new postgraduate student and I am not sure how to do a literature search."
• "I have been in research for some time now, but I spend a lot of time getting the articles I want."
• "I am sure I have downloaded the article, but I am not able to find it."
• "I want to write a new paper; how can I manage the references in the shortest possible time?"
• "I have many references, some from my old papers and some from my current research. Sometimes there are so many that I can't recall where I have kept them in my folders!"
• "I have written an article and I am not able to find a proper journal."
• "I want to increase the citations of my papers; how can I do that?"

We need an effective search strategy that can save hours of wasted research time and provide a clear direction for your research. The benefits of attending this workshop are numerous, and include learning how to redirect your searching towards discovery and how to use the tools available through the Net more efficiently.

Objectives

The workshop seeks to serve the following objectives:

i.   To help students reduce their search time by expanding their knowledge of how to use the "tools" available through the Net more effectively.
ii.  To evaluate the types of literature that researchers will encounter.
iii. To convert the information from the search into a written document.
iv.  To promote their publications for further citation.



Who Should Attend

Senior and junior researchers, along with supervisors, who are interested in expanding their knowledge and developing a research focus in the shortest possible time. Senior researchers can use the tools to present their research more effectively to the scientific world, attracting a higher level of comments and citations from other researchers, which will in turn improve their research. Junior researchers will be able to use these tools to find a correct title and to focus on writing high-quality academic proposals, theses, and articles.


Speaker profile

Nader Ale Ebrahim is currently working as a research fellow with the Research Support Unit, Centre of Research Services, Institute of Research Management and Monitoring (IPPP), University of Malaya. Nader holds a PhD degree in Technology Management from the Faculty of Engineering, University of Malaya. He has over 19 years of experience in the field of technology management and new product development in different companies. His current research interests focus on e-skills, Research Tools, bibliometrics, and managing virtual NPD teams in SMEs' R&D centers. Nader developed a new method using the "Research Tools" that helps students reduce their search time by expanding their knowledge of how to use the "tools" available on the Internet effectively. Research Tools consists of a hierarchical set of nodes, with four main nodes: (1) Searching the literature, (2) Writing a paper, (3) Targeting suitable journals, and (4) Enhancing visibility and impact. He was the winner of the Refer-a-Colleague Competition and has received prizes from renowned establishments such as Thomson Reuters. Nader is well known as the founder of the "Research Tools" Box and the developer of "Publication Marketing Tools". He has conducted over 100 workshops within and outside of the University of Malaya.


STUDENT DEVELOPMENT
& WRITING UNIT

INSTITUTE OF GRADUATE STUDIES

UNIVERSITY OF MALAYA

TEL : 03-79676935

FAX : 03-79568940

email: ips_upskill@um.edu.my

 

INVITATION TO REGISTER

Application is now open from 27 to 29 July 2015, or until the maximum capacity is reached, whichever comes first.

Interested participants, please submit your particulars by clicking the following link: REGISTER HERE.



Thank you.


=====================================================

Deputy Dean

Institute of Graduate Studies

University of Malaya

50603 Kuala Lumpur MALAYSIA



http://www.um.edu.my/

http://www.ips.um.edu.my/
 




Upskill Programme: Introduction to the "Research Tools"