Monday, 30 October 2023

A Guide to Using AI Tools to Summarize Literature Reviews

 Source: https://typeset.io/resources/using-ai-tools-to-summarize-literature-reviews

 

Sumalatha G

Millions of scientific articles are published every year, making it difficult for any researcher to read and comprehend all the relevant publications.

Traditionally, researchers conducted literature reviews manually, sifting through hundreds of research papers to extract the information relevant to their work.

Fast forward to 2023 — things look quite different, and far more favorable. With the advent of AI tools, the literature review process has been streamlined, and researchers can summarize hundreds of research articles in minutes, saving considerable time and effort.

This article walks through the top five AI tools for summarizing literature reviews, explains how AI can be used to summarize scientific articles, and considers the impact of AI on academic research.

Understanding the Role of AI in Literature Reviews

Before we talk about the benefits of AI tools to summarize literature reviews, let’s understand the concept of AI and how it streamlines the literature review process.

AI tools are built on large language models trained on vast text corpora, and they are designed to mimic human tasks such as problem-solving, decision-making, and pattern recognition. When AI and machine-learning algorithms are applied to literature reviews, they help process vast amounts of information, identify highly relevant studies, and generate quick, concise (TL;DR) summaries.

AI has revolutionized the literature review process by giving researchers powerful tools to read, analyze, compare, contrast, and extract relevant information from research articles.

By using natural language processing algorithms, AI tools can identify key concepts, main arguments, and relevant findings across multiple research articles at once. This helps researchers quickly gain an overview of the existing literature on a given topic, saving valuable time and effort.
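To make this concrete, here is a minimal sketch of the kind of NLP summarization these tools build on, using the open-source Hugging Face transformers library. The model choice is purely illustrative; the commercial tools discussed below use their own models and pipelines.

# A minimal abstractive-summarization sketch (illustrative, not any vendor's pipeline).
from transformers import pipeline

# Load an off-the-shelf summarization model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Large language models are increasingly applied to literature review "
    "workflows, where they extract key concepts, main arguments, and "
    "findings from research articles, reducing manual effort."
)

# max_length / min_length bound the length (in tokens) of the summary.
result = summarizer(abstract, max_length=50, min_length=10, do_sample=False)
print(result[0]["summary_text"])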

Key Benefits of Using AI Tools to Summarize Literature Reviews


1. A faster alternative to traditional literature reviews

Traditional literature reviews or manual literature reviews can be incredibly time-consuming and often require weeks or even months to complete. Researchers have to sift through myriad articles manually, read them in detail, and highlight or extract relevant information. This process can be overwhelming, especially when dealing with a large number of studies.

However, with the help of AI tools, researchers can greatly reduce the time and effort required to discover, analyze, and summarize relevant studies. With their NLP and machine-learning algorithms, AI tools can quickly analyze multiple research articles and generate succinct summaries. This not only improves efficiency but also lets researchers focus on the core analysis and interpretation of the compiled insights.

2. AI tools aid in swift research discovery!

AI tools also help researchers save time in the discovery phase of literature reviews. These tools use semantic search to surface relevant studies that might go unnoticed with traditional methods. They can also analyze keywords, citations, and other metadata to suggest pertinent articles that align well with the researcher’s query.
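To illustrate the semantic search idea (a sketch of the general technique, not the internals of any specific tool), dense embeddings let a query match papers by meaning rather than by exact keywords:

# Semantic search sketch using the open-source sentence-transformers library.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

titles = [
    "Open access and citation impact: a meta-analysis",
    "Deep learning for protein structure prediction",
    "Bibliometric indicators for research evaluation",
]
query = "Does open access increase citations?"

# Encode the query and the documents into the same vector space.
doc_emb = model.encode(titles, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks papers by semantic relatedness, not keyword overlap.
scores = util.cos_sim(query_emb, doc_emb)[0]
for title, score in sorted(zip(titles, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:.3f}  {title}")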

3. AI tools help you stay up to date with the latest research!

Another advantage of using AI-powered tools in literature reviews is their ability to handle the ever-increasing volume of published scientific research. With the exponential growth of scientific literature, it has become increasingly challenging for researchers to keep up with the latest scientific research and biomedical innovations.

However, AI tools can automatically scan and discover new publications, ensuring that researchers stay up-to-date with the most recent developments in their field of study.

4. Improves efficiency and accuracy of literature reviews

Using AI tools for literature reviews reduces the human errors that can occur during manual review and document summarization. By minimizing such errors, these tools improve the overall efficiency and accuracy of literature reviews and give researchers prompt access to relevant information.

List of AI Tools to Summarize Literature Reviews

Several AI-powered tools can summarize literature reviews. They use advanced algorithms and natural language processing techniques to analyze and condense lengthy scientific articles.

Let's take a look at some of the most popular AI tools to summarize literature reviews.

  • SciSpace Literature Review
  • Semantic Scholar
  • Paper Digest
  • SciSummary
  • Consensus

SciSpace Literature Review

SciSpace Literature Review is an efficient AI-powered tool that streamlines the literature review process and summarizes multiple research articles at once. Enter a keyword, research topic, or question, and it kicks off your literature review with instant insights drawn from the five most relevant papers.

These insights are backed by citations that let you check the source. All the relevant papers appear in an easy-to-digest tabular format, with each section of a paper summarized in its own column. You can also customize the table by adding or removing columns according to your research needs; this customization is a unique feature of this literature review AI tool.

SciSpace Literature Review stands out by providing concise TL;DR text and summaries for every section of a research paper. This makes the review process easier and lets researchers comprehend more papers in less time.

Try SciSpace Literature Review now!

SciSpace Literature Review - Get to the bottom of scientific literature
SciSpace Literature Review is an interactive literature review workstation where you can find scientific articles, gather meaningful insights, and compare multiple sources. All in one place.

Semantic Scholar


Semantic Scholar is an AI-powered search engine, similar to Google Scholar, that helps researchers find relevant papers for a keyword or research topic. Its database holds over 200 million research articles, and you can filter results by field of study, author, publication date, and journal or conference.
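Semantic Scholar also exposes a public Graph API. Here is a minimal search sketch against it; the endpoint and field names reflect the public documentation at the time of writing, so check the current docs before relying on them:

# Keyword search against the public Semantic Scholar Graph API.
import requests

url = "https://api.semanticscholar.org/graph/v1/paper/search"
params = {
    "query": "open access citation advantage",
    "fields": "title,year,abstract",
    "limit": 5,
}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("year"), "-", paper.get("title"))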

They have recently released the Semantic Reader, an AI-powered tool that enhances the reading of scientific papers. It is currently available in beta.

Try Semantic Scholar here

Paper Digest


Paper Digest is another valuable AI-powered text summarizer that gets you to the core insights of a research paper in a few minutes. It works straightforwardly: input the article URL or DOI and click “Digest” to get the summary. It is free and currently in beta.

You can access Paper Digest here!

SciSummary


SciSummary is a dedicated AI tool that summarizes research articles in seconds. It uses the GPT-3.5 and GPT-4 natural language processing models to generate concise summaries. Upload a document to the dashboard or send an article link via email, and the summary is generated and delivered to your inbox, making lengthy, complicated research papers easier to read and understand. Pricing plans (free and premium) start at $4.99/month, so you can choose according to your needs.

You can access SciSummary here

Consensus


Consensus is another AI-powered summarizer and academic search engine that helps you discover and extract key points from research papers instantly. Like Semantic Scholar, it has a vast repository of 200 million peer-reviewed scientific articles spanning the social sciences, computer science, economics, medical sciences, and more!

Consensus extracts key findings, summaries, descriptions of the methodology used, and other components of the results. You can run a literature search by entering keywords, research topics, or open-ended questions. Pricing plans range from free to enterprise.

Try Consensus here!

Now that we have an understanding of the role of AI in literature reviews and the different AI tools available, let's delve into the process of using AI tools for literature reviews.

Step-by-Step Guide to Using AI Tools to Summarize Literature Reviews

Here’s a short step-by-step guide to using AI tools for summary generation; a minimal code sketch of the same workflow follows the list.

  1. Select the AI-powered tool that best suits your research needs.
  2. Once you've chosen a tool, you must provide input, such as an article link, DOI, or PDF, to the tool.
  3. The AI tool will then process the input using its algorithms and techniques, generating a summary of the literature.
  4. The generated summary will contain the most important information, including key points, methodologies, and conclusions in a succinct format.
  5. Review and assess the generated summaries to ensure accuracy and relevance.
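If you prefer to script this workflow instead of using a web interface, the same steps can be approximated with a general-purpose LLM API. The sketch below is hypothetical; the prompt, file name, and model choice are illustrative assumptions, not any tool's actual implementation.

# Steps 2-5 of the guide, approximated with the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 2: provide the input (here, a plain-text export of the paper; hypothetical file name).
article_text = open("paper.txt").read()

# Steps 3-4: ask the model for a structured summary.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You summarize scientific papers."},
        {
            "role": "user",
            "content": "Summarize the key points, methodology, and "
                       "conclusions of this paper:\n\n" + article_text,
        },
    ],
)

# Step 5: always review the generated summary for accuracy and relevance.
print(response.choices[0].message.content)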

Challenges of using AI tools for summarization

AI tools are designed to generate precise summaries; however, they may sometimes miss important facts or misinterpret specific information.

Here are the potential challenges and risks researchers should be wary of when using AI tools to summarize literature reviews!

1. Lack of contextual intelligence

AI-powered tools do not always fully grasp the context of a research paper, which can lead to inaccurate or misleading summaries, especially across similar academic papers.

To combat this, researchers should provide additional context in the prompt or use AI tools built on more advanced models that can better handle the complexities of research papers.

2. AI tools cannot ensure foolproof summaries

While AI tools can immensely speed up the summarization process, they may not capture the complete essence of a research paper or accurately unpack complex concepts.

Therefore, AI tools should be treated as aids rather than replacements for human analysis and understanding of key information.

3. Potential bias in the generated summaries

AI-powered tools are trained on existing data, and if that training data is biased, the generated summaries can be biased as well.

Researchers should be cautious and favor tools whose training data is diverse and representative of various sources, perspectives, and research domains.

4. Quality of the input article affects the summary output

The quality of the article that is uploaded or entered as input also has a direct effect on the accuracy of the generated summaries.

If the input article is poorly written or contains errors, the AI tool might not be able to generate coherent and accurate summaries. Researchers should select high-quality academic papers and articles to obtain reliable and informative summaries.

Concluding!

AI summarization tools have a substantial impact on academic research. By leveraging AI tools, researchers can streamline the literature review process, enabling them to stay up-to-date with the latest advancements in their field of study and make informed decisions based on a comprehensive understanding of current knowledge.

By understanding the role of AI in literature reviews, exploring the different summarization tools, following a systematic review process, and assessing the impact of these tools on their work, researchers can harness AI to enhance their literature review workflows.

If you are keen to explore an AI-powered tool for summarizing literature reviews, head over to SciSpace Literature Review and start analyzing research papers right away.

Wednesday, 25 October 2023

E-Research Tools for Maximizing Research Visibility and Impact

 Source: https://doi.org/10.6084/m9.figshare.24433723.v1

In the ever-evolving landscape of academic research, librarians and researchers are stepping into roles that extend beyond traditional boundaries. They are the torchbearers of academic excellence, and the key to their success lies in the harnessing of cutting-edge technology, particularly Artificial Intelligence (AI). Join us in an exploration of the transformative power of AI in our upcoming talk at the WITS OPEN RESEARCH SERIES.

Sunday, 15 October 2023

Introduction to Write a Bibliometric Paper: Unveiling the Power of Research Tools for Literature Search, Paper Writing, and Journal Selection

 Source: https://doi.org/10.6084/m9.figshare.24312574.v1

🎯 Unlock the secrets of writing a powerful bibliometric paper with Nader Ale Ebrahim! 

Explore the potential of research tools for literature search, crafting compelling papers, and selecting the perfect journals. 📚🖋️

Don't miss his illuminating presentation: 👉 https://doi.org/10.6084/m9.figshare.24312574.v1

#Research #Bibliometrics #AcademicWriting #ResearchTools 🌟🔍


Introduction to Maximizing Research Visibility and Impact: Strategies for Al-Kut University College

 Source: https://doi.org/10.6084/m9.figshare.24312652.v1
🌟 Unlock the secrets of enhancing research visibility and impact! 

Join Nader Ale Ebrahim where he shares strategies tailored for Al-Kut University College. 

Discover the power of research tools and strategies to make your work shine. 

Check it out here: 👉 https://doi.org/10.6084/m9.figshare.24312652.v1

#ResearchVisibility #Impact #AcademicSuccess 🚀📊


Wednesday, 4 October 2023

List of academic search engines that use large language models for generative answers, and some factors to consider when using them

 Source: https://musingsaboutlibrarianship.blogspot.com/2023/09/list-of-academic-search-engines-that.html

List of academic search engines that use Large Language models for generative answers

This is a non-comprehensive list of academic search engines that use generative AI (almost always large language models) to generate direct answers on top of a list of relevant results, typically using Retrieval Augmented Generation (RAG) techniques. We expect a lot more!

This technique involves grounding the generated answer by using a retriever to find text chunks or sentences (also known as context) that may answer the question. 
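A minimal sketch of that retrieve-then-generate loop is below. The embedding model, the two hard-coded chunks, and the LLM call are illustrative assumptions; production systems add chunking, ranking, and citation tracking on top of this skeleton.

# Retrieval Augmented Generation (RAG), reduced to its two core steps.
from sentence_transformers import SentenceTransformer, util
from openai import OpenAI

encoder = SentenceTransformer("all-MiniLM-L6-v2")
client = OpenAI()  # assumes OPENAI_API_KEY is set

# Toy "index" of text chunks extracted from papers (illustrative content).
chunks = [
    "Study A (2019) reported a 20% citation advantage for open access articles.",
    "Study B (2021) found no significant citation difference after controlling for venue.",
]
question = "Is there an open access citation advantage?"

# Step 1 (retrieve): find the chunks most similar to the question.
chunk_emb = encoder.encode(chunks, convert_to_tensor=True)
q_emb = encoder.encode(question, convert_to_tensor=True)
ranked = util.cos_sim(q_emb, chunk_emb)[0].argsort(descending=True)
contexts = [chunks[int(i)] for i in ranked[:2]]

# Step 2 (generate): ground the answer in the retrieved contexts, with citations.
prompt = (
    "Answer the question using ONLY the numbered contexts, citing them as [1], [2].\n\n"
    + "\n".join(f"[{n+1}] {c}" for n, c in enumerate(contexts))
    + f"\n\nQuestion: {question}"
)
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)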

Besides generating direct answers with citations, it seems to me this new class of search engine often (but not always) does the following:

a) Uses semantic search (as opposed to lexical search)
b) Uses the ability of large language models to extract information from papers, such as "method", "limitations", or "region", and displays it in a literature review matrix format

For more, see my recording, The possible impact of AI on search and discovery (July 2023).

The table below is current as of 28 September 2023. For each tool it lists the sources, the LLM used, whether you can upload your own PDF, whether it produces a literature review matrix, and other features.

  • Elicit (elicit.com / old.elicit.org) | Sources: Semantic Scholar | LLM: OpenAI GPT models and other open-source LLMs | Upload your own PDF: Yes | Literature review matrix: Yes | Other: list of concept search
  • Consensus | Sources: Semantic Scholar | LLM: GPT-4 for summaries | Upload your own PDF: No | Literature review matrix: No; has the Consensus Meter
  • scite.ai Assistant | Sources: open scholarly metadata and citation statements from selected partners | LLM: "We use a variety of language models depending on situation" - GPT-3.5 (generally), GPT-4 (enterprise clients), Claude Instant (fallback) | Upload your own PDF: No | Literature review matrix: No | Other: summaries include text from citation statements; many options to control what is being cited
  • SciSpace | Sources: unknown | LLM: unknown | Upload your own PDF: Yes | Literature review matrix: Yes
  • Zeta Alpha (R&D in AI) | Sources: mostly computer science content only | LLM: OpenAI GPT models | Upload your own PDF: No | Literature review matrix: N/A | Other: semantic/neural search can be toggled on/off; document visualization map showing semantic similarity, with autogenerated cluster labels
  • CORE-GPT / technical paper (unreleased?) | Sources: CORE | LLM: GPT-4 | Upload your own PDF: No | Literature review matrix: No
  • Scopus.ai (closed beta) | Sources: Scopus index | LLM: ? | Upload your own PDF: No | Literature review matrix: No | Other: graphical representation of connections between keywords
  • Dimensions AI Assistant (closed beta) | Sources: Dimensions index | LLM: Dimensions General Sci-BERT and OpenAI's ChatGPT | Upload your own PDF: No | Literature review matrix: N/A | Other: provides TL;DRs




Technical aspects to consider

  • What is the source used by the search engine?
A lot of these tools currently use Semantic Scholar, OpenAlex, arXiv, etc., which are essentially open scholarly metadata and open access full-text sources. Open scholarly metadata is quite comprehensive; however, relying only on open access full text may introduce unknown biases.

Scite.ai probably has the biggest advantage here, given that it also has some paywalled full text (technically, citation statements only) from publisher partners.

That said, you cannot assume that just because a source includes full text, that text is being used for extraction.

For example, Dimensions and Elicit, which do have access to full text, do not appear to be using it for direct answers at present. For technical or perhaps legal reasons, their direct answers are extracted only from abstracts. This is unlike Scite Assistant, which does cite text beyond abstracts.

Elicit does, however, seem to use the available (open access) full text to generate the literature review matrix.

  • Are there ways for users to check/verify accuracy of the generated direct answer, or extracted information in the literature review matrix?
RAG-type systems ensure that the citations made are always "real" citations found in their search index; however, there is no guarantee that the generated statement is actually supported by the citation.

In my view, a basic requirement for such systems is a feature that makes it easy to check the accuracy of the generated answers.

When a sentence is followed by a citation, typically the whole paper isn't being cited; the system grounds its answer on a sentence or two from the paper. The best systems, like Elicit or Scite Assistant, make it easy to see which extracted sentences/contexts were used to support the answer. This can be done via mouseover (Scite Assistant) or with highlights (Elicit).


  • How accurate are the generated direct answers and/or extracted information in the literature review matrix in general?
Features that allow users to check and verify answers are great, but even better is a system that can provide scores giving users a sense of how reliable the results are over a large number of examples.

One way to measure such citation accuracy is via citation precision and recall scores. However, these scores only measure whether the cited source supports the generated statement; they do not measure whether the generated statements actually answer the question!
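As a toy illustration of the idea (my own sketch, not a published implementation), citation precision and recall can be computed from a gold annotation of which citations actually support the answer:

# Citation precision/recall sketch over hypothetical gold annotations.
def citation_precision_recall(cited, supporting):
    """cited: citations the system attached to its answer;
    supporting: citations that genuinely support the statements (gold)."""
    cited, supporting = set(cited), set(supporting)
    true_pos = cited & supporting
    precision = len(true_pos) / len(cited) if cited else 0.0
    recall = len(true_pos) / len(supporting) if supporting else 0.0
    return precision, recall

# The answer cites A, B, C; only A and B support it, and supporting paper D was missed.
print(citation_precision_recall({"A", "B", "C"}, {"A", "B", "D"}))  # -> precision 2/3, recall 2/3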

A more complete solution is based on the ragas framework, which measures four aspects of the generated answer.

The first two relate to the generation part of the pipeline:
  • Faithfulness - measures how consistent the generated answer is with the retrieved contexts. This is done by checking whether the claims in the generated answer can be deduced from the context.
  • Answer relevancy - measures whether the generated answer tries to address the question. This does not check whether the answer is factually correct (that is covered by faithfulness), so there may be a tradeoff between these first two measures.
The other two relate to the retrieval part of the pipeline and measure how good the retrieval is:

  • Context precision - looks at whether the retriever consistently finds contexts that are relevant to the answer, such that most of the citations retrieved are relevant.
  • Context recall - the converse of context precision: is the system able to retrieve most of the contexts that might answer the question?
The final score could be a harmonic mean of all four scores.
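As a sketch of that final score, assuming the four component scores have already been computed (the ragas library produces them; the numbers below are made up), the harmonic mean strongly penalizes any single weak component:

# Combining ragas-style scores; the harmonic mean punishes the weakest aspect.
from statistics import harmonic_mean

scores = {
    "faithfulness": 0.90,
    "answer_relevancy": 0.85,
    "context_precision": 0.70,
    "context_recall": 0.60,
}

# A system that retrieves poorly cannot compensate with fluent generation.
overall = harmonic_mean(scores.values())
print(f"overall score: {overall:.3f}")  # dominated by the low context_recall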

It would be good if systems generated these statistics so users could gauge their reliability, though as of the time of writing none of the academic search systems have released such evaluations.


  • How generative AI features are integrated into search, and how this affects the way you should search

We are still in the very early days of search + generative AI. It's unclear how such features will be integrated into search.

There are also dozens of ways to do RAG/generative AI + search, either at inference time or even at the pretraining stage.
  • How does the query get converted to match the retrieved contexts? Some examples:
    • It could do a simple type of keyword matching
    • It could prompt the language model to come up with a search strategy, which is then used
    • It could convert the query into an embedding and match it against pre-indexed embeddings of documents/text
  • How do you combine the retrieved contexts with the LLM (large language model)?
How it is implemented can lead to different optimal ways of searching. 

For example, say you are looking for papers on whether there is an open access citation advantage. Should you search like...

1. Keyword Style - Open Access citation advantage

2. Natural Language style - Is there an Open Access citation advantage?

3. Prompt engineering style - You are a top researcher in the subject of Scholarly communication. Write a 500 word essay on the evidence around Open Access citation advantage with references


Not all methods will work equally well (or at all) for these systems, even those based on RAG. E.g., Elicit works for styles 1 and 2 but not 3; Scite Assistant works for all of them, even #3.


  • Other additional features 

As shown in the table above, another nice feature is the ability to upload PDFs for extraction; this supplements the limitations of a tool's index and is clearly highly desirable.

Scite Assistant's dozens of options for controlling how answers are generated are also an interesting direction. For example, you can require that the citations come from a certain topic, a journal, or even an individual set of papers you specify.


  • Other non-technical factors
The usual non-technical factors for choosing systems apply, of course. These include user privacy (is the system training on your queries?), the sustainability of the system (what is their business model?), and so on.


A (non-comprehensive) list of general web search engines that use LLMs to generate answers

  1. Bing Chat
  2. Perplexity.ai
  3. You.com
Side note: some systems are chatbots that decide when to search, as opposed to Elicit and SciSpace, which are search engines that always search.

A (non-comprehensive) list of ChatGPT plugins that search academic papers - requires ChatGPT Plus (default is Bing Chat)



Tuesday, 3 October 2023

The Other AI: The impact of artificial intelligence on academic integrity.

 Source: https://today.ucsd.edu/story/the-other-ai

The Other AI

The impact of artificial intelligence on academic integrity.

Tricia Bertram Gallant emphasizes the importance of academic integrity and critical AI literacy.


This story was published in the Fall 2023 issue of UC San Diego Magazine.

Tricia Bertram Gallant, an expert on integrity and ethics in education and director of the Academic Integrity Office and Triton Testing Center at UC San Diego, shares her thoughts on artificial intelligence in the university setting.

1. What is the role of the Academic Integrity Office at UC San Diego?

The Academic Integrity Office promotes and supports a culture of integrity to reinforce quality teaching and learning. We train teaching assistants and faculty on how to prevent cheating and to establish a culture of integrity in their classes. And because of my background, I also advise faculty on creating assignments and writing syllabi and pedagogical choices. We work with students in terms of preventative education, but also after-education with students who have violated academic integrity to leverage the infraction as a teachable moment.

2. How do you think AI will change higher education? 

It will change everything. AI will allow us to teach things differently. In the past, students attended universities to access all the knowledge of the world, from the best minds and the best libraries. You don’t need to go anywhere now; you can access that information at home through the internet. Our physical, in-person universities need to be the place where students can be with other people, learn from each other, practice skills and find a mentor. The value of a university is in its people. 

3. How can AI support teaching at UC San Diego? 

Studies show that active, engaged classrooms lead to better learning outcomes. It’s exciting for me to think about the possibility that AI can free up faculty and support staff from designing, printing, distributing and grading exams so they can spend more time mentoring and coaching students. We can use AI to help faculty cognitively offload a whole bunch of things so they have more bandwidth to design highly relevant learning activities that captivate and inspire students, even in large lecture halls. It would allow us to offer an individualized and meaningful educational experience. I think AI will be the impetus to finally force higher education to change — to become the active, engaged learning environment that it was always meant to be. That it has to be.

4. Can UC San Diego students use ChatGPT and other AI-assisted technologies?

It’s up to the faculty and the learning objectives for their individual courses as to whether ChatGPT or other generative AI can be used. And that makes it complicated. But I ask the students: Did the professor say you could? If they didn’t, you need to ask, especially if your use of the technology will undermine the learning objectives of the course. For instance, if you’re in a Japanese class and you write something in English and give it to ChatGPT to translate it for you, well, that’s cheating. 

5. Should ChatGPT be integrated into coursework?

Yes, we should teach students how to properly use ChatGPT and other generative AI tools. They should acknowledge the use of the tool when submitting assignments. We should teach students critical AI literacy, including how it’s prompted and how they need to evaluate the information that comes from it. That will be a huge skill for our students, who will most likely utilize some sort of AI in their future workplace.

 

"I think AI will be the impetus to finally force higher education to change — to become the active, engaged learning environment that it was always meant to

Sunday, 1 October 2023

The CWTS Leiden Ranking 2023

 Source: https://www.leidenmadtrics.nl/articles/the-cwts-leiden-ranking-2023

The CWTS Leiden Ranking 2023

Today CWTS releases the 2023 edition of the CWTS Leiden Ranking. In this post, the Leiden Ranking team provides an update on ongoing developments related to the ranking.

Universities in the Leiden Ranking 2023

As Figure 1 shows, the number of universities in the Leiden Ranking keeps increasing. As in the last three editions of the ranking, a university needs at least 800 fractionally counted publications in the most recent four-year window to be included. This year 1411 universities meet this criterion, 93 more than last year and 235 more than in 2020.

Figure 1. Increase in the number of universities in the Leiden Ranking (2020-2023).


The universities in the Leiden Ranking 2023 are located in 72 countries. Figure 2 shows the number of universities by country. China has the largest number of universities in the Leiden Ranking (273), followed by the US (206), in line with the last three editions of the ranking.

Figure 2. Number of universities in the Leiden Ranking 2023 by country.

 

Three countries previously not represented now also have universities in the Leiden Ranking. These are Indonesia (Bandung Institute of Technology, Universitas Gadjah Mada, and University of Indonesia), Cameroon (University of YaoundΓ© I), and Kazakhstan (Nazarbayev University).

More Than Our Rank

At CWTS we are strongly committed to promoting responsible uses of university rankings. Almost 20 years ago, our former director Ton van Raan was one of the first experts expressing concerns about the fatal attraction of rankings of universities. By creating the Leiden Ranking and contributing to U-Multirank, we have introduced alternatives to simplistic one-dimensional rankings. We have also developed ten principles to guide the responsible use of university rankings.

Building on this longstanding commitment to responsible uses of university rankings, we are proud to be one of the initial supporters of More Than Our Rank, an initiative launched in October 2022 by the International Network of Research Management Societies (INORMS). By providing “an opportunity for academic institutions to highlight the many and various ways they serve the world that are not reflected in their ranking position”, More Than Our Rank is fully aligned with our principles for ranking universities responsibly (see Figure 3). We hope that many universities and other stakeholders will join this important initiative.

Figure 3. Why does CWTS support More Than Our Rank? (Slide from this presentation.)

What’s next - Making the Leiden Ranking more transparent

Being as transparent as possible is one of our principles for responsible university ranking. While the Leiden Ranking offers methodological transparency by documenting its methods in considerable detail, the Web of Science data on which the ranking is based (made available to us by Clarivate, the owner of Web of Science) is of a proprietary nature and cannot be shared openly. This limits the transparency and reproducibility of the Leiden Ranking. It is also in tension with the growing recognition of the importance of “independence and transparency of the data, infrastructure and criteria necessary for research assessment and for determining research impacts” (one of the principles of the Agreement on Reforming Research Assessment).

In the new strategic plan of CWTS, openness of research information is a top priority. Open data sources such as Crossref and OpenAlex offer exciting opportunities to produce bibliometric analytics in a fully transparent and reproducible way. We are currently working on an ambitious project in which we explore the use of open data sources to create a fully transparent and reproducible version of the Leiden Ranking. We expect to share the outcomes of this project later this year.

Let us know your feedback and ideas

As always, we appreciate your feedback on the Leiden Ranking and your ideas on ways to improve the ranking. Don’t hesitate to reach out!