Sunday 3 December 2023

Boosting University Rankings by Improving Research Visibility and Impact, Part 2

 Source: https://doi.org/10.6084/m9.figshare.24717672.v1


Authored by Nader Ale Ebrahim

 


Published on 2023-12-03

In the first part of our series, we discussed the importance of enhancing research visibility to climb the academic rankings ladder. As we move to Part 2, we delve further into the practical tactics and tools that can help universities and researchers showcase their work more effectively and increase their impact in the academic world.

The Art of Making Research Visible

Research, regardless of its quality, can often remain unseen in the vast ocean of academic publications. The key to overcoming this challenge is strategic visibility. This segment of the workshop, led by me, Nader Ale Ebrahim, focuses on actionable strategies and innovative 'Research Tools' designed to elevate the presence of your research in the academic community.

Advanced Strategies for Enhancing Visibility

  1. Creating Impactful Online Profiles: Building on the concept of online academic profiles, this part will cover advanced techniques to optimize these profiles, ensuring they capture the essence of the research and the researcher effectively.

  2. Networking and Collaborative Synergies: We'll dig deeper into creating meaningful collaborations and how these partnerships can lead to greater research visibility and higher citation rates.

  3. Selective and Strategic Publishing: Going beyond just choosing the right journals, this part will discuss how to leverage your research publications to establish authority and thought leadership in your field.

  4. Social Media as a Research Dissemination Tool: Harnessing the power of social media requires more than just sharing; it's about engaging with the audience, storytelling, and creating a narrative around your research.

Practical Insights and Tools

This workshop is grounded in practicality. It's not just about theory; it's about equipping you with tangible tools and strategies. We will delve into various 'Research Tools' that can be seamlessly integrated into your research dissemination strategy, ensuring that your work not only reaches but also resonates with your target audience.

Related Materials and Further Learning

To complement the workshop, we reference materials like "Elevating Research Visibility and Impact: Strategies for Izmir Institute of Technology (İYTE)" and "How to Elevate Research Visibility and Impact." These resources provide additional insights and are instrumental in understanding the broader context of our discussion.

Conclusion: A Path to Prominence

As we wrap up Part 2 of this series, our goal remains clear: to empower universities and researchers with the skills and knowledge to significantly increase the visibility and impact of their research. By applying these strategies, we can collectively contribute to academic success and elevate the standing of institutions in global rankings.

Stay tuned for more insights and tools that will help bring your research to the forefront of academic excellence.

 

Boosting University Rankings by Improving Research Visibility and Impact, Part 1

 Source: https://doi.org/10.6084/m9.figshare.24717675.v2


Authored by Nader Ale Ebrahim

Published on 2023-12-03


 

Universities are constantly striving to climb up the ladder in academic rankings, and one of the most effective ways to achieve this is by amplifying the visibility and impact of their research. In this era of information overload, even top-tier research can struggle to get the attention it deserves. This is where strategic dissemination of research plays a crucial role.

The Challenge of Getting Noticed

While groundbreaking studies, akin to those recognized by the Nobel Prize, naturally capture the spotlight, the reality is that most research does not operate at this echelon. The majority of scholarly work requires additional efforts to be seen and make a significant impact. This is where the challenge lies – how can universities enhance the visibility of their research to improve their standings in global academic rankings?

Practical Strategies for Enhancing Research Visibility

In this workshop, I, Nader Ale Ebrahim, a specialist in Research Visibility and Impact, will delve into practical strategies that can elevate the status of universities in academic rankings through enhanced research visibility. Here are some key areas we will explore:

  1. Creating Strong Online Academic Profiles: Establishing robust online profiles for universities and their researchers is a foundational step. Platforms like Google Scholar, ResearchGate, and LinkedIn play a pivotal role in showcasing academic work to a global audience.

  2. Collaborative Endeavors: Networking and collaborating with fellow researchers and institutions can significantly amplify the reach and impact of research. It fosters a sharing ecosystem, leading to increased citations and recognition.

  3. Strategic Publication and Dissemination: Choosing the right journals and conferences to publish research is critical. Open Access publishing and attending impactful conferences can boost visibility and citations.

  4. Leveraging Social Media: Social media platforms are powerful tools for disseminating research findings. They offer a direct channel to engage with a broader audience, including academia, industry, and the public.

The Workshop: A Gateway to Practical Solutions

This workshop is not just a theoretical exposition; it is packed with practical, easy-to-implement advice and tools designed to make your research stand out. I will introduce various 'Research Tools' that can aid researchers and universities in effectively showcasing their work, thus enhancing its visibility and impact.

Conclusion: Moving Up the Academic Ranks

The ultimate goal of this workshop is to equip universities and researchers with the knowledge and tools necessary to make their research more visible. By doing so, they can significantly improve their position in university rankings, contributing to the overall prestige and recognition of their academic endeavors.

Stay tuned for more insights and practical tips as we continue this journey in the realm of academic excellence.

 

 

Thursday 9 November 2023

Generative AI: Your Assistant as an Administrator or Faculty Member

 Source: https://www.insidehighered.com/opinion/blogs/online-trending-now/2023/11/08/generative-ai-your-assistant-administrator-or

November 08, 2023


Generative AI is quickly becoming a daily fixture in the lives of administrators and faculty. It enhances productivity, creativity and perspectives.

In writing this article, I sought the advice of Google Bard, Perplexity and Claude 2. In all of my research using generative AI, I use at least three of the established apps. This enables me to spot any responses that seem too far out of line or not credible. By spreading my research through multiple large language models, I can better ensure that I am not being led astray. Over time, this may not be necessary, but as the apps are being fine-tuned, I feel most comfortable being able to compare results.

Bard uses the PaLM 2 LLM. ChatGPT+ and Perplexity Copilot use versions of GPT-4. Claude 2 is powered by Anthropic’s proprietary LLM. Using multiple chatbots with different underlying large language models helps provide a diverse set of perspectives and responses to the same prompt. It only takes a minute to get a full response (even in the case of Google Bard, which, by default, gives three draft responses to each prompt). Then, using follow-up prompts, you can drill down for clarifications and citations.
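As a rough illustration of this compare-across-models workflow, here is a minimal Python sketch that sends one prompt to two providers' APIs and prints the answers side by side. It is an assumption for illustration, not the workflow used for this article (which relied on the web apps); the model names and environment setup are placeholders, and the same idea extends to any third provider.

```python
# Minimal sketch: send the same prompt to two different LLM providers and
# compare the answers. Assumes the `openai` and `anthropic` Python packages
# are installed and API keys are set in the environment; the model names
# are illustrative and may need updating.
from openai import OpenAI
import anthropic

PROMPT = ("I am writing an article about productive uses of generative AI "
          "for university deans and faculty. Give me examples of how these "
          "higher education workers could use generative AI.")

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
gpt_answer = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
claude_answer = anthropic_client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

# Reading divergent answers side by side makes it easier to spot
# responses that are "too far out of line or not credible".
for name, answer in [("GPT-4", gpt_answer), ("Claude", claude_answer)]:
    print(f"--- {name} ---\n{answer}\n")
```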

(Note: An earlier version of this piece included a section with examples and citations suggested by Google Bard about different applications of GenerativeAI at Stanford University, Carnegie Mellon University, and Georgia Institute of Technology. These examples and citations were misleading and have been removed.)

There are many ways in which we can make effective use of AI every day as higher education faculty and administrators. AI can give us help even with some of the most challenging problems we face. For example, the undergraduate dean at John Jay College, Dara N. Byrne, writes that when she started in her position, the senior graduation rate was 54 percent. She used predictive AI to identify students who were most likely to drop out. In a deep data dive, she teamed with DataKind, a nonprofit using data science and AI, to select students for proactive interventions. This resulted in increasing the senior graduation rate to 86 percent.

The prompt I submitted to each of the three apps was “I am writing an article about productive uses of generative AI. It is for an audience of university deans and faculty. This will be published in Inside Higher Ed. My goal is to give those who have less experience with generative AI a sense of how it might be useful in their everyday work. Give me examples how these higher education workers could use generative AI.”

Following are sample excerpts of some of the results from each of the three apps I consulted.

From a search of Claude 2 on Oct. 29:

Research and writing: Generative AI could help faculty and researchers quickly synthesize literature reviews, generate hypotheses, outline papers and proposals, and draft sections of manuscripts. It could save time and enhance productivity. However, human creativity and critical thinking are still essential.

Administrative tasks: For deans and university leaders, generative AI could help draft communications, generate data insights from reports, compile meeting agendas and minutes, and more. It could save time on logistical tasks to focus leadership efforts elsewhere.

Personalized learning: Generative AI tutors could provide customized explanations, practice, and feedback to students, adapting to their strengths and weaknesses. This could promote self-paced mastery and free up faculty time.

The key for higher education is using generative AI thoughtfully and ethically to augment human efforts. With proper oversight and care, it has much potential to enhance teaching, research, creativity, and productivity. But the human touch remains irreplaceable for true learning, discovery, and decision-making.

From Perplexity on Oct. 29:

Automating Tasks

Automated Grading: generative AI can be used to grade assignments, essays, and exams, freeing up faculty time to focus on other tasks. This can be especially useful for large classes where grading can be time-consuming and tedious.

Chatbots: generative AI–powered chatbots can be used to answer student questions, provide support, and even offer personalized recommendations. This can help reduce the workload of faculty and staff, while also improving the student experience.

Improving Student Outcomes

Personalized Learning: generative AI can be used to create personalized learning experiences for students, based on their individual needs and learning styles. This can help improve student outcomes and engagement.

Early Warning Systems: generative AI can be used to identify students who are at risk of dropping out or failing a course, allowing faculty to intervene early and provide support.

Enhancing Research

Data Analysis: generative AI can be used to analyze large datasets, such as those generated by scientific research. This can help researchers identify patterns and insights that might be missed by human analysis.

Natural Language Processing: generative AI can be used to analyze and understand large amounts of text, such as research papers, articles, and books. This can help researchers identify trends and insights that might be missed by human analysis.

As you see, the emphases of the results vary considerably. Of course, different prompts will produce different responses, and different apps will emphasize different interpretations of your prompt. Even an identical prompt in the same app will return different results if run again a few minutes later.

The primary key to success in using generative AI is to develop the prompt carefully. The second key is to use more than one generative AI app. A third is to use generative AI often, daily if possible. Through repetition, you will refine your prompt writing and learn which app is likely to give you the response you seek for the questions and tasks you have.

Monday 30 October 2023

A Guide to Using AI Tools to Summarize Literature Reviews

 Source: https://typeset.io/resources/using-ai-tools-to-summarize-literature-reviews

 

Sumalatha G

Needless to say, millions of scientific articles are published every year, making it difficult for a researcher to read and comprehend all the relevant publications.

Traditionally, researchers conducted literature reviews manually, sifting through hundreds of research papers to extract the information significant to their research.

Fast forward to 2023, and things look quite different, in a favorable way. With the advent of AI tools, the literature review process is streamlined, and researchers can summarize hundreds of research articles in mere moments, saving considerable time and effort.

This article covers the role of the top five AI tools for summarizing literature reviews, how AI is used as a powerful tool for summarizing scientific articles, and the impact of AI on academic research.

Understanding the Role of AI in Literature Reviews

Before we talk about the benefits of AI tools to summarize literature reviews, let’s understand the concept of AI and how it streamlines the literature review process.

Artificial intelligence tools are built on large language models and are designed to mimic human tasks such as problem-solving, decision-making, and pattern recognition. When artificial intelligence and machine learning algorithms are applied to literature reviews, they help process vast amounts of information, identify highly relevant studies, and generate quick, concise summaries (TL;DR summaries).

AI has revolutionized the process of literature review by assisting researchers with powerful AI-based tools to read, analyze, compare, contrast, and extract relevant information from research articles.

By using natural language processing algorithms, AI tools can effectively identify key concepts, main arguments, and relevant findings from multiple research articles at once. This helps researchers quickly get an overview of the existing literature on a topic, saving valuable time and effort.

Key Benefits of Using AI Tools to Summarize Literature Reviews


1. Best alternative to traditional literature review

Traditional literature reviews or manual literature reviews can be incredibly time-consuming and often require weeks or even months to complete. Researchers have to sift through myriad articles manually, read them in detail, and highlight or extract relevant information. This process can be overwhelming, especially when dealing with a large number of studies.

However, with the help of AI tools, researchers can greatly reduce the time and effort required to discover, analyze, and summarize relevant studies. AI tools, with their NLP and machine learning algorithms, can quickly analyze multiple research articles and generate succinct summaries. This not only improves efficiency but also allows researchers to focus on the core analysis and interpretation of the compiled insights.

2. AI tools aid in swift research discovery!

AI tools also help researchers save time in the discovery phase of literature reviews. These AI-powered tools use semantic search to identify relevant studies that might go unnoticed in traditional literature review methods. They can also analyze keywords, citations, and other metadata to suggest pertinent articles that align well with the researcher’s search query.

3. AI tools help you stay up to date with the latest research!

Another advantage of using AI-powered tools in literature reviews is their ability to handle the ever-increasing volume of published scientific research. With the exponential growth of scientific literature, it has become increasingly challenging for researchers to keep up with the latest scientific research and biomedical innovations.

However, AI tools can automatically scan and discover new publications, ensuring that researchers stay up-to-date with the most recent developments in their field of study.

4. Improves efficiency and accuracy of Literature Reviews

The use of AI tools in literature reviews reduces the human errors that can occur during traditional review or manual document summarization. By minimizing these errors, literature review AI tools improve the overall efficiency and accuracy of literature reviews and ensure that researchers can access relevant information promptly.

List of AI Tools to Summarize Literature Reviews

Several AI-powered tools are available to summarize literature reviews. They use advanced algorithms and natural language processing techniques to analyze and summarize lengthy scientific articles.

Let's take a look at some of the most popular AI tools to summarize literature reviews.

  • SciSpace Literature Review
  • Semantic Scholar
  • Paper Digest
  • SciSummary
  • Consensus

SciSpace Literature Review

SciSpace Literature Review is an effective and efficient AI-powered tool that streamlines the literature review process and summarizes multiple research articles at once. Once you enter a keyword, research topic, or question, it starts your literature review by surfacing instant insights from the five most relevant papers.

These insights are backed by citations that let you refer to the source. All the relevant papers appear in an easy-to-digest tabular format, with each section of a paper explained in its own column. You can also customize the table by adding or removing columns according to your research needs; this is a unique feature of this literature review AI tool.

SciSpace Literature Review stands out among AI tools for summarizing literature reviews by providing concise TL;DR text and summaries for every section of a research paper. This makes the review process easier for any researcher, who can comprehend more research papers in less time.

Try SciSpace Literature Review now!


Semantic Scholar


Semantic Scholar is an AI-powered search engine that helps researchers find relevant research papers based on a keyword or research topic. It works similarly to Google Scholar, helping you discover and understand scientific research by surfacing suitable papers. The database holds over 200 million research articles, and you can filter the results by field of study, author, date of publication, and journal or conference.

They have recently released Semantic Reader, an AI-powered tool that enhances the reading process for scientific readers. It is currently available in beta.

Try Semantic Scholar here

Paper Digest


Paper Digest is another valuable AI-powered summarizer that distills the core insights of a research paper in a few minutes. The tool is straightforward: input the article URL or DOI and click “Digest” to get the summaries. It is free and currently in beta.

You can access Paper Digest here!

SciSummary


SciSummary is a go-to AI tool for summarizing literature that condenses articles in seconds. It uses the natural language processing models GPT-3.5 and GPT-4 to generate concise summaries: upload the document to the dashboard or send the article link via email, and your summaries will be generated and delivered to your inbox. It is particularly helpful for reading and understanding lengthy, complicated research papers. Pricing plans (free and premium) start at $4.99/month, so you can choose according to your needs.

You can access SciSummary here

Consensus


Consensus is another AI-powered summarizer and academic search engine that uses artificial intelligence techniques to help you discover and extract key points from research papers instantly. Like Semantic Scholar, it has a vast repository of 200 million peer-reviewed scientific articles, spanning the social sciences, computer science, economics, the medical sciences, and more!

Consensus helps you extract key findings, summaries, the methods used in the research, and other components of the results. You can conduct research or literature reviews on Consensus by entering keywords, research topics, or open-ended questions. It has pricing plans ranging from free to enterprise.

Try Consensus here!

Now that we have an understanding of the role of AI in literature reviews and the different AI tools available, let's delve into the process of using AI tools for literature reviews.

Step-by-Step Guide to Using AI Tools to Summarize Literature Reviews

Here’s a short step-by-step guide to generating summaries with AI tools; a minimal code sketch of steps 2-4 follows the list.

  1. Select the AI-powered tool that best suits your research needs.
  2. Once you've chosen a tool, you must provide input, such as an article link, DOI, or PDF, to the tool.
  3. The AI tool will then process the input using its algorithms and techniques, generating a summary of the literature.
  4. The generated summary will contain the most important information, including key points, methodologies, and conclusions in a succinct format.
  5. Review and assess the generated summaries to ensure accuracy and relevance.
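For readers comfortable with scripting, steps 2-4 can also be approximated with a general-purpose LLM API. The sketch below is an illustrative assumption (the `openai` package, a plain-text copy of the paper, a placeholder model name and file name), not how any of the tools above works internally.

```python
# Minimal sketch of steps 2-4: feed an article to an LLM and get back a
# structured summary. Assumes the `openai` package is installed, an
# OPENAI_API_KEY is set, and the paper is already available as plain text;
# the model name and file name are illustrative.
from openai import OpenAI

def summarize_article(article_text: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You summarize scientific articles. Return key "
                        "points, methodology, and conclusions, each as a "
                        "short bulleted section."},
            # Truncate to stay within the model's context limit.
            {"role": "user", "content": article_text[:12000]},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("paper.txt", encoding="utf-8") as f:
        print(summarize_article(f.read()))

# Step 5 still applies: always review the generated summary against the
# original paper for accuracy and relevance.
```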

Challenges of using AI tools for summarization

AI tools are designed to generate precise summaries; however, they may sometimes miss important facts or misinterpret specific information.

Here are the potential challenges and risks researchers should be wary of when using AI tools to summarize literature reviews!

1. Lack of contextual intelligence

AI-powered tools cannot guarantee that they completely understand the context of a research paper, which can lead to inaccurate or misleading summaries, particularly for closely related papers.

To combat this, researchers should provide additional context in the AI prompt or use AI tools with more advanced models that can better handle the complexities of research papers.

2. AI tools cannot ensure foolproof summaries

While AI tools can immensely speed up the summarization process, they may not capture the complete essence of a research paper or accurately decode complex concepts.

Therefore, AI tools should be treated as technological aids rather than replacements for human analysis and understanding of key information.

3. Potential bias in the generated summaries

AI-powered tools are trained largely on existing data, and if the training data is biased, the generated summaries can end up biased as well.

Researchers should be cautious and ensure that the training data is diverse and representative of various sources, different perspectives, and research domains.

4. Quality of the input article affects the summary output

The quality of the input article also has a direct effect on the accuracy of the generated summaries.

If the input article is poorly written or contains errors, the AI tool might not be able to generate coherent and accurate summaries. Researchers should select high-quality academic papers and articles to obtain reliable and informative summaries.

Concluding!

AI summarization tools have a substantial impact on academic research. By leveraging AI tools, researchers can streamline the literature review process, enabling them to stay up-to-date with the latest advancements in their field of study and make informed decisions based on a comprehensive understanding of current knowledge.

By understanding the role of AI tools in summarizing literature reviews, exploring the different summarization tools available, following a systematic review process, and assessing the impact of these tools on their work, researchers can harness AI to enhance their literature review processes.

If you are keen to explore an AI-powered tool for summarizing literature, head over to SciSpace Literature Review and start analyzing research papers right away.

Wednesday 25 October 2023

E-Research Tools for Maximizing Research Visibility and Impact

 Source: https://doi.org/10.6084/m9.figshare.24433723.v1

In the ever-evolving landscape of academic research, librarians and researchers are stepping into roles that extend beyond traditional boundaries. They are the torchbearers of academic excellence, and the key to their success lies in the harnessing of cutting-edge technology, particularly Artificial Intelligence (AI). Join us in an exploration of the transformative power of AI in our upcoming talk at the WITS OPEN RESEARCH SERIES.

Sunday 15 October 2023

Introduction to Write a Bibliometric Paper: Unveiling the Power of Research Tools for Literature Search, Paper Writing, and Journal Selection

 Source: https://doi.org/10.6084/m9.figshare.24312574.v1

🎯 Unlock the secrets of writing a powerful bibliometric paper with Nader Ale Ebrahim! 

Explore the potential of research tools for literature search, crafting compelling papers, and selecting the perfect journals. 📚🖋️ 

Don't miss his illuminating presentation: 👉 https://doi.org/10.6084/m9.figshare.24312574.v1 

#Research #Bibliometrics #AcademicWriting #ResearchTools 🌟🔍


Introduction to Maximizing Research Visibility and Impact: Strategies for Al-Kut University College

 Source: https://doi.org/10.6084/m9.figshare.24312652.v1
🌟 Unlock the secrets of enhancing research visibility and impact! 

Join Nader Ale Ebrahim where he shares strategies tailored for Al-Kut University College. 

Discover the power of research tools and strategies to make your work shine. 

Check it out here: 👉 https://doi.org/10.6084/m9.figshare.24312652.v1

 #ResearchVisibility #Impact #AcademicSuccess 🚀📊


Wednesday 4 October 2023

List of academic search engines that use Large Language models for generative answers and some factors to consider when using

 Source: https://musingsaboutlibrarianship.blogspot.com/2023/09/list-of-academic-search-engines-that.html

List of academic search engines that use Large Language models for generative answers

This is a non-comprehensive list of academic search engines that use generative AI (almost always large language models) to generate direct answers on top of a list of relevant results, typically using Retrieval Augmented Generation (RAG) techniques. We expect many more!

This technique involves grounding the generated answer by using a retriever to find text chunks or sentences (also known as context) that may answer the question. 
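To make the retrieve-then-generate idea concrete, here is a minimal sketch of the retrieval half of such a pipeline: embed a handful of text chunks, find those most similar to the question, and assemble a grounded prompt with numbered citations. The `sentence-transformers` model and the toy three-chunk corpus are assumptions for illustration; none of the systems listed below necessarily works this way.

```python
# Minimal RAG-retrieval sketch: embed text chunks, retrieve the ones most
# similar to the question, and build a grounded prompt with citations.
# Assumes `sentence-transformers` and `numpy` are installed; the model
# name and the toy corpus are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = [
    "Open access articles received 18% more citations in our sample.",
    "We found no citation advantage after controlling for self-selection.",
    "The study surveyed 120 librarians about discovery-tool usage.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

question = "Is there an open access citation advantage?"
q_vec = model.encode([question], normalize_embeddings=True)[0]

# Cosine similarity (dot product of normalized vectors), best first.
scores = chunk_vecs @ q_vec
top = np.argsort(-scores)[:2]

# The retrieved chunks become the "context" that grounds the generated
# answer; an LLM would be prompted with them and asked to cite [1], [2].
context = "\n".join(f"[{i + 1}] {chunks[idx]}" for i, idx in enumerate(top))
prompt = f"Answer using only these sources, citing them:\n{context}\n\nQ: {question}"
print(prompt)
```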

Besides generating direct answers with citations, it seems to me this new class of search engine often, but not always, does the following:

a) Use Semantic Search (as opposed to Lexical search)
b) Use the ability of large language models to extract information from papers, such as "method", "limitations", and "region", and display it in a literature review matrix format

For more, see my recorded talk, The possible impact of AI on search and discovery (July 2023).

The overview below is current as of 28 September 2023.

  • Elicit.com / old.elicit.org
    Sources: Semantic Scholar. LLM used: OpenAI GPT models and other open-source LLMs. Upload your own PDF: Yes. Literature review matrix: Yes. Other features: list-of-concepts search.

  • Consensus
    Sources: Semantic Scholar. LLM used: GPT-4 for summaries. Upload your own PDF: No. Literature review matrix: No (has the Consensus Meter).

  • scite.ai assistant
    Sources: open scholarly metadata and citation statements from selected partners. LLM used: "We use a variety of language models depending on situation." GPT-3.5 (generally), GPT-4 (enterprise clients), Claude Instant (fallback). Upload your own PDF: No. Literature review matrix: No. Other features: summaries include text from citation statements; many options to control what is being cited.

  • SciSpace
    Sources: unknown. LLM used: unknown. Upload your own PDF: Yes. Literature review matrix: Yes.

  • Zeta Alpha (R&D in AI)
    Sources: mostly computer science content only. LLM used: OpenAI GPT models. Upload your own PDF: No. Literature review matrix: N/A. Other features: ability to turn semantic/neural search on and off; document visualization map showing semantic similarity, with autogenerated cluster labels.

  • Core-GPT / technical paper (unreleased?)
    Sources: CORE. LLM used: GPT-4. Upload your own PDF: No. Literature review matrix: No.

  • Scopus.ai (closed beta)
    Sources: Scopus index. LLM used: ? Upload your own PDF: No. Literature review matrix: No. Other features: graphical representation to see connections between keywords.

  • Dimensions AI assistant (closed beta)
    Sources: Dimensions index. LLM used: Dimensions General Sci-BERT and OpenAI's ChatGPT. Upload your own PDF: No. Literature review matrix: N/A. Other features: provides a TL;DR.




Technical aspects to consider

  • What is the source used for the search engine?
A lot of these tools currently use Semantic Scholar, OpenAlex, arXiv, etc., which are basically open scholarly metadata and open access full-text sources. Open scholarly metadata is quite comprehensive; however, using open access full text only may lead to unknown biases.

Scite.ai probably has the biggest advantage here, given it also has some paywalled full text (technically, citation statements only) from publisher partners.

That said, you cannot assume that just because the source includes full text, it is being used for extraction.

For example, Dimensions and Elicit, which do have access to full text, do not appear to be using it for direct answers at present. For technical or perhaps legal reasons, their direct answers are extracted only from abstracts. This is unlike Scite assistant, which does cite text beyond abstracts.

Elicit does seem to use the available full text (open access) for generating the literature review matrix.

  • Are there ways for users to check/verify the accuracy of the generated direct answer, or the extracted information in the literature review matrix?
RAG-type systems ensure that the citations made are always "real" citations found in their search index; however, there is no guarantee that the generated statement is supported by the citation.

In my view, a basic feature such systems should have is a feature to make it easy to check the accuracy of the answers generated.

When a sentence is followed by a citation, typically the whole paper isn't being cited. The system grounds its answer on a sentence or two from the paper. The best systems, like Elicit or scite assistant, make it easy to see which extracted sentences/contexts were used to support the answer. This can be done via mouseover (scite assistant) or with highlights (Elicit).


  • How accurate are the generated direct answers and/or extracted information in the literature review matrix in general?
Features that allow users to check, verify answers are great, but even better is if the system can provide some scores to give users a sense of how generally reliable the results are over a large number of examples.

One way to measure citation accuracy is via citation precision and recall scores. However, such scores only measure whether the cited source supports the generated statement; they do not measure whether the generated statements actually answer the question!

A more complete solution is based on the ragas framework, which measures four aspects of the generated answer.

The first two relate to the generation part of the pipeline:
  • Faithfulness - measures how consistent the generated answer is with the retrieved contexts. This is done by checking whether the claims in the generated answer can be deduced from the context.
  • Answer relevancy - measures whether the generated answer tries to address the question. It does not check whether the answer is factually correct (faithfulness covers that), so there may be a tradeoff between these first two.
The second two relate to the retrieval part of the pipeline, i.e. they measure how good the retrieval is:

  • Context precision - looks at whether the retriever consistently finds contexts that are relevant to the answer, such that most of the citations retrieved are relevant.
  • Context recall - the converse of context precision: is the system able to retrieve most of the contexts that might answer the question?
The final score could be a harmonic mean of all four scores.
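As a small worked example of that aggregation, the snippet below combines four invented component scores with a harmonic mean, which drags the overall score toward the weakest component; the numbers are made up for illustration.

```python
# Toy aggregation of ragas-style component scores with a harmonic mean.
# The harmonic mean pulls the overall score toward the weakest component,
# so one bad aspect (e.g. poor context recall) cannot be hidden by the
# other three. Scores here are invented for illustration.
from statistics import harmonic_mean

scores = {
    "faithfulness": 0.90,
    "answer_relevancy": 0.85,
    "context_precision": 0.80,
    "context_recall": 0.40,  # retrieval misses many useful contexts
}

print(f"arithmetic mean: {sum(scores.values()) / 4:.2f}")       # 0.74
print(f"harmonic mean:   {harmonic_mean(scores.values()):.2f}")  # 0.66
```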

It would be good if systems generated these statistics so users could gauge their reliability, though as of this writing none of the academic search systems has released such evaluations.


  • How generative AI features are integrated into the search, and how that affects how you should search

We are still in the very early days of search plus generative AI. It's unclear how such features will be integrated into search.

There are also dozens of ways to combine RAG/generative AI with search, either at inference time or even at the pretraining stage:
  • How does the query get converted to match the retrieved contexts? Some examples:
    • It could do a simple form of keyword matching
    • It could prompt the language model to come up with a search strategy, which is then used
    • It could convert the query into an embedding and match it against pre-indexed embeddings of documents/text
  • How do you combine the retrieved contexts with the LLM (large language model)?
How it is implemented can lead to different optimal ways of searching. 

For example, say you are looking for papers on whether there is an open access citation advantage. Should you search like...

1. Keyword Style - Open Access citation advantage

2. Natural Language style - Is there an Open Access citation advantage?

3. Prompt engineering style - You are a top researcher in the subject of Scholarly communication. Write a 500 word essay on the evidence around Open Access citation advantage with references


Not all methods work equally well (or at all) across these systems, even those based on RAG; e.g., Elicit works for styles 1 and 2 but not 3, while scite assistant works for all three, even #3.


  • Additional features

As shown in the overview above, other nice features exist; in particular, the ability to upload PDFs for extraction, to supplement the limitations of a tool's index, is clearly highly desirable.

Scite assistant's dozens of options for controlling how answers are generated are also an interesting direction. For example, you can specify that citations must come from a certain topic, a certain journal, or even an individual set of papers you choose.


  • Other non-technical factors
The usual non-technical factors in choosing a system apply, of course. These include user privacy (is the system training on your queries?), the sustainability of the system (what is its business model?), and so on.


A (non-comprehensive) list of general web search engines that use LLMs to generate answers

  1. Bing Chat
  2. Perplexity.ai
  3. You.com
Side note: some systems are chatbots that may decide to search when necessary, as opposed to Elicit and SciSpace, which are search engines that always search.

A (non-comprehensive) list of ChatGPT plugins that search academic papers - requires ChatGPT Plus (default is Bing Chat)



Tuesday 3 October 2023

The Other AI: The impact of artificial intelligence on academic integrity.

 Source: https://today.ucsd.edu/story/the-other-ai

The Other AI

The impact of artificial intelligence on academic integrity.

Tricia Bertram Gallant emphasizes the importance of academic integrity and critical AI literacy.


This story was published in the Fall 2023 issue of UC San Diego Magazine.

Tricia Bertram Gallant, an expert on integrity and ethics in education and director of the Academic Integrity Office and Triton Testing Center at UC San Diego, shares her thoughts on artificial intelligence in the university setting.

1. What is the role of the Academic Integrity Office at UC San Diego?

The Academic Integrity Office promotes and supports a culture of integrity to reinforce quality teaching and learning. We train teaching assistants and faculty on how to prevent cheating and to establish a culture of integrity in their classes. And because of my background, I also advise faculty on creating assignments and writing syllabi and pedagogical choices. We work with students in terms of preventative education, but also after-education with students who have violated academic integrity to leverage the infraction as a teachable moment.

2. How do you think AI will change higher education? 

It will change everything. AI will allow us to teach things differently. In the past, students attended universities to access all the knowledge of the world, from the best minds and the best libraries. You don’t need to go anywhere now; you can access that information at home through the internet. Our physical, in-person universities need to be the place where students can be with other people, learn from each other, practice skills and find a mentor. The value of a university is in its people. 

3. How can AI support teaching at UC San Diego? 

Studies show that active, engaged classrooms lead to better learning outcomes. It’s exciting for me to think about the possibility that AI can free up faculty and support staff from designing, printing, distributing and grading exams so they can spend more time mentoring and coaching students. We can use AI to help faculty cognitively offload a whole bunch of things so they have more bandwidth to design highly relevant learning activities that captivate and inspire students, even in large lecture halls. It would allow us to offer an individualized and meaningful educational experience. I think AI will be the impetus to finally force higher education to change — to become the active, engaged learning environment that it was always meant to be. That it has to be.

4. Can UC San Diego students use ChatGPT and other AI-assisted technologies?

It’s up to the faculty and the learning objectives for their individual courses as to whether ChatGPT or other generative AI can be used. And that makes it complicated. But I ask the students: Did the professor say you could? If they didn’t, you need to ask, especially if your use of the technology will undermine the learning objectives of the course. For instance, if you’re in a Japanese class and you write something in English and give it to ChatGPT to translate it for you, well, that’s cheating. 

5. Should ChatGPT be integrated into coursework?

Yes, we should teach students how to properly use ChatGPT and other generative AI tools. They should acknowledge the use of the tool when submitting assignments. We should teach students critical AI literacy, including how it’s prompted and how they need to evaluate the information that comes from it. That will be a huge skill for our students, who will most likely utilize some sort of AI in their future workplace.

 

"I think AI will be the impetus to finally force higher education to change — to become the active, engaged learning environment that it was always meant to

Sunday 1 October 2023

The CWTS Leiden Ranking 2023

 Source: https://www.leidenmadtrics.nl/articles/the-cwts-leiden-ranking-2023

The CWTS Leiden Ranking 2023

Today CWTS releases the 2023 edition of the CWTS Leiden Ranking. In this post, the Leiden Ranking team provides an update on ongoing developments related to the ranking.

Universities in the Leiden Ranking 2023

As Figure 1 shows, the number of universities in the Leiden Ranking keeps increasing. As in the last three editions of the ranking, a university needs at least 800 fractionally counted publications in the most recent four-year window to be included. This year 1411 universities meet this criterion, 93 more than last year and 235 more than in 2020.

Figure 1. Increase in the number of universities in the Leiden Ranking (2020-2023).
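For readers unfamiliar with fractional counting, here is a toy sketch of the idea: each publication is divided among universities in proportion to their share of the author affiliations. This is a simplification for illustration; the exact rules are described in the Leiden Ranking's methodology documentation.

```python
# Toy illustration of fractional counting. Each paper contributes to each
# university in proportion to that university's share of the author
# affiliations, so contributions sum to 1 per paper. This is a
# simplification of the Leiden Ranking's actual method.
from collections import Counter

def fractional_counts(papers):
    """papers: one affiliation list per paper, one entry per author."""
    totals = Counter()
    for affiliations in papers:
        share = 1 / len(affiliations)
        for uni in affiliations:
            totals[uni] += share
    return totals

# One paper with 3 authors: two from Uni A, one from Uni B.
# Uni A is credited with 2/3 of a publication, Uni B with 1/3.
print(fractional_counts([["Uni A", "Uni A", "Uni B"]]))
```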


The universities in the Leiden Ranking 2023 are located in 72 countries. Figure 2 shows the number of universities by country. China has the largest number of universities in the Leiden Ranking (273), followed by the US (206), in line with the last three editions of the ranking.

Figure 2. Number of universities in the Leiden Ranking 2023 by country.

 

Three countries previously not represented now also have universities in the Leiden Ranking. These are Indonesia (Bandung Institute of Technology, Universitas Gadjah Mada, and University of Indonesia), Cameroon (University of Yaoundé I), and Kazakhstan (Nazarbayev University).

More Than Our Rank

At CWTS we are strongly committed to promoting responsible uses of university rankings. Almost 20 years ago, our former director Ton van Raan was one of the first experts to express concerns about the "fatal attraction" of university rankings. By creating the Leiden Ranking and contributing to U-Multirank, we have introduced alternatives to simplistic one-dimensional rankings. We have also developed ten principles to guide the responsible use of university rankings.

Building on this longstanding commitment to responsible uses of university rankings, we are proud to be one of the initial supporters of More Than Our Rank, an initiative launched in October 2022 by the International Network of Research Management Societies (INORMS). By providing “an opportunity for academic institutions to highlight the many and various ways they serve the world that are not reflected in their ranking position”, More Than Our Rank is fully aligned with our principles for ranking universities responsibly (see Figure 3). We hope that many universities and other stakeholders will join this important initiative.

Figure 3. Why does CWTS support More Than Our Rank? (Slide from this presentation.)

What’s next - Making the Leiden Ranking more transparent

Being as transparent as possible is one of our principles for responsible university ranking. While the Leiden Ranking offers methodological transparency by documenting its methods in considerable detail, the Web of Science data on which the ranking is based (made available to us by Clarivate, the owner of Web of Science) is of a proprietary nature and cannot be shared openly. This limits the transparency and reproducibility of the Leiden Ranking. It is also in tension with the growing recognition of the importance of “independence and transparency of the data, infrastructure and criteria necessary for research assessment and for determining research impacts” (one of the principles of the Agreement on Reforming Research Assessment).

In the new strategic plan of CWTS, openness of research information is a top priority. Open data sources such as Crossref and OpenAlex offer exciting opportunities to produce bibliometric analytics in a fully transparent and reproducible way. We are currently working on an ambitious project in which we explore the use of open data sources to create a fully transparent and reproducible version of the Leiden Ranking. We expect to share the outcomes of this project later this year.

Let us know your feedback and ideas

As always, we appreciate your feedback on the Leiden Ranking and your ideas on ways to improve the ranking. Don’t hesitate to reach out!


Wednesday 20 September 2023

Check out #Scholia, an amazing research tool that creates visual scholarly profiles for a variety of topics, people, organizations, species, chemicals, and more!

 Source: https://scholia.toolforge.org/author/Q57412737

Check out #Scholia, an amazing research tool that creates visual scholarly profiles for a variety of topics, people, organizations, species, chemicals, and more!
This free service uses bibliographic and other information in Wikidata to provide users with comprehensive profiles that are both informative and visually appealing.
Although the data sets may be incomplete, Scholia is a great resource for anyone looking to conduct thorough research.
Give it a try!
https://lnkd.in/eqYzQYhv
#researchtool #scholarlyprofiles #wikidata #freeservice
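For the curious, profiles like this are built from live SPARQL queries against Wikidata. The sketch below reproduces the spirit of Scholia's publications-per-year panel with a hand-written query against the public Wikidata endpoint; the query and headers are illustrative assumptions, not Scholia's actual code.

```python
# Illustrative sketch: count publications per year for a Wikidata author,
# in the spirit of Scholia's "publications per year" panel. Assumes the
# `requests` package; the query and User-Agent are illustrative.
import requests

QUERY = """
SELECT ?year (COUNT(?work) AS ?count) WHERE {
  ?work wdt:P50 wd:Q57412737 ;   # works whose author (P50) is this person
        wdt:P577 ?date .         # publication date (P577)
  BIND(YEAR(?date) AS ?year)
}
GROUP BY ?year
ORDER BY ?year
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "scholia-demo/0.1 (example)"},  # Wikidata asks for a UA
    timeout=60,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["year"]["value"], row["count"]["value"])
```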

Nader Ale Ebrahim (Q57412737)

The profile page presents a list of publications, charts of publications and pages per year, and topic scores (based on a weighting between fields of work, topics of authored works, and topics of citing works), each generated from a SPARQL query over Wikidata.