Tuesday 23 May 2023

How to Improve Research Visibility and Impact?

 Source: https://letsview.com/chatbot

Improving research visibility and impact is a crucial aspect of academic work. Dr. Nader Ale Ebrahim, a renowned researcher and academic, has developed several tools and techniques that can help researchers to improve their research visibility and impact. Here are some of the strategies that Dr. Ale Ebrahim recommends:

  1. Publish in high-impact journals: One of the most effective ways to improve research visibility and impact is to publish in high-impact journals. Dr. Ale Ebrahim recommends that researchers target journals that have a high impact factor and a broad readership.

  2. Use social media: Social media can be a powerful tool for promoting research and increasing its visibility. Dr. Ale Ebrahim recommends that researchers use social media platforms such as Twitter, LinkedIn, and ResearchGate to share their research and engage with other researchers in their field.

  3. Develop a strong online presence: In addition to social media, researchers should also develop a strong online presence by creating a personal website or blog. This can help to increase visibility and make it easier for other researchers to find and cite their work.

  4. Collaborate with other researchers: Collaborating with other researchers can help to increase research visibility and impact. Dr. Ale Ebrahim recommends that researchers seek out opportunities to collaborate with other researchers in their field, both within their institution and outside of it.

  5. Use research tools: Dr. Ale Ebrahim has developed several research tools that can help researchers to improve their research visibility and impact. These include the "Research Tools Box" and the "Research Methodology Canvas," which can help researchers to plan and execute their research projects effectively and efficiently.

In summary, improving research visibility and impact requires a combination of strategies, including publishing in high-impact journals, using social media and other online platforms, collaborating with other researchers, and using research tools. Dr. Nader Ale Ebrahim's teaching materials, including his research tools and techniques, can be found on his personal website and in his published research papers.

Write (LetsView Chat) an article about Dr. Nader Ale Ebrahim.

 Source: https://letsview.com/chatbot


Dr. Nader Ale Ebrahim is a renowned researcher and academic who has made significant contributions to the field of engineering and technology. He is widely recognized for his expertise in research methodology, virtual teams, and knowledge management. Dr. Ale Ebrahim holds a Ph.D. in mechanical engineering from the University of Malaya and has published numerous research papers in international journals.

One of Dr. Ale Ebrahim's most significant contributions is his work on virtual teams. He has conducted extensive research on how virtual teams can be managed effectively, and his findings have been published in several high-impact journals. His research has helped organizations around the world to improve their virtual team management practices and has contributed to the development of new tools and techniques for managing remote teams.

Dr. Ale Ebrahim is also known for his work on research methodology. He has developed several innovative tools and techniques for conducting research, including the "Research Tools Box" and the "Research Methodology Canvas." These tools help researchers to plan and execute their research projects effectively and efficiently, saving them time and resources.

In addition to his research work, Dr. Ale Ebrahim is also a dedicated educator. He has taught courses on engineering and research methodology at several universities around the world and has supervised numerous graduate students. His teaching has inspired many students to pursue careers in engineering and research, and he is widely respected for his expertise and dedication to his students.



Overall, Dr. Nader Ale Ebrahim is a highly respected researcher, academic, and educator who has made significant contributions to the field of engineering and technology. His work on virtual teams and research methodology has had a profound impact on the way organizations and researchers approach these areas, and his dedication to education has inspired many students to pursue careers in these fields.

You have been warned: A breakthrough in rankings is coming

 Source: https://www.universityworldnews.com/post.php?story=20230519155707244


In a tweet sent from Tashkent, Uzbekistan, after the closing of the IREG Observatory on Academic Ranking and Excellence conference I wrote: “IREG 2023 clearly articulated that in the coming years nothing in the world of rankings will be the same. The main word of the conference was ‘breakthrough’. We discussed where and when it will happen, and we looked for its first signs…”

Since university rankings have been a hot issue in higher education debates for a long time and several people have asked me about this future breakthrough, I will try to explain what I mean.

A view from Central Asia

Far away from the traditional ranking conference venues, Tashkent turned out to be an excellent site for the IREG 2023 Conference. Uzbekistan's cities of Samarkand and Bukhara were once a key link on the Silk Road, and today the country is developing fast, having made education and science the basis of its modernisation efforts.

The annual IREG conferences are the world’s only neutral place where rankers, higher education experts and analysts as well as universities – often represented by rectors – meet.

The issues discussed there often have a direct impact on rankings and their standards.

Judging by the feedback, IREG 2023 was a creative and refreshing event. Here are three characteristic comments:

• Laylo Shokhabiddinova, head specialist of the international rankings department at Tashkent State Transport University, said: “From around the world, we have come to share our experiences, ideas and insights on ranking in higher education. It has been an eye-opening experience for me, and I am grateful for the chance to connect and collaborate with such an esteemed group of professionals. As universities continue to navigate a rapidly changing landscape, it is more important than ever to stay connected and learn from each other.”

• Alex Usher, president of Higher Education Strategy Associates, Canada, stated: “One of the most interesting ranking discussions I’ve heard in years. The difference in discourse around rankings after leaving a rich country is huge – here there is much less about marketing and much more about system control or benchmarking.”

• Komiljon Karimov, first deputy minister of higher education, science and innovation in Uzbekistan, commented: “I often participate in various conferences and seminars, but I have not met such a substantive and engaged discussion for many years.”

The conference in Tashkent showed how pragmatic and hopeful expectations about rankings are among universities and governments of countries outside Europe and North America.

They need rankings as a tool to monitor the implementation of higher education reforms and to improve the quality of education, not for the sake of prestige. But this aspect has already been analysed by Usher in his highly recommended blog post “Rankings Discourses: West, East and South”.

Historical background

So, what new trends and ideas emerged in Tashkent about the global rankings landscape? To properly interpret new trends, we need to go back to the turn of the century and the beginning of the era of massification and globalisation of higher education. The UNESCO World Conference of 1998 was not able to properly describe this phenomenon. There was simply no comparable data available. It’s hard to believe, but the situation has not changed much since.

Concern about this state of affairs was sounded by Philip Altbach 10 years ago in a University World News article, “Long-term thinking needed in higher education”, in which he regretted that none of the global or regional organisations with the necessary potential and prestige (the UN, UNESCO and the OECD) had taken the necessary action on data collection.

He even went so far as to say that they had “abdicated” responsibility from the roles they should perform. He wrote: “There is a desperate need for ongoing international debate, discussion and regular data collection on higher education. At present, we have only a fragmented picture at best.”

Similarly unsuccessful in setting up an updated database was the European Union. EU-financed projects such as the European Tertiary Education Register (ETER) were undertaken but never fully implemented. This is why rankings, first national, then international, have been a discovery, introducing ‘countability’ to higher education.

The first conference of an international group of ranking experts (from which IREG was born) was in Warsaw in June 2002. There were no international rankings yet, but national rankings (including US News Best Colleges in the USA, Maclean’s in Canada, CHE in Germany and Perspektywy in Poland) were such a fascinating phenomenon that Jan Sadlak from UNESCO-CEPES and Andrzej Kozminski, rector of Kozminski University, invited a group of ranking creators to a debate in order to exchange information. I spoke there about Perspektywy’s first 10 years in ranking.

The next meeting, in Washington DC, in 2004, included Nian Cai Liu with his brand new Shanghai Ranking. At the 2006 meeting in Berlin, the IREG group adopted the famous “Berlin Principles on Ranking of Higher Education Institutions” which introduced quality standards for rapidly emerging new rankings.

Had UNESCO or any other international organisation been able to collect data globally and annually update reliable information on higher education, international rankings would probably never have gained such momentum and importance. But politicians, or rather bureaucrats, have failed, and not for the first time.

As a result, the short history of modern rankings (40 years of national rankings and 20 of international ones) has seen them gain an inflated role which they did not initially aspire to. It is a fascinating history of exploration, confusion and rivalry.

Don’t break the thermometer

We all see and feel that the world is changing and that it is changing in many ways. Geopolitical forces are shifting and technological advancements and artificial intelligence are both promising and scary. All this has a strong impact on higher education and, consequently, on the academic rankings. The rankings landscape is changing quickly too.

A new approach to rankings has emerged in the form of the impact rankings, reflecting university contributions to social goals. The range of regional and ‘by subject’ rankings has also grown.

At the same time, criticism of rankings has intensified, particularly from the European higher education and research community. There are new initiatives (the Coalition for Advancing Research Assessment to name one) which are searching for new conceptual approaches and tools for university assessment.

IREG Observatory welcomes the search for ever better tools to measure and evaluate excellence in research and higher education. However, we see no need to breed a ranking phobia in the process. Assessment and rankings serve different purposes. IREG expressed this very clearly in its position paper, “Assessment and rankings are different tools”, published last December.

Higher education around the world faces many difficult problems, but rankings are not the cause of them. Rankings are like a thermometer that signals various academic illnesses. But neither sickness nor fever disappears when we break the thermometer. The same applies to rankings.

At the countless ‘summits’ organised almost weekly by the major ranking players, the audience is told that the rankings and methodologies are approaching the pinnacle of human achievement, and that the answer to every university rector’s dreams is to purchase a ‘marketing package’, a kind of miracle medicine.

It is not for nothing that some now consider the big ranking organisations to be little more than ‘prestige-selling companies’ rather than rankings providers, with the rankings themselves serving merely as the ‘cherry on the cake’.

Big data

There can be no meaningful analysis without good data, data that meets the agreed standard, that is properly collected, externally validated, updated at least once a year, if not more frequently, and is widely available though not necessarily free, because quality costs. The fees, however, should be reasonable.

Such data, however, cannot be obtained without the cooperation of national education authorities. In turn, they must collect such data for the efficient realisation of their public policy and economic strategy. The data collected directly from universities by ranking organisations and adjusted to their requirements have many imperfections that have been well identified.

This applies to QS, Times Higher Education and other organisations that use surveys sent to and collected from universities. Of course, when no other options are available, flawed data serves better than none at all. However, let me point out that better databases, anchored in national systems, have already appeared and, year by year, are increasingly available.

M’hamed el Aisati, vice-president at Elsevier, and an analyst endowed with an impressive “ranking intuition”, emphasised in his speech in Tashkent the growing need for big data platforms to evaluate the impact of research, its visibility, academic collaboration and innovation.

He then presented the first already tested and operational national research evaluation platforms which are being used in Japan and Egypt.

What is a national research evaluation platform? As Aisati explained: “It is a big data platform that provides a knowledge network to support high-value decisions. It is a knowledge network that provides an integrated view of relevant research data, ensuring that high-value client decisions are based on an inclusive, truthful and unbiased view of their country’s research ecosystem.”

Such a platform is modular. It collects dispersed data, including via digitisation, and non-English-language data is translated into English. Linking and profiling of references, as well as automatic classification and metrics, are provided.

In less specialised language: the big data national research evaluation platforms cover the entire research output of Japan and Egypt, including their academic characteristics. Hearing this at the IREG 2023 Conference, Professor Piotr Stepnowski, rector of the University of Gdansk in Poland, said in an emotional outburst: “This will radically change evaluation of individual countries in world science!”

Yes, it will! In fact, it already signals the beginning of a ‘breakthrough’, the word that best describes the essence of the discussion in Tashkent.

The role of AI

The subject of big data has appeared in various ranking-related conversations for over a dozen years. It is obvious that the global ‘data ocean’ on science and higher education can create new analytical possibilities and, consequently, new solutions in the ranking area. By the way, ‘data ocean’ was one of the key phrases of the IREG 2019 Conference in Bologna.

But the excitement about this potentially revolutionary technology has been effectively dampened by the so-called ‘artificial intelligence winter’. It was impossible to meet the huge expectations presented by the first AI algorithms because two things were still missing: data that could be sent in bulk, and computing power of sufficient capacity.

Only in the last 10 years have computing power and very fast, high-bandwidth communication, such as 5G systems, allowed artificial intelligence to be used effectively. The emergence of ChatGPT and the rapid spread of this application have brought millions of people closer to the practical implementation of AI.

At the same time, in many countries solid database systems covering higher education and science have been created (the author of this text was the leader of the team that prepared the design of such a system in Poland, currently functioning under the name POL-on).

So we now have both the data and the desired computational power. It would be naive not to expect that a platform based on giant higher education databases and AI algorithms would soon make an appearance in the university rankings field. The question is: ‘Who will be the first to come up with such a ranking platform or application?’.

Waldemar Siwinski is president of the IREG Observatory on Academic Ranking and Excellence, and founder of the Perspektywy Education Foundation, Poland.

Saturday 20 May 2023

How Would You Detect ChatGPT-generated Text?

 Source: https://vinija.ai/models/chatGPT/


Overview

  • The world has been enamored with ChatGPT’s extraordinary capabilities and has been trying to identify its various use cases.
  • ChatGPT is a chatbot released by OpenAI (with a focus on multi-round conversational dialogue as its mode of interaction) and has far outshone its predecessor, GPT-3, in generating text.
  • Recall that GPT used the decoder part of the Transformer architecture and was trained to do next-word prediction. In doing so, it could often produce false or harmful information, as it was only trained to predict the next word from text on the internet.
  • ChatGPT and its sibling model, InstructGPT, are both designed to fix that problem and be more aligned with their users via the use of RLHF.
  • Below, we’ve broken down the internals of ChatGPT and InstructGPT; check out the primer on GPT if you’d like a refresher.

Training ChatGPT

  • ChatGPT has been fine-tuned with a combination of supervised learning and reinforcement learning using human feedback.
  • Specifically, ChatGPT uses “Reinforcement Learning from Human Feedback (RLHF), which uses human feedback in the training loop to minimize harmful, untruthful, and/or biased outputs” (source: Assembly AI). This is accomplished by having AI trainers rank the responses from the model.
  • ChatGPT’s dataset includes a wide variety of text from the internet, including articles, websites, books, and more.
  • It includes text from a wide range of sources and covers a diverse range of topics, in order to provide the model with a broad understanding of language and the ability to generate text on a wide range of topics.

  • Below, the diagram from OpenAI (source) shows a high-level overview of how ChatGPT is trained.

  • “Once ChatGPT has been trained, it can be used to generate text by starting with a given prompt and then generating the next word in the sequence, based on the patterns it has learned from the training data. The generated text can be fed back into ChatGPT as input, allowing it to generate further text, and so on, effectively creating a conversation.” OpenAI
  • Lastly, if you ask ChatGPT itself how it works internally, here’s what it has to say:

  • ChatGPT is able to perform a multitude of tasks, from debugging code, making an itinerary for your travel, and writing a short story or poem, to coming up with recipes for your favorite meals.
  • Since there isn’t a publication available on ChatGPT as of yet, we will look at details from its sibling model, InstructGPT, below. However, note that ChatGPT is finetuned from a model in the GPT-3.5 series, which finished training in early 2022 (source), while InstructGPT was finetuned on GPT-3.
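The prompt-then-feed-back generation loop described above can be sketched with a toy model. Everything here (the bigram table and the token names) is made up for illustration; a real model conditions on the full context with a Transformer network, but the autoregressive decoding loop has the same shape:

```python
import random

# Toy stand-in for a language model: maps the last token to candidate
# next tokens with probabilities. A real LLM conditions on the whole
# context, but the sampling loop below is structurally the same.
BIGRAM = {
    "<s>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a":   [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 1.0)],
    "dog": [("sat", 1.0)],
    "sat": [("</s>", 1.0)],
}

def generate(prompt, max_tokens=10, seed=0):
    """Autoregressive decoding: sample a next token, append it to the
    sequence, and feed the extended sequence back in until the
    end-of-sequence token is produced."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_tokens):
        candidates = BIGRAM[tokens[-1]]
        words, probs = zip(*candidates)
        next_tok = rng.choices(words, weights=probs)[0]
        if next_tok == "</s>":
            break
        tokens.append(next_tok)
    return tokens

print(generate(["<s>"]))
```

With a real LLM, the `BIGRAM` lookup would be replaced by a forward pass returning a distribution over the whole vocabulary, and sampling would typically use temperature or nucleus sampling rather than raw probabilities.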

How Would You Detect ChatGPT-generated Text?

  • After learning about ChatGPT and seeing its vast use cases, the next question on everyone’s mind usually is: how would you be able to detect whether a piece of content was generated by ChatGPT or by a human?
  • Let’s look at a few methods:
    • Watermarking: A Watermark for Large Language Models
      • “In the paper “A Watermark for Large Language Models” , the authors propose a relatively simple method. The idea is that the creators of the LLM would add a “watermark” signal to any generated text passage, such that the meaning and quality of the passage is not altered by the signal, the signal can easily be detected without needing any access to the LLM that generated it, and the signal cannot be easily removed by simple modifications to the text” (source: Prof. Melanie Mitchell)
      • Watermarking would have the LLM imperceptibly embed a secret signal in the generated text that later helps identify where the text came from.
      • OpenAI’s watermarking is a cryptography-based approach to the problem.
    • DetectGPT by Eric Mitchell et al.:
      • “DetectGPT relies on generating the (log-)probabilities of the text. If an LLM produces text, each token has a conditional probability of appearing based on the previous tokens. Multiply all these conditional probabilities (or effectively, sum up the log probabilities) to obtain the (joint) probability for the text.
      • DetectGPT then perturbs the text: if the probability of the new text is noticeably lower than the original one it is AI-generated. Otherwise, if it’s approximately the same, it’s human-generated.
      • e.g., consider the 2 sentences below
        • original input: “This sentence is generated by an AI or human” => log-proba 1
        • perturbed: “This writing is created by a an AI or person” => log-proba 2
        • If log-proba 2 < log-proba 1 -> AI-generated
        • If log-proba 2 ~ log-proba 1 -> human-generated
      • Limitation: Requires access to the (log-)probabilities of the texts. This involves using a specific LLM model, which may not be representative of the AI model used to generate the text in question.” Sebastian Raschka
    • AI Classifier by OpenAI:
      • As stated by OpenAI, “The AI Text Classifier is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT.”
      • The training dataset here consists of both human- and AI-generated text, and the model labels a passage on a scale ranging from ‘very unlikely’ through ‘unclear’ to ‘likely’ AI-generated.
    • GPTZero:
      • “GPTZero computes perplexity values. The perplexity is related to the log-probability of the text mentioned for DetectGPT above. The perplexity is the exponent of the negative log-probability. So, the lower the perplexity, the less random the text. Large language models learn to maximize the text probability, which means minimizing the negative log-probability, which in turn means minimizing the perplexity.
      • GPTZero then assumes the lower perplexity are more likely generated by an AI.
      • Limitations: see DetectGPT above. Furthermore, GPTZero only approximates the perplexity values by using a linear model.” Sebastian Raschka
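The log-probability and perplexity quantities that DetectGPT and GPTZero rely on can be made concrete with a small sketch. The per-token probabilities below are made-up numbers standing in for what a real scoring LLM would assign:

```python
import math

def log_probability(token_probs):
    """Joint log-probability of a text: the sum of per-token
    conditional log-probabilities (i.e., the log of their product)."""
    return sum(math.log(p) for p in token_probs)

def perplexity(token_probs):
    """Perplexity = exp(negative average log-probability).
    Lower perplexity means the text looks less random to the model."""
    avg_log_p = log_probability(token_probs) / len(token_probs)
    return math.exp(-avg_log_p)

# Hypothetical per-token probabilities from a scoring model.
original  = [0.9, 0.8, 0.85, 0.9]   # model finds the original text very likely
perturbed = [0.5, 0.4, 0.45, 0.5]   # the reworded text is noticeably less likely

# DetectGPT-style check: a clear drop after perturbation suggests the
# original sat near a local probability maximum, i.e., AI-generated.
drop = log_probability(original) - log_probability(perturbed)
print(f"log-prob drop after perturbation: {drop:.2f}")

# GPTZero-style check: lower perplexity is flagged as more likely AI.
print(f"perplexity(original)  = {perplexity(original):.2f}")
print(f"perplexity(perturbed) = {perplexity(perturbed):.2f}")
```

Both methods hinge on the same quantity; DetectGPT compares it before and after perturbation, while GPTZero thresholds it directly, which is why both inherit the limitation of needing a scoring model that resembles the generator.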
InstructGPT

  • Paper: Training language models to follow instructions with human feedback by Ouyang et al.
  • InstructGPT was also trained with RLHF much like ChatGPT in order to have the language model align with the user’s intent and you can learn more about the algorithm below.
  • InstructGPT’s training can be broken down into 3 steps:
  • Supervised Learning:
    • In this step, GPT-3 is fine-tuned with supervised learning based on human annotations that carry out instruction tuning.
    • Essentially, InstructGPT starts with a set of labeled data that demonstrates the desired outcome and behavior from the model which is used to fine-tune GPT-3 with supervised learning.
  • Reward Model:
    • Here, a dataset of rankings over several output options is collected (the outputs could come from the same model, or even from a larger model, in which case the KL divergence between the model being trained and the larger model can be minimized), and the labeler ranks them from best to worst. A reward model is trained on this data as part of this step.
  • Combining the two:
    • Finally, the model generates an output for a new prompt it receives from the dataset. Reinforcement learning is then applied using proximal policy optimization (PPO): the reward for the output is calculated, and the policy is updated accordingly.
  • We can visually understand the process by looking at the image below from OpenAI’s Training language models to follow instructions with human feedback.

  • Here is the improvement of performance by InstructGPT over its predecessors:
    • “The resulting InstructGPT models are much better at following instructions than GPT-3. They also make up facts less often, and show small decreases in toxic output generation. Our labelers prefer outputs from our 1.3B InstructGPT model over outputs from a 175B GPT-3 model, despite having more than 100x fewer parameters.” OpenAI
    • Note that InstructGPT can still make simple mistakes; however, fine-tuning with human feedback has enabled it to better align with human intent (with a focus on enhancing trust and safety).
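The reward-model step above is typically trained with a pairwise ranking loss over the labelers' comparisons. A minimal sketch, assuming scalar rewards and the standard -log sigmoid(r_chosen - r_rejected) objective from the InstructGPT paper (the reward values here are hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_ranking_loss(r_chosen, r_rejected):
    """Reward-model loss for one labeled comparison:
    -log sigmoid(r_chosen - r_rejected). The loss shrinks as the
    reward model scores the preferred output above the rejected one."""
    return -math.log(sigmoid(r_chosen - r_rejected))

# Hypothetical scalar rewards for two outputs to the same prompt.
agree_loss    = pairwise_ranking_loss(2.0, 0.5)  # model agrees with the labeler
disagree_loss = pairwise_ranking_loss(0.5, 2.0)  # model disagrees
print(f"loss when agreeing:    {agree_loss:.3f}")
print(f"loss when disagreeing: {disagree_loss:.3f}")
```

In the full pipeline, these scalar rewards come from a network head on top of the language model, and the trained reward model then supplies the scores that PPO optimizes against in step 3.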


Friday 12 May 2023

Why India falls behind in citations despite producing high numbers of research papers?

 Source: https://www.financialexpress.com/education-2/why-india-falls-behind-in-citations-despite-producing-high-numbers-of-research-papers/3082109/


India ranks fourth in research output but ninth when it comes to research citation, raising concern over the quality of work produced.

The low citation rate of Indian research papers is mainly due to a lack of originality and the rigour required.

Mediocrity in research and a lack of research culture in India are considered among the major factors behind the low standing of Indian Higher Education Institutions (HEIs) in global university ranking systems. India ranks ninth in research citation despite producing double the global average in research output. “This shows that a lot of research being done is not as impactful and relevant as it is expected to be. It is a matter of concern as the purpose of research is to contribute to the existing pool of knowledge and benefit society at large,” Tripti Toor, associate professor, IILM University, said.

Experts believe that the low citation rate of Indian research papers is mainly due to the lack of research and writing skills, proficiency in language, and the desire to obtain academic titles, perks, and promotions. “Many published research works from India lack originality besides the rigour required for a scholarly paper to be cited by the academia. Institutional rankings have further encouraged a culture where analysis is assessed in numbers rather than on qualitative indicators of creativity and inventiveness,” Kokil Jain, dean of research and EFPM program chair, Fortune Institute of International Business, explained.

Furthermore, despite an increase in the number of women in STEM education, there is still a significant gender gap in research and innovation careers. “Unconscious biases in hiring and promotion can result in women being overlooked for research and innovation careers, even if they are equally qualified or more qualified than male candidates. Their familial responsibilities become a hindering factor while balancing demands of a research or innovation career,” Daviender Narang, Director, Jaipuria Institute of Management, explained.

In addition, cultural biases may be operational when it comes to the citation of Indian research work. “There may be a bias against citing research from certain countries or regions. It is possible that researchers would prefer to cite research from countries they perceive to be less developed in certain areas,” Jain noted.

To address these challenges, experts opined that India needs to invest more in research infrastructure, provide better training to researchers, encourage more international collaborations, and increase funding for research. “Additionally, researchers need to focus on producing high-quality research with global significance and publish in international journals to increase their visibility and impact,” Narang added.

Thursday 11 May 2023

CiteSee: Augmenting Citations in Papers with Persistent and Personalized Historical Context

 Source: https://blog.allenai.org/citesee-e0f9e9d46569


CiteSee augments inline citations to known papers to help contextualize the current paper. This includes saved papers (1, red), visited papers (2, green), papers previously cited by the current user (3, quotation mark), and the user’s own publications (♥). CiteSee also highlights citations to unknown papers (10–12) to help discover important prior work based on the user’s engagement with the papers that cite them.

Inline citations play a crucial role in the scholarly research process, as they allow researchers to contextualize the paper they are reading within the cited work, draw connections among relevant papers, and build up a higher level view of the research fields. A prior work estimated that inline citations account for around 1 in 5 paper discoveries during active research (King et al. 2009). Despite their importance and ubiquity, our preliminary interviews showed that it can be challenging for scholars to prioritize which inline citations to explore, considering the sheer volume of citations they encounter during literature reviews and the varying relevance to a reader’s interests. The most common concern was the fear of overlooking important citations, which could lead to significant research consequences. Additionally, participants expressed difficulties in tracking their progress and retaining context around saved or visited papers.

Overview of different visual augmentation types, with one category for citations to unexplored papers, and four categories for explored / familiar papers.

These findings informed the development of CiteSee, a personalized paper reading tool for in-situ citation sensemaking. CiteSee visually augments citations within scientific papers based on their connections to a user’s research activities, to better reflect their research interests and literature review progress. By leveraging a user’s publications, paper library, and recent reading history, CiteSee offers a range of visual citation augmentation types. These augmentation types enable users both to prioritize the unexplored inline citations most related to their interests during literature reviews (i.e., those re-encountered across papers) and to keep track of which inline citations were already explored (e.g., visited or saved). In addition, CiteSee presents persistent and personalized historical context around citations, allowing users to make sense of how a citation connects to them personally. For this, users can click on an inline citation to reveal personalized context in a Paper Card, such as the last time the paper was opened or the citing sentences from papers they have recently explored. This allows for a more personalized and context-rich understanding of the citations within a paper. By leveraging these core mechanisms, CiteSee effectively supports users in discovering relevant citations, surfacing familiar papers, and providing personalized context around inline citations to aid literature reviews.

Two screenshots. Left: a popup card for citation [33], highlighted in yellow. The main area of the card shows the title, authors, and abstract of the cited paper; the bottom half lists other paper titles and citing sentences from the user’s reading history. Right: a similar Paper Card for citation [56], rendered in red to show that it was previously saved. The bottom of the card shows “Saved from:” followed by a paper title and a citing sentence.
[Left] To help users discover important prior work, unexplored citations are highlighted in different shades of yellow to indicate their potential relevance to the user. [Right] To help users keep track of which citations were already explored and to draw connections between familiar papers and the current paper, inline citations to familiar papers (e.g., saved) are rendered in red. [Both] To see personalized context around inline citations, users can click on a citation to open its Paper Card, which shows personalized context such as citing sentences from recently read papers or the citing sentence where the cited paper was saved.

In a lab study, we validated CiteSee’s core functionality of highlighting relevant citations for paper discovery during literature reviews. Participants read a set of papers and actively examined citations to find important prior work, and the results showed that CiteSee’s personalized approach significantly outperformed three baselines. We also conducted a field deployment study to further understand CiteSee’s real-world benefits, recruiting participants who were planning literature reviews and having them install CiteSee on their computers for one to two weeks. We found that participants actively engaged with the system, and that the majority of papers they discovered and saved came via highlighted inline citations. In post-study interviews, participants described the benefits of using CiteSee: its visual augmentations helped them discover more relevant prior work, remember papers they had examined or encountered in the past, and make sense of common citations across multiple papers, making it a valuable tool for supporting literature review tasks and enhancing their understanding of inline citations.

In conclusion, CiteSee is a promising scientific paper reading tool that enhances the literature review process by personalizing the user’s experience and providing contextualized inline citations. By tracking and exploiting the user’s past research activities, CiteSee allows researchers to prioritize highly relevant inline citations during literature reviews and explore additional personalized context to make better sense of them. Our studies demonstrate the advantages of CiteSee over baseline strategies for paper discovery, and the positive impact it has on real-world literature review tasks. As the volume of scientific publications continues to grow rapidly, intelligent reading tools like CiteSee will play a crucial role in helping researchers navigate the vast landscape of existing literature, ensuring they can identify and understand the most relevant and impactful work in their fields and how those works relate to and build on each other.

CiteSee received the Best Paper Award (top 1%) at ACM #CHI2023, and we will present this work at the conference in Hamburg, Germany this month. CiteSee is a collaborative effort among researchers including Joseph Chee Chang, Amy X. Zhang, Jonathan Bragg, Andrew Head, Kyle Lo, Doug Downey, and Daniel S. Weld.

Follow @allen_ai and @semanticscholar on Twitter, and subscribe to the AI2 Newsletter to stay current on news and research coming out of AI2.

Joseph Chee Chang
AI2 Blog

Research Scientist @ AI2/Semantic Scholar | prev @ Carnegie Mellon

Google announces AI-generated search results experiment. Here's what to expect

 Source: https://www.abc.net.au/news/science/2023-05-11/google-unveils-ai-generated-snapshot-for-search-results/102331786



    Google VP of Search Liz Reid announcing the AI experiment at the company's annual developer conference.

    For more than two decades, the format of the Google Search results page hasn't changed much: type a phrase and get a list of blue links.

    Now, the most valuable real estate on the internet appears set to undergo a renovation, with Google announcing an opt-in trial for adding generative artificial intelligence (AI) to the results page.

    The company that dominates the global search engine market has been under pressure to stage an AI comeback after the runaway success of rival chatbot ChatGPT.

    At its annual Google I/O developer conference in San Francisco this week, the tech giant unveiled part of its answer.

    For the moment, the trial can only be accessed in the US, but it offers a glimpse of what's coming for Google Search in Australia.

    What do AI-generated search results look like?

    In adding generative AI to search, Google could have taken the route of Microsoft Bing, replacing the Search results page entirely with a ChatGPT-style messaging system.

    Instead, it's tried to incorporate AI-generated answers into the results page, and kept the list of blue links.

    What Google calls an "AI-powered snapshot" appears at the top of the Search results page.

    An example of the AI-generated "snapshot", with links to the source material on the right of the page.

    This is an AI-generated summary a few paragraphs long, including links to sites, intended to corroborate the information presented.

    Below this, the snapshot presents a list of potential follow-up questions.

    There's also the option of another view, which breaks the snapshot down into its sentences, each with links to the sources of information for that specific sentence.
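    Google hasn't published the snapshot's internal format, but the two views described above suggest a simple structure: a summary built from sentences, each carrying its own source links. The Python sketch below is purely illustrative of that idea; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SnapshotSentence:
    text: str
    source_urls: list  # pages corroborating this specific sentence

@dataclass
class AISnapshot:
    sentences: list            # the generated summary, sentence by sentence
    follow_up_questions: list  # suggested next queries shown below the snapshot

    def summary(self) -> str:
        # Default view: the summary paragraphs, without per-sentence breakdown.
        return " ".join(s.text for s in self.sentences)

    def corroborate(self):
        # Expanded view: each sentence paired with its own sources.
        return [(s.text, s.source_urls) for s in self.sentences]
```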

    Search results in the conventional format of blue links appear below the snapshot, but they're a long way down the page.

    Given most people don't scroll down their search results (the top three results get more than half the clicks), the long reign of the plain blue link appears to be over.

    How do I sign up to the trial?

    For the moment, the trial is limited to Chrome desktop and the Google App in the US.

    "We’ll be opening up sign-ups for Search Labs today, with access to SGE [Search generative experience] beginning in the coming weeks," Google said in a blog post published overnight.

    For now, Australians get the message: "Search Labs isn't available for your account right now".

    But if you want to try, here's the waiting list.

    Are there still ads?

    Google generates about 80 per cent of its revenue from ads, and mostly from Google Search.

    The company has been criticised for featuring too many ads alongside Search results, making it harder for users to find the information they're looking for.

    Unsurprisingly, the new generative AI experiment will feature ads.

    In one of the screenshots released by Google, for a query about commuter e-bikes, ads for e-bikes appear beneath the AI snapshot, labelled "sponsored" in bold black text.

    "Search ads will continue to appear in dedicated ad slots throughout the page," Google said in a blog post.

    "In this new experience, advertisers will still have the opportunity to reach potential customers along their search journeys.

    "We'll test and evolve the ads experience as we learn more."

    Below the AI-generated "snapshot" are links to bikes for purchase, and below these (not visible here) are sponsored links.

    Improvements in AI will allow Google to improve its knowledge of consumer behaviour and offer more highly targeted advertising placement, said Paul Haskell-Dowland, a professor of cybersecurity at Edith Cowan University.

    "The number of advertisements could be significantly less, but they will be highly optimised and relevant to you as an individual," he said.

    "Google will potentially have a more rounded view of you." 

    Are AI-generated search results accurate?

    Adding generative AI to the search engine that services 85 per cent of the world's search engine activity carries obvious risks.

    Google's AI chatbot Bard, like ChatGPT and others, has had problems with factual errors and giving dangerous advice. 

    Google employees reportedly labelled the system a "pathological liar" and pleaded with the company not to release it to the public.

    Then, when Google launched Bard three months ago, it made a factual error in one of the company's own advertisements.

    The generative AI model that powered Bard has since been replaced by another. At this week's conference, Google unveiled a "next-generation language model", PaLM 2, that it says outperforms other leading systems on some tasks.

    PaLM 2 appears to be a big improvement on its predecessor, but Google admits it can still make factual errors, reinforce harmful social biases around such things as race or gender, and give answers that are racist or xenophobic.

    For this reason, when rolling out PaLM 2-powered generative AI search results, Google has installed guardrails. There are some kinds of search or questions the AI search engine won't touch.

    It will default to regular search results if it determines there isn't enough reliable information on the internet to create a snapshot.

    You may get the same result for questions about racism, terrorism or another subject area Google deems unsafe.

    Expect Google's AI-generated search results to be carefully scrutinised during the trial.

    Meanwhile, Google says it's opening up the PaLM 2-powered Bard chatbot for everyone to use. 

    From next week, it's removing the waitlist for Bard, and opening access to people in 180 countries.

    Will this change the way we google?

    Knowledge of how search engines work subtly influences how we phrase our search terms.

    If you're looking for something to watch, you might search, "best movies 2023", because you know Google is good at answering that kind of question.

    The search engine has objective movie rankings, like Rotten Tomatoes, blog posts, and box office figures, that it can pull data from.

    Right now, you'd be wary of searching for something too broad, like "Where should we go on our holiday?", or too specific, like "Where should we go on our holiday this June for one week? We're looking for a mix of relaxation and exercise, without spending too much money."

    Instead, you might break this question up into a dozen searches, and essentially wander about, searching for ideas and inspiration.

    But one of the promises of AI-generated search results is being able to skip this step.

    The search engine will be essentially running lots of disparate searches at once and then combining that information into a few paragraphs.
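    That fan-out-and-combine idea can be sketched in a few lines of code. The sketch below is purely illustrative: every function is a hypothetical stand-in (a real system would use a language model for decomposition and summarization, and a real search index for retrieval).

```python
def decompose(query: str) -> list:
    """Stand-in for an LLM splitting a broad question into narrower searches."""
    # A real system would generate these; here one example is hard-coded.
    if "holiday" in query:
        return ["budget holiday destinations June",
                "relaxing holiday destinations",
                "active holiday destinations"]
    return [query]

def search(sub_query: str) -> str:
    """Stand-in for a conventional keyword search returning a result snippet."""
    return f"top results for '{sub_query}'"

def synthesize(query: str, snippets: list) -> str:
    """Stand-in for generative summarization over the gathered snippets."""
    return f"Summary for '{query}' drawing on {len(snippets)} searches."

def answer_broad_query(query: str) -> str:
    # Fan out into disparate searches, then combine into one summary.
    sub_queries = decompose(query)
    snippets = [search(q) for q in sub_queries]
    return synthesize(query, snippets)
```

    The point of the sketch is the shape of the pipeline, not the implementations: the dozen manual searches a user would otherwise run become sub-queries issued and summarized in one step.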

    We won't have to be so literal with our Google search terms, said Professor Haskell-Dowland.

    "AI offers the ability for computers to understand what we naturally mean rather than what we write," he said.

    "With generative AI, meaning can be interpreted and intent derived."

    For Google, rattled by the success of ChatGPT and reports Samsung could make Bing its default search engine, a lot is riding on the AI-generated search experiment.

    The world may keep googling, but the way we do this is going to change.
