Tuesday, 23 May 2023

How to Improve Research Visibility and Impact?

 Source: https://letsview.com/chatbot


Improving research visibility and impact is a crucial aspect of academic work. Dr. Nader Ale Ebrahim, a renowned researcher and academic, has developed several tools and techniques that can help researchers to improve their research visibility and impact. Here are some of the strategies that Dr. Ale Ebrahim recommends:

  1. Publish in high-impact journals: One of the most effective ways to improve research visibility and impact is to publish in high-impact journals. Dr. Ale Ebrahim recommends that researchers target journals that have a high impact factor and a broad readership.

  2. Use social media: Social media can be a powerful tool for promoting research and increasing its visibility. Dr. Ale Ebrahim recommends that researchers use social media platforms such as Twitter, LinkedIn, and ResearchGate to share their research and engage with other researchers in their field.

  3. Develop a strong online presence: In addition to social media, researchers should also develop a strong online presence by creating a personal website or blog. This can help to increase visibility and make it easier for other researchers to find and cite their work.

  4. Collaborate with other researchers: Collaborating with other researchers can help to increase research visibility and impact. Dr. Ale Ebrahim recommends that researchers seek out opportunities to collaborate with other researchers in their field, both within their institution and outside of it.

  5. Use research tools: Dr. Ale Ebrahim has developed several research tools that can help researchers to improve their research visibility and impact. These include the "Research Tools Box" and the "Research Methodology Canvas," which can help researchers to plan and execute their research projects effectively and efficiently.

In summary, improving research visibility and impact requires a combination of strategies, including publishing in high-impact journals, using social media and other online platforms, collaborating with other researchers, and using research tools. Dr. Nader Ale Ebrahim's teaching materials, including his research tools and techniques, can be found on his personal website and in his published research papers.

Write (LetsView Chat) an article about Dr. Nader Ale Ebrahim.

 Source: https://letsview.com/chatbot

Dr. Nader Ale Ebrahim is a renowned researcher and academic who has made significant contributions to the field of engineering and technology. He is widely recognized for his expertise in research methodology, virtual teams, and knowledge management. Dr. Ale Ebrahim holds a Ph.D. in mechanical engineering from the University of Malaya and has published numerous research papers in international journals.

One of Dr. Ale Ebrahim's most significant contributions is his work on virtual teams. He has conducted extensive research on how virtual teams can be managed effectively, and his findings have been published in several high-impact journals. His research has helped organizations around the world to improve their virtual team management practices and has contributed to the development of new tools and techniques for managing remote teams.

Dr. Ale Ebrahim is also known for his work on research methodology. He has developed several innovative tools and techniques for conducting research, including the "Research Tools Box" and the "Research Methodology Canvas." These tools help researchers to plan and execute their research projects effectively and efficiently, saving them time and resources.

In addition to his research work, Dr. Ale Ebrahim is also a dedicated educator. He has taught courses on engineering and research methodology at several universities around the world and has supervised numerous graduate students. His teaching has inspired many students to pursue careers in engineering and research, and he is widely respected for his expertise and dedication to his students.


Overall, Dr. Nader Ale Ebrahim is a highly respected researcher, academic, and educator who has made significant contributions to the field of engineering and technology. His work on virtual teams and research methodology has had a profound impact on the way organizations and researchers approach these areas, and his dedication to education has inspired many students to pursue careers in these fields.

You have been warned: A breakthrough in rankings is coming

 Source: https://www.universityworldnews.com/post.php?story=20230519155707244


In a tweet sent from Tashkent, Uzbekistan, after the closing of the IREG Observatory on Academic Ranking and Excellence conference I wrote: “IREG 2023 clearly articulated that in the coming years nothing in the world of rankings will be the same. The main word of the conference was ‘breakthrough’. We discussed where and when it will happen, and we looked for its first signs…”

Since university rankings have been a hot issue in higher education debates for a long time and several people have asked me about this future breakthrough, I will try to explain what I mean.

A view from Central Asia

Far away from the traditional ranking conferences, Tashkent turned out to be an excellent site for the IREG 2023 Conference. Centuries ago, Uzbekistan's cities of Samarkand and Bukhara were a key link on the Silk Road; today the country is fast-developing and has made education and science the basis of its modernisation efforts.

The annual IREG conferences are the world’s only neutral place where rankers, higher education experts and analysts as well as universities – often represented by rectors – meet.

The issues discussed there often have a direct impact on rankings and their standards.

Judging by the feedback, IREG 2023 was a creative and refreshing event. Here are three characteristic comments:

• Laylo Shokhabiddinova, head specialist of the international rankings department at Tashkent State Transport University, said: “From around the world, we have come to share our experiences, ideas and insights on ranking in higher education. It has been an eye-opening experience for me, and I am grateful for the chance to connect and collaborate with such an esteemed group of professionals. As universities continue to navigate a rapidly changing landscape, it is more important than ever to stay connected and learn from each other.”

• Alex Usher, president of Higher Education Strategy Associates, Canada, stated: “One of the most interesting ranking discussions I’ve heard in years. The difference in discourse around rankings after leaving a rich country is huge – here there is much less about marketing and much more about system control or benchmarking.”

• Komiljon Karimov, first deputy minister of higher education, science and innovation in Uzbekistan, commented: “I often participate in various conferences and seminars, but I have not met such a substantive and engaged discussion for many years.”

The conference in Tashkent showed how pragmatic and hopeful expectations about rankings are among universities and governments of countries outside Europe and North America.

They need the rankings as a tool to monitor implementation of reforms in higher education, to improve the quality of education and not for the sake of prestige. But this aspect has already been analysed by Usher in his highly recommended blog under the title “Rankings Discourses: West, East and South”.

Historical background

So, what new trends and ideas emerged in Tashkent about the global rankings landscape? To properly interpret new trends, we need to go back to the turn of the century and the beginning of the era of massification and globalisation of higher education. The UNESCO World Conference of 1998 was not able to properly describe this phenomenon. There was simply no comparable data available. It’s hard to believe, but the situation has not changed much since.

Concern about this state of affairs was sounded by Philip Altbach 10 years ago in a University World News article, “Long-term thinking needed in higher education”, in which he regretted that none of the global or regional organisations with the necessary potential and prestige (the UN, UNESCO and the OECD) had taken the necessary action on data collection.

He even went so far as to say that they had “abdicated” responsibility from the roles they should perform. He wrote: “There is a desperate need for ongoing international debate, discussion and regular data collection on higher education. At present, we have only a fragmented picture at best.”

Similarly unsuccessful in setting up an updated database was the European Union. Projects financed by the EU, such as the European Tertiary Education Register (ETER), were undertaken but never fully implemented. This is why rankings, first national, then international, have been a discovery, introducing ‘countability’ to higher education.

The first conference of an international group of ranking experts (from which IREG was born) was in Warsaw in June 2002. There were no international rankings yet, but national rankings (including US News Best Colleges in the USA, Maclean’s in Canada, CHE in Germany and Perspektywy in Poland) were such a fascinating phenomenon that Jan Sadlak from UNESCO-CEPES and Andrzej Kozminski, rector of Kozminski University, invited a group of ranking creators to a debate in order to exchange information. I spoke there about Perspektywy’s first 10 years in ranking.

The next meeting, in Washington DC, in 2004, included Nian Cai Liu with his brand new Shanghai Ranking. At the 2006 meeting in Berlin, the IREG group adopted the famous “Berlin Principles on Ranking of Higher Education Institutions” which introduced quality standards for rapidly emerging new rankings.

Had UNESCO or any other international organisation been able to collect data globally and annually update reliable information on higher education, international rankings would probably never have gained such momentum and importance. But politicians, or rather bureaucrats, have failed, and not for the first time.

As a result, the short history of modern rankings (40 years of national rankings and 20 of international ones) has seen them gain an inflated role which they did not initially aspire to. It is a fascinating history of exploration, confusion and rivalry.

Don’t break the thermometer

We all see and feel that the world is changing and that it is changing in many ways. Geopolitical forces are shifting and technological advancements and artificial intelligence are both promising and scary. All this has a strong impact on higher education and, consequently, on the academic rankings. The rankings landscape is changing quickly too.

A new approach to rankings has emerged in the form of the impact rankings, reflecting university contributions to social goals. The range of regional and ‘by subject’ rankings has also grown.

At the same time, criticism of rankings has intensified, particularly from the European higher education and research community. There are new initiatives (the Coalition for Advancing Research Assessment to name one) which are searching for new conceptual approaches and tools for university assessment.

IREG Observatory welcomes the search for ever better tools to measure and evaluate excellence in research and higher education. However, we see no need to breed a ranking phobia in the process. Assessment and rankings serve different purposes. IREG expressed this very clearly in its position paper, “Assessment and rankings are different tools”, published last December.

Higher education around the world faces many difficult problems, but rankings are not the cause of them. Rankings are like a thermometer that signals various academic illnesses. But neither sickness nor fever disappears when we break the thermometer. The same applies to rankings.

At the countless ‘summits’ organised almost weekly by the major ranking players, the audience is told that the rankings and methodologies are approaching the pinnacle of human achievement, and that the answer to every university rector’s dreams is to purchase a ‘marketing package’, a kind of miracle medicine.

It is not for nothing that some now consider the big ranking organisations to be little more than ‘prestige-selling companies’ rather than mere rankings providers, with rankings serving merely as the ‘cherry on the cake’.

Big data

There can be no meaningful analysis without good data: data that meets an agreed standard, is properly collected, externally validated, updated at least once a year (if not more frequently), and is widely available, though not necessarily free, because quality costs. The fees, however, should be reasonable.

Such data, however, cannot be obtained without the cooperation of national education authorities. In turn, they must collect such data for the efficient realisation of their public policy and economic strategy. The data collected directly from universities by ranking organisations and adjusted to their requirements have many imperfections that have been well identified.

This applies to QS, Times Higher Education and other organisations that use surveys sent to and collected from universities. Of course, when no other options are available, flawed data serves better than none at all. However, let me point out that better databases, anchored in national systems, have already appeared and, year by year, are increasingly available.

M’hamed el Aisati, vice-president at Elsevier, and an analyst endowed with an impressive “ranking intuition”, emphasised in his speech in Tashkent the growing need for big data platforms to evaluate the impact of research, its visibility, academic collaboration and innovation.

He then presented the first already tested and operational national research evaluation platforms which are being used in Japan and Egypt.

What is a national research evaluation platform? As Aisati explained: “It is a big data platform that provides a knowledge network to support high-value decisions. It is a knowledge network that provides an integrated view of relevant research data, ensuring that high-value client decisions are based on an inclusive, truthful and unbiased view of their country’s research ecosystem.”

Such a platform is modular. It collects dispersed data, digitising material where necessary. Non-English-language data is translated into English. Linking and profiling of references, as well as automatic classification and metrics, are provided.

In less specialised language: the big data national research evaluation platforms cover the entire research output of Japan and Egypt, including their academic characteristics. Hearing this at the IREG 2023 Conference, Professor Piotr Stepnowski, rector of the University of Gdansk in Poland, said in an emotional outburst: “This will radically change evaluation of individual countries in world science!”

Yes, it will! In fact, it already signals the beginning of a ‘breakthrough’, the word that best describes the essence of the discussion in Tashkent.

The role of AI

The subject of big data has appeared in various ranking-related conversations for over a dozen years. It is obvious that the global ‘data ocean’ on science and higher education can create new analytical possibilities and, consequently, new solutions in the ranking area. By the way, ‘data ocean’ was one of the key phrases of the IREG 2019 Conference in Bologna.

But the excitement about this potentially revolutionary technology has been effectively dampened by the so-called ‘artificial intelligence winter’. It was impossible to meet the huge expectations presented by the first AI algorithms because two things were still missing: data that could be sent in bulk, and computing power of sufficient capacity.

Only in the last 10 years has computing power, as well as very fast, high-bandwidth communication, like the 5G systems, allowed artificial intelligence to be used effectively. The emergence of ChatGPT and the rapid spread of this application has brought millions of people closer to the practical implementation of AI.

At the same time, in many countries solid database systems covering higher education and science have been created (the author of this text was the leader of the team that prepared the design of such a system in Poland, currently functioning under the name POL-on).

So we now have both the data and the desired computational power. It would be naive not to expect that a platform based on giant higher education databases and AI algorithms would soon make an appearance in the university rankings field. The question is: ‘Who will be the first to come up with such a ranking platform or application?’.

Waldemar Siwinski is president of the IREG Observatory on Academic Ranking and Excellence, and founder of the Perspektywy Education Foundation, Poland.

Saturday, 20 May 2023

How Would You Detect ChatGPT-generated Text?

 Source: https://vinija.ai/models/chatGPT/


  • The world has been enamored with ChatGPT’s extraordinary capabilities and has been trying to identify its various use cases.
  • ChatGPT is a chatbot released by OpenAI (with a focus on multi-round conversational dialogue as its mode of interaction) and has far outshone its predecessor, GPT-3, in generating text.
  • Recall that GPT used the decoder part of the Transformer architecture and was trained to do next-word prediction. In doing so, it could often produce false or hurtful information, since it was trained only to predict the next word from text on the internet.
  • ChatGPT and its sibling model, InstructGPT, are both designed to fix that problem and be more aligned with their users via the use of RLHF.
  • Below, we’ve broken down the internals of ChatGPT and InstructGPT; check out the primer on GPT if you’d like a refresher.

Training ChatGPT

  • ChatGPT has been fine-tuned with a combination of both supervised learning and reinforcement learning using human feedback.
  • Specifically, ChatGPT uses “Reinforcement Learning from Human Feedback (RLHF), which uses human feedback in the training loop to minimize harmful, untruthful, and/or biased outputs” (source: Assembly AI). This is accomplished by having AI trainers rank the responses from the model.
  • ChatGPT’s dataset includes a wide variety of text from the internet, including articles, websites, books, and more.
  • It includes text from a wide range of sources and covers a diverse range of topics, giving the model a broad understanding of language and the ability to generate text on a wide range of topics.

  • Below, the diagram from OpenAI (source) shows a high-level overview of how ChatGPT is trained.

  • “Once ChatGPT has been trained, it can be used to generate text by starting with a given prompt and then generating the next word in the sequence, based on the patterns it has learned from the training data. The generated text can be fed back into ChatGPT as input, allowing it to generate further text, and so on, effectively creating a conversation.” OpenAI
  • Lastly, ChatGPT itself, when asked how it works internally, offers a similar high-level description.

  • ChatGPT is able to perform a multitude of tasks, from debugging code, making an itinerary for your travels, and writing a short story or poem, to coming up with recipes for your favorite meals.
  • Since there isn’t a publication available on ChatGPT as of yet, we will look at details from its sibling model, InstructGPT, below. However, note that ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022 (source), while InstructGPT was fine-tuned from GPT-3.
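The generation loop quoted above (predict the next word, append it, feed the result back in) can be sketched with a toy stand-in for the model. Everything here is an invented illustration: the bigram table and the `generate` helper are hypothetical, and a real LLM predicts over a large vocabulary conditioned on the full context, not just the previous word.

```python
import random

# Toy illustration of autoregressive generation: a fixed bigram table
# stands in for the learned next-word distribution of a real LLM.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "sat": ["on"],
    "on": ["the"],
    "dog": ["ran"],
}

def generate(prompt, max_words=6, seed=0):
    """Sampled next-word generation: each new word is drawn from the
    distribution conditioned on the previous word, appended to the
    sequence, and fed back in as context for the next step."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        prev = words[-1]
        candidates = BIGRAMS.get(prev)
        if not candidates:  # no known continuation: stop generating
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Each iteration only ever looks at the last token, which is exactly why this toy is so much weaker than a transformer: ChatGPT conditions every prediction on the entire preceding conversation.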

How Would You Detect ChatGPT-generated Text?

  • After learning about ChatGPT and seeing its vast use cases, the next question on everyone’s mind usually is: how would you be able to detect whether a piece of content was generated by ChatGPT or a human?
  • Let’s look at a few methods:
    • Watermarking: A Watermark for Large Language Models
      • “In the paper “A Watermark for Large Language Models”, the authors propose a relatively simple method. The idea is that the creators of the LLM would add a “watermark” signal to any generated text passage, such that the meaning and quality of the passage is not altered by the signal, the signal can easily be detected without needing any access to the LLM that generated it, and the signal cannot be easily removed by simple modifications to the text” (source: Prof. Melanie Mitchell)
      • Watermarking by an LLM would have it unnoticeably embed a secret signal in the generated text that would later help identify where the text came from.
      • OpenAI’s watermarking is a cryptography-based approach to the problem.
    • DetectGPT by Eric Mitchell et al.:
      • “DetectGPT relies on generating the (log-)probabilities of the text. If an LLM produces text, each token has a conditional probability of appearing based on the previous tokens. Multiply all these conditional probabilities (or effectively, sum up the log probabilities) to obtain the (joint) probability for the text.
      • DetectGPT then perturbs the text: if the probability of the new text is noticeably lower than the original one it is AI-generated. Otherwise, if it’s approximately the same, it’s human-generated.
      • e.g., consider the 2 sentences below
        • original input: “This sentence is generated by an AI or human” => log-proba 1
        • perturbed: “This writing is created by a an AI or person” => log-proba 2
        • If log-proba 2 < log-proba 1 -> AI-generated
        • If log-proba 2 ~ log-proba 1 -> human-generated
      • Limitation: Requires access to the (log-)probabilities of the texts. This involves using a specific LLM model, which may not be representative of the AI model used to generate the text in question.” Sebastian Raschka
    • AI Classifier by OpenAI:
      • As stated by OpenAI, “The AI Text Classifier is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT.”
      • The training dataset here consists of both human- and AI-generated text, and the model maps its probability estimate to labels such as ‘very unlikely’, ‘unlikely’, or ‘unclear’ that the text is AI-generated.
    • GPTZero:
      • “GPTZero computes perplexity values. The perplexity is related to the log-probability of the text mentioned for DetectGPT above. The perplexity is the exponent of the negative log-probability. So, the lower the perplexity, the less random the text. Large language models learn to maximize the text probability, which means minimizing the negative log-probability, which in turn means minimizing the perplexity.
      • GPTZero then assumes that texts with lower perplexity are more likely to have been generated by an AI.
      • Limitations: see DetectGPT above. Furthermore, GPTZero only approximates the perplexity values by using a linear model.” Sebastian Raschka
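The quantities DetectGPT and GPTZero rely on are easy to illustrate. The sketch below uses a toy unigram "language model" in place of a real LLM; the corpus and sentences are invented for illustration, and a real detector would use the token-conditional log-probabilities of a large model such as GPT-2 or larger.

```python
import math
from collections import Counter

# Toy "training corpus" for a unigram language model (an assumption
# for illustration; real detectors query an actual LLM's token probs).
CORPUS = ("the cat sat on the mat the cat sat on the mat "
          "the dog sat on the rug").split()
COUNTS = Counter(CORPUS)
TOTAL = sum(COUNTS.values())

def log_prob(text):
    """Sum of per-token log-probabilities, i.e. the joint log-probability
    of the text. (A real LLM conditions each token on all previous ones;
    a unigram model treats tokens as independent.)"""
    return sum(math.log(COUNTS[w] / TOTAL) for w in text.split())

def perplexity(text):
    """Perplexity = exp(negative average log-probability). Lower
    perplexity means the model finds the text less 'surprising'."""
    n = len(text.split())
    return math.exp(-log_prob(text) / n)

original = "the cat sat on the mat"
perturbed = "the dog sat on the rug"  # a reworded variant

# DetectGPT-style comparison: if perturbing the text drops its
# log-probability noticeably, the original was likely model-generated.
print(log_prob(original) > log_prob(perturbed))  # True under this toy model

# GPTZero-style signal: a low perplexity suggests machine text.
print(perplexity(original))
```

Under this toy model the perturbed sentence uses rarer words, so its log-probability is lower; DetectGPT's actual test compares the original against many semantically similar perturbations produced by a mask-filling model.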
  • Paper: Training language models to follow instructions with human feedback by Ouyang et al.
  • InstructGPT was also trained with RLHF, much like ChatGPT, in order to align the language model with the user’s intent; you can learn more about the algorithm below.
  • InstructGPT’s training can be broken down into 3 steps:
  • Supervised Learning:
    • In this step, GPT-3 is fine-tuned with supervised learning based on human annotations that carry out instruction tuning.
    • Essentially, InstructGPT starts with a set of labeled data that demonstrates the desired outcome and behavior from the model which is used to fine-tune GPT-3 with supervised learning.
  • Reward Model:
    • Here, a dataset of comparisons is collected: several output options are sampled for a prompt (possibly from different model variants), and a labeler ranks them from best to worst. A reward model is trained on these rankings as part of this step.
  • Combining the two:
    • Finally, the model generates an output for a new prompt drawn from the dataset; the reward model calculates a reward for that output, and the policy is then updated using reinforcement learning via proximal policy optimization (PPO), with a KL-divergence penalty that keeps the policy close to the supervised fine-tuned model.
  • We can visually understand the process by looking at the image below from OpenAI’s Training language models to follow instructions with human feedback

  • Here is the improvement of performance by InstructGPT over its predecessors:
    • “The resulting InstructGPT models are much better at following instructions than GPT-3. They also make up facts less often, and show small decreases in toxic output generation. Our labelers prefer outputs from our 1.3B InstructGPT model over outputs from a 175B GPT-3 model, despite having more than 100x fewer parameters.” OpenAI
    • It is to be noted that InstructGPT can still make simple mistakes; however, fine-tuning with human feedback has still enabled it to align with human intent (with a focus on enhancing trust and safety).
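The comparison loss at the heart of the reward-model step can be sketched in a few lines. This is an illustrative toy, not OpenAI's implementation: the scalar rewards below are made up, whereas in practice they are produced by a fine-tuned transformer scoring whole responses.

```python
import math

def sigmoid(x):
    """Logistic function, mapping a real-valued margin to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def reward_model_loss(reward_chosen, reward_rejected):
    """Pairwise comparison loss: the negative log-likelihood that the
    labeler-preferred output beats the rejected one. It is minimised
    when reward_chosen exceeds reward_rejected by a wide margin."""
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# The loss shrinks as the margin between the two rewards grows:
print(reward_model_loss(2.0, 0.5))  # moderate margin
print(reward_model_loss(4.0, 0.5))  # larger margin, smaller loss
```

Training the reward model amounts to minimising this loss over all ranked pairs in the comparison dataset; the resulting scalar reward is what PPO maximises in the final step.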


Friday, 12 May 2023

Why India falls behind in citations despite producing high numbers of research papers?

 Source: https://www.financialexpress.com/education-2/why-india-falls-behind-in-citations-despite-producing-high-numbers-of-research-papers/3082109/

India ranks fourth in research output but ninth when it comes to research citation, raising concern over the quality of work produced.

The low citation rate of Indian research papers is mainly due to a lack of originality and the rigour required.

Mediocrity in research and a lack of research culture in India are considered among the major factors behind the low standing of Indian Higher Education Institutions (HEIs) in global university ranking systems. India ranks ninth in research citation despite producing double the global average in research output. “This shows that a lot of research being done is not as impactful and relevant as it is expected to be. It is a matter of concern, as the purpose of research is to contribute to the existing pool of knowledge and benefit society at large,” Tripti Toor, associate professor, IILM University, said.

Experts believe that the low citation rate of Indian research papers is mainly due to a lack of research and writing skills, limited proficiency in language, and the desire to obtain academic titles, perks, and promotions. “Many published research works from India lack originality besides the rigour required for a scholarly paper to be cited by academia. Institutional rankings have further encouraged a culture where analysis is assessed in numbers rather than on qualitative indicators of creativity and inventiveness,” Kokil Jain, dean of research and EFPM program chair, Fortune Institute of International Business, explained.

Furthermore, despite an increase in the number of women in STEM education, there is still a significant gender gap in research and innovation careers. “Unconscious biases in hiring and promotion can result in women being overlooked for research and innovation careers, even if they are equally or more qualified than male candidates. Familial responsibilities become a further hindrance when balancing the demands of a research or innovation career,” Daviender Narang, director, Jaipuria Institute of Management, explained.

In addition, cultural biases may be operational when it comes to citation of Indian research work. “There may be a bias against citing research from certain countries or regions. It is possible that researchers would prefer not to cite research from countries they perceive to be less developed in certain areas,” Jain noted.

To address these challenges, experts opined that India needs to invest more in research infrastructure, provide better training to researchers, encourage more international collaborations, and increase funding for research. “Additionally, researchers need to focus on producing high-quality research with global significance and publish in international journals to increase their visibility and impact,” Narang added.