Your search
Results: 71 resources
-
Greetings. If our institution opts for the Tier 3 plan for the ChatGPT API, allowing students to use our API, how does usage that exceeds or falls below the predefined Tier 3 limits affect the monthly price? To clarify: if usage exceeds the Tier 3 limits, will our monthly payment remain fixed at $1,000, or will the price be adjusted to our actual usage? Details of the tier plans are at the link provided: https://platform.openai.com/docs/guides/...
-
The scholarly information-seeking process for behavioral research consists of three phases: searching, accessing, and processing of past research. Existing IT artifacts, such as Google Scholar, have in part addressed the searching and accessing phases, but fall short of facilitating the processing phase, creating a knowledge inaccessibility problem. We propose a behavioral ontology learning from text (BOLT) design framework that presents concrete prescriptions for developing systems capable...
-
The theoretical positioning of a review is of the utmost importance in terms of its contribution to knowledge. This paper clarifies the significance of this design principle for different types of review, i.e. for describing, understanding, explaining or testing purposes. Furthermore, new tools now make it both possible and relevant for bibliometric novices to use bibliometrics to support literature reviews. Applying the BIBGT method and combining two bibliometric techniques –...
-
Abstract Evidence reviews are widely used to summarize findings from existing studies and, as such, are an important base for policy analysis. Over the past 50 years, three waves of evidence reviews have emerged: (1) the meta‐analysis wave, (2) the mixed‐methods synthesis wave, and (3) the core components wave. The present article first describes these waves and reflects on the benefits and limitations of each wave in the context of policy analysis....
-
Abstract With the accelerating growth of the academic corpus, doubling every 9 years, machine learning is a promising avenue to make systematic review manageable. Though several notable advancements have already been made, the incorporation of machine learning is less than optimal, still relying on a sequential, staged process designed to accommodate a purely human approach, exemplified by PRISMA. Here, we test a spiral, alternating or oscillating approach, where full-text...
-
Abstract Extracting structured knowledge from scientific text remains a challenging task for machine learning models. Here, we present a simple approach to joint named entity recognition and relation extraction and demonstrate how pretrained large language models (GPT-3, Llama-2) can be fine-tuned to extract useful records of complex scientific knowledge. We test three representative tasks in materials chemistry: linking dopants and host materials, cataloging metal-organic...
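The snippet above describes fine-tuning LLMs to emit structured records (e.g. dopant–host pairs) from scientific text. As a minimal sketch of the post-processing side of such a pipeline, the function below parses a JSON-formatted model completion into record tuples; the JSON schema and field names here are illustrative assumptions, not the paper's actual output format.

```python
import json

def parse_doping_records(completion: str):
    """Parse a JSON-formatted model completion into (dopant, host) pairs.

    Assumes (hypothetically) that the fine-tuned model was trained to emit
    JSON of the form {"doping": [{"dopant": ..., "host": ...}, ...]}.
    """
    data = json.loads(completion)
    records = []
    for entry in data.get("doping", []):
        dopant = entry.get("dopant")
        host = entry.get("host")
        # Keep only complete records; partial extractions are dropped.
        if dopant and host:
            records.append((dopant, host))
    return records

# Example completion a fine-tuned model might emit (illustrative only):
completion = '{"doping": [{"dopant": "Mn", "host": "GaAs"}, {"dopant": "N", "host": "TiO2"}]}'
print(parse_doping_records(completion))
```

Emitting a constrained schema like this, rather than free text, is what makes the extracted knowledge machine-checkable downstream.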
-
Systematic reviews are vital to the pursuit of evidence-based medicine within healthcare. Screening titles and abstracts (T&Ab) for inclusion in a systematic review is an intensive, and often collaborative, step. The use of appropriate tools is therefore important. In this study, we identified and evaluated the usability of software tools that support T&Ab screening for systematic reviews within healthcare research.
-
Academic databases are a fundamental source for identifying relevant literature in a field of study. Scopus contains more than 90 million records and indexes around 12,000 documents per day. However, this context and the cumulative nature of science itself make it difficult to selectively identify information. In addition, academic database search tools are not very intuitive, and require an iterative and relatively slow process of searching and evaluation. In response to these...
-
Citation indices are tools used by the academic community for research and research evaluation that aggregate scientific literature output and measure impact by collating citation counts. Citation indices help measure the interconnections between scientific papers but fall short because they fail to communicate contextual information about a citation. The use of citations in research evaluation without consideration of context can be problematic because a citation that presents contrasting...
-
As the original AI qualitative data analysis software, NVivo has fine-tuned its autocoding feature, which lets researchers use AI to detect and code themes and sentiments in text data.
-
Abstract Reviews have long been recognized as among the most important forms of scientific communication. The rapid growth of the primary literature has further increased the need for reviews to distill and interpret the literature. This review on Reviews and Reviewing: Approaches to Research Synthesis encompasses the evolution of the review literature, taxonomy of review literature, uses and users of reviews, the process of preparing reviews, assessment of review quality and...
-
The repository aims to create an overview and comparison of software used for systematically screening large amounts of textual data using machine learning.
-
Rayyan is an intelligent research collaboration platform that saves you time completing literature reviews and systematic reviews. Intuitive, scalable, fast.