Your search
Results: 111 resources
-
This session highlights the basics of Vocational Education and Training (VET). Each university has its own characteristics. The contributions seek to encourage various forms of VET, and challenges for universities and other institutions are emphasised. The contributions help draw conclusions for the further structuring of VET in Sub-Saharan Africa. Other country-specific articles from the session concentrate on the characteristics and orientation of VET systems, thereby helping create an overall...
-
Abstract: We present SemOpenAlex, an extensive RDF knowledge graph that contains over 26 billion triples about scientific publications and their associated entities, such as authors, institutions, journals, and concepts. SemOpenAlex is licensed under CC0, providing free and open access to the data. We offer the data through multiple channels, including RDF dump files, a SPARQL endpoint, and as a data source in the Linked Open Data cloud,...
-
External review is a fundamental component of Global Environmental Assessments, ensuring their processes are comprehensive, objective, open and transparent, and are perceived as such. Here, we focus on review of Intergovernmental Panel on Climate Change (IPCC) Assessment Reports. The review process has received little scrutiny, although review comments and author responses are public. Here we analyse review documents from the Fourth and Fifth Assessments, focusing primarily on Working Group...
-
We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding, mathematics, and...
-
Pretrained general-purpose language models can achieve state-of-the-art accuracies in various natural language processing domains by adapting to downstream tasks via zero-shot, few-shot and fine-tuning techniques. Because of their success, the size of these models has increased rapidly, requiring high-performance hardware, software, and algorithmic techniques to enable training such large models. As a result of a joint effort between Microsoft and NVIDIA, we present details on the training...
-
Scientific research plays a key role in the advancement of human knowledge and pursuit of solutions to important societal challenges. Typically, research occurs within specific institutions where data are generated and subsequently analyzed. Although collaborative science bringing together multiple institutions is now common, in such collaborations the analytical processing of the data is often performed by individual researchers within the team, with only limited internal oversight and...
-
The anthology presents current approaches to digitally supported professional learning. The articles provide insights into the dynamic development of the interfaces between gainful employment and vocational training and further education in the context of digitization of work and learning aids. The volume is thus connected to the publication “Berufsbildung am Bau digital” (edited by Bernd Mahrin and Johannes Meyser), which was published in 2019 by the University Press of the Technische...
-
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our...
-
We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process...
Theme
- Artificial Intelligence (8)
- Climate (1)
- Education (1)
- Energy consumption (1)
- Climate-resilience, classrooms, physical environment (1)
- COVID-19 (3)
- Education Workforce (2)
- Geodata in Education (1)
- Implementation science (6)
- Design-thinking (6)
- Literature Reviews, meta-analysis, evidence synthesis (3)
- Programme implementation (6)
- Research and evaluation (7)
- Teacher Education (10)
- Technical and vocational education and training (TVET) (4)
- Publications (4)
Location
- Africa (5)
  - Eastern Africa (4)
  - Middle Africa (1)
  - Northern Africa (1)
  - Southern Africa (3)
    - Namibia (1)
    - South Africa (3)
  - Western Africa (4)
- Americas (5)
  - Central America (2)
  - Northern America (5)
    - Canada (3)
    - United States (5)
  - South America (2)
- Asia (4)
  - Eastern Asia (2)
    - China (2)
    - Japan (2)
    - Korea, Republic (1)
  - South-eastern Asia (2)
    - Philippines (1)
    - Singapore (2)
  - Southern Asia (3)
  - Western Asia (3)
    - Georgia (1)
    - Israel (1)
    - Saudi Arabia (1)
    - State of Palestine (1)
    - Turkey (2)
- Europe (3)
  - Eastern Europe (2)
    - Poland (1)
    - Romania (1)
    - Russian Federation (1)
  - Northern Europe (3)
    - Finland (1)
    - Ireland (1)
    - Sweden (2)
    - United Kingdom (3)
  - Southern Europe (3)
  - Western Europe (2)
    - Austria (1)
    - France (2)
    - Germany (2)
    - Netherlands (2)
- Oceania (2)
  - Australia and New Zealand (2)
    - Australia (2)
Publication year
- Between 1900 and 1999 (1)
  - Between 1990 and 1999 (1)
    - 1995 (1)
- Between 2000 and 2024 (106)
- Unknown (4)