|Period:||Period 3 (weeks 6 through 15, i.e., 8 February through 16 April 2021; retake in week 27)|
|Participants:||55 enrolments so far|
|Schedule:||The official schedule can be found in MyTimetable|
|Note:||No up-to-date course description is available; the text below is from the academic year 2019/2020.|
|Contents:||The aim of the course is to offer an in-depth introduction to Natural Language Generation (NLG), with a focus on its empirical basis, its practical applications (e.g., in medicine and weather forecasting), and theoretical perspectives (Gricean, Bayesian, etc.). Different approaches to the construction of NLG systems will be discussed. The emphasis in this course will be on underlying ideas, not on algorithmic details. The taught component of the course broadly consists of four main parts:
I. General introduction. In the first part of the course you will learn what the different aims of practical and theoretical NLG can be, what the main elements of the standard NLG pipeline are, how NLG systems are built, and how they are evaluated. Template-based and end-to-end systems will be discussed briefly.
II. Practical systems. You will get acquainted with a range of practical applications of NLG, a few of which will be discussed in detail; candidate applications include medical decision support, knowledge editing, and robo-journalism. Strengths, weaknesses, and opportunities for the practical deployment of these systems will be discussed. If time allows, we will devote attention to multimodal systems, which produce documents in which pictures or diagrams complement a generated text.
III. Module in focus: Referring Expression Generation. We will zoom in on the part of the standard NLG pipeline that is responsible for the generation of referring expressions (e.g., when an NLG system says “the city where you work” or “the area north of the river Rhine”). We will discuss a range of rule-based algorithms, as well as some that are based on Machine Learning.
IV. Perspectives on NLG. We will discuss what linguists, philosophers, and other theoreticians have to say about human language production, and how this relates to NLG. We may start with a Gricean approach and continue with the Bayesian-inspired Rational Speech Acts approach. We will ask how accurate and how explanatory existing NLG algorithms are as models of human language production (i.e., human speaking and writing), and what the main open questions for research in this area are.
The core of the course will be presented in lectures. Additionally, students will be asked to read, present, and discuss some key papers and systems that illustrate the issues listed above.|
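To give a flavour of the rule-based Referring Expression Generation algorithms covered in part III, the sketch below implements the classic Incremental Algorithm (Dale & Reiter, 1995), which adds attributes one by one until the target object is distinguished from all distractors. The toy domain, attribute names, and preference order are invented for illustration.

```python
# Sketch of the Incremental Algorithm for Referring Expression
# Generation (Dale & Reiter, 1995). The entities, attributes, and
# preference order below are illustrative inventions.

def incremental_algorithm(target, distractors, preference_order):
    """Select attribute-value pairs that rule out all distractors."""
    description = []
    remaining = list(distractors)
    for attr in preference_order:
        value = target[attr]
        # Keep an attribute only if it rules out at least one distractor.
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description.append((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:  # target uniquely identified
            break
    return description

# Toy domain: three pieces of furniture.
target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [
    {"type": "chair", "colour": "blue", "size": "large"},
    {"type": "table", "colour": "red", "size": "small"},
]
print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
# -> [('type', 'chair'), ('colour', 'red')]
```

A separate realisation step would then map the selected attribute-value pairs to a noun phrase such as “the red chair”.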
A useful overview of this research area is: Gatt, A. & Krahmer, E.J. (2018). Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research.
|Exam form:||Exam (70%), individual coursework (20%), group presentation (10%)|
|Minimum effort to qualify for 2nd chance exam:||To qualify for the retake exam, the grade for the original exam must be at least 4.|
|Description:||Upon completion of this course, the student:
Understands the aims and components of a typical NLG system.
Is able to apply the main research methods that are used in NLG.
Understands the linguistic underpinnings of NLG.
Is able to summarise and present an NLG research paper.|