Courses and Workshops


Language and Computation Courses

Foundational Courses

  1. Treebanks, Linguistic Theories and Applications, Petya Osenova (Sofia University “St. Kliment Ohridski”, Bulgaria) and Kiril Simov (Bulgarian Academy of Sciences, Bulgaria)

    The course aims to introduce syntax in contemporary linguistic theories through syntactic resources, called ‘treebanks’, and their main NLP applications. Treebanking has been active for more than 30 years now. The trade-off between linguistic grammars and applied syntactic corpora has taken many forms, such as attempts at linguistic neutrality versus devotion to a specific linguistic theory, underspecification versus loss of information, and scaling with automatic parsers.

    The course will outline the design of the annotation schemes as reflections of the linguistic theories; the relation of the syntactic information to other types of linguistic knowledge, such as morphology and semantics; the training of parsers on the treebanks; the applications of the treebanks for Machine Translation, Information Retrieval, Word Sense Disambiguation, etc. We will trace the treebank endeavors from monolingual to multilingual architectures and will pay special attention to the Universal Dependencies Initiative as a mega-lingual project supported by Google.

  2. The Lexicon: an Interdisciplinary Introduction, Elisabetta Jezek (University of Pavia, Italy)

    The course provides an interdisciplinary introduction to the study of words, their main properties, and how we use them to create meaning. It offers a detailed description of the structure of the lexicon, and introduces the categories needed to classify and represent the types of meaning variation that words display in composition; it also examines the interconnection between these variations and syntactic form, cognition, and pragmatics. We use empirical evidence from corpora and human judgements to evaluate formalisms and methodologies developed in formal linguistics, cognitive science, and natural language processing – particularly distributional semantics – to account for lexical phenomena. The key feature of the course is that it merges evidence-based theoretical accounts with computational perspectives. The measured and accessible approach makes it an ideal foundational course for students and scholars across disciplines: linguistics, cognitive science, philosophy, computer science, artificial intelligence, and data science (text mining).

Introductory Courses

  1. Advanced Regression Methods for Linguistics, Martijn Wieling (University of Groningen, The Netherlands)

    This course will introduce students to advanced regression methods. While many people have learned about multiple regression, interpreting the output of a regression model, especially when interactions are present, is often found difficult. The course will therefore start with one lecture explaining multiple regression. Subsequently, two lectures of the course will cover (Gaussian and logistic) mixed-effects regression, in order to enable students to take into account structural variability present in the data. For example, experiments in linguistics frequently involve participants responding to multiple items. This structure needs to be brought into the model in order to prevent overconfident (i.e. too low) p-values. The final two lectures of this course provide a thorough introduction to generalized additive modeling, a powerful method for analyzing non-linear patterns in data. This approach is especially useful when time-series data (such as EEG, eye-tracking, or articulatory data) need to be analyzed.
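    To make the interaction point concrete: with two dummy-coded binary predictors and balanced data, the coefficients of a saturated regression can be read off the cell means, which shows why a "main effect" in an interaction model is really a simple effect at the reference level. The data and variable names below are invented for illustration; the course itself works with dedicated statistical software.

```python
# A minimal sketch (not course material): with two binary predictors and
# balanced cells, OLS coefficients for y ~ x1 * x2 have closed forms in
# terms of cell means, which makes the interaction term easy to interpret.

def cell_mean(rows, x1, x2):
    vals = [y for (a, b, y) in rows if a == x1 and b == x2]
    return sum(vals) / len(vals)

def interaction_fit(rows):
    """Return (b0, b1, b2, b12) for y = b0 + b1*x1 + b2*x2 + b12*x1*x2."""
    m00 = cell_mean(rows, 0, 0)
    m10 = cell_mean(rows, 1, 0)
    m01 = cell_mean(rows, 0, 1)
    m11 = cell_mean(rows, 1, 1)
    b0 = m00                         # baseline cell (x1 = 0, x2 = 0)
    b1 = m10 - m00                   # effect of x1 *when x2 = 0*
    b2 = m01 - m00                   # effect of x2 *when x1 = 0*
    b12 = (m11 - m01) - (m10 - m00)  # how the x1 effect changes when x2 = 1
    return b0, b1, b2, b12

# toy data: (x1, x2, y); e.g. x1 = condition, x2 = group (all invented)
data = [(0, 0, 2.0), (0, 0, 2.2), (1, 0, 3.0), (1, 0, 3.2),
        (0, 1, 2.4), (0, 1, 2.6), (1, 1, 5.0), (1, 1, 5.2)]
b0, b1, b2, b12 = interaction_fit(data)
```

    Here the effect of x1 is 1.0 in the reference group but 1.0 + 1.6 = 2.6 in the other group; reporting only the "main effect" of x1 would therefore be misleading.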

  2. Distributional Semantics – A Practical Introduction, Stefan Evert (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)

    Distributional semantic models (DSM) – also known as “word space” or “distributional similarity” models – are based on the assumption that the meaning of a word can (at least to a certain extent) be inferred from its usage, i.e. its distribution in text. Therefore, these models dynamically build semantic representations – in the form of high-dimensional vector spaces – through a statistical analysis of the contexts in which words occur. DSMs are a promising technique for solving the lexical acquisition bottleneck by unsupervised learning, and their distributed representation provides a cognitively plausible, robust and flexible architecture for the organisation and processing of semantic information.

    This course aims to equip participants with the background knowledge and skills needed to build different kinds of DSM representations – from traditional “count” models to neural word embeddings – and apply them to a wide range of tasks. There will be a particular focus on practical exercises with the help of user-friendly software packages and various pre-built models.
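    The core idea of a "count" model can be shown in a few lines (the corpus, window size, and similarity measure below are purely illustrative, not the course's materials):

```python
# A toy "count" DSM: build word-by-word co-occurrence vectors from a tiny
# corpus and compare words by cosine similarity.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the mouse",
    "stocks fell as the market closed",
    "the market rallied and stocks rose",
]

window = 2
vectors = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                vectors[w][tokens[j]] += 1   # count context word occurrences

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda x: sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v))

# words occurring in similar contexts end up with similar vectors
sim_cat_dog = cosine(vectors["cat"], vectors["dog"])
sim_cat_stocks = cosine(vectors["cat"], vectors["stocks"])
```

    On this toy corpus, "cat" and "dog" share contexts and come out highly similar, while "cat" and "stocks" share none; real models add weighting (e.g. association scores) and dimensionality reduction on top of the raw counts.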

  3. Multiword Expressions in a Nutshell, Carlos Ramisch (Aix Marseille University, France), Agata Savary (Université François Rabelais Tours – IUT de Blois, France) and Aline Villavicencio (University of Essex, UK and Universidade Federal do Rio Grande do Sul, Brazil)

    Much has been said and written about multiword expressions (MWEs). Even though they are a “pain in the neck”, they have become a hot topic in computational linguistics, as focus has moved from automatic discovery to in-context identification, parsing, semantic interpretation and machine translation. Nonetheless, the current treatment of MWEs in language technology is far from satisfactory, given their complex and heterogeneous behaviour. The goal of this hands-on course is to provide a broad introduction to MWEs, with a strong multilingual emphasis. It covers theoretical foundations, discussing properties and guidelines for their annotation, possible scenarios for their computational treatment, and techniques for idiomaticity prediction. Laboratory sessions provide students with an opportunity to use tools like FLAT for corpus annotation and the mwetoolkit for idiomaticity prediction. This course is addressed to students and researchers in computational linguistics who wish to analyse and integrate MWEs into their computational tools and linguistic studies.

  4. Probabilistic Language Understanding, Gregory Scontras (University of California, Irvine, USA)

    Recent advances in computational cognitive science (i.e., simulation-based probabilistic programs) have paved the way for significant progress in formal, implementable models of pragmatics. Rather than describing a pragmatic reasoning process, these models articulate and implement one, deriving both qualitative and quantitative predictions of human behavior—predictions that consistently prove correct, demonstrating the viability and value of the framework. The present course provides a practical introduction to the Bayesian Rational Speech Act modeling framework. Through hands-on practice deconstructing web-based language models, students will learn the basics of the modeling framework. Students should expect to leave the course having gained the ability to

    1. digest the primary modeling literature and
    2. independently construct models of their own.
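    A sense of how such models work can be given in ordinary Python (the reference game, uniform priors, and rationality parameter below are illustrative; the course builds such models in a web-based probabilistic programming language):

```python
# A bare-bones Rational Speech Act model: a literal listener, a pragmatic
# speaker who prefers informative utterances, and a pragmatic listener who
# inverts the speaker by Bayes' rule.
objects = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square", "circle"]

def literal(utt, obj):
    """Truth-conditional semantics: does the word describe the object?"""
    return 1.0 if utt in obj.split("_") else 0.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(utt):
    """Literal listener: uniform prior restricted to where utt is true."""
    return normalize({o: literal(utt, o) for o in objects})

def S1(obj, alpha=1.0):
    """Pragmatic speaker: prefers utterances that point L0 towards obj."""
    return normalize({u: L0(u)[obj] ** alpha for u in utterances})

def L1(utt):
    """Pragmatic listener: Bayesian inversion of the speaker."""
    return normalize({o: S1(o)[utt] for o in objects})

# Hearing "blue", the pragmatic listener infers the blue square: for the
# blue circle, the speaker could have used the more informative "circle".
posterior = L1("blue")
```

    This derives a quantitative prediction (a scalar-implicature-like inference) from the model rather than stipulating it.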


  5. Introduction to Linked Open Data in Linguistics, Thierry Declerck (DFKI GmbH, Germany and ACDH-ÖAW, Austria) and John P. McCrae (The National University of Ireland Galway, Ireland)

    Publishing language resources under open licenses and linking them together has become an area of increasing interest in academic circles, including applied linguistics, lexicography, natural language processing and information technology, as it facilitates the exchange of knowledge and information across the boundaries between disciplines as well as between academia and the IT business.

    Until now, this development has been discussed in workshops and datathons, and has also been at the core of the work conducted within the W3C Ontology-Lexica Community Group, whose final report was published in May 2016 (Lexicon Model for Ontologies: Community Report, 10 May 2016). We see this development as an important step towards making linguistic data:

    1. easily and uniformly queryable,
    2. interoperable and
    3. sharable over the Web using open standards such as the HTTP protocol and the RDF data model.
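    As a small illustration of what sharing over the Web with RDF amounts to, here is a hand-rolled N-Triples serialization of a toy OntoLex-style lexical entry. The property names follow the published OntoLex-Lemon vocabulary, but the entry and the `lex` namespace are invented, and a real project would use an RDF library rather than string building:

```python
# Illustration only: a tiny lemon/OntoLex-style lexical entry as N-Triples.
LEX = "http://www.example.org/lexicon#"           # hypothetical namespace
ONTOLEX = "http://www.w3.org/ns/lemon/ontolex#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    (LEX + "cat_n", RDF_TYPE, ONTOLEX + "LexicalEntry"),
    (LEX + "cat_n", ONTOLEX + "canonicalForm", LEX + "cat_form"),
    (LEX + "cat_form", ONTOLEX + "writtenRep", '"cat"@en'),
    (LEX + "cat_n", ONTOLEX + "denotes", "http://dbpedia.org/resource/Cat"),
]

def nt(s, p, o):
    """One N-Triples line; literals are kept as-is, IRIs get angle brackets."""
    obj = o if o.startswith('"') else "<%s>" % o
    return "<%s> <%s> %s ." % (s, p, obj)

doc = "\n".join(nt(*t) for t in triples)
```

    The last triple links the lexical entry to an external knowledge-base resource, which is exactly the kind of cross-dataset link that makes the data "linked".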


    While it has been shown that linked data has significant value for the management of language resources in the Web, the practice is still far from being an accepted standard in the community. Thus it is important that we continue to push the development and adoption of linked data technologies among creators of language resources, but also within curricula at universities and summer schools.

    The main goal of this course is to give people in the field of computational linguistics practical skills in the fields of linked data and semantic technologies as applied to linguistic and lexical data. After developing a short initial ontology, participants will learn step by step how to represent multilingual data with their ontology and how to ground it linguistically. We will introduce a variety of state-of-the-art multilingual representation formats and application scenarios in which to leverage and exploit multilingual semantic data. Finally, we will detail the connection of lexical and corpus resources using the NIF data format. At the end of the class, participants will be able to use Linguistic Linked Open Data (LLOD) for the semantic representation of linguistic data. Students will also be made familiar with best practices for publishing their own linguistic data in the Linguistic Linked Data cloud (guidelines resulting from the past European Support Action LIDER).

    Both instructors of this course have spent recent years investigating the interface between lexical data and knowledge representation systems, and have published together on various aspects of this intersection of ontologies and natural language resources. John was a driving force behind the development of the Lexicon Model for Ontologies (lemon) and its further development in the context of the W3C Community Group on Ontology-Lexica (see the sections “practical information” and “references” below). Thierry has considerable experience in connecting the field of lexicography with the LLOD, and he has taught related topics at three past ESSLLIs. John and Thierry have also taught on these topics at recent Linguistic Linked Data and Semantic Technology summer schools (Eurolan) and datathons (the LIDER datathon).

  6. Cross-lingual Semantic Parsing, Kilian Evang (University of Düsseldorf, Germany)

    Semantic parsing deals with translating natural language utterances into something that a computer can “understand” – for example, into database queries, formal commands or logical formulas. In order to do this, computers need to learn to understand word meanings, and to combine them into utterance meanings. In this course, you will learn how this can be done using combinatory categorial grammars (CCG) and compositional semantics. Moreover, you will learn how parallel corpora and annotation projection can be used to do this cross-lingually, that is, for many different natural languages, with little additional human effort.
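    The combinatory mechanism at the heart of CCG can be sketched in miniature (the lexicon and the string-based logical forms below are illustrative, not the course's actual grammar or parser):

```python
# A miniature CCG fragment: categories with lambda-style semantics,
# combined by forward/backward application to build an utterance meaning.

# lexicon: word -> (category, semantics); complex categories are tuples
# (result, slash, argument); semantics build logical-form strings
lexicon = {
    "John":   ("NP", "john"),
    "sleeps": (("S", "\\", "NP"), lambda x: "sleep(%s)" % x),   # S\NP
    "loves":  ((("S", "\\", "NP"), "/", "NP"),                  # (S\NP)/NP
               lambda y: lambda x: "love(%s,%s)" % (x, y)),
    "Mary":   ("NP", "mary"),
}

def forward_apply(left, right):
    """X/Y  Y  =>  X (forward application)."""
    (lcat, lsem), (rcat, rsem) = left, right
    assert lcat[1] == "/" and lcat[2] == rcat
    return (lcat[0], lsem(rsem))

def backward_apply(left, right):
    """Y  X\Y  =>  X (backward application)."""
    (lcat, lsem), (rcat, rsem) = left, right
    assert rcat[1] == "\\" and rcat[2] == lcat
    return (rcat[0], rsem(lsem))

# "John loves Mary": combine (loves Mary) by forward application,
# then John with the resulting VP by backward application
vp = forward_apply(lexicon["loves"], lexicon["Mary"])
cat, meaning = backward_apply(lexicon["John"], vp)
```

    Because the meaning is assembled compositionally from the lexicon, learning a semantic parser largely reduces to learning the right lexical categories and meanings, which is what makes the cross-lingual projection idea attractive.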

Advanced Courses

  1. Computing Dynamic Meanings: Building Integrated Competence-Performance Theories for Semantics, Jakub Dotlacil (University of Amsterdam, The Netherlands) and Adrian Brasoveanu (University of California, Santa Cruz, USA)

    This course will introduce a framework for developing integrated competence-performance theories for natural language semantics. Specifically, we will explicitly model semantic interpretation as part of a general cognitive architecture. Our theory of semantic interpretation as a cognitive process, implemented in a computational model, satisfies the following properties:

    1. it is incremental (i.e., it proceeds in the standard, left-to-right fashion)
    2. it models cognitive processes needed in interpretation (in particular, access to and retrieval from declarative memory)
    3. it can be matched and tested against performance data (on-line behavioral measures collected, for instance, in eye tracking experiments)


    The theory is built by connecting dynamic semantics approaches to natural language meaning and interpretation (DRT, Kamp 1981, Kamp and Reyle, 1993, FCS, Heim 1982, DPL, Groenendijk and Stokhof, 1991) with the cognitive architecture ACT-R (Anderson and Lebiere, 1998). Along the way, the course presents detailed performance / behavioral data that theories of interpretation as a cognitive process will have to account for.

  2. Neural Dependency Parsing of Morphologically-Rich Languages, Erhard Hinrichs (University of Tübingen, Germany) and Daniël de Kok (University of Tübingen, Germany)

    This course will introduce parsing of morphologically-rich languages using neural transition-based dependency parsers. Until recently, most work on neural transition-based dependency parsing was conducted on English. However, with the recent introduction of the Universal Dependency annotation scheme and corresponding treebanks for 50 languages (De Marneffe et al., 2014; Nivre et al., 2016), it seems timely to explore neural transition-based dependency parsing for other languages. In this course, we will focus on morphologically-rich languages and will draw on our own dependency parsing research as one use case of this kind. More generally, we will discuss how information about the macrostructure of different clause types (de Kok and Hinrichs, 2016) and about word-level morphology (de Kok, 2015; Ballesteros et al., 2015) can substantially improve parsing accuracy for such languages. We will further show how recurrent neural networks can be used to model such information.
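    The transition-based backbone of such parsers can be sketched independently of the neural scoring component. In a real parser, a network predicts each transition from stack and buffer features (including morphological ones); in this illustration the transition sequence is simply given:

```python
# A minimal arc-standard transition system (a generic sketch, not the
# instructors' parser): derive dependency arcs from a transition sequence.

def arc_standard(words, transitions):
    """words: tokens with index 0 = ROOT. Returns the set of (head, dep) arcs."""
    stack, buffer, arcs = [0], list(range(1, len(words))), set()
    for t in transitions:
        if t == "SHIFT":                 # move next buffer word onto the stack
            stack.append(buffer.pop(0))
        elif t == "LEFT-ARC":            # top governs second-top
            dep = stack.pop(-2)
            arcs.add((stack[-1], dep))
        elif t == "RIGHT-ARC":           # second-top governs top
            dep = stack.pop()
            arcs.add((stack[-1], dep))
    return arcs

# "ROOT the cat sleeps": det(cat <- the), nsubj(sleeps <- cat), root
words = ["ROOT", "the", "cat", "sleeps"]
gold = ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC", "RIGHT-ARC"]
arcs = arc_standard(words, gold)
```

    For morphologically-rich languages, the interesting part is the feature function: how case, agreement, and other word-level morphology inform the classifier that chooses among these transitions.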

  3. Probabilistic Modeling and Bayesian Data Analysis in Experimental Semantics and Pragmatics, Michael Franke (University of Tübingen, Germany) and Michael Henry Tessler (Stanford University, USA)

    Experimental approaches to theoretical questions in semantics and pragmatics are booming. Some see an empirical turn in progress. A welcome enrichment it may be, but the unification of rich theoretical work and novel experimental data brings new conceptual and practical problems: how do established theoretical notions lead to empirically testable predictions, and what can we learn from experimental data about theoretical variables of interest? This course addresses these questions by introducing theory-driven probabilistic modeling in connection with Bayesian data analysis as a helpful set of tools to learn from observational data through the lens of a theoretical model. We will introduce the basics of Bayesian data analysis and probabilistic modeling through a series of concrete case studies in natural language semantics and pragmatics.
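    The basic move of Bayesian data analysis can be shown with a grid approximation (the task, counts, and flat prior below are invented for illustration; the course uses probabilistic programming tools for realistic models):

```python
# Grid approximation of the posterior over theta, the probability that a
# participant endorses a sentence in a truth-value judgment task, given
# 7 "true" responses out of 10 trials.
k, n = 7, 10
grid = [i / 1000 for i in range(1001)]          # candidate theta values

def binom_like(theta, k, n):
    # binomial likelihood up to a constant (it cancels in normalization)
    return (theta ** k) * ((1 - theta) ** (n - k))

prior = [1.0 for _ in grid]                     # flat prior on [0, 1]
unnorm = [p * binom_like(t, k, n) for p, t in zip(prior, grid)]
z = sum(unnorm)
posterior = [u / z for u in unnorm]

posterior_mean = sum(t * p for t, p in zip(grid, posterior))
```

    With a flat prior this approximates the Beta(8, 4) posterior (mean 2/3); swapping in a theory-derived prior or a model-derived likelihood is what links the data analysis back to the semantic/pragmatic theory.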

  4. Word Vector Space Specialisation, Ivan Vulić (University of Cambridge, UK) and Nikola Mrkšić (University of Cambridge, UK)

    Word representation learning has become a research area of central importance in modern NLP. The most pervasive representation techniques are still grounded in the distributional hypothesis, as they are learned from co-occurrence information in large corpora, and coalesce various types of information (e.g., similarity vs. relatedness vs. association). Specialising vector spaces to maximise their content with respect to one key property while mitigating others has become an active research topic. Proposed approaches fall into two broad categories:

    1. Unsupervised methods which learn from raw textual corpora in more sophisticated ways (e.g. using context selection and attention); and
    2. Knowledge-base driven approaches which exploit available resources to encode external information into distributional vector spaces.

    This one-week introductory course will introduce students and researchers to recent methods for constructing vector spaces specialised for a range of downstream NLP applications. We will deliver a detailed survey of the proposed methods and discuss best practices for their intrinsic and application-oriented evaluation.
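    The knowledge-base driven direction can be sketched with a retrofitting-style update (in the spirit of Faruqui et al.'s 2015 method) that pulls the vectors of words linked in a lexical resource towards each other; the 2-d vectors and the synonym pair below are toy inventions:

```python
# Retrofitting-style specialisation: iteratively average each word's
# original vector with the current vectors of its lexical neighbours.
original = {"cheap": [1.0, 0.0], "inexpensive": [0.0, 1.0], "expensive": [0.9, 0.1]}
synonyms = {("cheap", "inexpensive")}

def neighbours(w):
    return [b if a == w else a for (a, b) in synonyms if w in (a, b)]

def retrofit(vecs, iterations=10, alpha=1.0, beta=1.0):
    q = {w: list(v) for w, v in vecs.items()}   # work on copies
    for _ in range(iterations):
        for w in vecs:
            ns = neighbours(w)
            if not ns:
                continue                        # unlinked words stay put
            for d in range(len(q[w])):
                q[w][d] = (alpha * vecs[w][d] + beta * sum(q[n][d] for n in ns)) / (alpha + beta * len(ns))
    return q

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

fitted = retrofit(original)
```

    Note what the sketch captures: the synonyms are drawn together, while "expensive", distributionally close to "cheap" but not linked to it in the resource, is left where it was.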

  5. Computational Models of Events, James Pustejovsky (Brandeis University, USA)

    The notion of ‘event’ has long been central both for modeling the semantics of natural language and for reasoning in goal-driven tasks in artificial intelligence. This course examines developments in computational models for events, bringing together recent work from the areas of semantics, logic, computer science, and computational linguistics. The goal of this course is to look at event structure from a unifying perspective, enabled by a new synthesis of how these disciplines have approached the problem. This entails examining the structure of events at all levels impacted by linguistic expressions:

    1. predicate decomposition and subatomic event structure;
    2. atomic events and the mapping to syntax;
    3. events in discourse structure; and
    4. the macro-event structure of narratives and scripts.



Workshops

  • Ambiguity: Perspectives on Representation and Resolution, Timm Lichte (Heinrich-Heine-University Düsseldorf, Germany) and Christian Wurm (Heinrich-Heine-University Düsseldorf, Germany)

    Natural language is overloaded in the sense that linguistic symbols can have, and usually do have, two or more (but enumerably many) possible interpretations, from which the hearer has to choose a specific one without being explicitly told to do so. This is what we want to call ambiguity, distinguishing it from the notions of underspecification and vagueness. Understood in this general way, ambiguity exists and arises on virtually all levels of linguistic modelling, and resolving ambiguity undoubtedly is one of the main challenges when dealing with language in communication.

    As a consequence of this diversity, a variety of perspectives exists on how to represent and resolve ambiguity. Also depending on the linguistic object at hand, some represent ambiguity in the semantics by means of, for example, plain disjunction, complex types, or game-theoretic models. Some leave it to syntax and assume, for example, two lexical entries for /bank/ that reflect the two different readings. At the same time, however, it is still unclear what semantic ambiguity should be attributed to in logical terms. Finally, if resolution fails, from a philosophical or inferential point of view, one should be interested in the question: given a sentence ambiguous between two readings, does another sentence follow from it? Hence possible treatments of ambiguity include resolution, representation and reasoning. Our main goal is hence to work towards a unified perspective on ambiguity, and for this, we want to bring together researchers from various backgrounds (linguistics, computational linguistics, computer science, logic, philosophy) to approach (among others) the following questions:

    1. How can ambiguity be represented in terms of logic, distributional vectors, weighted (deep) networks etc.?
    2. Is there a “core” notion of ambiguity, and what does it look like? How does it relate to similar phenomena such as polysemy?
    3. What sort of ambiguity should be treated in semantics proper? In particular, when does one want to get rid of ambiguity, and when should we prefer to keep track of it?
    4. What can be gained by combining different approaches to ambiguity, for example simple context-based word disambiguation and disambiguation based on semantic content? Are there underlying methods which always work (e.g. game theory)?
    5. What sort of knowledge is needed for resolving ambiguity? And how can the interaction between knowledge and resolution be modelled?
    6. How can we formalize the distinction between polysemy (including metonymy) and homonymy? Where do we need this distinction, and where not?
    7. What constraints exist on which meanings an ambiguous term can have? And how can we capture them?

    Our special focus is thus on bringing together approaches seeing ambiguity as a mere computational problem and approaches seeing it as a linguistic phenomenon with some interest in itself.


  • Annotation in Digital Humanities (annDH): How Can Linguistics/Computational Linguistics Help with Annotation in DH, Sandra Kübler (Indiana University, USA) and Heike Zinsmeister (University of Hamburg, Germany)

    Linguistic annotation is one of the core interfaces between linguistics and computational linguistics. It has also become a central interface between computational linguistics (CL) and digital humanities (DH). Texts are preprocessed and annotated, e.g. with parts of speech, for distant reading and other visualization applications, topic and network analyses, text mining and question answering for humanist research questions. In these applications the annotation is a means to an end and mostly invisible to the humanist researchers.

    In this workshop, we will push the boundary of this interface and focus on annotation beyond the standard linguistic categories, looking at categories and relations relevant for humanist research questions themselves, such as metaphors, stereotypes, entities, causation of historical events, narratives, or philosophical reasoning. In this area, CL cannot necessarily provide tools, but it can provide methodology and best practices. Thus, lessons learned in linguistic annotation can be repurposed for annotation in DH. This includes CL support of the epistemological process of developing the annotation categories themselves, which are often inductively—or abductively—derived in a hermeneutically cyclic way. Also included in the scope of the workshop is research into the data types in the digital humanities, which mostly concern non-canonical language and thus pose challenges for automated annotation.


  • NLP in the Era of Big Data, Deep Learning, and Post Truth, Preslav Nakov (Qatar Computing Research Institute, HBKU, Qatar), Ahmed Ali (Qatar Computing Research Institute, HBKU), Irina Temnikova (Sofia University), Georgi Georgiev (Ontotext), Lluis Marquez (Amazon), Shafiq Joty (Nanyang Technological University), and Ivan Koychev (Sofia University)

    Selected papers presented at the workshop will be invited for submission of a full version to the Cybernetics and Information Technologies journal, which is indexed by SCOPUS, SJR, and other databases.

    Recent years have seen fast advances in the field of Natural Language Processing (NLP) due to the simultaneous influence of two revolutionary forces: Big Data and Deep Learning. The aim of using large corpora has been prominent in NLP since the earlier statistical, corpus-based revolution of the 1990s. Indeed, in corpus-based NLP size does matter, and researchers have been exploring corpora as large as the entire Web; now this abundance of data has enabled the return of Neural Networks and the rise of Deep Learning. More recently, we have further seen the rise of Big Data with its 3Vs: Volume, Velocity, and Variety. Even more recently, with the spread of fake news, it has been suggested that a fourth V should be considered: Veracity.

    The workshop welcomes work presenting new developments in applying NLP for solving problems related to Big Data, Deep Learning, and Veracity. We also invite discussion about the impact of these revolutionary forces on the field of NLP as a whole.

Language and Logic Courses

    Foundational Courses

    1. Problems and Arguments in Relativist Semantics, Dan Zeman (University of Vienna, Austria)

      In recent years, semantic relativism has occupied a central position in philosophy of language and linguistics, the view being applied to a variety of natural language expressions such as predicates of taste, aesthetic adjectives, moral terms, epistemic modals, gradable adjectives, future contingents, epistemic vocabulary etc. The arguments for (and against) relativism can be grouped in two categories: arguments appealing to intuitions in various scenarios and syntactic/semantic arguments. Among the first, perhaps the best known is the argument from disagreement, but retraction and eavesdropping scenarios have also been taken to support relativism. Among the second we find arguments from control, binding, licensing, sluicing, and from embeddings under various attitude verbs. This course aims to bring all these arguments together and assess their dialectical efficacy. At the same time, the course aims to draw attention to novel phenomena of relevance (e.g., “perspectival plurality”) and discuss their possible treatment in a relativist semantics.

    Introductory Courses

    1. Handling Vagueness in Logic, via Algebras and Games, Serafina Lapenta (University of Salerno, Italy) and Diego Valota (The University of Milan, Italy)

      In order to capture enough features of the real world in the language of mathematics, one needs to represent vague concepts. Several approaches to vagueness can be found in the literature; some of them are based on the idea of sharpening vague predicates. A different approach treats vagueness as it is and puts mathematical fuzzy logic at its core, interpreting vague predicates with truth degrees ranging over [0, 1]. Such logics can be better understood via their semantics, and this course aims at giving a clear picture of the algebraic and game-based semantics of some predominant fuzzy logics, after analyzing the notion of vagueness itself and discussing arguments for and against the use of mathematical fuzzy logic to handle vagueness.
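      Truth degrees over [0, 1] can be made concrete with the connectives of one predominant fuzzy logic, the Łukasiewicz logic (the predicate and degrees below are invented for illustration):

```python
# Łukasiewicz connectives on truth degrees in [0, 1].
def neg(a):
    """Negation: 1 - a."""
    return 1.0 - a

def conj(a, b):
    """Strong conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def implies(a, b):
    """Residuated implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# "tall(John)" holds to degree 0.7, "tall(Mary)" to degree 0.4
tall_john, tall_mary = 0.7, 0.4
both_tall = conj(tall_john, tall_mary)
john_implies_mary = implies(tall_john, tall_mary)
```

      Note how the implication is fully true (degree 1) exactly when the antecedent's degree does not exceed the consequent's, which is the residuation property exploited by the algebraic semantics.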

    2. Introduction to Logical Geometry, Lorenz Demey (KU Leuven, Belgium) and Hans Smessaert (KU Leuven, Belgium)

      Aristotelian diagrams, such as the square of opposition, have a rich history in philosophical logic, and today they are also widely used in other disciplines, such as linguistics and computer science. In recent years, these diagrams have also begun to be studied as objects of independent logical and diagrammatic interest, giving rise to the burgeoning field of logical geometry.

      This course will

      1. give students a sense of the wide and interdisciplinary range of (sometimes unexpected) applications of Aristotelian diagrams,
      2. discuss some of the fundamental logical and diagrammatic issues related to these diagrams, and
      3. introduce the methods and tools developed in logical geometry for studying these topics.


      In particular, we will deal with applications such as Russell’s theory of definite descriptions and formal concept analysis, logical and diagrammatic issues such as Aristotelian vs. duality vs. Boolean structure, logic-sensitivity and informational vs. computational equivalence of diagrams, and methods such as bitstring semantics.
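      The bitstring method mentioned above can be illustrated on the classical square of opposition (the encoding over the three-way partition {all, some-but-not-all, none} is standard; the code itself is just an illustrative sketch):

```python
# Bitstring semantics in miniature: the four corners of the square of
# opposition as bitstrings over the partition {all, some-but-not-all, none},
# with the Aristotelian relations defined by bitwise operations.
A = 0b100    # "all S are P"
I = 0b110    # "some S are P"
E = 0b001    # "no S is P"
O = 0b011    # "some S are not P"
FULL = 0b111

def contradictory(x, y):
    return (x & y) == 0 and (x | y) == FULL   # cannot be true/false together

def contrary(x, y):
    return (x & y) == 0 and (x | y) != FULL   # cannot both be true

def subcontrary(x, y):
    return (x & y) != 0 and (x | y) == FULL   # cannot both be false

def subaltern(x, y):
    """x entails y (strictly)."""
    return (x & y) == x and x != y
```

      Once formulas are encoded this way, checking any Aristotelian relation reduces to two bitwise operations, which is what makes the method scale to larger diagrams than the square.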

    3. Meaning, Prosody and Commitment, Claire Beyssade (The University of Paris VIII, France) and Elisabeth Delais-Roussarie (CNRS-LLING – Nantes, France)

      Introduced by Hamblin in the 70s, and subsequently largely ignored, the notion of commitment has today become central in formal semantics and pragmatics, in particular in the literature on speech acts, discursive particles, and prosody. The course intends to systematize the material on commitment, offering a historical overview (session 1) and a detailed presentation of available formal models (session 2). It will then evaluate these theories across three empirical domains: prosody (session 3), modality (session 4), and attitudes (session 5). In the last two sessions, by extending the notion of commitment to areas for which truth in epistemic models or private states has traditionally been invoked, we will propose to revisit the divide between private and public states first established by Hamblin. We will explore the intricacies of these two spaces and propose new answers to hot debates on modals and attitudes.

    4. Introduction to Formal Argumentation, Dov M. Gabbay (Bar-Ilan University, Israel) and Massimiliano Giacomin (University of Brescia, Italy)

      The goal of the course is to introduce the research field of computational argumentation to students, young researchers, and other non-specialists, and to foster a sound understanding of its foundations, the basic models adopted, and fundamental methods and techniques. The course covers both abstract argumentation, which abstracts away from the structure of arguments and the nature of their relations, and structured argumentation, which deals with how an argument is structured and how relationships among arguments, such as attack and support, are derived.
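      The abstract side can be shown in a few lines: an argumentation framework is just a set of arguments with an attack relation, and the standard grounded semantics is computed by iterating the characteristic function (the three-argument graph below is a toy example):

```python
# Abstract argumentation sketch: compute the grounded extension of a small
# attack graph by iterating the characteristic (defense) function.
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c
arguments = {"a", "b", "c"}

def defended(arg, ext):
    """arg is acceptable w.r.t. ext if ext attacks every attacker of arg."""
    attackers = {x for (x, y) in attacks if y == arg}
    return all(any((z, x) in attacks for z in ext) for x in attackers)

def grounded(arguments):
    ext = set()
    while True:
        new = {a for a in arguments if defended(a, ext)}
        if new == ext:        # fixed point reached
            return ext
        ext = new
```

      Here "a" is unattacked, so it is in; "a" defeats "b", which in turn reinstates "c". Because the characteristic function is monotone, the iteration always reaches its least fixed point.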

    5. Causal Models in Linguistics, Philosophy, and Psychology, Daniel Lassiter (Stanford University, USA) and Thomas Icard (Stanford University, USA)


      This course explains and motivates formal models of causation built around Bayes nets and structural equation models, a topic of increasing interest across multiple cognitive science fields, and describes their application to select problems in psychology, philosophy, and linguistics.
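      The key contrast between observing and intervening can be made concrete with a toy structural model (the variables and equations below are invented for illustration):

```python
# A toy structural causal model: the do-operator replaces a structural
# equation wholesale instead of conditioning on an observed value.
def model(rain, do_sprinkler=None):
    # structural equations (deterministic here for simplicity)
    sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
    wet = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

observed = model(rain=False)                        # sprinkler on, grass wet
intervened = model(rain=False, do_sprinkler=False)  # do(sprinkler := off)
```

      The intervention severs the dependence of the sprinkler on the weather, which is exactly the asymmetry that distinguishes causal from merely probabilistic models; Bayes-net versions add probability distributions over the exogenous variables.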

    6. Semantic Modeling with Frames, Rainer Osswald (Heinrich-Heine-University Düsseldorf, Germany) and Wiebke Petersen (Heinrich-Heine-University Düsseldorf, Germany)

      The course provides an introduction to the frame-based modeling of natural language semantics. Its main goal is to teach students the formal foundations of frames and how to apply them to the modeling of various semantic phenomena. The course starts with an overview of the history of frames followed by a brief discussion of the pros and cons of attribute-based structures such as frames for the representation of semantic and conceptual knowledge. Next we give a formal definition of frames as relational structures, and we show how such structures can be specified by means of an appropriate logical language. We then illustrate how frames can be applied to the modeling of various semantic phenomena and domains. Special emphasis is put on the representation of events and changes. Moreover, we will address issues of frame composition and the syntax-semantics interface. The course closes with an outlook on more advanced topics.
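      The "relational structure" idea can be illustrated minimally: a frame is a graph of nodes connected by functional attributes, and queries follow attribute paths (the eating frame below is an invented example, not the course's formalism):

```python
# Frames as relational structures, minimally: nodes with attribute-labelled
# edges, plus evaluation of attribute paths such as AGENT.TYPE.
frame = {
    "e1": {"TYPE": "eating", "AGENT": "x1", "THEME": "x2"},
    "x1": {"TYPE": "cat"},
    "x2": {"TYPE": "animal", "NAME": "mouse"},
}

def follow(node, path):
    """Evaluate a dot-separated attribute path starting from a node."""
    for attr in path.split("."):
        node = frame[node][attr]
    return node

agent_type = follow("e1", "AGENT.TYPE")
```

      A frame-description logic then states constraints over such paths (e.g. that every eating node has an animate AGENT), which is what "specifying frames by a logical language" amounts to.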

    7. Current Topics in the Semantics and Pragmatics of Plural Expressions, Benjamin Spector (Institut Jean Nicod, France)


      What is the meaning of plural expressions in natural languages: “the students”, “some apples”, etc? While the answer to such questions seems pretty straightforward at first sight, appearances are deceptive. It turns out that providing a unified and empirically adequate theory of the meaning of plural expressions in various syntactic environments is surprisingly difficult.

      The goal of this class is to introduce students to one major approach to plural semantics, based on the idea that plural expressions denote or quantify over so-called “plural individuals”, and to present some recent research within this framework that aims to address puzzles pertaining to the interpretation of numerals and plural definites. The discussion will contain the presentation of formal models, a detailed investigation of their predictions, as well as data coming from experimental semantics. We will cover topics such as:

      • the various types of readings that plural expressions can trigger depending on the type of predicate they combine with (collective, distributive, and cumulative readings)
      • the complex semantic and pragmatic behavior of numerals (“three birds”) and modified numerals (“more than three birds”), within the mereological approach to plural semantics
      • the semantic underdetermination of plural definites, and its consequences for plural quantification.
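      As a small illustrative sketch (not course material; all denotations are invented), the “plural individuals” idea and the collective/distributive/cumulative distinction can be modeled with sets of atoms:

```python
# A toy model of plural individuals as sets of atoms, sketching the
# collective/distributive/cumulative distinction. Denotations are invented.

boys = frozenset({"al", "bo", "cy"})          # plural individual: a sum of atoms

# a distributive predicate holds of a plurality iff it holds of each atom
def distributive(pred):
    return lambda plural: all(pred(frozenset({x})) for x in plural)

smiled_atoms = {"al", "bo", "cy"}
smiled = distributive(lambda p: set(p) <= smiled_atoms)

# a collective predicate holds of the plurality directly, not of each atom
gathered = lambda plural: len(plural) >= 2

assert smiled(boys)        # "the boys smiled": true of each boy
assert gathered(boys)      # "the boys gathered": true of the sum only

# cumulative reading of "the boys lifted the pianos": every boy lifted
# some piano, and every piano was lifted by some boy
lift_pairs = {("al", "p1"), ("bo", "p1"), ("cy", "p2")}
pianos = {"p1", "p2"}
cumulative = (all(any((b, p) in lift_pairs for p in pianos) for b in boys)
              and all(any((b, p) in lift_pairs for b in boys) for p in pianos))
assert cumulative
```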

    Advanced Courses

    1. Logics for Epistemic and Strategic Reasoning in Multi-Agent Systems, Valentin Goranko (Stockholm University, Sweden)

      This course is intended for a wide audience with basic knowledge of modal and temporal logics. I will introduce and discuss some of the most important and popular families of logics for multi-agent systems (MAS), starting with multi-agent epistemic and dynamic epistemic logics. Then I will focus on logics for strategic reasoning and abilities in MAS: with complete information or with incomplete and imperfect information; with no memory, bounded memory, or perfect recall of players; for reasoning within dynamically changing strategy contexts or constructive concepts of strategy, etc. I will also discuss the interaction between information and strategic abilities in MAS and some applications to multi-agent planning and to multi-player games.

    2. Plurality: Theoretical and Experimental Perspective, Agata Renans (Ulster University, UK) and Jacopo Romoli (Ulster University, UK)

      Plurality is at the center of contemporary (formal) semantic and pragmatic investigations.

      In recent years, there has been more and more research focusing on plurality-related phenomena using experimental methods. In this course, the students will be provided with an overview of the semantic and pragmatic theories of plurality and also with the novel experimental methodologies used to investigate various aspects of plurality in a cross-linguistic perspective. The aim of the course is to enable students to conduct their own (theoretical and experimental) research in this domain.

    3. Reasoning in Games: Players as Programs, Eric Pacuit (University of Maryland, USA)

      The aim of this course is to cover some of the most successful and fruitful approaches to modeling players’ reasoning and deliberations in games—from logic, artificial intelligence, cognitive science, and game theory—with an eye toward the possibility of unification. The course will draw on recent literature in game theory, behavioral economics, cognitive science, and artificial intelligence. Such an interdisciplinary perspective will appeal to many of the participants at ESSLLI. Students attending this course will get hands-on experience using WebPPL probabilistic programs to represent agents in game situations; understand how well-known computational models (finite automata and Turing machines) can be used to represent players’ reasoning in games; understand how methods from the mathematical theory of evolution can provide a powerful tool to explain strategic interactions; and be exposed to intriguing experimental results about how humans behave in strategic situations.

    4. Non-Canonical Comparatives: Syntax-semantics and Psycholinguistics, Roumyana Pancheva (University of Southern California, USA) and Alexis Wellwood (University of Southern California, USA)

      The overwhelming majority of the literature on the syntax-semantics and psycholinguistics of comparative constructions (i.e., those with the morphemes -er, as, too, enough, etc., in English) has focused on those targeting gradable adjectives and adverbs like tall and fast. We investigate non-canonical comparative constructions targeting nouns (more coffee/toys), and verbs (sleep/jump more). We begin by discussing the similarities and differences between these two types (Day 1), and then turn to current issues. First, we investigate structures that obligatorily express comparison by number (more coffees, run to the store more), and their derivational dependence on plural and aspectual morphology across languages (Days 2 and 3). Next, we probe speaker understanding by looking at adult and child verification of nominal and verbal comparatives (Day 4). Finally, we discuss the “comparative illusion” phenomenon, which appears to depend on more’s flexibility as a nominal and verbal quantifier (Day 5).

    5. Facial Displays and Their Dialogical Meanings, Jonathan Ginzburg (Paris Diderot University, France) and Ye Tian (Amazon Research Cambridge, UK & Laboratoire de Linguistique Formelle, Université Paris Diderot)

      The course aims at modelling the discourse impact of non-verbal social signals (nvss) such as laughter, smiling, frowning, and the like, but also including conventionalised pictorial representations (“emoji”) as textual manifestations. We will start with a brief overview of the two main theoretical sources for this course: first, work in computational and theoretical linguistics on the semantics and pragmatics of dialogue and, second, psychological and computational approaches to emotion appraisal. The course will offer detailed argumentation, contrary to received wisdom until recently, on the mutual interaction between nvss/emojis and content emanating from verbal material, in particular that nvss/emojis bear propositional content. We propose viewing nvss/emojis as akin to event anaphors and show how to deduce various pragmatic functions from the basic meanings posited, in combination with enthymematic reasoning and emotional appraisal. The course will also include practical sessions on multidimensional laughter classification of speech using ELAN and emoji data analysis in social media.

    6. Natural Language Ontology, Friederike Moltmann (The French National Center for Scientific Research and New York University)

      Metaphysics in the past was considered mainly a pursuit of philosophers, asking questions about being in most general terms. While some philosophers made appeal to natural language, others have rejected such an appeal, arguing that the ontology reflected in language diverges significantly from what there really is. What is certain is that with the development of natural language semantics (and syntax), the ontology reflected in natural language has become an important object of study in itself, as the subject matter of natural language ontology. This course gives an overview of the ways natural language reflects ontological notions and structures, of cases of discrepancies between the ontology implicit in natural language and the reflective ontology of philosophers or non-philosophers, and of the ways natural language ontology can be conceived with respect to other projects in metaphysics. It also addresses the Chomskyan skepticism as regards reference (and ontology) and the importance of recent developments in (generative) syntax for natural language ontology.

    7. Formal Semantics of Pictorial Narratives, Dorit Abusch (Cornell University, USA) and Mats Rooth (Cornell University, USA)

      The class looks at current research on the semantics and pragmatics of pictorial narratives such as comics, emphasizing methods of possible worlds semantics and dynamic semantics. Such narratives show intriguing parallels with natural language narratives, and equally intriguing differences. Topics include propositional semantics for pictures, indexing, temporal progression, sentences that describe pictures, and explicit and implicated intensionality.


    Workshops

    1. Quantity in Language and Thought, Shane Steinert-Threlkeld (University of Amsterdam, The Netherlands) and Jakub Szymanik (University of Amsterdam, The Netherlands)


      Quantifiers are linguistic expressions encoding representations of quantities. Their study has been one of the great success stories in natural language semantics. Meanwhile, the study of the mental representation of numerical and other quantitative information has become an active area of research in cognitive science and neuroscience. This workshop provides a venue for continued exploration of the interface between these two domains. In what ways do cognitive theories of quantities constrain and inform the semantics of quantifiers? Are the quantifiers realized in natural language constrained by our cognitive representations of number? Similarly, can insights from semantics inform the study of the psychology of number? We welcome new experimental and theoretical work at this interface.

    2. Bridging Formal and Conceptual Semantics, Kata Balogh (Heinrich-Heine-University Düsseldorf, Germany) and Wiebke Petersen (Heinrich-Heine-University Düsseldorf, Germany)


      The main aim of the workshop is to bring together linguists, philosophers and cognitive scientists from the two leading research traditions on natural language meaning, often referred to as “formal semantics” and “conceptual semantics”. The workshop provides a platform to investigate and discuss possible bridges between the two semantic perspectives, and to initiate a deeper conversation and collaboration between them. The workshop intends to gather approaches that show how the two perspectives can strengthen each other. The workshop comprises six high-quality contributed talks, each of 45 minutes including discussion, and two invited talks by distinguished researchers working on topics closely related to the main issue of the workshop.

    Logic and Computation Courses

    Foundational Courses

    1. Introduction to Proof Theory, Anupam Das (University of Copenhagen, Denmark) and Thomas Powell (Technical University of Darmstadt, Germany)

      Proof theory is one of the ‘four pillars’ of mathematical logic, and is of fundamental interest to mathematicians, computer scientists, philosophers and linguists alike. It serves as the foundation for many other endeavours in logic and has also been useful in realising the interplay between logic and other areas of mathematics, not least via the theory of computation.

      This course will introduce students with little-to-no background in logic to the world of proof theory from a computational perspective. The overall aim is to leave the student with an appreciation of how proof theory can be exploited to obtain interesting properties of logics, and how it relates to computation. Moreover, this course should suffice to prepare a student for more advanced topics in logic and proof theory.

    2. Aggregating Judgements: Logical and Probabilistic Approaches, Eric Pacuit (University of Maryland, USA)

      This course will introduce the key results (including proofs) and the main research themes in the study of judgement aggregation and the wisdom of the crowds. The course will focus on both logical and probabilistic models of judgement aggregation. The primary objective is to introduce the main mathematical methods and conceptual ideas found in this literature. Topics include:

      • the judgement aggregation model, the Discursive Dilemma, and the Doctrinal Paradox
      • probabilistic opinion pooling
      • the Condorcet Jury Theorem and its variants
      • opinion pooling (the Lehrer-Wagner model, DeGroot’s Theorem)
      • possibility and impossibility of aggregating judgements (the Dietrich and List impossibility theorem)
      • merging probabilistic opinions (the Blackwell-Dubins Theorem)
      • Aumann’s agreeing-to-disagree theorem and its generalizations
      • the diversity-trumps-ability theorem (the Hong-Page Theorem)

      To facilitate understanding of the main proof techniques and results, I will spend much of the time working through the theorems at the blackboard.
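      As a small illustrative sketch (not course material), the Discursive Dilemma can be reproduced in a few lines: proposition-wise majority voting on logically connected issues can yield an inconsistent collective judgment, as in the standard three-judge profile below.

```python
# The discursive dilemma: proposition-wise majority voting on logically
# connected issues. The three-judge profile is the standard textbook example.

judges = {
    "J1": {"p": True,  "q": True,  "p_and_q": True},
    "J2": {"p": True,  "q": False, "p_and_q": False},
    "J3": {"p": False, "q": True,  "p_and_q": False},
}

def majority(issue):
    yes = sum(1 for j in judges.values() if j[issue])
    return yes > len(judges) / 2

collective = {i: majority(i) for i in ("p", "q", "p_and_q")}

# each individual judgment set is logically consistent...
for j in judges.values():
    assert j["p_and_q"] == (j["p"] and j["q"])

# ...but the collective one is not: p and q accepted, yet p_and_q rejected
assert collective["p"] and collective["q"] and not collective["p_and_q"]
```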

    Introductory Courses

    1. Spatio-Epistemic Logics: When Knowledge Meets Geometry, Philippe Balbiani (Toulouse Institute of Computer Science Research, France)

      In the field of knowledge representation and reasoning, despite the fact that epistemic connectives are sometimes interpreted in concrete structures defined by means of runs and clock time functions, one of the things that strikes one when studying multi-agent epistemic logics is how abstract their semantics are. In contrast, real agents such as robots in everyday life and virtual characters in video games have strong links with their spatial environment. In this course, we will introduce multi-agent epistemic logics whose semantics can be defined by means either of purely geometrical notions or of purely topological notions. In these logics, possible states are defined by considering the positions (points, regular closed subsets) in R^n occupied by agents and the sections (cones, regular closed subsets) of R^n seen by agents, whereas accessibility relations are defined by means of the ability of agents to imagine possible states compatible with what they currently see.

    2. An Introduction to Paraconsistent Logic and its Applications, Can Baskent (University of Bath, UK)


      Paraconsistent logics are formal systems in which contradictions do not entail everything. In paraconsistent logics, it is possible to have non-trivial inconsistent theories. This course introduces various well-known paraconsistent logics with an application-oriented perspective. The applications are chosen from foundational issues in logic, mathematics and game theory, with the aim of relating the subject to a wider audience with different backgrounds and interests. For that reason, the course targets an interdisciplinary audience and serves as an introduction for those who are interested in non-classical logic, non-standard mathematics, set theory and game theory.

      The topics of this course will include:

      • Introduction to Paraconsistent Logics: Logic of Paradox, Relevant Logic and Logics of Formal Inconsistency
      • Semantic Tools and Techniques for Paraconsistent Logics
      • Paraconsistency in Set Theory, Analysis and Topology
      • Paraconsistency in Epistemic Game Theory
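      As a small illustrative sketch (not course material), the failure of explosion can be checked mechanically in Priest’s Logic of Paradox (LP), one of the systems listed above: with three truth values T (true), B (both), F (false) and both T and B designated, “p and not-p” does not entail an arbitrary q.

```python
# Priest's Logic of Paradox (LP): three truth values with T and B
# designated. The check below shows that explosion fails in LP.

T, B, F = 2, 1, 0          # ordered F < B < T
DESIGNATED = {T, B}

def neg(v):     return 2 - v      # swaps T and F, fixes B
def conj(v, w): return min(v, w)

def entails(premise, conclusion, valuations):
    """premise entails conclusion iff every valuation designating the
    premise also designates the conclusion."""
    return all(conclusion(v) in DESIGNATED
               for v in valuations if premise(v) in DESIGNATED)

# valuations assign values to p and q independently
vals = [(p, q) for p in (T, B, F) for q in (T, B, F)]

contradiction = lambda v: conj(v[0], neg(v[0]))   # p and not-p
q_itself      = lambda v: v[1]

# when p = B, 'p and not-p' is designated while q may be F, so the
# inference from a contradiction to an arbitrary q is blocked:
assert not entails(contradiction, q_itself, vals)
```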


    3. Introduction to Description Logics, Ivan Varzinczak (Artois University, France)

      This course provides an introduction to Description Logics (DLs) in the context of knowledge representation and reasoning (KRR). DLs are a family of logic-based KRR formalisms with interesting computational properties and many applications. In particular, DLs are well-suited for representing and reasoning about terminological knowledge and constitute the formal foundations of semantic-web ontologies. Description logics come in different flavors, each with its own expressive power and applications; this course focuses on one prominent example, the logic ALC.

      We start with a motivation for representing and reasoning with ontologies. We then present the description logic ALC: its syntax, semantics, logical properties and proof methods, especially the tableau-based one. Finally, we illustrate the usefulness of DLs with the popular Protégé ontology editor, a tool that supports both the design of DL-based ontologies and the execution of reasoning tasks over them.
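      As a small illustrative sketch (not course material; the domain, concept and role names are invented), the set-theoretic semantics of ALC can be made concrete by evaluating concept descriptions over a finite interpretation. This is model checking rather than tableau reasoning, but it shows how the constructors ⊓, ⊔, ¬, ∃R.C and ∀R.C work:

```python
# Evaluating ALC concept descriptions over a finite interpretation.
# Domain, concepts, and roles are invented for illustration.

domain = {"ann", "bob", "c1", "c2"}
concepts = {"Person": {"ann", "bob"}, "Course": {"c1", "c2"}}
roles = {"teaches": {("ann", "c1"), ("ann", "c2")}}

def ext(c):
    """Extension of an ALC concept, given as a nested tuple."""
    op = c[0]
    if op == "atom":   return concepts[c[1]]
    if op == "not":    return domain - ext(c[1])
    if op == "and":    return ext(c[1]) & ext(c[2])
    if op == "or":     return ext(c[1]) | ext(c[2])
    if op == "exists":                       # ∃R.C
        r, filler = roles[c[1]], ext(c[2])
        return {d for d in domain if any((d, e) in r for e in filler)}
    if op == "forall":                       # ∀R.C
        r, filler = roles[c[1]], ext(c[2])
        return {d for d in domain
                if all(e in filler for (d2, e) in r if d2 == d)}
    raise ValueError(op)

# "Person who teaches some Course" = Person ⊓ ∃teaches.Course
teacher = ("and", ("atom", "Person"),
                  ("exists", "teaches", ("atom", "Course")))
assert ext(teacher) == {"ann"}
```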

    4. Coalgebraic Methods for Automata (CoMA), Filippo Bonchi (The French National Center for Scientific Research, France), Marcello Bonsangue (Leiden University, The Netherlands) and Jurriaan Rot (Radboud University, Nijmegen, The Netherlands)

      Coinduction, the dual of induction, is a mathematical principle for reasoning about infinite and circular structures. Originally studied in the field of concurrency theory, by now it is evident that coinductive techniques are ubiquitous in computer science, mathematics and logic. In particular, coinduction has led to a new foundation of automata theory, in turn leading to novel algorithms and methods for various kinds of automata. These applications are driven by the modelling of automata as coalgebras, an abstract framework for the uniform study of dynamical systems. This course provides a gentle introduction to coinduction and coalgebras, using automata as a motivating example.
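      As a small illustrative sketch (not course material; the automaton is invented), the coinductive view of automata can be made concrete: two DFA states accept the same language iff they are related by a bisimulation, and the check below builds a candidate bisimulation on the fly (in the spirit of Hopcroft–Karp, without the union-find optimization).

```python
# Coinductive language-equivalence check for DFA states: assume the pair
# is in the bisimulation, then verify outputs and all successors.

def equivalent(x, y, accepting, delta, alphabet):
    """Coinductively check language equivalence of states x and y."""
    relation = set()                 # candidate bisimulation
    todo = [(x, y)]
    while todo:
        p, q = todo.pop()
        if (p, q) in relation:
            continue
        if accepting(p) != accepting(q):   # outputs must agree
            return False
        relation.add((p, q))               # coinductive step: assume, then
        for a in alphabet:                 # check all one-letter successors
            todo.append((delta(p, a), delta(q, a)))
    return True

# two counters over {a}: states 0,1 count mod 2; states 2..5 count mod 4;
# both accept exactly the strings with an even number of a's
delta = lambda s, a: {0: 1, 1: 0, 2: 3, 3: 4, 4: 5, 5: 2}[s]
accepting = lambda s: s in {0, 2, 4}

assert equivalent(0, 2, accepting, delta, ["a"])
assert not equivalent(0, 3, accepting, delta, ["a"])
```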

    5. Strategies, Knowledge, and Know-How, Pavel Naumov (Vassar College, USA)

      An agent comes to a fork in a road. There is a sign that says that one of the two roads leads to prosperity, the other to death. The agent must take the fork, but she does not know which road leads where. Does the agent have a strategy to get to prosperity? On the one hand, since one of the roads leads to prosperity, such a strategy clearly exists. On the other, the agent does not know how to apply the strategy. This introductory-level course will introduce students to formal models of knowledge, strategies, and know-how and review recent results in this area.
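      The fork-in-the-road example can be formalized in a few lines; the following is an illustrative sketch (not course material) separating the existence of a strategy from knowing how to execute one:

```python
# The fork example: in every state some action succeeds (a strategy
# exists de dicto), but no single action succeeds in all states the
# agent considers possible (no know-how).

states = ["left_good", "right_good"]       # indistinguishable to the agent
actions = ["go_left", "go_right"]

def wins(state, action):
    return (state, action) in {("left_good", "go_left"),
                               ("right_good", "go_right")}

# "a winning strategy exists": in each state, some action wins
exists_strategy = all(any(wins(s, a) for a in actions) for s in states)

# "the agent knows how to win": one action wins in every state
# the agent cannot rule out
knows_how = any(all(wins(s, a) for s in states) for a in actions)

assert exists_strategy and not knows_how
```

The contrast is simply a quantifier swap: ∀state ∃action versus ∃action ∀state, with the universal ranging over the agent’s indistinguishable states.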

    6. Computational Social Choice and Complexity Theory, Ronald de Haan (University of Amsterdam, The Netherlands)

      This course will provide a rigorous introduction to the use of computational complexity methods in the research field of computational social choice. We will study the role of complexity theory in social choice by exploring several topics. We will cover various computational problems and issues in the setting of voting. We will also examine the framework of judgment aggregation where a group of agents form a collective opinion on logically interconnected issues. Finally, we will discuss the topic of finding stable matchings between individuals that have preferences over possible matching partners, and the algorithmic barriers that come up in this setting. During the course, we will introduce the framework of parameterized complexity theory, and discuss its role in the various social choice settings that we consider. We will start with basic results, and we will work up to current research in the field.

    7. Social Networks for Logicians , Zoé Christoff (University of Bayreuth, Germany) and Pavel Naumov (Vassar College, USA)

      Once a new commercial product, technology, political opinion, or social norm is adopted by a few people, these few often put peer pressure on others to consider adopting it as well. Those few who adopt next put even more pressure on the rest of the population. This cascading “epidemics” effect typically drives diffusion processes in social networks. There are many natural questions that can be asked about diffusion. Which initial group of people should get “infected” by a new product to ensure its adoption by the largest possible group? Which group should be convinced that an idea is bad, in order to avoid its wide spread? How does marketing affect diffusion? What can agents know about the global diffusion process when their observation power is restricted to the behavior of their friends or network neighbors? This course will introduce several logical systems in which such questions can be formally stated and answered.
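      As a small illustrative sketch (not course material; the network, seed set, and threshold are invented), the cascading “epidemics” effect can be simulated with a simple threshold model of diffusion, where an agent adopts once at least half of its neighbors have:

```python
# A minimal threshold model of diffusion in a social network: an agent
# adopts once the fraction of adopting neighbors reaches the threshold.

def diffuse(neighbors, seeds, threshold=0.5):
    adopted = set(seeds)
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for agent, friends in neighbors.items():
            if agent not in adopted and friends:
                if len(adopted & friends) / len(friends) >= threshold:
                    adopted.add(agent)
                    changed = True
    return adopted

# a small line network: a - b - c - d
network = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}

# seeding one endpoint is enough to make adoption cascade down the line
assert diffuse(network, {"a"}) == {"a", "b", "c", "d"}
```

Questions like “which seed set guarantees full adoption?” then become questions about this fixed-point computation, which the logics in the course let one state and answer formally.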

    8. Logic, Ontology and Planning: the Robot’s Knowledge, Stefano Borgo (Institute of Cognitive Sciences and Technologies (ISTC)) and Oliver Kutz (Free University of Bozen-Bolzano, Italy)

      Robotics is a traditional research area that is rapidly expanding due to the ongoing deployment of intelligent autonomous agents such as self-driving cars and drones, industrial robots for production, and humanoids for elderly care. The course focuses on the knowledge a robot needs to act in the environment and to understand what it can possibly do. It introduces and discusses the notions and relationships that are needed to “understand” a generic scenario and shows how to structure an ontology to organize such knowledge. In particular, it focuses on how to understand and model capacities, actions, contexts and environments. The flow of information between the knowledge module and the planning and scheduling modules in a generic artificial agent is presented.

    Advanced Courses

    1. Modal Logics of Provability and Interpretability, Tin Perkov (University of Zagreb, Croatia)

      This course presents modal treatment of provability and interpretability in arithmetical theories. The focus is on Kripke semantics for provability logic GL and Veltman semantics for interpretability logic IL and its extensions. The proofs of modal completeness for these logics will be presented and arithmetical ramifications will be discussed. The course is concluded with an overview of some of the latest developments in model theory of interpretability logic.

    2. Modal Logics for Model Change, Raul Fervari (The National University of Córdoba, Argentina) and Fernando R. Velázquez-Quesada (University of Amsterdam, The Netherlands)

      Dynamic Epistemic Logic (DEL) has become a useful tool for describing changes in different systems and concepts, as shown by its analysis of the effect of different forms of communication (public, private) on the knowledge of a set of agents, or its study of the effect of social influence on an agent’s preferences/opinions. One of DEL’s key features is that changes are not represented by means of transitions within a system (as done, e.g., in propositional dynamic logic), but rather as operations that change the whole model in which formulas are evaluated. Thus, DEL can be abstractly understood as the study of modal logics for model change. This course provides a technical discussion on different operations that can be performed over DEL’s preferred models, relational ‘Kripke’ models.
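      The simplest such model-changing operation, public announcement, can be sketched in a few lines; the following illustration (not course material; the model is invented) shows the whole model being replaced by its restriction to the worlds where the announced fact holds:

```python
# A public announcement does not move us along transitions inside the
# model; it replaces the model by its restriction to worlds where the
# announced fact holds.

def announce(worlds, relation, fact):
    """Restrict a Kripke model to the worlds satisfying `fact`."""
    new_worlds = {w for w in worlds if fact(w)}
    new_relation = {(u, v) for (u, v) in relation
                    if u in new_worlds and v in new_worlds}
    return new_worlds, new_relation

def knows(world, relation, fact):
    """The agent knows `fact` at `world` iff it holds in all accessible worlds."""
    return all(fact(v) for (u, v) in relation if u == world)

# two worlds differing on p; the agent cannot tell them apart
worlds = {"w_p", "w_notp"}
relation = {(u, v) for u in worlds for v in worlds}   # total ignorance
p = lambda w: w == "w_p"

assert not knows("w_p", relation, p)          # before the announcement
worlds2, relation2 = announce(worlds, relation, p)
assert knows("w_p", relation2, p)             # after publicly announcing p
```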

    3. Multi-Agent Deontic Logic: Reasoning About Normative Multi-Agent Systems, Gabriella Pigozzi (University Paris-Dauphine, France) and Leon van der Torre (University of Luxembourg, Luxembourg)

      The aim of the course is to present deontic logic from a formal point of view, highlighting the practical challenges raised by agents, detachment, exceptions and multiagent organizations. It is therefore complementary to most presentations of deontic logic focusing on the philosophical motivations and paradoxes, or giving a linguistic perspective on deontic modality. In particular, we first present STIT theory addressing the challenges of non-deterministic actions, moral luck and procrastination. We continue with alternative norm-based deontic logics addressing the challenge of multiagent detachment, when agents cannot assume that other agents comply with their norms. Conflicts among norms are resolved using formal argumentation, and organizations are described using collective attitudes and constitutive norms. We illustrate the logics with examples from applications in machine ethics and legal informatics, as well as other domains of normative multiagent systems.

    4. New Developments in Belief Revision, Richard Booth (Cardiff University, UK) and Giovanni Casini (University of Luxembourg, Luxembourg)

      Belief change is a research field that has been deeply investigated over the last thirty years. Rooted in the seminal paper by Alchourrón, Gärdenfors and Makinson (AGM), belief change continues to be a very active research area. Because a number of areas of Knowledge Representation and Reasoning have taken an interest in belief change theory, the AGM approach has been developed from the original proposal in various directions, in order to handle different levels of expressivity, different kinds of entailment relations, and cognitive models that do not correspond to those associated with the original AGM formulation. The aim of this course is to bring its participants up to date by presenting an overview of what we think are the most significant and exciting developments of the last five years, ranging from the introduction of new semantics to the handling of previously ignored formal languages and entailment relations.

    5. Linear Arithmetic: Geometry, Algorithms, and Logic, Dmitry Chistikov (University of Warwick, UK) and Christoph Haase (University of Oxford, UK)

      Theories of linear arithmetic over R, Q, Z, and N are at the core of computational logic. Arising as generalizations of their existential conjunctive fragments (linear programming and integer programming), they occur in many contexts in computer science, both in theory and in practice. Reducing domain-specific constraints to a linear arithmetic theory is a standard and powerful problem-solving technique. Arithmetic theories are a fascinating object to study, as they lie at the interface of geometry, algorithmics, and logic. This course is an introduction to the theories of linear arithmetic, such as the first-order linear theory of the reals and Presburger arithmetic, its analogue over the natural numbers. We will first explore the geometry of linear constraints over R and Z, defining convex polytopes and convex polyhedra, as well as so-called hybrid linear sets, their discrete analogue. Subsequently, we will cover classic algorithmic techniques for linear and integer programming (simplex method, interior-point methods, branch-and-bound and rounding techniques) and decision procedures for Presburger arithmetic, such as quantifier elimination and automata-based techniques. Finally, we will discuss applications of linear arithmetic theories in formal language theory and software verification.
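      As a small illustrative sketch (not course material), quantifier elimination over the reals for linear constraints can be carried out by Fourier–Motzkin elimination: to eliminate a variable, normalize its lower and upper bounds and pair each lower with each upper. The example system below is invented.

```python
# Fourier–Motzkin elimination of one variable from a system A·x <= b.
# Each constraint is (coeffs, bound), meaning sum(coeffs[i]*x[i]) <= bound.
# Fractions keep the arithmetic exact.

from fractions import Fraction as Fr

def eliminate(constraints, k):
    """Eliminate variable k; the result is satisfiable over the reals
    iff some value of x_k satisfies the original system."""
    upper, lower, rest = [], [], []
    for coeffs, b in constraints:
        c = coeffs[k]
        if c > 0:    # x_k <= b/c - sum of other terms
            upper.append(([a / c for a in coeffs], b / c))
        elif c < 0:  # dividing by c < 0 flips the inequality: a lower bound
            lower.append(([a / c for a in coeffs], b / c))
        else:
            rest.append((coeffs, b))
    # pair every lower bound on x_k with every upper bound
    for lc, lb in lower:
        for uc, ub in upper:
            coeffs = [u - l for u, l in zip(uc, lc)]   # x_k cancels out
            rest.append((coeffs, ub - lb))
    return rest

# Exists y:  x - y <= 0  and  y <= 3   ==>   x <= 3
system = [([Fr(1), Fr(-1)], Fr(0)),    # x - y <= 0, i.e. x <= y
          ([Fr(0), Fr(1)],  Fr(3))]    # y <= 3
result = eliminate(system, 1)
assert result == [([Fr(1), Fr(0)], Fr(3))]   # x <= 3
```

Iterating the procedure eliminates all quantified variables, at the cost of a worst-case quadratic blow-up in the number of constraints per step.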

    6. Logics with Probability Operators, Zoran Ognjanovic (Serbian Academy of Sciences and Arts, Serbia) and Dragan Doder (University of Belgrade, Serbia)

      Formal reasoning with uncertain knowledge is an ancient problem, dating at least from Leibniz. Recently, interest in the field has been growing, motivated by applications in computer science and artificial intelligence. Some of the formalisms for representing, and reasoning with, uncertain knowledge are based on probabilistic logics, which extend the classical logical calculus with probabilistic operators.

      The main goal of this course is to provide a solid foundation for students who want to use results and ideas from probability logics in their field of study. We will present mathematical techniques, including the results of the authors, in a number of probabilistic logics. We will consider various propositional and first-order languages with probabilistic operators, with different expressive power and different co-domains of probability measures. For all those logics we will present existing axiomatization approaches, and we will discuss decidability issues. We will also consider how to extend these approaches to probabilistic languages that extend different modal logics.

      The course will provide a thorough introduction to the field of probability logics, with basic prerequisites (familiarity with modal logic and probability theory). It will include many technical details, and in consequence it is intended for an advanced audience.