COURSE DESCRIPTIONS

Language and Logic Courses

 

Advanced Courses

 

Approaches to Parts and Wholes in Semantics

Friederike Moltmann

The course will give an overview of extensional and non-extensional mereological theories of parts and wholes, their historical predecessors, and recent and new applications to natural language. It will discuss empirical and conceptual challenges for extensional mereological theories and discuss in depth non-extensional mereological approaches that make use of the notion of an integrated whole (such as the mereotopological notion of maximal self-connectedness), a notion that was first applied to a range of semantic phenomena in my 1997 book Parts and Wholes in Semantics. It will also discuss more recent approaches in mereology on which a conceived whole takes priority over its parts ('slot mereology') and develop new linguistic applications of such a notion. Finally, the course will discuss the mass-count distinction and the various attempts by philosophers and linguists to clarify the notion of unity (or of being a single thing) that appears to be at the center of that distinction. The course aims to build a bridge between philosophical and linguistic traditions regarding mereology.
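
For orientation, here is one standard way the contrast is put in the mereotopological literature (a textbook rendering, not necessarily the exact formulation used in the course): extensional mereology validates an extensionality principle such as $\forall x \forall y\,((\exists z\, z < x \wedge \forall z\,(z < x \leftrightarrow z < y)) \rightarrow x = y)$, where $<$ is proper parthood, so that composite wholes are fully determined by their parts. An integrated whole requires more: given a connection relation $C$, self-connectedness can be defined as $SC(x) := \forall y \forall z\,(x = y \oplus z \rightarrow C(y,z))$, and $x$ is a maximally self-connected $F$ if $SC(x)$ holds and $x$ is not a proper part of any self-connected $F$.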

 

Explanation and Semantic Theory: The Case of Presupposition

Alexandros Kalomoiros and Patrick Elliott

In this class, we chart the development of explanatory approaches to presupposition projection. We begin by surveying how classical approaches to the projection problem (Karttunen 1974, Heim 1983) essentially stipulate projection behavior lexically. We'll subsequently consider the first wave of explanatory approaches: namely, the classic trivalent approaches, and the more recent local contexts approach of Schlenker (2009). We'll then survey recent experimental work on conjunction and disjunction, which reveals that the two connectives differ with respect to the effect of linear order on projection, something not predicted by the first wave of explanatory approaches. This is used as a stepping stone to a discussion of the accounts developed by George (2008) and Kalomoiros (2023), which aim for a more satisfactory empirical coverage. Finally, we'll consider explanatory theories of projection in light of more challenging configurations involving conditionals and quantifiers.
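
To make the linear-order issue concrete, here is a standard illustration (a common textbook presentation, not necessarily the one used in the course). With the values 1, 0, and # (presupposition failure), the symmetric Strong Kleene conjunction and the asymmetric Middle Kleene (Peters) conjunction differ in exactly one cell:

    Strong Kleene:   1 ∧ # = #    # ∧ 1 = #    0 ∧ # = 0    # ∧ 0 = 0
    Middle Kleene:   1 ∧ # = #    # ∧ 1 = #    0 ∧ # = 0    # ∧ 0 = #

Middle Kleene evaluates the left conjunct first, so a failing first conjunct cannot be rescued by a false second conjunct; this kind of asymmetry is exactly what the experimental work on linear order probes.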

 

Composing Meaning via Dependent Types

Daisuke Bekki

Website: https://bekkilab.notion.site/ESSLLI2025-course-Composing-Meaning-via-Dependent-Types-152244489f42801a8b49fe4b629d5492

This course provides a comprehensive overview of Dependent Type Semantics (DTS), a proof-theoretic semantics of natural language based on dependent type theory. DTS marks a departure from the traditional model-theoretic semantic frameworks, with its uniform analysis of projective contents such as anaphora, presupposition, and conventional implicatures through the process of proof construction. I will also reflect on the extensive research and developments of DTS over the past decade, from empirical linguistic research to computational research. The latter includes discussion on the integration of DTS and neural networks, which will shed light on our future prospects and provide insights into DTS's potential applications in computational linguistics.

 

Dogwhistles at the intersection of semantics, pragmatics, and social meaning

Robert Henderson

This course provides a systematic introduction to dogwhistles—that is, language that sends one message to an out-group while at the same time sending a second (often taboo, controversial, or inflammatory) message to an in-group. Dogwhistles are an extremely active current research area, evidenced by the fact that there are three recent monographs across linguistics and philosophy in which they play a major role—namely, Beaver and Stanley 2023, Henderson and McCready 2024, and Saul 2024. The interest is due to the fact that dogwhistles are a phenomenon that implicates semantics, pragmatics, and social meaning. They thus raise foundational questions for how we think about meaning in general. In this course we will survey the social science literature on dogwhistles, and then turn to what philosophers and linguists have concluded in these three works. Students will leave with an understanding of the current contours of the literature, prepared to begin their own forays into this research area.

 

Information-based semantics

Vít Punčochář

In contrast to the usual truth-conditional semantics, which is based on the notion of truth in a possible world, information-based semantics relies on the notion of informational support relative to information states. By adapting the structures of information states, one can obtain information-based semantics for classical logic as well as for many kinds of non-classical logic, for example, intuitionistic logic, fuzzy logics or relevant logics. Information-based semantics overcomes the limits of the truth-conditional approach in various respects. It has been used most extensively in the context of inquisitive semantics, where it leads to an elegant logical analysis of questions. This course will provide a concise introduction to information-based semantics, emphasizing its application to the logic of questions and information types.
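
As a taste of the framework, here are support clauses of the kind used in inquisitive semantics (a standard formulation from that literature, stated here for illustration). With an information state $s$ modelled as a set of possible worlds:

    $s \models p$ iff $p$ is true in every world of $s$
    $s \models \varphi \wedge \psi$ iff $s \models \varphi$ and $s \models \psi$
    $s \models \neg\varphi$ iff no non-empty $t \subseteq s$ supports $\varphi$
    $s \models\ ?\varphi$ iff $s \models \varphi$ or $s \models \neg\varphi$

The last clause shows how questions enter naturally: a state supports the question $?\varphi$ just in case the information it contains settles the issue either way.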

 

Foundational Courses

 

Context outside face-to-face settings

Merel Semeijn and Bart Geurts

It is generally agreed that face-to-face communication is the primary setting of language use, and to some degree at least, theories of language production, interpretation, and processing tend to be restricted to face-to-face settings. As a consequence, they don’t apply to speeches for large audiences, broadcasting, and most forms of written communication. This course is about some of the main ways in which language use in these settings deviates from face-to-face communication, with special emphasis on the roles that context plays, and what context is, outside face-to-face settings.

 

The Common Ground and its extensions

Robert Henderson

The term common ground, used for information that conversational participants take as shared background, has its origins in Grice's William James lectures (Grice, 1991, pp. 65, 274). Though it is not formalized there, the idea that conversational moves take place relative to a record of the current state of the conversation has been deeply influential. But what should that record look like? Answers have proliferated and the resulting conversational structures have been used to understand a huge variety of phenomena: assertion (Stalnaker, 1978), polarity (Roelofsen and Farkas, 2015), questioning (Gunlogson, 2001; Roberts, 2012), commitment (Scheffler and Malamud, 2023), illocutionary force (Rett, 2021), at-issueness (AnderBois et al., 2015), and more. The questions, then, are what the common ground should look like, how we should formally model it, and how these models can be used to understand linguistic phenomena like those above. This course will introduce students with no background to the notion of common ground and to the various extensions of it that have played a prominent role in the literature.

 

Introductory Courses

 

Model theory for phonology

Siddharth Bhaskar

...

 

Stoic Logic from a Model-Theoretic Perspective

Lucas Champollion

This course bridges millennia of thought to explore the intersection of formal semantics, contemporary philosophical logic, and ancient Stoic logic from a model-theoretic perspective. Students will examine the Stoics' pioneering work in propositional logic and its relevance to modern semantics and logics, focusing on temporal logic, modal logic, and two-dimensional semantics. Key topics include Stoic approaches to modality, conditionals, tense, and reference. Stoic logic has much to offer even for those who have no prior interest or expertise in it. Linguists and philosophers will appreciate the close connections to formal semantics, Fregean thought, and Kaplanian views on context sensitivity. Logicians will find resonance with recent debates in temporal and modal logic. By studying Stoic logic, we will uncover foundational concepts that predate and influence modern theories, deepening our understanding of logical and semantic frameworks old and new.

 

An introduction to Glue Semantics: Theoretical and computational perspectives

Mark-Matthias Zymla and Jamie Y. Findlay

This course introduces the theory of the syntax-semantics interface called Glue Semantics (Glue). In Glue Semantics, semantic analysis consists of logical deduction in a resource-sensitive logic (linear logic), which drives semantic composition via a correspondence between steps in a proof and combinatory operations in the lambda calculus (known as the Curry-Howard correspondence). If the terms in the logic represent syntactic objects of some kind, the logic thereby acts as the ‘glue’ between syntax and semantics. Such an intermediate scaffolding allows for a high level of autonomy between the syntactic and semantic modules, and a high degree of agnosticism with respect to which particular syntactic and semantic frameworks are chosen. Glue therefore offers an excellent vantage point from which to explore abstract questions relating to the nature of the syntax-semantics interface. This course introduces Glue and explores some of these questions from both a theoretical and a computational perspective; we believe that this combined approach offers a unique view of several interesting questions relating to formal semantics and its interfaces.

 

Predictive and Counterfactual Modality

Paolo Santorio and Jéssica Viana Mendes

Website: https://sites.google.com/umd.edu/jessicamendes/esslli-2025

Natural language contains expressions for talking about the future, and expressions for talking about counterfactual scenarios. With some notable exceptions, the literature on these topics has proceeded mostly on separate tracks, both in philosophical logic and in formal semantics. Our course will highlight empirical connections between the two domains, and will take steps towards developing a unified analytical framework. We will survey existing literature, as well as build on some of our own work. We will pay particular attention to the interaction between tense and modal domains and to the role of mood. The phenomena covered will include 'will'-sentences, non-epistemic conditionals, so-called 'fake tense' in X-marked constructions, and future reference in some subordinate environments.

 

Probability logic, language, and cognition

Niki Pfeifer

Website: https://go.ur.de/np

Uncertainty is ubiquitous in everyday communication and reasoning. In this course, we will learn about methods and tools to understand language and cognition under uncertainty. We will get interdisciplinary perspectives by combining formal-philosophical and experimental-psychological approaches. In particular, we will understand why coherence-based probability logic offers a unified rationality framework for studying diverse phenomena including conditionals, counterfactuals, connexivity, quantification, reasoning, and argumentation on the normative level. Moreover, on the descriptive level, we will become familiar with recent experimental-psychological results on linguistic phenomena, cognition, and reasoning under uncertainty. Specifically, we will learn about formal and experimental work on nonmonotonic reasoning, conditionals, counterfactuals, quantification, and argumentation. Finally, we will achieve a deeper understanding of what it means to be rational under incomplete knowledge and uncertainty.
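
As a simple illustration of the coherence-based approach (a standard example from the probability-logic literature): given the assessments $P(A) = x$ and $P(B|A) = y$, coherence constrains the probability of the modus ponens conclusion to the interval

    $x \cdot y \;\le\; P(B) \;\le\; x \cdot y + 1 - x$,

since $P(B) \ge P(A \wedge B) = xy$ and $P(B) \le P(A \wedge B) + P(\neg A)$. Uncertain premises thus yield an interval, not a point value, for the conclusion.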

 

Probabilistic logic reasoning under coherence and compound conditionals

Giuseppe Sanfilippo

Website: https://sites.unipa.it/sanfilippo

In the subjectivistic theory, the probability P(E) measures the degree of belief in E being true. The consistency of probability assessments is guaranteed by a coherence principle, and all basic probabilistic properties follow from coherence. A large number of philosophers and psychologists assume that the probability of a natural language conditional, P(if H then A), coincides with the subjective conditional probability P(A|H) of the conditional event A|H. Usually, a conditional event is looked at as a three-valued object, and compound conditionals have been defined in trivalent logics. We verify that none of these logics satisfies all the basic probabilistic properties valid for unconditional events. Then, we consider an approach to compound conditionals in the setting of conditional random quantities. We verify that all the basic logical and probabilistic properties are preserved, and we illustrate applications to the psychology of uncertain reasoning, to connexive logic, and to nonmonotonic reasoning.
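
For concreteness (in the standard notation of this literature): the conditional event $A|H$ is the three-valued object that is true if $A \wedge H$ is true, false if $\neg A \wedge H$ is true, and void if $H$ is false. In the conditional-random-quantity approach, the void case is replaced by the assessed value itself, so that

    $A|H \;=\; 1 \cdot AH \;+\; 0 \cdot \neg A H \;+\; P(A|H) \cdot \neg H$,

and the identity $P(\text{if } H \text{ then } A) = P(A|H)$ can be maintained while preserving the basic probabilistic properties.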

Language and Computation Courses

 

Advanced Courses

 

Language model programming: paradigms, techniques and applications

Kyle Richardson and Gijs Wijnholds

...

 

Linguistics and NLP in political communication

Asad B. Sayeed and Ellen Breitholtz

The world has seen a drastic growth in the use of artificial intelligence techniques for manipulating public and political life. At the same time, research on political language use has not stood still. This course combines perspectives from linguistic theory, natural language processing, and political communication to provide students with the intellectual tools to combine methods from these fields into an integrated program of opinion research. Students will learn about opinion research techniques and scientific practices; theories of discourse based on Aristotelian notions of topos and their relationship to the personae of the speaker; slurs, dogwhistles, disinformation in the context of speech acts and communicative utility; games of ambiguous and manipulative communication; computational language modeling for the detection of semantic drift and political consolidation; and recent developments in the automated analysis of social media.

 

Probabilistic Dynamic Semantics

Julian Grove and Aaron Steven White

The advent of semantic inference datasets - and experimental and statistical approaches to meaning, generally - has given rise to two major kinds of questions. (1) How can semanticists use such datasets: specifically, how can the statistical properties of a dataset inform semantic theory directly? What guiding principles regulate the link between such properties and semantic theory? (2) How should semantic theories themselves be modified so that they may characterize not only informally collected acceptability and inference judgments, but statistical generalizations observed from datasets? This course brings the compositional, algebraic view of meaning employed by semanticists into contact with linguistic datasets by introducing and developing the framework of Probabilistic Dynamic Semantics (PDS). By way of illustrating this framework, we use it to model presupposition projection, vagueness and imprecision, and the Question Under Discussion. The course covers type theory and monadic semantics, as well as Bayesian computational modeling of inference data.
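
To give a flavour of what "monadic semantics" means here, below is a minimal discrete probability monad in Python (an illustrative sketch of the general idea, not the actual PDS framework; the "tall" example and its weights are invented):

    def unit(x):
        """Point-mass distribution: inject a plain value."""
        return [(x, 1.0)]

    def bind(dist, f):
        """Sequence a distribution with a probabilistic continuation f."""
        return [(y, p * q) for (x, p) in dist for (y, q) in f(x)]

    # Hypothetical uncertainty about the threshold of a vague predicate.
    thresholds = [(170, 0.3), (180, 0.5), (190, 0.2)]   # cm, made-up weights

    def tall(height):
        return bind(thresholds, lambda t: unit(height >= t))

    # Probability that a 185 cm individual counts as "tall": 0.8
    print(sum(p for (v, p) in tall(185) if v))

The point of the monadic packaging is that an ordinary compositional semantics can be threaded through such probabilistic contexts without rewriting the compositional rules themselves.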

 

Spatial Gesture Semantics

Andy Lücking and Alexander Henlein

Website: https://aluecking.github.io/ESSLLI2025/

Visual means of communication such as manual gestures interact with speech meaning. At the same time, it is well-known that gestures are sublinguistic in the sense that their contributions are neither at the at-issue nor at the non-at-issue level. The course aims to unravel this puzzle. Firstly, the sublinguistic status of gestures is captured by spatial semantics. Secondly, the course aims to explain why it is nevertheless possible to talk about gestures, for instance, as part of clarification interaction. Thirdly, the preceding explanation requires a semantic framework that departs from the standard Frege/Montague models and is based on perceptual classification (with some affinity to the philosophy of Nelson Goodman).

 

Foundational Courses

 

Formal models of reasons

Aleks Knoks

Website: https://aleksknoks.com/esslli-2025-formal-models-of-reasons/

The notion of a normative reason plays crucial roles in a whole plethora of debates in ethics, practical reasoning, epistemology, the philosophy of action, the philosophy of normativity, and other areas. Some people even argue that it is crucial to making sense of all other normative notions (oughts, permissions, values, ideals, and so on) and indeed to understanding any genuinely normative domain. While most of the work on reasons is informal, recent years have seen a surge of interest in formal modeling of reasons, with the proposed models drawing on ideas from fields that include defeasible logic, formal argumentation, justification logic, decision theory, and probability theory. Against this backdrop, this course has two goals. The first is to introduce the students of ESSLLI to the philosophical literature on normative reasons and to provide them with a comprehensive overview of different formal models of reasons. The second goal is to discuss the applications of formal work on reasons to debates in philosophy, to legal reasoning, and to AI ethics (in particular, in the context of the challenge of designing artificial agents that can act in ethically acceptable ways).

 

Foundations of Linguistic Data Science

John Philip McCrae

Website: https://john.mccr.ae/lds-esslli25/

Big data is fundamentally changing the way linguists can investigate linguistic facts, leading to a new research area which combines data science with linguistics. This course provides an introduction to this new area of linguistic data science, with hands-on data analysis focused on key questions in linguistics. It will first provide a basic introduction to data science, and in particular to how it can be applied to large corpora using natural language processing techniques. We will then show how this can be used to find answers to problems in syntax, semantics, multilinguality, and other areas of linguistics, along with a summary giving perspectives on how these methods can be applied to students' own research.

 

Tutorial on Human Evaluation of NLP System Quality

Anya Belz and Craig Thomson

Human evaluation is considered the most reliable form of evaluation in Natural Language Processing (NLP), but there are concerns regarding how experiments are designed and executed: standardisation and comparability across different experiments are low, as is reproducibility, with repeat runs of the same evaluation often not supporting the same conclusions. This is partly due to how human evaluation is viewed in NLP: not as something that needs to be learnt before creating an experiment, but as something that anyone can throw together without prior knowledge. This tutorial will inform participants about options and choices in experiment design and their implications, and present best practice principles and practical tools to help researchers design scientifically rigorous, informative and reliable experiments, over five content units each comprising a lecture and a practical component. Participants will be supported, step by step, in creating evaluation experiments and analysing results from them, using tools and other resources provided.

 

Introduction to Opinion Mining and Social Media Language Analysis

Omnia Zayed

Public opinion, speculation, and (mis)information have a profound impact on our lives in many ways. They affect democratic decisions, policymaking, and emergency reactions. The unprecedented amount of data available online creates a pressing need for automated language analysis to understand public discourse and visualise patterns/relationships through statistical analysis and results aggregation. In this course, we will introduce opinion mining and provide a comprehensive understanding of natural language processing techniques involved in analysing opinionated text on multiple levels. The course will cover the essential concepts of identifying a wide range of opinion dimensions, including sentiment, emotions, aspects, suggestions, figurative language, hate speech, misinformation, propaganda, and conspiracy, expressed in various textual data sources. Students will also gain hands-on experience with state-of-the-art approaches and algorithms used in this area in a real-world application. We conclude by discussing the potential benefits of utilising these tasks for analysing social media language, touching on their role in facilitating two-way communication during pandemics.

 

Introductory Courses

 

Reading Concordances: a training course in key corpus linguistics methodology

Stephanie Evert and Michaela Mahlberg

Website: https://www.dhss.phil.fau.eu/esslli-2025-reading-concordances

Concordance analysis via a KWIC (Key Word In Context) display is a mainstay of contemporary corpus linguistics, serving as a bridge between qualitative and quantitative approaches to the study of text. This course offers an introduction to “concordance reading”, i.e. analysis of concordance data supported by computational algorithms. We begin by situating concordance reading in the wider context of qualitative-quantitative research. We introduce key concepts for the description of patterns in concordances (e.g. collocations and colligations) as well as different examples of concordance software (e.g. AntConc, CLiC, and CQPweb). We then focus on specific concordance reading strategies, such as selecting, sorting, and grouping concordance lines, providing formal definitions and corresponding computational algorithms. Participants will gain hands-on experience working with FlexiConc, a computational library for concordance analysis to be released in March 2025, and other concordance tools. We will give participants the opportunity to consider the potential of concordance reading for their own research contexts.

 

Statistical Semantics via Logical Syntax

Mehrnoosh Sadrzadeh, Gijs Wijnholds

...

 

Text as pictures

Vincent Wang-Maścianica

A short course on the use of string diagrams for depicting and reasoning about syntax and semantics of text. We'll cover the basics of string diagrams (more scarily: monoidal category theory), and by the end we will see how to semantically interpret text in essentially any setting of computable processes (e.g. spatial configurations, games, deep learning, quantum computers). This course has no requirements outside of the ability to look at pictures.

 

Sparsity in Large Language Models: The New Odyssey

Shiwei Liu

Large language models (LLMs) are the show-stealers of modern-day deep learning, and it becomes crucial to comprehend the parsimonious patterns that exist within them as they grow in scale. With the astonishing explosion of parameter counts (from millions to billions) in the past few years in the chase for performance gains, training and fine-tuning these LLMs on non-industry-standard hardware is becoming seemingly impossible. The exponential growth of LLMs has led to significant costs and energy consumption, prompting a new odyssey of exploration of techniques for more compact models. Sparsity has been a highly effective approach to compressing models while maintaining performance. However, its exploration in LLMs appears to lag behind other traditional model compression techniques such as quantization. To address this research gap and provide insights into ways to promote the integration of sparsity in LLMs, we first revisit the existing work on sparse neural networks and provide a precise categorization for the community. Then we delve into the recent progress of sparsity in LLM compression and highlight the crucial role of layerwise importance in it. We further demonstrate that the root cause of this disparity across layers is the widespread use of pre-layer normalization (Pre-LN), and consequently show that combining Pre-LN with Post-LN helps to mitigate the issue, leading to better utilization of computational resources and enhanced model performance.
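
The Pre-LN/Post-LN contrast mentioned above is easy to state in code. A minimal sketch (hypothetical toy blocks, not code from the course):

    import numpy as np

    def layer_norm(x, eps=1e-5):
        return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

    def pre_ln_block(x, sublayer):
        # Pre-LN: x + f(LN(x)); the identity path is left untouched.
        return x + sublayer(layer_norm(x))

    def post_ln_block(x, sublayer):
        # Post-LN: LN(x + f(x)); the residual stream itself is renormalized.
        return layer_norm(x + sublayer(x))

    x = np.random.randn(4, 16)
    f = lambda h: 0.1 * h      # stand-in for an attention/MLP sublayer
    print(pre_ln_block(x, f).shape, post_ln_block(x, f).shape)

Because Pre-LN keeps the unnormalized residual path, the relative contribution of deeper sublayers tends to shrink, which is one way to see why layerwise importance becomes so uneven in Pre-LN models.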

 

Structured and Unstructured Data: Knowledge Graphs, Logic and Language Models

Zuzana Nevěřilová

Website: https://github.com/popelucha/ESSLLI-2025

In unstructured data, only simple querying is possible; complex queries need structured data, i.e. data elements with defined properties and formalized relationships. The course introduces ontologies and knowledge graphs, their design, and their population with data. A special focus is on data FAIRness, discovery, and reusability. Next, the course introduces description logic and the inference capabilities of knowledge graphs using standards such as RDF(S) and OWL. The power of knowledge graphs is demonstrated using a dedicated query language as well as English, both on our own and on well-known knowledge graphs (DBpedia). Finally, we discover how large language models can support ontology building and querying.
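
As a small taste of the hands-on part, here is a toy knowledge graph built and queried with the Python library rdflib (all resource names invented for illustration; the course may use different tooling):

    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.Brno, RDF.type, EX.City))
    g.add((EX.Brno, EX.locatedIn, EX.CzechRepublic))
    g.add((EX.Brno, RDFS.label, Literal("Brno")))

    # A structured query: which cities are located in the Czech Republic?
    q = """
    SELECT ?label WHERE {
      ?city a ex:City ;
            ex:locatedIn ex:CzechRepublic ;
            rdfs:label ?label .
    }
    """
    for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
        print(row.label)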

Logic and Computation Courses

 

Advanced Courses

 

Categorical Realizability

Tom de Jong

Website: https://github.com/tomdjong/MGS-categorical-realizability#categorical-realizability

Realizability, as invented by Kleene, is a technique for elucidating the computational content of mathematical proofs. We study realizability from a categorical perspective. Starting from an abstract and general model of computation known as a partial combinatory algebra (pca), we construct the category of assemblies over it. Intuitively, an assembly is a set together with computability data and an assembly map is a function of sets that is computable. Here, the notion of computability is prescribed by the pca. Through the framework of categorical logic, the assemblies give rise to the realizability interpretation of logic, which we spell out in detail. The central theme of this course is the interplay between category theory, logic and computability theory. While category theory is not strictly required to follow the course, some familiarity with basic category theory will help in understanding the connections with more general constructions.
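
A minimal executable sketch of these definitions, using Python closures as a stand-in for a pca (illustrative only: a genuine pca has partial application, which is glossed over here):

    # Combinators of the toy "pca": K and S, with S(K)(K) acting as identity.
    K = lambda x: lambda y: x
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    assert S(K)(K)("a") == "a"

    # An assembly: a set together with a set of realizers for each element.
    # Here the natural numbers, realized by Church numerals.
    def church(n):
        return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

    nat_assembly = {n: {church(n)} for n in range(5)}

    # An assembly map is tracked by an element of the pca: applying the
    # tracker to a realizer of n yields a realizer of f(n).
    succ_tracker = lambda c: lambda f: lambda x: f(c(f)(x))   # tracks n |-> n+1

    three = next(iter(nat_assembly[3]))
    assert succ_tracker(three)(lambda k: k + 1)(0) == 4       # realizes 4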

 

Semantics for first-order modal logics: a modern introduction

Valentin Shehtman and Dmitry Shkatov

Website: https://dshkatov.github.io/esslli2025_course.html

The course is a technical introduction to the semantics of first-order modal logics, aimed at students of philosophy, computer science, linguistics, and mathematics. We shall aim to present the state of the art in this area within a comprehensive, unified framework. The main focus will be on the Kripke semantics for first-order modal logics, but other, more general kinds of semantics shall be considered as tools for proving incompleteness in the Kripke semantics. The course assumes a solid background in classical first-order logic (including the completeness theorem) and propositional monomodal logic (including completeness through the canonical model construction).

 

An Invitation to Game Comonads

Tomáš Jakl

Website: https://tomas.jakl.one/teaching/2025-su-game-comonads

This course will introduce the emerging theory of game comonads, recently put forward by Abramsky, Dawar and their collaborators. Game comonads offer a novel approach to relating categorical semantics to finite model theory.

We will develop their basic theory and illustrate how they provide a structural and intrinsic account of several concrete notions that play a central role in finite model theory. These range from equivalence with respect to logic fragments (e.g., finite-variable, bounded quantifier rank, and modal fragments), to combinatorial parameters of structures (e.g., tree-width and tree-depth, or the height of a synchronization tree), and model comparison games (e.g., pebble, Ehrenfeucht-Fraïssé, and bisimulation games).

 

Logic-Based Explainable Artificial Intelligence

Joao Marques-Silva

This course offers an in-depth contact with the underpinnings of logic-based explainability in AI, a rigorous alternative to non-symbolic AI.

 

Epistemic Arithmetic

Eric Pacuit

Website: https://pacuit.org/esslli2025/epistemic-arithmetic/

This course examines epistemic extensions of formal arithmetic. We will begin by reviewing formal theories of arithmetic, including Peano Arithmetic, and Gödel's incompleteness theorems. With this foundation in place, we will study extensions of Peano Arithmetic that introduce an operator meant to represent the knowledge of an ideal mathematician. One of the key topics covered in the course is the Knower Paradox, which arises when the knowledge operator is treated as a predicate. We will analyze different versions of the paradox and evaluate proposed solutions. The course also explores attempts to formalize Gödel's disjunction, which asserts that either no algorithm can fully capture human mathematical reasoning or there are absolutely undecidable problems. While it will not be possible to cover all formal details of Gödel's theorems, our goal is to provide students with a clear understanding of the central ideas, the significance of the Knower Paradox, and the philosophical implications of the incompleteness theorems.
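
For orientation, here is the Knower Paradox in compressed form (the standard Kaplan-Montague argument, assuming knowledge is expressed by a predicate $K$ of sentence codes). Suppose the theory proves factivity, $K(\ulcorner\varphi\urcorner) \rightarrow \varphi$, and proves $K(\ulcorner\varphi\urcorner)$ whenever it proves $\varphi$. By the diagonal lemma there is a sentence $\delta$ with $\delta \leftrightarrow \neg K(\ulcorner\delta\urcorner)$. Factivity gives $K(\ulcorner\delta\urcorner) \rightarrow \delta$, hence $K(\ulcorner\delta\urcorner) \rightarrow \neg K(\ulcorner\delta\urcorner)$, hence $\neg K(\ulcorner\delta\urcorner)$, hence $\delta$. But then $\delta$ is a theorem, so $K(\ulcorner\delta\urcorner)$ is provable as well, and we have a contradiction.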

 

The Lambda-Calculus, from Minimal to Classical Logic

Giulio Guerrieri and Davide Barbarossa

Website: https://davidebarbarossa12.github.io/Enseignements/2024-25/ESSLLI.html

Starting from the observation that one can define a combinatory algebra on the powerset of the natural numbers that can be used to encode notions of computability theory, one can extract a syntax for a (functional) programming language: the lambda-calculus. This is actually just one instance of reflexive objects in the general framework of Cartesian closed categories, which indeed are semantics for this language. After having introduced these topics and having made some practice in programming basic boolean and arithmetic functions in the untyped lambda-calculus, we will see that its simply typed version actually represents proofs in minimal logic. The tight relation between these three (logical, computational, and categorical) aspects is called the Curry-Howard correspondence, which we will explore in detail in a self-contained explanation. This correspondence was for a while thought to be limited to intuitionistic logic, because a pure functional language like the lambda-calculus is constructive. However, in the 1990s it was discovered that adding impure features, such as continuations, to the lambda-calculus makes it possible to capture classical reasoning. At the end of the course, we will then focus on extending the Curry-Howard correspondence to classical logic by following these ideas as formalized in Krivine's classical realizability.
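
The kind of "programming basic boolean and arithmetic functions" practiced early in the course can be previewed in Python, whose anonymous functions transcribe untyped lambda-terms directly (a standard exercise, not course material):

    TRUE  = lambda x: lambda y: x
    FALSE = lambda x: lambda y: y
    AND   = lambda p: lambda q: p(q)(FALSE)

    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):                 # decode a Church numeral
        return n(lambda k: k + 1)(0)

    TWO = SUCC(SUCC(ZERO))
    print(to_int(ADD(TWO)(SUCC(TWO))))     # 5
    print(AND(TRUE)(FALSE)("yes")("no"))   # no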

 

The quest for a logic for polynomial time

Benedikt Pago

The quest for a logic capturing polynomial time is one of the most prominent research strands in finite model theory. It seeks to find a logic in which precisely the polynomial time decidable properties of finite structures are expressible, or to prove that such a logic does not exist (which would separate P from NP). The course starts with an introduction to the problem and classic results like the Immerman-Vardi Theorem for least-fixed-point logic on ordered structures. It moves on to more recent developments such as Lichter’s separation of rank logic from polynomial time. Special emphasis will be given to lower bound techniques that are useful in a wider context within theoretical computer science, such as pebble games and the famous Cai-Fürer-Immerman construction.

 

Foundational Courses

 

Introduction to Proof Theory

Iris van der Giessen and Abhishek De

Proof theory is one of the major branches of logic that studies proofs as bona fide mathematical objects. It was born out of Hilbert’s program for the foundations of mathematics but after a century of research, it has seen applications in mathematics (proof mining, proof assistants), computer science (verification, programming language theory), and linguistics (formal natural language semantics). The course is designed to give a taste of the intuitions and techniques bespoke to proof theory, emphasising the structural side. The student will become familiar with the history of structural proof theory, sequent calculi, cut-elimination, and their applications. The course is intended to be foundational, and no prior knowledge of the topic is expected.

 

Introduction to Logical Argumentation

Kees van Berkel, Christian Straßer

This course shows how logical argumentation provides a natural model of defeasible reasoning. Most of our everyday reasoning is defeasible: when we draw conclusions in light of incomplete and inconsistent information, the possibility of retracting some of these conclusions upon acquiring more information makes the reasoning defeasible. Nonmonotonic logics model such reasoning. Over the past decades, logical argumentation has proved to be a highly unifying framework for such logics. In this course, we explain some of the core methods of logical argumentation, in which arguments are premise-conclusion pairs generated by an underlying logic (such as a proof calculus). We discuss how conflicts between arguments are tracked by means of various types of argumentative attacks. This gives rise to a so-called argumentation framework from which arguments can be selected following the rationale of an argumentation semantics, as defined by Dung in his seminal work on abstract argumentation. We furthermore apply this method to defeasible reasoning with normative codes that generate obligations for agents, and to their doxastic interpretation for generating agent beliefs. Furthermore, we discuss important meta-theoretic desiderata for argumentative models of defeasible reasoning. In the last session, we discuss how AI explainability methods can be harnessed to explain defeasible reasoning in logical argumentation.
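
The core construction is easy to make concrete. Below, a minimal Python sketch computes the grounded extension of an abstract argumentation framework as the least fixed point of the defense operator (the example framework is invented; Dung's semantics are richer than this single operator):

    attacks = {("a", "b"), ("b", "c")}    # a attacks b, b attacks c
    args = {"a", "b", "c"}

    def defends(S, x):
        """S defends x iff every attacker of x is attacked by some s in S."""
        attackers = {y for (y, z) in attacks if z == x}
        return all(any((s, y) in attacks for s in S) for y in attackers)

    S = set()
    while True:
        new = {x for x in args if defends(S, x)}
        if new == S:
            break
        S = new
    print(sorted(S))   # ['a', 'c']: a is unattacked, and a defends c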

 

Introductory Courses

 

Temporal Logics: philosophical and computational aspects

Valentin Goranko

Website: https://www2.philosophy.su.se/goranko/Courses2025/TemporalLogics-esslli2025.html

The course will introduce and discuss the most important types and systems of temporal logics, including: instant-based logics for linear time; logics for branching time with Priorian, Peircean, and Ockhamist semantics; temporal logics of computations; (time permitting) interval-based temporal logics; and first-order temporal logics. Both philosophical and computational aspects of temporal logics will be discussed. The course is intended as interdisciplinary, for a broad audience of graduate students interested in logical, philosophical, and computational aspects of temporal reasoning.

 

Neurosymbolic learning: an introductory course to theory and applications

Efthymia Tsamoura, Emile van Krieken

Website: https://github.com/tsamoura/ESSLLI2025_NSL

Neurosymbolic learning (NSL) promises to transform AI by combining the strong induction capabilities of neural models with rigorous deduction from symbolic knowledge representation and reasoning techniques. This course focuses on various theoretical and practical aspects of NSL. We will start by introducing a fundamental NSL problem: training neural models so that their predictions adhere to given sentences in propositional or first-order logic. We will then focus on a problem that has received substantial attention lately: training neural models using weak supervision produced by logical theories. We will discuss necessary and sufficient conditions that ensure learnability under PAC semantics, the relationship between this problem and other problems in the weakly supervised machine learning literature, and the phenomenon of learning imbalances. We will conclude the course with a hands-on programming demonstration of various NSL engines and discuss problems related to scalability.
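
The fundamental NSL problem above can be previewed with a tiny example in the style of semantic-loss approaches: score how probable it is that (independently distributed) neural predictions satisfy a propositional constraint, and penalize the model accordingly (toy numbers, not course code):

    from itertools import product
    import math

    probs = {"x1": 0.9, "x2": 0.2}   # network marginals for two atoms

    def satisfies(world):            # constraint: exactly one atom is true
        return world["x1"] != world["x2"]

    def weight(world):
        return math.prod(probs[a] if v else 1 - probs[a] for a, v in world.items())

    worlds = (dict(zip(probs, vs)) for vs in product([True, False], repeat=2))
    p_sat = sum(weight(w) for w in worlds if satisfies(w))   # 0.74
    loss = -math.log(p_sat)          # differentiable in the marginals
    print(p_sat, loss)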

 

Gossip and Knowledge

Hans van Ditmarsch and Malvin Gattinger

Website: https://malv.in/2025/esslli-gossip/

Gossip protocols facilitate peer-to-peer information sharing in networks. Each agent starts with some private information and the goal is to share this information among all agents. In distributed gossip, there is no central controller deciding who may call whom, but this is determined by independent pro-active agents and chance. In epistemic gossip protocols, knowledge conditions may restrict possible calls; for example, you may avoid calling an agent who you already know knows your secret. In dynamic gossip, agents also exchange 'telephone numbers', leading to network expansion.
This course is based on the textbook "Reasoning about Gossip" by van Ditmarsch, to appear with Cambridge University Press. We will give a survey of results and methods in distributed epistemic gossip. Topics include constructing and revising gossip graphs, enumerating call sequences, and model checking gossip protocols. Next to the book we use software implementations of gossip protocols by Gattinger.
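
As a minimal illustration of the distributed setting, the following Python sketch simulates random calls until every agent knows every secret (a toy version of the protocols studied in the course; for comparison, a clever centralized schedule needs only 2n-4 calls for n >= 4 agents):

    import random

    def gossip(n, seed=1):
        rng = random.Random(seed)
        secrets = [{i} for i in range(n)]     # agent i starts with secret i
        calls = 0
        while any(len(s) < n for s in secrets):
            a, b = rng.sample(range(n), 2)    # a random call between two agents
            secrets[a] |= secrets[b]          # both callers pool their secrets
            secrets[b] = set(secrets[a])
            calls += 1
        return calls

    print(gossip(6))   # number of random calls until all agents are experts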

 

Deontic Logic, Normative Systems, and Their Application in AI&Law

Réka Markovich, Luca Pasetto

The course provides an introduction to the interdisciplinary field of Deontic Logic and Normative Systems by integrating considerations ranging from logic and computer science to legal theory and philosophy. The discussed concepts and concerns of normative reasoning are highly relevant for providing a solid foundation both for legal AI applications and for designing compliant (autonomous) artificial agents. First, we introduce the so-called standard systems of Monadic and Dyadic Deontic Logic. Then we show the alternative approach of rule-based systems, focusing on the notion of norm, from which obligation and permission are obtained by detachment; we present Input/Output Logic as an example of this approach. For each, we discuss its ability to fulfill the purposes of legal knowledge representation. We then discuss normative concepts (like agency and rights) and structural properties (like non-monotonicity) that are especially relevant for law, legal reasoning, and hence legal knowledge representation. Finally, we introduce the LogiKEy framework and methodology to automate and experiment with normative reasoning.
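
To fix intuitions about the rule-based approach, here is detachment in drastically simplified form (a toy sketch that ignores the logical closure operators central to Input/Output Logic; the norms are invented):

    def detach(norms, facts):
        """Obligations detached from (condition, obligation) norms by the facts."""
        return {o for (condition, o) in norms if condition in facts}

    norms = {("promised", "keep_promise"), ("driving", "carry_licence")}
    print(detach(norms, {"driving"}))   # {'carry_licence'}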

 

Logic and Graph Neural Networks

Michael Benedikt and David Jaime Tena Cucala

Graph Neural Networks (GNNs) are powerful machine learning models for graph data that have a wide range of applications. A key focus in GNN research is understanding their expressiveness, that is, having a clear understanding of the computations that they can perform. This course explores recent results that characterise the expressive power of GNNs in terms of logic-based languages. The formal properties of these languages are very well understood; therefore, they provide an ideal tool for this analysis. By showing the connections between GNNs and logic, we will bring closer the symbolic and neural approaches to artificial intelligence. Finally, we will also show how expressiveness results for GNNs can be exploited for practical applications such as explaining the predictions of GNNs.
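
For readers new to GNNs, a single message-passing layer looks as follows (a minimal numpy sketch with invented weights; roughly, each round of aggregation corresponds to one level of quantifier depth in the logical characterizations studied in the course):

    import numpy as np

    def gnn_layer(A, H, W_self, W_nbr):
        """A: adjacency matrix; H: node features; one aggregation round."""
        return np.maximum(0, H @ W_self + A @ H @ W_nbr)   # ReLU(self + neighbours)

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # a 3-node path
    H = np.eye(3)                                # one-hot node features
    rng = np.random.default_rng(0)
    H1 = gnn_layer(A, H, rng.normal(size=(3, 4)), rng.normal(size=(3, 4)))
    print(H1.shape)                              # (3, 4): new features per node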

 

Canonical Extensions: A Unifying Perspective on Semantics for Logics

Wesley Fussner

Canonical extensions are lattice completions that first appeared in Jónsson and Tarski's work on Boolean algebras with operators in the 1950s. In the decades since their introduction, canonical extensions have been generalized to handle extensions of arbitrary additional operations on lattice-ordered algebraic structures that are not necessarily Boolean, or even distributive. They have also attracted powerful applications, most prominently to the study of Kripke semantics for non-classical logics and duality theory. In both cases, these applications reach beyond the usual confines of distributive lattice based structures. Topics will include the general theory of lattice completions, MacNeille completions, the elements of canonical extensions, extensions of operators on lattices, canonical extensions and duality, and applications to relational semantics.

 

Introduction to Proof-theoretic Semantics

Alexander V. Gheorghiu and Tao Gu

Proof-theoretic Semantics (P-tS) is a rapidly developing, novel approach to meaning in logic. Here `proof' refers to a pre-logical notion of valid argument, not to derivation in a fixed system. While P-tS refers to both semantics *in terms of* proof and semantics *of* proofs, in this course, we concentrate on the former; that is, proof-theoretic notions of validity. Note that, as no proof system is fixed, soundness and completeness remain desirable features of formal systems as they have always been. This is a highly interdisciplinary subject spanning philosophy (of language and meaning), mathematics, and informatics (esp. systems modelling and AI). This course introduces P-tS by presenting the major ideas and themes and explaining the P-tS for several important families of logics. We also discuss some of the applications of P-tS in informatics. This will enable researchers to engage with the current literature and begin research in P-tS.

 

A Gentle Introduction to Deep Inference

Lutz Straßburger and Victoria Barrett

Website: https://www.lix.polytechnique.fr/~lutz/orgs/ESSLLI2025-course.html

The course will give a gentle introduction to deep inference, an intriguing design principle for proof formalisms. The basic idea is that proof rules, which form the building blocks of formal proofs, are not restricted to the main connectives of formulas, but can instead be applied anywhere, arbitrarily deep inside formulas.
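
As a first taste, one common presentation of a deep inference rule is the switch rule of the calculus of structures, stated here for an arbitrary formula context S{ } (a standard textbook rendering, not necessarily the exact notation used in the course):

    s :  from S{(A ∨ B) ∧ C}  derive  S{A ∨ (B ∧ C)}

Because S{ } may be any context, the rule can rewrite a subformula at arbitrary depth, e.g. turning D ∧ ((A ∨ B) ∧ C) into D ∧ (A ∨ (B ∧ C)) in one step.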

In this course, we will provide a clear understanding of the intuitions behind deep inference, together with an exhibition of the properties that deep inference proof systems enjoy. Some attention will be paid to normalisation, as it works surprisingly differently here than in other, more traditional formalisms.

Properties that particularly stand out are atomicity, locality, and regularity. Normalisation procedures specific to deep inference allow for new notions of normal forms for proofs, as well as for a general modular cut-elimination theory. Furthermore, the ability to track every occurrence of an atom throughout a proof allows for the definition of geometric invariants with which it is possible to normalise proofs without looking at their logical connectives or logical rules, obtaining a purely geometric normalisation procedure.

All these properties also give rise to new notions of proof identity, capturing the "essence" of a proof beyond mere rule permutations.

This course is intended to be introductory. That means no prior knowledge of proof theory is required. However, the student should be familiar with the basics of propositional logic.