The workshop “Logic Meets Norms” is organized in conjunction with the visit of Professor Leon van der Torre and his doctoral students to the JRC for Logic at Tsinghua University on May 29-30. The visit gave us the impetus to organize a workshop on the interaction between logic, normative systems, and artificial intelligence. We aim to explore how these fields interact with and influence each other, and how their confluence could shape our understanding of the related concepts and of potential future developments.
- Time: May 29th, 2023
- Venue: Room 329, Meng Minwei Humanities Building, Tsinghua University
Program
Time | Title | Speaker |
---|---|---|
13:00-13:05 | Opening Remarks | Fenrong Liu (Tsinghua University) |
13:05-13:50 | The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation | Leon van der Torre (University of Luxembourg) |
13:50-14:30 | A Logical Approach to Learning Process | Dazhu Li (Institute of Philosophy, Chinese Academy of Sciences) |
14:30-15:10 | Modal Logics for Reasoning in Distributed Games | Lei Li (Tsinghua University) |
15:10-15:20 | Break | |
15:20-16:00 | Principles and Practice of Formal Argumentation: Argument Strength, Acceptance and Storage | Liuwen Yu (University of Luxembourg) |
16:00-16:40 | Norms from a Role-based Perspective | Fengxiang Cheng (Tsinghua University) |
16:40-17:20 | The Overtone of Monotonicity under Desire and Deontic Modals | Jialiang Yan (Tsinghua University) |
17:20-18:00 | A Logical Approach to Doxastic Causal Reasoning | Qingyu He (Tsinghua University) |
Abstracts

**The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation** (Leon van der Torre)

We present a framework for distributing normative reasoning across various normative systems, each with its own stakeholder and set of norms, together with a mechanism for resolving moral dilemmas based on formal argumentation and a defeat relation between arguments. Dilemmas traverse an ‘escalation ladder’: they are resolved either by combining the arguments of the individual systems, introducing possible new defeats; by combining the systems to generate additional, combined arguments, again introducing new defeats; or, finally, by relying on an additional stakeholder, referred to as Jiminy, who provides a (context-dependent) priority relation between stakeholders in order to remove certain defeats. The framework is supported by a running example and a high-level discussion of its integration into selected existing agent architectures. The proposed Jiminy advisor model is discussed from the perspective of explainability and in comparison with related work.
Joint work with Beishui Liao, Pere Pardo, and Marija Slavkovik.
To appear in the Journal of Artificial Intelligence Research (JAIR).
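The escalation mechanism can be illustrated with a minimal Dung-style argumentation sketch. This is only an illustration, not the authors' implementation: the argument names, the dilemma, and the "priority removes a defeat" step are invented for the example; only the grounded-semantics computation is standard.

```python
def grounded_extension(arguments, defeats):
    """Least fixed point of Dung's characteristic function: repeatedly
    accept every argument all of whose defeaters are themselves defeated
    by an already-accepted argument (unattacked arguments enter first)."""
    accepted, changed = set(), True
    while changed:
        changed = False
        for a in arguments - accepted:
            attackers = {b for (b, c) in defeats if c == a}
            if all(any((d, b) in defeats for d in accepted) for b in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Hypothetical dilemma: two stakeholders' norms defeat each other.
args = {"keep_promise", "help_friend"}
defeats = {("keep_promise", "help_friend"), ("help_friend", "keep_promise")}
print(grounded_extension(args, defeats))   # set() -> an unresolved dilemma

# A Jiminy-style priority favouring the stakeholder behind "help_friend"
# removes one direction of the mutual defeat, resolving the dilemma.
resolved = defeats - {("keep_promise", "help_friend")}
print(grounded_extension(args, resolved))  # {'help_friend'}
```

The fixed-point routine mirrors the paper's setting only in spirit: a priority relation between stakeholders acts by deleting defeats before acceptability is computed.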

**A Logical Approach to Learning Process** (Dazhu Li)

In this talk, we develop a general framework, the supervised learning game, to investigate the interaction between Teacher and Learner in learning processes. Our proposal highlights several features of the agents: on the one hand, Learner may make mistakes during the learning process, and she may also overlook potential relations between different hypotheses; on the other hand, Teacher is able to correct Learner’s mistakes, rule out potential mistakes, and point out facts Learner has overlooked. To reason about strategies in this game, we develop a modal logic of supervised learning and study its properties. Broadly, this work takes a small step towards studying the interaction between graph games, logics, and formal learning theory. This is joint work with Alexandru Baltag and Mina Young Pedersen.

**Modal Logics for Reasoning in Distributed Games** (Lei Li)

Many games can be modeled both from the internal perspective of the players and from the external perspective of the modeler; we refer to such a game as a distributed game. For such games, we characterize facets of the game using local arenas and propose mechanisms by which these local arenas form the global arena. On the logical side, we propose several two-layer distributed game logics, with local formulas at the local layer and global formulas at the global layer, and investigate axiomatization, decidability, complexity, and related issues. This is joint work with Fenrong Liu, Sujata Ghosh, and R. Ramanujam.

**Principles and Practice of Formal Argumentation: Argument Strength, Acceptance and Storage** (Liuwen Yu)

AI is always human-centered and concerned with the interaction among intelligent agents. According to the K12 definition, AI comprises five challenges: vision, representation and reasoning, learning, interaction, and AI in society. This thesis is concerned with AI and law, and contributes to knowledge representation and reasoning and to the legal, ethical, and social implications of AI. In knowledge representation and reasoning, we use not only traditional propositional, first-order, and modal logics, but in particular nonmonotonic logics, to deal with practical and common-sense reasoning. Specifically, we use techniques from formal argumentation and apply them to legal settings in which claims are explained and justified by legal norms, and in which legal disputes are settled by balancing arguments for and against the issues at stake. The thesis consists of three parts. First, we address the construction of argumentation frameworks from legal norms, which concerns not only the construction of the arguments but also the relations among them.

**Norms from a Role-based Perspective** (Fengxiang Cheng)

The understanding of norms has been central to deontic logic and normative theories. Examining the internal structure of a norm and the process by which obligations are generated within a norm can serve as a foundation for studying deontic concepts, as well as a basis for reasoning in deontic contexts. In this work, we present a new perspective on understanding norms and some deontic paradoxes. Within a game-theoretical framework, we discuss how general abstract norms relate to individual concrete obligations. We explore this subject in two main parts, illustrated with several examples: “When does a person obey a norm?” and “Where does an obligation come from?” The roles of agents within a norm are of paramount importance in our account. This is joint work with Chenwei Shi and Jialiang Yan.

**The Overtone of Monotonicity under Desire and Deontic Modals** (Jialiang Yan)

In this talk, we explore puzzles that challenge monotonic semantics for bouletic and deontic modals, and argue that they originate in monotonicity-induced weakening effects. We propose a unified theory of these puzzles, suggesting that the weakening effects of monotonicity generate pragmatic inferences, similar to the ignorance and free choice inferences triggered by disjunctive statements. We introduce a reinterpretation theory and apply the logic-based QBSML framework (Aloni & van Ormondt, 2022) to formalize the mechanics of these inferences and of the reinterpretation process.
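The weakening effect itself is easy to exhibit in a toy possible-worlds model. This is only an illustration of upward monotonicity, not the QBSML formalism used in the talk; the atoms and the choice of "best" worlds are invented for the example.

```python
from itertools import product

# Worlds are valuations over two atoms p, q.
atoms = ("p", "q")
worlds = [dict(zip(atoms, vals)) for vals in product((True, False), repeat=2)]

# Assumption for the example: the agent's best worlds are exactly the p-worlds.
best = [w for w in worlds if w["p"]]

def des(prop):
    """Des(phi): phi holds in every best world (a monotone modal)."""
    return all(prop(w) for w in best)

assert des(lambda w: w["p"])            # Des(p)
assert des(lambda w: w["p"] or w["q"])  # Des(p or q): weakening by monotonicity
assert not des(lambda w: w["q"])        # but Des(q) fails
```

Since every p-world is a (p or q)-world, any semantics that quantifies over best worlds validates the inference from Des(p) to Des(p or q); the resulting disjunctive statement is what triggers the ignorance- and free-choice-style pragmatic inferences the talk addresses.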

**A Logical Approach to Doxastic Causal Reasoning** (Qingyu He)

Belief revision and causality play an important role in many applications, for example in the study of database update mechanisms and data dependence. New contributions on causal reasoning continue to build on the pioneering work of Pearl, Halpern, and others. Although there is a long tradition of modeling belief revision in philosophical logic, the entanglement between belief revision and causal reasoning has not yet been fully studied from a logical point of view.
In this presentation, we propose a new formal logic for doxastic causal reasoning. Using examples, we illustrate that our framework captures rational belief revision based on causal reasoning. A complete axiomatization, as well as a decidability result, will be given. In addition, several issues concerning the contrast between qualitative and quantitative approaches will be discussed. This is joint work with Kaibo Xie and Fenrong Liu.
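One facet of this entanglement, the contrast between revising beliefs on an observation and revising them under an intervention, can be sketched with a two-variable structural model. This is a standard Pearl-style illustration, not the logic presented in the talk; the variables and equations are invented for the example.

```python
def outcomes(rain, do_sprinkler=None):
    """Structural equations of a toy model: sprinkler := not rain
    (unless overridden by an intervention); wet := rain or sprinkler."""
    sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet": rain or sprinkler}

# The agent's beliefs: the set of exogenous states considered possible.
beliefs = {True, False}  # unsure whether it rained

# Observing the sprinkler on: keep only states consistent with the evidence.
observed = {r for r in beliefs if outcomes(r)["sprinkler"]}
assert observed == {False}  # observation is evidence that it did not rain

# Intervening (forcing the sprinkler on): the sprinkler equation is
# overridden, so the evidence carries no information about rain.
intervened = {r for r in beliefs if outcomes(r, do_sprinkler=True)["sprinkler"]}
assert intervened == {True, False}  # rain stays uncertain

# In both cases the agent comes to believe the grass is wet.
assert all(outcomes(r, do_sprinkler=True)["wet"] for r in intervened)
```

The design point is that conditioning propagates evidence backwards along causal equations while intervention cuts that link, which is exactly where belief revision and causal reasoning come apart.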