Logic Meets Norms

The workshop “Logic Meets Norms” is organized in conjunction with the visit of Professor Leon van der Torre and his team of doctoral students to the JRC for Logic at Tsinghua University on May 29-30. This visit has given us the impetus to organize a workshop on the interaction between logic, normative systems, and artificial intelligence. We aim to explore how these fields interact with and influence each other, and how their confluence could shape our understanding of the related concepts and potential future developments.

  • Time: May 29th, 2023
  • Venue: Room 329, Meng Minwei Humanities Building, Tsinghua University
Program
Time | Title | Speaker
13:00-13:05 | Opening Remarks | Fenrong Liu (Tsinghua University)
13:05-13:50 | The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation | Leon van der Torre (University of Luxembourg)
13:50-14:30 | A Logical Approach to Learning Process | Dazhu Li (Institute of Philosophy, Chinese Academy of Sciences)
14:30-15:10 | Modal Logics for Reasoning in Distributed Games | Lei Li (Tsinghua University)
15:10-15:20 | Break |
15:20-16:00 | Principles and Practice of Formal Argumentation: Argument Strength, Acceptance and Storage | Liuwen Yu (University of Luxembourg)
16:00-16:40 | Norms from a Role-based Perspective | Fengxiang Cheng (Tsinghua University)
16:40-17:20 | The Overtone of Monotonicity under Desire and Deontic Modals | Jialiang Yan (Tsinghua University)
17:20-18:00 | A Logical Approach to Doxastic Causal Reasoning | Qingyu He (Tsinghua University)
Abstracts
Leon van der Torre

We present a framework for distributing normative reasoning across various normative systems, each with its own stakeholder and set of norms, together with a mechanism for resolving moral dilemmas based on formal argumentation and a defeat relation between arguments. Dilemmas traverse an ‘escalation ladder’ on which they are resolved by (i) combining the arguments of the individual systems, introducing possible new defeats; (ii) combining the systems to generate additional, combined arguments, again introducing new defeats; or, finally, (iii) relying on an additional stakeholder, referred to as Jiminy, who provides a (context-dependent) priority relation between stakeholders in order to remove certain defeats. The framework is supported by a running example and a high-level discussion of its integration into existing agent architectures. The proposed Jiminy advisor model is discussed from the perspective of explainability and in comparison with related work.

Joint work with Beishui Liao, Pere Pardo and Marija Slavkovik

To appear in: Journal of Artificial Intelligence Research (JAIR)
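The final rung of the escalation ladder can be made concrete with a minimal Dung-style sketch (our own illustration, not the authors' implementation; all names are hypothetical): a priority relation over stakeholders removes lower-ranked defeats, after which the sceptically justified arguments are computed under the standard grounded semantics.

```python
# Minimal Dung-style sketch: defeats between stakeholders' arguments,
# a Jiminy-style priority relation, and the grounded extension.

def remove_defeats(defeats, owner, priority):
    """Keep a defeat (x, y) only if x's stakeholder ranks at least
    as high as y's; lower-ranked defeats are removed."""
    return {(x, y) for (x, y) in defeats
            if priority[owner[x]] >= priority[owner[y]]}

def grounded(args, defeats):
    """Least fixed point of the characteristic function: start from
    the unattacked arguments, then repeatedly add every argument all
    of whose attackers are counter-attacked by the current set."""
    extension = set()
    while True:
        defended = {a for a in args
                    if all(any((c, b) in defeats for c in extension)
                           for b in args if (b, a) in defeats)}
        if defended == extension:
            return extension
        extension = defended

# A two-stakeholder dilemma: the arguments defeat each other,
# so the grounded extension is empty.
owner = {"a": "S1", "b": "S2"}
defeats = {("a", "b"), ("b", "a")}
print(grounded({"a", "b"}, defeats))       # set()

# Jiminy ranks S1 above S2, removing S2's defeat on S1's argument;
# the dilemma dissolves and "a" becomes justified.
resolved = remove_defeats(defeats, owner, {"S1": 2, "S2": 1})
print(grounded({"a", "b"}, resolved))      # {'a'}
```

The priority relation is exactly what breaks the symmetry: without it the mutual defeat leaves both arguments out of the grounded extension.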

Dazhu Li

In this talk, we develop a general framework—the supervised learning game—to investigate the interaction between Teacher and Learner in learning processes. In particular, our proposal highlights several features of the agents: on the one hand, Learner may make mistakes in the learning process, and she may also ignore the potential relation between different hypotheses; on the other hand, Teacher is able to correct Learner’s mistakes, eliminate potential mistakes and point out the facts ignored by Learner. To reason about strategies in this game, we develop a modal logic of supervised learning and study its properties. Broadly, this work takes a small step towards studying the interaction between graph games, logics and formal learning theory. This is joint work with Alexandru Baltag and Mina Young Pedersen.

Lei Li

Many games can be modeled from the internal perspective of the players and the external perspective of the modeler. We refer to this type of game as a distributed game. For such a game, we characterize facets of the game using local arenas and propose mechanisms by which these local arenas form the global arena. In terms of logical analysis, we propose several two-layer distributed game logics, with local formulas at the local layers and global formulas at the global layer. Moreover, we investigate axiomatization, decidability, complexity, and other related issues. This is joint work with Fenrong Liu, Sujata Ghosh and R. Ramanujam.

Liuwen Yu

AI is always human-centered and concerned with the interaction among intelligent agents. According to the K12 definition, AI consists of five challenges: vision, representation and reasoning, learning, interaction, and AI in society. This thesis is concerned with AI and law, and contributes to knowledge representation and reasoning as well as to the legal, ethical and social implications of AI. In knowledge representation and reasoning, we use not only traditional propositional, first-order and modal logics, but in particular nonmonotonic logics to deal with practical and commonsense reasoning. Specifically, we use techniques from formal argumentation and apply them to legal settings in which claims are explained and justified by legal norms, and in which legal disputes are settled by balancing arguments for and against issues. This thesis consists of three parts. First, we address the construction of argumentation frameworks from legal norms, which concerns not only the construction of the arguments but also the relations among them.

Second, we address the creation of so-called extensions from such argumentation frameworks. In the case of legal disputes, these extensions may represent a common agreement on the arguments on which the agents agree, or multiple extensions when the agents agree to disagree. In the first two parts of the thesis, we use a principle-based approach, which means that we not only present various constructions, but also introduce formal properties to distinguish them. In the third part of the thesis, we discuss architectures in which such interactions among agents can be represented and implemented in a distributed system, using state-of-the-art technologies such as multiagent systems and blockchains. We propose the IHiBO architecture and illustrate it with a case study from FinTech. We also discuss the interdisciplinary challenges of the research reported in this thesis.
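The "agree to disagree" situation can be illustrated with a brute-force sketch of the standard stable semantics for abstract argumentation (our own illustration, not code from the thesis): a set of arguments is a stable extension if it is conflict-free and attacks every outside argument, and two mutually attacking arguments yield two such extensions.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate all stable extensions of an abstract argumentation
    framework: conflict-free sets that attack every outside argument."""
    args = sorted(args)
    extensions = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            s = set(subset)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in args if b not in s)
            if conflict_free and attacks_rest:
                extensions.append(s)
    return extensions

# Two parties' arguments attack each other: two stable extensions,
# i.e. two coherent standpoints on which the agents agree to disagree.
print(stable_extensions({"a", "b"}, {("a", "b"), ("b", "a")}))  # [{'a'}, {'b'}]
```

Enumeration over all subsets is exponential and only meant to make the definition tangible; practical argumentation solvers use far more efficient algorithms.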
Fengxiang Cheng

The understanding of norms has been central to deontic logic and normative theories. Examining the internal structure of a norm and the process by which obligations are generated within a norm can serve as a foundation for studying deontic concepts, as well as provide a basis for reasoning in deontic contexts. In this work, we present a new perspective on understanding norms and some deontic paradoxes. Within a game-theoretical framework, we discuss how general abstract norms relate to individual concrete obligations. We explore this subject in two primary parts, illustrated with several examples: “When does a person obey a norm?” and “Where does an obligation come from?” The roles of agents within a norm are of paramount importance in our account. This is joint work with Chenwei Shi and Jialiang Yan.

Jialiang Yan

In this talk, we explore puzzles that challenge monotonic semantics for bouletic and deontic modalities, and argue that they originate from monotonicity-induced weakening effects. We propose a unified theory for these puzzles, suggesting that the weakening effects of monotonicity generate pragmatic inferences, similar to the ignorance and free choice inferences triggered by disjunctive statements. We introduce a reinterpretation theory and apply the logic-based QBSML framework (Aloni & van Ormondt, 2022) to formalize the mechanics of these inferences and the reinterpretation process.

Qingyu He

Belief revision and causality play an important role in many applications, for example in the study of database update mechanisms and data dependence. New contributions on causal reasoning are continually added to the pioneering works of Pearl, Halpern and others. Although there is a long tradition of modeling belief revision in philosophical logic, the entanglement between belief revision and causal reasoning has not yet been fully studied from a logical point of view.

In this presentation, we will propose a new formal logic for doxastic causal reasoning. With examples, we will illustrate that our framework explains the rational way of belief revision based on causal reasoning. A complete axiomatization, as well as a decidability result, will be given. In addition, several issues regarding the contrast between qualitative and quantitative approaches will be discussed. This is joint work with Kaibo Xie and Fenrong Liu.
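To fix intuitions about the causal side, here is a minimal Pearl-style structural-equation sketch (a standard textbook illustration, not the logic proposed in the talk; the variable names are hypothetical): intervening on a variable severs it from its equation, so beliefs about its effects change while beliefs about its causes do not.

```python
# A tiny structural causal model: rain or a sprinkler makes the grass
# wet. Interventions override an equation by fiat; observations do not.

EQUATIONS = {
    "wet": lambda v: v["rain"] or v["sprinkler"],
}

def evaluate(exogenous, interventions=None):
    """Compute all endogenous variables from the exogenous values,
    with any intervened variable fixed instead of using its equation."""
    interventions = interventions or {}
    values = dict(exogenous)
    for var, equation in EQUATIONS.items():
        values[var] = interventions.get(var, equation(values))
    return values

print(evaluate({"rain": True, "sprinkler": False}))
# {'rain': True, 'sprinkler': False, 'wet': True}

# do(wet = False): the intervention cuts wet off from its causes,
# yet the value of rain, a cause, is untouched.
print(evaluate({"rain": True, "sprinkler": False}, {"wet": False}))
# {'rain': True, 'sprinkler': False, 'wet': False}
```

The asymmetry on display here (intervening on an effect tells us nothing about its causes, whereas observing it would) is precisely the kind of interaction between causal structure and belief change that a doxastic causal logic has to capture.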

Participants: Qian Chen, Fengxiang Cheng, Penghao Du, Rui Fan, Sujata Ghosh, Qingyu He, Feng Jiang, Dazhu Li, Lei Li, Fenrong Liu, Jeremy Seligman, Leon van der Torre, Jialiang Yan, Liuwen Yu.