2024- Past Events

Events in 2025-2026 Fall Semester

Abstract:

In game theory, an elementary and fundamental class of games is impartial combinatorial games (ICGs). The majority of classical and interesting ICGs are LIA-definable and terminating. One of the challenging and long-standing problems of ICGs is to compute winning strategies for a possibly infinite number of winning states. To this end, we first propose a logical framework to formalize ICGs based on the linear integer arithmetic fragment of the numeric part of PDDL. We then propose two approaches to generating the winning formula that exactly captures the states from which the player to move can force a win. Furthermore, we compute winning strategies for ICGs based on the winning formula. Experimental results on several games demonstrate the effectiveness of our approach.
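A minimal illustration (not from the abstract) of what a winning formula looks like, assuming the single-pile subtraction game in which a move removes 1, 2, or 3 tokens and the player unable to move loses: the winning states are exactly those satisfying n mod 4 ≠ 0, which the backward-induction sketch below confirms.

```python
from functools import lru_cache

# Illustrative sketch: backward induction for the {1, 2, 3} subtraction game.
# A position is winning iff some legal move leads to a losing position.
@lru_cache(maxsize=None)
def is_winning(n: int) -> bool:
    return any(not is_winning(n - k) for k in (1, 2, 3) if n - k >= 0)

# The computed winning states agree with the winning formula n mod 4 != 0.
assert all(is_winning(n) == (n % 4 != 0) for n in range(200))
```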

Abstract:

In this paper I apply the new Pragmatics to Assertions. Contrary to the prevalent, viz. epistemic, perspective on assertions, I argue that assertions are Pragmatic and, moreover, constitutively Pragmatic (in the type of Pragmatics to which our specific Pragmatics belongs). This perspective will cast a new light on whether there is a constitutive Epistemic Norm of Assertion (Williamson). We’ll explore various new features of assertions, viewed from this perspective.

Abstract:

Bi-intuitionistic logic (BiInt) is intuitionistic logic extended with co-implication, a logical connective dual to the usual implication. Roughly speaking, while the residuation law

A ∧ B ⊢ C  if and only if  A ⊢ B → C

must hold between conjunction and implication, the dual law

A ⊢ B ∨ C  if and only if  A − B ⊢ C

must hold between disjunction and co-implication (written here as A − B). In classical logic, as A − B means A ∧ ¬B, nothing interesting will occur by introducing co-implication. The main aim of my talk is to examine how the intuitionistic world will be affected by the introduction of co-implication, by checking basic logical properties of extensions of BiInt in comparison with those of extensions of intuitionistic logic.
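As a small self-contained check of the classical collapse just mentioned (an illustration only, assuming classical two-valued semantics): defining A − B as A ∧ ¬B makes the dual residuation law hold, so classically co-implication adds nothing new.

```python
from itertools import product

# Brute-force check, over all Boolean functions of two atoms, that with
# A − B defined as A ∧ ¬B the dual residuation law
#     A ⊨ B ∨ C   iff   A − B ⊨ C
# holds in classical two-valued semantics.
VALUATIONS = list(product([False, True], repeat=2))
FUNCTIONS = [dict(zip(VALUATIONS, bits))
             for bits in product([False, True], repeat=len(VALUATIONS))]

def entails(f, g):
    """f entails g: every valuation satisfying f satisfies g."""
    return all(g[v] for v in VALUATIONS if f[v])

for A, B, C in product(FUNCTIONS, repeat=3):
    a_minus_b = {v: A[v] and not B[v] for v in VALUATIONS}   # A − B := A ∧ ¬B
    b_or_c = {v: B[v] or C[v] for v in VALUATIONS}           # B ∨ C
    assert entails(A, b_or_c) == entails(a_minus_b, C)
print("dual residuation holds classically when A − B is A ∧ ¬B")
```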

First, we will discuss the subject from syntactic aspects, including (cut-free) sequent formulation, (local) deduction theorems, and negative translation. Next we will focus our attention on the symmetry features peculiar to extensions of BiInt. It is pointed out that an interesting duality exists between a given logic and its mirror image, which can preserve some interesting logical properties. Also, algebraic approaches based on bi-Heyting algebras will be discussed.

Events in 2024-2025 Spring Semester

Abstract:

Scholars have long been captivated by repetitive and parallel structures in early Chinese texts. They have described, classified and defined repetitive and parallel figures in these texts and offered explanations of the function and operating principles of these figures. This talk critically engages with such explanations and explores new pathways of analysis in this field. In a first part, the talk provides a short historical overview of methods and theories that explain literary repetitions and parallelisms either as a reflection of structures that exist outside of the texts or with regard to their literary effects on the readers. Subsequently, the talk offers its own approach, emphasising time and space aspects of repetitive and parallel structures in early Chinese philosophical texts and arguing that such structures construct a spatial dimension in these texts. In doing so, the talk argues, they go beyond what Ricoeur calls the “Model of the Text” and produce textual objects more akin to the spatial mode of visual objects and ritual performances. In its final part, the talk conducts an analysis of visual materials to illustrate how they reinforce, complement, or even inspire novel perspectives on the geometrical logic of repetition and parallelism within texts. It first explores scholarly conceptual discourse surrounding repetition and parallelism in ornamentation before finally turning to an analysis of Han mural art to discuss basic principles of composition in early Chinese textual and visual art and the roles that repetition and parallelism play in constructing meaning therein.

Abstract:

Modal μ-calculus, introduced by D. Kozen, is a propositional modal logic extended with greatest and least fixed-point operators. In general, the μ-calculus is much more expressive than modal logic. The alternation hierarchy of μ-formulas is generated by measuring the degree of entanglement of the fixed-point operators in a μ-formula.
Alberucci and Facchini [1] demonstrated that the alternation hierarchy of the modal μ-calculus collapses to the alternation-free fragment over transitive frames (for K4) and further to modal logic over equivalence relations (for S5). We extend their results to a broader range of frames, and then characterize such collapsing phenomena in terms of special μ-equations. 
Furthermore, we apply our findings to epistemic logics, investigating how the alternation hierarchy behaves in systems such as S4.2, S4.3, S4.3.2, and S4.4. From this perspective, we analyze degrees of ignorance in these logics, providing insights into their epistemic structures. 
This research is conducted in collaboration with Dr. Leonard Pacheco (Institute of Science, Tokyo), and an earlier version of this work was presented in [2]. 
[1] L. Alberucci and A. Facchini. The modal μ-calculus hierarchy over restricted classes of transition systems. J. Symbolic Logic, 74(4): 1367–1400, 2009.
[2] L. Pacheco and K. Tanaka. The Alternation Hierarchy of the μ-calculus over Weakly Transitive Frames. WoLLIC 2022, LNCS 13468, pp. 207–220, 2022.

Abstract:

This is a survey talk about modal logics of model constructions with a particular emphasis on modal logics of forcing. We shall discuss what we can learn from them, how we determine such a modal logic, and what the most relevant open questions are.

Abstract:

This talk will focus on the Craig interpolation theorem, including its meaning, history, and applications. We will present a standard proof-theoretic approach known as Maehara’s method to demonstrate the proof of this theorem. The foundational system under consideration is a first-order intuitionistic epistemic logic IEL with distributed knowledge. We will demonstrate the Craig interpolation theorem in the version of the system that does not include function symbols. The contents discussed are based on collaborative work with Katsuhiko Sano and Ryo Murai.
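For orientation on the statement of the theorem (a classical propositional illustration, not the first-order intuitionistic epistemic setting of the talk): if A → B is valid, an interpolant is a formula C built only from the variables shared by A and B such that A → C and C → B are valid. The brute-force sketch below finds one for A = p ∧ q and B = p ∨ r (namely C = p).

```python
from itertools import product

# Illustrative brute-force search for a propositional Craig interpolant.
A_VARS, B_VARS = {"p", "q"}, {"p", "r"}
SHARED = sorted(A_VARS & B_VARS)
ALL_VARS = sorted(A_VARS | B_VARS)

def A(v): return v["p"] and v["q"]   # A = p ∧ q
def B(v): return v["p"] or v["r"]    # B = p ∨ r

def valuations(vars_):
    for bits in product([False, True], repeat=len(vars_)):
        yield dict(zip(vars_, bits))

# Try every truth function of the shared variables as a candidate interpolant C
# and keep the first one with both A -> C and C -> B valid.
for bits in product([False, True], repeat=2 ** len(SHARED)):
    table = dict(zip(product([False, True], repeat=len(SHARED)), bits))
    C = lambda v: table[tuple(v[x] for x in SHARED)]
    if all((not A(v) or C(v)) and (not C(v) or B(v)) for v in valuations(ALL_VARS)):
        print("interpolant truth table over", SHARED, ":", table)  # here C = p
        break
```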

Abstract:

Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing. Fine-tuning these models involves adapting a pre-trained model to specific tasks or domains using smaller datasets, thereby enhancing their performance and relevance. While machine learning techniques are commonly employed for fine-tuning, logic-based approaches—inspired by neural-symbolic learning and formal learning theory—offer an alternative pathway. In this talk, I will present two projects that utilize logic-based methods to enable autonomous agents to learn effectively from limited data, facilitating personalized outcomes. The first project focuses on tailoring explanations through conversational interactions, while the second aims to infer desires from emotions. Both projects highlight the potential of logic-based fine-tuning in enabling agents to achieve sophisticated understanding and reasoning with small data.

Abstract:

In this talk, I will present the proof of Nick Bezhanishvili’s conjectures on generalized Medvedev logics, which is joint work with Gaëlle Fontaine, and provide an overview of the landscape of this class of logics. Additionally, I will discuss a strengthened version of Inamdar’s conjecture on the logic of spiked Boolean algebras. If time permits, I will also share some very recent ideas on a potential result related to Cheq logic.

Abstract:

Relating logic is a logic of relating connectives — just as Modal Logic is a logic of modal operators. The basic idea behind relating connectives is that the logical value of a given complex proposition is the result of two things: 
(i) the logical values of the main components of this complex proposition; supplemented with
(ii) a valuation of the relation between these components. 
The latter element is a formal representation of an intensional relation that emerges from the connection of several simpler propositions into one more complex proposition. In the talk I will present a general outline of relating semantics and selected application examples (introductory article: Relating Semantics as Fine-Grained Semantics for Intensional Logics, https://link.springer.com/chapter/10.1007/978-3-030-53487-5_2).
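A minimal sketch of the idea (an illustration under assumed simplifications, not the formal definitions from the article): alongside the usual valuation of atoms, a relating model supplies a valuation of the relation between formulas, and a relating connective consults both.

```python
from dataclasses import dataclass

# Illustrative sketch of relating semantics, simplified to classical
# two-valued components and a single binary "relating conjunction".
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class RelConj:
    left: object
    right: object

def value(formula, v, R):
    """v: valuation of atoms; R: valuation of the relation between formulas."""
    if isinstance(formula, Atom):
        return v[formula.name]
    if isinstance(formula, RelConj):
        # Extensional part (component values) combined with the intensional
        # part (whether the components stand in the relation R).
        return (value(formula.left, v, R)
                and value(formula.right, v, R)
                and R(formula.left, formula.right))
    raise TypeError(formula)

p, q = Atom("p"), Atom("q")
v = {"p": True, "q": True}
R = lambda a, b: a != b   # toy relation: the components must be distinct
print(value(RelConj(p, q), v, R))   # True: both components true and related
print(value(RelConj(p, p), v, R))   # False: components true but not related
```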

Abstract:

There exist various works on intuitionistic modal logics which originate from different sources. In this talk, I will construct the system ILS5 (the S5 modal expansion of intuitionistic first-order logic), which maintains the Brouwer-Heyting-Kolmogorov (BHK) interpretation. I will then explore whether ILS5 accepts the Barcan Formula (∀x□φ(x) → □∀xφ(x)) from two perspectives: intuitive interpretation and relational semantics.
In providing an intuitive interpretation for ILS5 based on the BHK interpretation, the main difficulty is that the BHK interpretation is confined to first-order logical constants, because logic relies on mathematics in intuitionism. So I will provide a layered intuitive interpretation for ILS5. In this interpretation, the Barcan Formula will not be accepted as a general principle within ILS5. This interpretation relies on a hierarchy of truths between intuitionistically necessary truth and classically necessary truth. The proto-ontological difference between Brouwer’s mathematical intuitionism and classical mathematics (as C. Posy proposes) can provide the philosophical foundation for this hierarchy.
I will also construct a relational semantics for ILS5 based on a slight variation of a frame which K. Došen gives. This semantics can help to show that the Barcan Formula is not a theorem of ILS5. Basic properties of the ILS5 model will be shown. Typical metatheorems of ILS5, e.g. the monotonicity theorem, soundness theorem, and completeness theorem, will be proved. Finally, this semantics has an interesting application in modeling knowledge and belief transfer in social settings.

Abstract:

Spohn’s ranking function provides a semi-quantitative, quasi-probabilistic measure for an algebra 𝒜 over a set of possibilities W, assigning numerical values to sets in 𝒜 and thus raising the question of how to interpret and generate non-trivial ranking numbers. In this paper, we adopt a belief-first epistemological perspective and introduce a new consistency violation counting algorithm (CVCA), which generates ranking numbers based on an agent’s existing beliefs. The central idea of CVCA is to assign a unique numerical value to a proposition by counting the minimal number of reference beliefs it contradicts. To develop this approach, we first introduce two assumptions regarding the reference set and how CVCA works. Based on these assumptions, we define CVCA and demonstrate its constructive character by proposing two implementation methods: one using kernel contraction from belief revision theories and the other using the breadth-first search algorithm from computer science. Finally, we show how ranking numbers can be generated and explained by integrating a belief-first epistemological view, a computational algorithm, and a ranking function into a unified framework.
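A rough sketch of the counting idea (illustrative only, with assumed simplifications; the paper’s reference-set assumptions and its kernel-contraction and breadth-first-search implementations are not reproduced): the number assigned to a proposition is the least number of reference beliefs that must be set aside for the remainder to be consistent with it.

```python
from itertools import combinations, product

# Beliefs and propositions are modelled as predicates over valuations of ATOMS.
ATOMS = ["rain", "wet", "umbrella"]

def consistent(formulas):
    """A set of constraints is consistent iff some valuation satisfies all of
    them (brute force over 2^|ATOMS| valuations)."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in formulas):
            return True
    return False

def min_violations(proposition, reference_beliefs):
    """Rank-like number: the least k such that dropping some k reference
    beliefs makes the rest consistent with the proposition."""
    for k in range(len(reference_beliefs) + 1):
        for dropped in combinations(range(len(reference_beliefs)), k):
            kept = [b for i, b in enumerate(reference_beliefs) if i not in dropped]
            if consistent(kept + [proposition]):
                return k
    return len(reference_beliefs)  # only reached if the proposition itself is unsatisfiable

beliefs = [
    lambda v: not v["rain"] or v["wet"],        # rain -> wet
    lambda v: v["rain"],                        # it rains
    lambda v: not v["rain"] or v["umbrella"],   # rain -> umbrella
]
print(min_violations(lambda v: not v["wet"], beliefs))   # 1: one belief must go
```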

Abstract:

Higher order likes and desires sometimes lead agents to have ungrounded or paradoxical preferences. This situation is particularly problematic in the context of games. If payoffs are interdependent, the overall assessment of particular courses of action becomes ungrounded; in such cases the matrix of the game is radically under-determined. Paradigmatic examples of this phenomenon occur when players are ‘perfect altruists’ or ‘perfect haters’, in a sense to be explained. In this paper I rely on a dynamic doxastic logic to mimic the search for a suitable matrix. Upgrades are triggered by conjectures on other players’ utilities, which can in turn be based on behavioral or verbal cues. We can prove that, under certain conditions, pairs of agents with paradoxical preferences eventually come to believe that they are not able to interact in a game. As a result I hope to provide a better understanding of game-theoretic ungroundedness, and, more generally, of the nuances of higher order preferences and desires. 

Abstract:

We are unaware of many things, and we are often unaware that we are unaware of them. But what is (un)awareness, and how does it relate to traditional epistemic notions such as belief, knowledge, and uncertainty? One influential model of unawareness in economics is the model developed by Heifetz, Meier, and Schipper (hereafter HMS). In their model, the objects of knowledge and awareness can be viewed as what we call directed events, which are sets of states relativized to a level of awareness (intuitively a subject matter or a question). However, their algebra of directed events does not form a Boolean algebra or even a lattice. This feature of their model raises two questions: What is the algebraic structure of directed events, and how does knowledge of directed events relate to knowledge of standard events (represented as sets of states)? This paper addresses both questions. First, we show that HMS event algebras correspond to (i) a sub-class of agglomerative algebras (Goodman 2019) and (ii) a sub-class of relativized Boolean algebras (Piermont 2019). Second, we show that knowledge and awareness of directed events is reducible to knowledge and awareness of standard events. One conceptual upshot of our results is that, despite apparent differences, there is much commonality between the HMS model of awareness and the Boolean-algebra-based models of Lederman and Fritz (2016) and Holliday (forthcoming).
This is joint work with Wesley Holliday.

Abstract:

For many decades now, logics which permit inconsistent but non-trivial theories have been investigated and discussed. However, in recent years we have seen the recognition that there are logics which not only permit contradictions, but which deliver contradictions: the logical truths are themselves inconsistent. As yet, they have no standard name as far as I know. Let us call them überconsistent logics. Dialetheism is the view that some contradictions are true. It might well be thought that these logics which deliver contradictory logical truths provide a slam dunk for dialetheism. After all, as Quine puts it, ‘if sheer logic is not conclusive, what is?’ Matters are not that straightforward, however. This talk is an initial investigation of the relationship between überconsistent logics and dialetheism. In the first part of the talk I give the appropriate background for the discussion. In the second I discuss how three well-known überconsistent logics bear on the matter of dialetheism.

Abstract:

Vredenburgh (2021) argues for a collective interest in “explainability” of machine-learning outputs on the grounds that, without genuine causal explanations, agents lack the means to revise their strategies. This paper begins by examining the implicit theory of explanation at stake, showing it must satisfy two classical desiderata: truth-tracking (each explanans must be factually and causally sound) and verifiability (the inferential steps must be inspectable and checkable). I then introduce a decision-theoretic model—analogous to human hiring and grading—demonstrating that imposing fully transparent, ex-ante rules enforces a shallow proof structure but forfeits accuracy when novel, unanticipated data arise. By contrast, any rule that learns and adapts must embed latent premises, deepening the “proof tree” and eluding ex-ante inspection. This accuracy–explainability trade-off undermines Vredenburgh’s case for an unqualified Right to Explanation in dynamic contexts, for it shows that insisting on deductive transparency can incur unacceptable epistemic and practical costs.