08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 63E: Argumentation
Location: Taub 2
Rediscovering Argumentation Principles Utilizing Collective Attacks
PRESENTER: Matthias König

ABSTRACT. Argumentation Frameworks (AFs) are a key formalism in AI research. Their semantics have been investigated in terms of principles, which define characteristic properties and thereby deliver guidance for analysing established semantics and developing new ones. Because of the simple structure of AFs, many desired properties hold almost trivially, while at the same time hiding interesting concepts behind syntactic notions. We extend the principle-based approach to Argumentation Frameworks with Collective Attacks (SETAFs) and provide a comprehensive overview of common principles for their semantics. Our analysis shows that investigating principles based on decomposing the given SETAF (e.g., directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs. We introduce the notion of the reduct as well as the modularization principle for SETAFs, which will prove beneficial for this kind of investigation. We then demonstrate how our findings can be utilized for incremental computation of extensions and give a novel parameterized tractability result for verifying preferred extensions.
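As a rough illustration of the semantics involved (a sketch based on the standard definitions, not taken from the paper), the following brute-force enumerator computes the stable extensions of a small SETAF, where an attack is a pair (S, a): the set of arguments S collectively attacks argument a. An ordinary AF is the special case where every attacking set is a singleton. All names here are hypothetical.

```python
from itertools import chain, combinations

def stable_extensions(arguments, attacks):
    """Brute-force enumeration of the stable extensions of a SETAF.

    arguments: a set of argument names
    attacks:   pairs (S, a) where the tuple of arguments S jointly attacks a
    A set E is stable iff it is conflict-free and attacks every outside argument.
    """
    def powerset(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    result = []
    for cand in powerset(arguments):
        e = set(cand)
        # conflict-free: no attack launched entirely from within e hits a member of e
        conflict_free = not any(set(s) <= e and a in e for s, a in attacks)
        # stability: every argument outside e is attacked from within e
        attacks_outside = all(
            any(set(s) <= e and a == b for s, a in attacks)
            for b in arguments - e
        )
        if conflict_free and attacks_outside:
            result.append(e)
    return result

# Example: a and b attack each other; only a and b *together* attack c.
args = {"a", "b", "c"}
atts = {(("a",), "b"), (("b",), "a"), (("a", "b"), "c")}
# Since no stable extension contains both a and b, c is never attacked,
# so c itself must belong to every stable extension: {a, c} and {b, c}.
print(stable_extensions(args, atts))
```

The example shows where collective attacks genuinely differ from plain AFs: replacing the joint attack by two singleton attacks on c would change the outcome.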

A Credal Least Undefined Stable Semantics for Probabilistic Logic Programs and Probabilistic Argumentation
PRESENTER: Fabio Cozman

ABSTRACT. We present an approach to probabilistic logic programming and probabilistic argumentation that combines elements of the L-stable semantics and the credal semantics. We derive the complexity of inferences, propose an extended version of argumentation graphs with a semantics that maps to the L-stable semantics, and introduce a definition for the probability of an argument.

Computing Stable Argumentative Conclusions under the Weakest-Link Principle in the ASPIC+ Framework
PRESENTER: Tuomo Lehtonen

ABSTRACT. The study of computationally hard reasoning tasks in different argumentation formalisms is a topical research direction in KR. Here we focus on ASPIC+ as a central structured argumentation formalism. It has recently been shown that rephrasing argumentation semantics in terms of subsets of defeasible elements allows for gaining new insights for reasoning about acceptance in established fragments of ASPIC+. In this paper we provide a non-trivial generalization of these recent results, capturing preferences as a major ingredient of ASPIC+. In particular, considering preferences under the weakest-link principle, we show that the stable semantics can be phrased in terms of subsets of defeasible elements. We employ the rephrasing for establishing both complexity results and practical algorithms for reasoning about acceptance in this variant of ASPIC+. Justified by completeness for the second level of the polynomial hierarchy, we develop an iterative answer set solving based approach to reasoning about acceptance under the so-called elitist lifting in ASPIC+ frameworks. We also show that an implementation of the approach scales well in practice.

09:00-10:30 Session 63F: Belief Revision
Location: Taub 3
On Syntactic Forgetting with Strong Persistence

ABSTRACT. It is generally agreed that so-called strong persistence (SP) best captures the essence of forgetting in logic programming. While classes of operators, such as FR and FSP, that satisfy immediate relaxations of (SP), and in the case of FSP even (SP) whenever possible, have been characterized semantically, many practical questions have yet to be addressed. This technical paper aims to answer one of them: how can atoms be forgotten from a program without having to calculate its exponential number of models? To this end, we introduce two concrete representatives of FR and FSP that forget sets of atoms by syntactic manipulation of a program's rules. This may in many cases prevent exponential blow-ups and produce a forgetting result that is close to the original program.

Kernel Contraction and the Order of Relevance

ABSTRACT. The postulate of relevance provides a suitable and general notion of minimal change for belief contraction. Relevance is tightly connected to smooth kernel contractions when an agent's epistemic state is represented as a logically closed set of formulae. This connection, however, breaks down when an agent's epistemic state is represented as a set of formulae not necessarily logically closed. We investigate the cause behind this rupture, and we reconnect relevance with smooth kernel contractions by constraining the behaviour of their choice mechanisms and epistemic preference relations. Our first representation theorem connects smooth kernel contractions with a novel class of epistemic preference relations. For our second representation theorem, we introduce the principle of symmetry of removal that relates relevance to epistemic choices. For the last theorem, we devise a novel class of smooth kernel contractions, satisfying relevance, which are based on epistemic preference relations that capture the principle of symmetry of removal.

Who’s the Expert? On Multi-source Belief Change
PRESENTER: Joe Singleton

ABSTRACT. Consider the following belief change/merging scenario. A group of information sources give a sequence of reports about the state of the world at various instances (e.g. different points in time). The true states at these instances are unknown to us. The sources have varying levels of expertise, also unknown to us, and may be knowledgeable on some topics but not others. This may cause sources to report false statements in areas they lack expertise. What should we believe on the basis of these reports? We provide a framework in which to explore this problem, based on an extension of propositional logic with expertise formulas. This extended language allows us to express beliefs about the state of the world at each instance, as well as beliefs about the expertise of each source. We propose several postulates, provide a couple of families of concrete operators, and analyse these operators with respect to the postulates.

10:30-11:00 Coffee Break
11:00-12:00 Session 65E: Invited Talk
Location: Taub 2
Graph Queries: Do We Study What Matters?

ABSTRACT. Graph queries are studied extensively by many communities: data management, AI, the Semantic Web. The development of theoretical models of such queries happened between the two eras of graph prominence: the early network data model (later superseded by the relational model) and the more recent models of RDF and, especially, property graphs, which have gained prominence in the last decade. Classical theory gives us the fundamental notion of Regular Path Queries (RPQs) and its many derivatives: CRPQs, UCRPQs, 2UCRPQs, ECRPQs, RDPQs, GXPath, etc. This is still the model that dominates the research literature.

Applications, however, follow a different path. A number of successful graph query languages, including Cypher, PGQL, and GSQL, led to the development of a new international standard called GQL (Graph Query Language). The core of GQL is inspired by RPQs in the same way the Hoover Dam is inspired by a beaver blocking a local creek: the essence is similar, but the end result is not. However, GQL is still a work in progress, and even when finished it will come in the shape of hundreds of pages of specification rather than a simple mathematical definition. The goal of this talk is to give such a definition to the community, and to issue a call to change the main language of study so as to reflect the graph languages that will dominate the practical landscape for decades to come.
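For readers unfamiliar with RPQs, here is a minimal sketch (based on the textbook product construction, not on the talk) of evaluating a Regular Path Query on a labelled graph: a BFS over pairs (graph node, NFA state) finds all node pairs connected by a path whose label word the automaton accepts. The helper name and encodings are hypothetical.

```python
from collections import deque

def eval_rpq(edges, nfa, start_state, final_states):
    """Evaluate an RPQ via the product of the graph with an NFA.

    edges: labelled edges (u, label, v)
    nfa:   NFA transitions (q, label, q')
    Returns all pairs (x, y) such that some path x -> y spells a word
    accepted by the NFA.
    """
    nodes = {u for u, _, _ in edges} | {v for _, _, v in edges}
    answers = set()
    for x in nodes:
        seen = {(x, start_state)}
        queue = deque(seen)
        while queue:
            node, state = queue.popleft()
            if state in final_states:
                answers.add((x, node))
            # expand the product state along matching edge/transition pairs
            for u, lbl, v in edges:
                if u != node:
                    continue
                for q, a, q2 in nfa:
                    if q == state and a == lbl and (v, q2) not in seen:
                        seen.add((v, q2))
                        queue.append((v, q2))
    return answers

# Query "knows+" (one or more knows-edges) on a tiny social graph.
edges = {("A", "knows", "B"), ("B", "knows", "C")}
nfa = {("q0", "knows", "q1"), ("q1", "knows", "q1")}
print(eval_rpq(edges, nfa, "q0", {"q1"}))  # pairs (A,B), (A,C), (B,C)
```

The same product idea underlies the derivatives listed above (CRPQs add conjunction, 2RPQs add inverse edges, and so on).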

12:00-12:30 Session 66A: Automated Reasoning
Location: Taub 2
Automating Reasoning with Standpoint Logic via Nested Sequents

ABSTRACT. Standpoint logic is a recently proposed formalism in the context of knowledge integration, which advocates a multi-perspective approach permitting reasoning with a selection of diverse and possibly conflicting standpoints rather than forcing their unification. In this paper, we introduce nested sequent calculi for propositional standpoint logics---proof systems that manipulate trees whose nodes are multisets of formulae---and show how to automate standpoint reasoning by means of non-deterministic proof-search algorithms. To obtain worst-case complexity-optimal proof-search, we introduce a novel technique in the context of nested sequents, referred to as "coloring," which consists of taking a standpoint formula as input, guessing a certain coloring of its subformulae, and then running proof-search in a nested sequent calculus on the colored input. Our technique lets us decide the validity of standpoint formulae in coNP since proof-search only produces a "partial" proof relative to each permitted coloring of the input. We show how all partial proofs can be fused together to construct a complete proof when the input is valid, and how certain partial proofs can be transformed into a counter-model when the input is invalid. These "certificates" (i.e., proofs and counter-models) serve as explanations of the (in)validity of the input.

12:00-12:30 Session 66B: Strategic Reasoning
Location: Taub 3
Private and Public Affairs in Strategic Reasoning
PRESENTER: Aniello Murano

ABSTRACT. Do agents know each other's strategies? In multi-process software construction, each process has access to the processes already constructed; but in typical human-robot interactions, a human may not announce their strategy to the robot (indeed, the human may not even know their own strategy). This question has often been overlooked when modeling and reasoning about multi-agent systems. Recently, the impact of this distinction on epistemic reasoning was studied. In this work, we study how it impacts strategic reasoning.

To do so we consider Strategy Logic (SL), a well-established and highly expressive logic for strategic reasoning. Its usual semantics, which we call “white-box semantics”, models systems in which agents “broadcast” their strategies. By adding imperfect information to the evaluation games for the usual semantics, we obtain a new semantics called “black-box semantics”, in which agents keep their strategies private. We consider the model-checking problem and show that the black-box semantics has much lower complexity than white-box semantics for an important fragment of Strategy Logic.

12:30-14:00 Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 67E: Systems & Robotics
Location: Taub 2
Online Grounding of Symbolic Planning Domains in Unknown Environments
PRESENTER: Leonardo Lamanna

ABSTRACT. To effectively use an abstract (PDDL) planning domain to achieve goals in an unknown environment, an agent must instantiate such a domain with the objects of the environment and their properties. If the agent has an egocentric and partial view of the environment, it needs to act, sense, and abstract the perceived data into the planning domain. Furthermore, the agent needs to compile the plans computed by a symbolic planner into low-level actions executable by its actuators. This paper proposes a framework that realises this perspective and allows an agent to perform different tasks. For this purpose, we integrate machine learning models to abstract the sensory data, symbolic planning for goal achievement, and path planning for navigation. We evaluate the proposed method in accurate simulated environments, where the sensors are an on-board RGB-D camera, GPS, and compass.

Stream Reasoning with Cycles

ABSTRACT. Temporal specifications, such as those found in multi-agent systems, often include cyclic dependencies. Moreover, there is an increasing need to evaluate such specifications in an online manner, upon streaming data. Consider, for example, the online computation of the normative positions of the agents engaging in an e-commerce protocol. We present a formal computational framework that deals with cyclic dependencies in an efficient way. Moreover, we demonstrate the effectiveness of our framework on large synthetic and real data streams, from the fields of multi-agent systems and composite event recognition.

Symbolic Knowledge Extraction from Opaque Machine Learning Predictors: GridREx & PEDRO
PRESENTER: Roberta Calegari

ABSTRACT. Procedures aimed at explaining the outcomes and behaviour of opaque predictors are becoming more and more essential as machine learning (ML) black-box (BB) models pervade a wide variety of fields, in particular critical ones (e.g., medical or financial), where it is not possible to make decisions on the basis of a blind automatic prediction. A growing number of methods designed to overcome this BB limitation is present in the literature; however, some ML tasks are nearly or completely neglected, e.g., regression and clustering. Furthermore, existing techniques may not be applicable in complex real-world scenarios, or they can affect the output predictions with undesired artefacts.

In this paper we present the design and the implementation of GridREx, a pedagogical algorithm to extract knowledge from black-box regressors, along with PEDRO, an optimisation procedure to automate the GridREx hyper-parameter tuning phase with better results than manual tuning. We also report the results of our experiments involving the application of GridREx and PEDRO in real case scenarios, including GridREx performance assessment by using as benchmarks other similar state-of-the-art techniques. GridREx proved to be able to give more concise explanations with higher fidelity and predictive capabilities.

14:00-15:30 Session 67F: Belief Revision/RDFS
Location: Taub 3
A Minimal Deductive System for RDFS with Negative Statements
PRESENTER: Umberto Straccia

ABSTRACT. It is well-known that the triple language RDFS (Resource Description Framework Schema) is designed to represent and reason with positive statements only (e.g., “antipyretics are drugs”).

In this paper we show how to extend RDFS to express and reason with various forms of negative statements under the Open World Assumption (OWA). To do so, we start from ρdf, a minimal but significant RDFS fragment that covers all essential features of RDFS, and then extend it to ρdf⊥¬, allowing us also to express statements such as "radio therapies are non-drug treatments", "Ebola has no treatment", or "opioids and antipyretics are disjoint classes".

The main and, to the best of our knowledge, unique features of our proposal are: (i) ρdf⊥¬ remains syntactically a triple language by extending ρdf with new symbols with specific semantics and there is no need to revert to the reification method to represent negative triples; (ii) the logic is defined in such a way that any RDFS reasoner/store may handle the new predicates as ordinary terms if it does not want to take account of the extra capabilities; (iii) despite negated statements, every ρdf⊥¬ knowledge base is satisfiable; (iv) the ρdf⊥¬ entailment decision procedure is obtained from ρdf via additional inference rules favouring a potential implementation; and (v) deciding entailment in ρdf⊥¬ ranges from P to NP.

Hyperintensional Partial Meet Contractions

ABSTRACT. Formal frameworks for Epistemology need to have enough logical structure to enable interesting conclusions regarding epistemic phenomena and be expressive enough to model competing positions in the philosophical and logical literature. While beliefs are commonly accepted as hyperintensional attitudes, i.e., epistemic attitudes which may differ even towards necessarily equivalent sentences, most work on standard epistemic logic has relied on idealised and intensional agents. This is particularly true in the area of AGM-inspired Belief Change. Although a few recent studies investigate hyperintensional models of belief change, few have been well connected to the AGM framework, the main paradigm in the area. This work investigates hyperintensional notions of belief base contraction and belief set contraction, as studied in the AGM framework, and their connections to partial meet contractions. We also provide suitable representation theorems, characterising the constructions by means of rationality postulates.

On the Representation of Darwiche and Pearl’s Epistemic States for Iterated Belief Revision
PRESENTER: Nicolas Schwind

ABSTRACT. The seminal characterization of iterated belief revision was proposed by Darwiche and Pearl, and it uses an abstract notion of epistemic states. In this work we look for a canonical representation of these epistemic states. Total preorders are not expressive enough to serve as such a canonical representation. Indeed, we show that some operators cannot even be represented on a countable epistemic space. Nonetheless, under a very reasonable assumption on the epistemic space, we show that OCFs (Ordinal Conditional Functions) can be considered a canonical representation.

15:30-16:00 Coffee Break
16:00-17:00 Session 70: Plenary
Complexity Measures for Reactive Systems

ABSTRACT. A reactive system maintains an on-going interaction with its environment. At each moment in time, the system receives from the environment an assignment to the input signals and responds with an assignment to the output signals. The system is correct with respect to a given specification if, for all environments, the infinite computation that is generated by the interaction between the system and the environment satisfies the specification. Reactive systems are modeled by transducers: finite state machines whose transitions are labeled by assignments to the input signals and whose states are labeled by assignments to the output signals. In the synthesis problem, we automatically transform a given specification into a correct transducer.

While complexity measures receive much attention in the design of algorithms, they are less explored in the context of synthesis. This is a serious drawback: just as we cannot imagine an algorithm designer who accepts a correct algorithm without checking its complexity, we cannot expect designers of reactive systems to accept synthesized systems that are only guaranteed to be correct. It is not clear, however, how to define the complexity of a transducer. Unlike the case of algorithms (or Turing machines in general), there is no "size of input" to relate to, and measures like time and space complexity are irrelevant. Indeed, we care about on-going interactions, along which the transducer reacts instantly according to its transition function. One immediate complexity measure for a transducer is its size, but many more complexity measures are of interest. The talk discusses such measures and describes how the search for optimal reactive systems affects the synthesis problem.

17:00-17:30 Piano concert

  • Mozart Sonata n°3 K. 281 B-flat major
  • Liszt Rhapsody n°7
  • Gershwin Three Preludes

Bio: François Schwarzentruber is an associate professor at École normale supérieure de Rennes. He took piano lessons from Suzanne Muller-Gunst. He performed Beethoven's piano concertos nos. 3 and 4 with the orchestra of the Institut national polytechnique (INP) of Toulouse, and Gershwin's Rhapsody in Blue with the university orchestra of Rennes. He has given several concerts, in the public library Champs Libres in Rennes, in retirement homes, and also at festivals. He won the "Special Mozart" and "Amateurs virtuoses" awards at the International Île-de-France Piano Competition (amateurs). He also composes, mostly short pieces for piano.