SUM 2024: THE 16TH INTERNATIONAL CONFERENCE ON SCALABLE UNCERTAINTY MANAGEMENT
PROGRAM FOR THURSDAY, NOVEMBER 28TH

09:00-09:50 Session 6: Second Invited Speaker: Lluis Godo
09:00
Looking at conditionals within the possibilistic framework

ABSTRACT. Conditionals play a key role in different areas of logic, probabilistic and non-monotonic reasoning, and they have been studied and formalised from different angles. In this talk we will focus on recent developments on various foundational aspects of conditionals related to the possibilistic model of uncertainty. We will first show that a suitable notion of conditional possibility, for which a triviality result similar to Lewis's in the probabilistic setting can be proved, is fully compatible with an algebraic setting of Boolean algebras of conditionals in the sense that an analogue of Stalnaker's Thesis holds. On the other hand, we will argue that the approach to conditionals as random quantities by Gilio, Sanfilippo and colleagues, based on de Finetti's notion of a conditional as a three-valued object and shown to be compatible with the above Boolean algebraic setting, admits a faithful possibilistic counterpart. In that approach, conditionals are instead interpreted as possibilistic variables, and their possibilistic expectation provides a means of extending a possibility on plain events to arbitrary (compound) conditionals. Finally, if time permits, we will also discuss recent results regarding Lewis-Stalnaker conditionals, formalized in Lewis's logics C1 and C2, their algebraic characterization, and a possibilistic imaging update rule that avoids the triviality result, so that the (plain) possibility of these conditionals is nothing but their imaged possibility.

This presentation is based on joint works with a number of colleagues.
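As background for readers less familiar with the possibilistic setting, the following recalls a possibility measure and one standard (min-based, Hisdal-style) notion of conditional possibility; the talk may rely on a different conditioning rule, so this is only an illustrative sketch and not necessarily the notion used by the speaker.

\[
\Pi(A \cup B) = \max\{\Pi(A), \Pi(B)\}, \qquad \Pi(\top) = 1, \qquad \Pi(\bot) = 0,
\]
\[
\Pi(B \mid A) =
\begin{cases}
1 & \text{if } \Pi(A \cap B) = \Pi(A),\\
\Pi(A \cap B) & \text{otherwise,}
\end{cases}
\qquad \text{provided } \Pi(A) > 0.
\]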

09:50-10:30 Coffee Break

Quid Restaurant

10:30-12:30 Session 7: 3. Methods in Probability, Statistics, and Conditional Reasoning
10:30
Entropy and extropy for partial probability assessments on arbitrary families of events
PRESENTER: Lydia Castronovo

ABSTRACT. In subjective probability theory, a coherent probability assertion represents an honest expression of its promoter's uncertain knowledge about the value of an unknown quantity. The theory of proper scoring rules was central to de Finetti’s ideas about assessing the relative values of different subjective probability assessments. In this paper, we consider an asymmetric proper scoring rule for the probability of an event, which belongs to the 2-parameter Beta family of scoring rules. We then consider the associated loss function of a probability assessment on an arbitrary family of $n$ events. We observe that, in the particular case of a probability mass distribution of a random quantity, the expected loss function associated with the asymmetric score coincides with the Shannon entropy. Likewise, we show similar properties between the notion of extropy and the expected loss function associated with the complement of the asymmetric score. Then, we suitably extend the notion of entropy and extropy from partitions of events to arbitrary families of events. We also introduce Bregman divergences associated with these measures of information. Finally, we introduce a symmetric proper scoring rule for an event, showing that the associated expected loss function for an arbitrary family of events coincides with the sum of entropy and extropy.
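As a small illustration of the two information measures the abstract starts from, the following sketch (with illustrative function names of our own) computes the Shannon entropy and the extropy of a finite probability mass function; the paper's extension to partial assessments on arbitrary families of events and the Beta-family scoring rules are not reproduced here.

```python
# Illustrative sketch only: Shannon entropy and extropy of a finite probability
# mass function, the two information measures the abstract extends to partial
# assessments on arbitrary families of events.  Function names are ours.
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i (with 0 log 0 := 0)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def extropy(p):
    """Extropy J(p) = -sum (1 - p_i) log(1 - p_i)."""
    return -sum((1 - pi) * math.log(1 - pi) for pi in p if pi < 1)

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]
    print(f"H = {entropy(p):.4f}, J = {extropy(p):.4f}")
    # For a binary partition, entropy and extropy coincide: H(p) = J(p).
    q = [0.7, 0.3]
    print(f"binary: H = {entropy(q):.4f}, J = {extropy(q):.4f}")
```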

10:50
The reverse hypergeometric distribution and entrainment statistics
PRESENTER: Andrea Simonetti

ABSTRACT. In the framework of urn models, we introduce a new probability distribution to measure the attribute concentration among group members. The related urn problem is a particular occupancy problem \cite{C1} where, rather than focusing on which or how many urns are filled \cite{C2}, we are interested in the configuration of how specific items are allocated to the urns. We show through an illustrative example that the new probability distribution, named the Reverse Hypergeometric distribution, properly handles this configuration problem. We then compare it with other related exchangeable probability distributions, such as the Multivariate Hypergeometric and the Multinomial distributions \cite{C3}, and we also compare the asymptotic behaviour of the Multinomial model and our model. We further introduce a test statistic that allows us to test for an excess of intra-group similarity against the null hypothesis that similarity among attributes of group members occurs randomly. Detecting this excess of attribute concentration involves examining the right tail of the test-statistic distribution, which can be obtained by selecting the corresponding configurations in the Reverse Hypergeometric distribution. We also examine the left tail of the test-statistic distribution, testing for weak attribute concentration among group members. Through a simulation study, we show that the proposed distribution outperforms the Multinomial distribution in terms of statistical power. Additionally, we present a real-world application in social science, investigating the excess of similarity among children in Italian households based on sex (male or female). We test this attribute concentration in families with two and three children, all of whom are young adults aged 21 to 30. Our results show that the Reverse Hypergeometric distribution is more suitable and flexible than the Multinomial distribution for modelling the probabilities of attribute configurations. Finally, as a generalization, we present an extension of the probability model allowing for groups of different sizes.
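To give a concrete feel for the null hypothesis mentioned in the abstract (attributes assigned at random among group members), here is a minimal Monte Carlo sketch of our own; it does not implement the Reverse Hypergeometric distribution or the paper's test statistic.

```python
# Sketch of the null hypothesis only (attributes assigned at random), not of the
# Reverse Hypergeometric distribution introduced in the paper: we simulate
# two-child families with i.i.d. sexes and estimate how often both children
# share the same sex, the kind of concentration event the test examines.
import random

def simulate_same_sex_fraction(n_families=100_000, children=2, p_male=0.5, seed=0):
    rng = random.Random(seed)
    same = 0
    for _ in range(n_families):
        sexes = [rng.random() < p_male for _ in range(children)]
        if all(sexes) or not any(sexes):
            same += 1
    return same / n_families

if __name__ == "__main__":
    # Under the null with p = 0.5 and two children, the expected fraction is 0.5.
    print(f"estimated P(all same sex) = {simulate_same_sex_fraction():.3f}")
```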

11:10
Scaling Up Reasoning From Conditional Belief Bases

ABSTRACT. This paper introduces the online reasoning platform InfOCF-Web 2.0 that provides easy access to implementations of various inference methods for conditional belief bases. We present an overview of the realization of the inductive inference operators p-entailment, system Z, c-inference, and system W. To address the fact that the number of possible worlds to be taken into account grows exponentially with the propositional signature over which the conditionals in the belief base are defined, the implementations employ SAT and Partial MaxSAT concepts and use the power of current SAT and SMT solvers. Our evaluation shows that each of the four inference operators can handle belief bases over signatures containing more than 100 variables and with more than 100 conditionals. Thus, InfOCF-Web 2.0 scales up nonmonotonic reasoning from conditionals to a new dimension: apart from the implementations now available in InfOCF-Web 2.0, there is no other implementation of an inference operator for conditional belief bases for which such problem sizes are feasible.
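For orientation, the following sketch implements the classical system Z partition by brute-force enumeration of possible worlds, i.e. exactly the exponential computation that the SAT/MaxSAT-based implementations in InfOCF-Web 2.0 are designed to avoid; the encoding of conditionals as pairs of Python predicates is ours and purely illustrative.

```python
# Naive, exponential-time sketch of the system Z partition over explicitly
# enumerated worlds.  The encoding of conditionals as pairs of predicates is
# ours; InfOCF-Web 2.0 avoids this enumeration via SAT/MaxSAT techniques.
from itertools import product

def worlds(variables):
    for values in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, values))

def falsifies(w, cond):
    antecedent, consequent = cond
    return antecedent(w) and not consequent(w)

def verifies(w, cond):
    antecedent, consequent = cond
    return antecedent(w) and consequent(w)

def tolerated(cond, delta, variables):
    """(B|A) is tolerated by delta iff some world verifies it and falsifies nothing in delta."""
    return any(verifies(w, cond) and not any(falsifies(w, c) for c in delta)
               for w in worlds(variables))

def z_partition(delta, variables):
    partition, rest = [], list(delta)
    while rest:
        layer = [c for c in rest if tolerated(c, rest, variables)]
        if not layer:
            raise ValueError("belief base is inconsistent")
        partition.append(layer)
        rest = [c for c in rest if c not in layer]
    return partition

if __name__ == "__main__":
    # Classic penguin example: birds fly, penguins are birds, penguins do not fly.
    V = ["b", "p", "f"]
    delta = [
        (lambda w: w["b"], lambda w: w["f"]),         # (f | b)
        (lambda w: w["p"], lambda w: w["b"]),         # (b | p)
        (lambda w: w["p"], lambda w: not w["f"]),     # (not f | p)
    ]
    for i, layer in enumerate(z_partition(delta, V)):
        print(f"Z-rank {i}: {len(layer)} conditional(s)")
```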

11:30
Contribution of Subsets of Variables in Global Sensitivity Analysis with Dependent Variables

ABSTRACT. Global Sensitivity Analysis aims at explaining how much each random variable contributes to the variance of the output of a black-box model. The standard approach, namely Sobol indices, computes the contribution of each subset of variables but requires that the variables are independent. The Shapley effect (based on the Shapley value) has been defined for dependent variables, but gives the contribution of each variable individually instead of the contribution of subsets of variables. The aim of this work is to propose a novel approach for dependent variables that defines the level of contribution of each subset of variables so that these contributions sum up to the total variance of the output of the model. We show that we recover known concepts, namely the Banzhaf values and interaction indices, up to a multiplicative factor.
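To make the game-theoretic quantities mentioned above concrete, here is a small exponential-time sketch computing Shapley and Banzhaf values from an explicitly given set function; in global sensitivity analysis that function would capture the variance explained by a subset of variables, but the toy value function below is ours and no variance estimation is performed.

```python
# Exponential-time sketch computing Shapley and Banzhaf values from an explicit
# set function v: 2^N -> R.  In global sensitivity analysis v(S) would be the
# part of the output variance explained by the variable subset S; here v is a
# toy example of ours, and no variance estimation is performed.
from itertools import combinations
from math import factorial

def subsets(items):
    for r in range(len(items) + 1):
        yield from combinations(items, r)

def shapley(players, v):
    n = len(players)
    phi = {}
    for i in players:
        others = [j for j in players if j != i]
        total = 0.0
        for S in subsets(others):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

def banzhaf(players, v):
    n = len(players)
    return {i: sum(v(frozenset(S) | {i}) - v(frozenset(S))
                   for S in subsets([j for j in players if j != i])) / 2 ** (n - 1)
            for i in players}

if __name__ == "__main__":
    players = ["x1", "x2", "x3"]
    # Toy value function (not a real variance decomposition).
    base = {frozenset(): 0.0, frozenset({"x1"}): 0.2, frozenset({"x2"}): 0.3,
            frozenset({"x3"}): 0.1, frozenset({"x1", "x2"}): 0.7,
            frozenset({"x1", "x3"}): 0.4, frozenset({"x2", "x3"}): 0.5,
            frozenset({"x1", "x2", "x3"}): 1.0}
    v = lambda S: base[frozenset(S)]
    print("Shapley:", shapley(players, v))   # sums to v(N) - v(empty) = 1.0
    print("Banzhaf:", banzhaf(players, v))
```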

11:50
MIP Outer Belief Approximations of Lower Conditional Joint CDFs in Statistical Matching Problems
PRESENTER: Andrea Capotorti

ABSTRACT. We propose a mixed integer programming (MIP) procedure to find an outer belief approximation of a lower conditional joint cumulative distribution function (lower conditional joint CDF) obtained by the statistical matching of several sources of information, given a common variable. We assume that the variables have finite supports and we provide a procedure based on the MIP technique that produces a sparse solution with at most a given finite number of focal elements, allowing us to obtain an outer approximation by a conditional belief function. In turn, the family of sparse solutions given the common variable allows us to efficiently perform coherent inferences on new items, relying on the generalized Bayesian conditioning rule. We finally show the effectiveness of the proposed approach in the domain of company fraud detection.

12:10
Estimating Causal Effects in Partially Directed Parametric Causal Factor Graphs
PRESENTER: Malte Luttermann

ABSTRACT. Lifting uses a representative of indistinguishable individuals to exploit symmetries in probabilistic relational models, denoted as parametric factor graphs, to speed up inference while maintaining exact answers. In this paper, we show how lifting can be applied to causal inference in partially directed graphs, i.e., graphs that contain both directed and undirected edges to represent causal relationships between random variables. We present partially directed parametric causal factor graphs (PPCFGs) as a generalisation of previously introduced parametric causal factor graphs, which require a fully directed graph. We further show how causal inference can be performed on a lifted level in PPCFGs, thereby extending the applicability of lifted causal inference to a broader range of models requiring less prior knowledge about causal relationships.

12:30-13:10 Session 8: Third Tutorial: Cassio de Campos
12:30
Probabilistic Circuits: An Overview

ABSTRACT. This tutorial presents a view on the tractability and practical usability of probabilistic circuits. They are a class of probabilistic generative models that represent computations explicitly and can be seen as a bridge between interpretable Bayesian networks and high-performing neural networks. We discuss their relations to other models, including Markov networks, random forests, mixture models, and neural networks. We look at their capabilities for large-scale uncertainty treatment, neuro-symbolic ideas, fairness, and explainability. The talk also illustrates applications using cases in image analysis, multi-typed tabular benchmarks, fairness measures, and data imputation.
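As a minimal illustration of why probabilistic circuits support tractable inference, the following sketch builds a tiny sum-product circuit over two Bernoulli leaves by hand and evaluates joint and marginal queries in a single bottom-up pass; it is an example of ours, not a circuit from the tutorial.

```python
# Minimal hand-built probabilistic circuit (a small mixture of product nodes
# over Bernoulli leaves), evaluated bottom-up.  It only illustrates why such
# circuits support tractable marginals; it is not taken from the tutorial.

def leaf(var, prob_true):
    """Bernoulli leaf; returns P(evidence) and marginalises when var is unobserved."""
    def f(evidence):
        if var not in evidence:
            return 1.0                      # sum out this variable
        return prob_true if evidence[var] else 1.0 - prob_true
    return f

def product(children):
    return lambda e: _prod(c(e) for c in children)

def mixture(weights, children):
    return lambda e: sum(w * c(e) for w, c in zip(weights, children))

def _prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

if __name__ == "__main__":
    # P(A, B) = 0.6 * P1(A) P1(B) + 0.4 * P2(A) P2(B)  (two mixture components)
    circuit = mixture([0.6, 0.4],
                      [product([leaf("A", 0.9), leaf("B", 0.2)]),
                       product([leaf("A", 0.3), leaf("B", 0.8)])])
    print("P(A=1, B=1) =", circuit({"A": True, "B": True}))
    print("P(A=1)      =", circuit({"A": True}))     # marginal in one pass
    print("normalisation:", circuit({}))              # should be 1.0
```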

13:10-14:40 Lunch Break

A'nica Restaurant

14:40-16:20 Session 9: 4. Argumentation and Inconsistency Handling
14:40
Stability of Extensions in Incomplete Argumentation Frameworks
PRESENTER: Anshu Xiong

ABSTRACT. Existing works on stability of incomplete argumentation frameworks (IAFs) discuss the status of an argument or a set of arguments under a given semantics during the evolution of an IAF towards complete AFs. We argue that the stability of an IAF itself is worth studying, i.e., checking whether all extensions under a semantics are the same in every completion of the IAF. When an IAF becomes stable in this sense, there is no need to investigate the uncertain arguments or attacks within the IAF, as in the end all its complete AFs will share the same extensions. We further present a relaxed notion called weak stability, under which the same extensions are required only on the arguments shared by every two completions rather than on all arguments in the IAF. In this paper we study six classical semantics of AFs and give precise complexity results for checking the (weak) stability of an IAF, which turns out to be more complex than checking a set of arguments but less complex than checking a single argument. We also give a SAT encoding for the stability problems with coNP-complete complexity.
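The following brute-force sketch conveys the flavour of the stability question for a very small IAF with uncertain arguments only: it enumerates all completions and checks whether the grounded extension is identical in each of them. It covers a single semantics, ignores uncertain attacks, and does not use the paper's SAT encodings.

```python
# Brute-force sketch: enumerate all completions of a small incomplete AF
# (uncertain arguments only, for brevity) and test whether the grounded
# extension is the same in every completion.  This is only an illustration
# of the stability question, done naively for one semantics.
from itertools import product

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function."""
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((c, b) in attacks and c in ext for c in args)
                           for b in args if (b, a) in attacks)}
        if defended == ext:
            return ext
        ext = defended

def completions(certain_args, uncertain_args, attacks):
    for keep in product([False, True], repeat=len(uncertain_args)):
        args = set(certain_args) | {a for a, k in zip(uncertain_args, keep) if k}
        yield args, {(x, y) for (x, y) in attacks if x in args and y in args}

def grounded_stable(certain_args, uncertain_args, attacks):
    exts = {frozenset(grounded_extension(a, att))
            for a, att in completions(certain_args, uncertain_args, attacks)}
    return len(exts) == 1

if __name__ == "__main__":
    # Certain: c attacks a.  Uncertain argument d is itself attacked by c, so
    # the grounded extension is {c} in every completion -> stable.
    print(grounded_stable({"a", "c"}, ["d"], {("c", "a"), ("c", "d")}))   # True
    # Uncertain argument e attacks c, so the extension depends on whether e
    # is present ({c} vs. {e, a}) -> not stable.
    print(grounded_stable({"a", "c"}, ["e"], {("c", "a"), ("e", "c")}))   # False
```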

15:00
Towards a Dialogue Game-based Semantics for Extended Abstract Argumentation Frameworks based on Indecision-Blocking
PRESENTER: Yamil Soto

ABSTRACT. Dialogue game-based semantics for abstract argumentation are relevant for several reasons. From a theoretical point of view, they provide a perspective different from extension-based or labeling-based approaches for studying the theoretical properties of the argument evaluation process. From a more practical perspective, they allow us to examine whether or not an argument belongs to a given extension (or labeling) without computing an entire (set of) extension(s) or labeling(s), and they guide the development of efficient algorithms. This last point is significant in the context of the development of argumentation-based knowledge representation and reasoning tools for real-world applications. In this paper, we expand the dialogue game-based semantics available for extended abstract argumentation frameworks, a generalization of abstract argumentation frameworks where two kinds of defeat are considered, proper and blocking, and the sub-argument relation is taken into account. The novel dialogue game-based semantics we propose is inspired by a specific interpretation of cycles that treats them as an indecision, a situation in which we do not have enough information to decide the status of the arguments involved.

15:20
A Reinforcement Learning Approach for Resolving Inconsistencies in Qualitative Constraint Networks
PRESENTER: Michael Sioutis

ABSTRACT. In this paper, we present a reinforcement learning approach for resolving inconsistencies in qualitative constraint networks (QCNs). QCNs are typically used in constraint programming to represent and reason about intuitive spatial or temporal relations like x {is inside of ∨ overlaps} y. Naturally, QCNs are not immune to uncertainty, noise, or imperfect data that may be present in information, and thus, more often than not, they are hampered by inconsistencies. We propose a multi-armed bandit approach that defines a well-suited ordering of constraints for finding a maximal satisfiable subset of them. Specifically, our learning approach interacts with a solver, and after each trial a reward is returned to measure the performance of the selected action (constraint addition). The reward function is based on the reduction of the solution space of a consistent reconstruction of the input QCN. Early experimental results obtained by our algorithm suggest that we can do better than the state of the art in terms of both effectiveness, viz., a lower number of repairs obtained for an inconsistent QCN, and efficiency, viz., faster runtime.
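To illustrate the bandit mechanics behind the approach, here is a generic UCB1 loop where each arm would correspond to a candidate constraint addition; the solver interaction and the paper's reward (the reduction of the solution space of a consistent reconstruction) are replaced by a simulated reward, so this is only a sketch of the learning loop, not of the QCN repair procedure.

```python
# Generic UCB1 sketch of a multi-armed bandit loop, where each "arm" would
# correspond to selecting a constraint to add.  The reward below is simulated;
# the paper's reward comes from a solver and measures the reduction of the
# solution space of a consistent reconstruction of the input QCN.
import math
import random

def ucb1(n_arms, reward_fn, horizon, c=2.0, seed=0):
    rng = random.Random(seed)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:                      # play every arm once first
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(c * math.log(t) / counts[a]))
        r = reward_fn(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
    return counts, means

if __name__ == "__main__":
    # Simulated arms: arm i pays a noisy reward around its true mean.
    true_means = [0.1, 0.4, 0.7]
    reward = lambda arm, rng: max(0.0, min(1.0, rng.gauss(true_means[arm], 0.1)))
    counts, means = ucb1(len(true_means), reward, horizon=2000)
    print("pull counts:", counts)            # the best arm should dominate
    print("estimated means:", [round(m, 2) for m in means])
```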

15:40
Inconsistency Measurement in LTLf Based on Minimal Inconsistent Sets and Minimal Correction Sets

ABSTRACT. We investigate the problem of measuring inconsistency in linear temporal logic on finite traces (LTLf). In particular, we present Answer Set Programming-based approaches to compute a selection of traditional inconsistency measures w.r.t. LTLf knowledge bases. In contrast to existing works (mostly on propositional logic), these approaches are novel in the sense that they allow us to assess logical inconsistency in the presence of temporal operators, as offered by LTLf. In an experimental evaluation on real-world data from the area of business process management, we show that our approaches are practically feasible.
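As a pointer to what such measures count, the following naive sketch enumerates the minimal inconsistent sets of a tiny propositional knowledge base and reports their number (the MI inconsistency measure); the paper computes such measures for LTLf knowledge bases via Answer Set Programming, which this brute-force illustration does not reproduce.

```python
# Brute-force sketch of the minimal-inconsistent-set (MIS) count measure for a
# tiny *propositional* knowledge base.  The paper computes such measures for
# LTLf knowledge bases via Answer Set Programming; this naive enumeration only
# illustrates what the measure counts.
from itertools import combinations, product

def satisfiable(formulas, variables):
    """Check satisfiability of a set of formulas by enumerating assignments."""
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(f(world) for f in formulas):
            return True
    return False

def minimal_inconsistent_sets(kb, variables):
    mis = []
    for r in range(1, len(kb) + 1):
        for subset in combinations(range(len(kb)), r):
            if any(set(m).issubset(subset) for m in mis):
                continue                     # a proper subset is already inconsistent
            if not satisfiable([kb[i] for i in subset], variables):
                mis.append(subset)
    return mis

if __name__ == "__main__":
    variables = ["p", "q"]
    kb = [
        lambda w: w["p"],                    # p
        lambda w: not w["p"],                # not p
        lambda w: w["p"] or w["q"],          # p or q
        lambda w: not w["q"],                # not q
    ]
    mis = minimal_inconsistent_sets(kb, variables)
    print("minimal inconsistent sets (by formula index):", mis)
    print("I_MI(kb) =", len(mis))
```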

16:00
Compact Solution Representation in Qualitative Constraint-based Reasoning
PRESENTER: Michael Sioutis

ABSTRACT. In the framework of Qualitative Spatio-Temporal Reasoning (QSTR), we can consider constraints like x {is above ∨ is under} y, and combinations thereof, to represent and reason about spatial or temporal information in an intuitive, human-like way. QSTR becomes particularly important in view of possible lack, uncertainty, and/or imperfection of metric data, as treating such quantitative information qualitatively would provide more leeway to perform sound reasoning. Adding to the usefulness of QSTR, in this paper we introduce the notion of multi-scenario for representing solutions of networks of qualitative spatio-temporal constraints in a compact manner, as a means of assessing and enhancing the explainability and robustness of AI systems that involve spatio-temporal information. Further, we prove certain theoretical properties pertaining to this novel notion, and we introduce some robustness measures relating to our notion of multi-scenario.

16:20-16:50 Coffee Break

Quid Restaurant

16:50-18:30 Session 10: 5. Decision-Making under Uncertainty
16:50
Flexible risk aware sequential decision making

ABSTRACT. In this work, we study risk-aware sequential decision making in a Markov Decision Process (MDP). Unlike many works in the literature, where MDPs are solved by optimizing expected rewards (ER), and thus assuming neutrality w.r.t. risk, we use a more sophisticated operator: the Weighted Ordered Weighted Average (WOWA), a parameterized operator that allows us to model a wide range of behaviors, from extreme risk seeking to extreme risk aversion (as well as compromises between the two). This operator thus has a high descriptive capacity, but it is rather difficult to optimize in an MDP because its non-linearity makes standard solving algorithms sub-optimal. In this paper, we introduce and justify a ranking algorithm that determines an optimal (or nearly optimal) policy for a wide range of attitudes w.r.t. risk (averse, seeking, neutral, intermediate) using WOWA. Empirical results are given to illustrate the relevance and the efficiency of the approach.
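For readers unfamiliar with the aggregation operator, here is a compact sketch of WOWA following Torra's construction with a piecewise-linear interpolation of the OWA weights; it only illustrates the aggregation step on a toy example of ours, not the paper's ranking algorithm for MDPs.

```python
# Sketch of the WOWA (Weighted OWA) operator with a piecewise-linear
# interpolation of the OWA weights, following Torra's construction.  It only
# illustrates the aggregation step; the paper's MDP ranking algorithm, which
# optimises WOWA over policies, is not reproduced here.

def _interpolate(points, x):
    """Piecewise-linear interpolation through sorted (x, y) points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

def wowa(values, importance, owa_weights):
    """WOWA(values) with importance weights p and OWA weights w (both sum to 1)."""
    n = len(values)
    # The quantifier w* interpolates (i/n, w_1 + ... + w_i), anchored at (0, 0).
    cum, acc = [(0.0, 0.0)], 0.0
    for i, w in enumerate(owa_weights, start=1):
        acc += w
        cum.append((i / n, acc))
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    total, prev_mass = 0.0, 0.0
    for i in order:
        mass = prev_mass + importance[i]
        omega = _interpolate(cum, mass) - _interpolate(cum, prev_mass)
        total += omega * values[i]
        prev_mass = mass
    return total

if __name__ == "__main__":
    values = [0.2, 0.9, 0.5]
    p = [1 / 3, 1 / 3, 1 / 3]          # equal importance of the outcomes
    w = [0.6, 0.3, 0.1]                # most weight on the best outcomes (risk-seeking);
                                       # reverse w to model risk aversion
    # With equal importance, WOWA reduces to plain OWA (here 0.71).
    print(round(wowa(values, p, w), 4))
```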

17:10
Imprecise dynamic Value-at-Risk induced by a DS-bivariate random walk

ABSTRACT. Referring to Dempster-Shafer theory, we introduce a bivariate random walk enforcing Markovianity and time-homogeneity under a pessimistic view concerning ambiguity. This is done through a suitable family of joint $t$-step transition belief functions, generalizing the product of two independent binomial transitions, where ambiguity is expressed by the extra weight assigned to unity. Given a real-valued function of the pair at a fixed horizon, we define the dynamic lower and upper Value-at-Risk (VaR), generated by the corresponding dynamic p-box.
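As a minimal illustration of the last step, the sketch below reads a lower and an upper Value-at-Risk off a discrete p-box (a pair of CDFs bounding the unknown loss distribution); the p-box is a toy example of ours, and the Dempster-Shafer bivariate random walk that generates it in the paper is not reproduced.

```python
# Sketch only: lower and upper Value-at-Risk read off a discrete p-box, i.e. a
# pair of CDFs (F_lower <= F_upper) bounding the unknown distribution of a loss.
# The Dempster-Shafer bivariate random walk that generates the p-box in the
# paper is not reproduced; the p-box below is a toy example of ours.

def var_from_cdf(support, cdf, alpha):
    """Smallest point of the (sorted) support whose CDF value reaches alpha."""
    for x, F in zip(support, cdf):
        if F >= alpha:
            return x
    return support[-1]

if __name__ == "__main__":
    support = [0, 1, 2, 3, 4]                     # possible losses
    F_upper = [0.3, 0.6, 0.8, 0.95, 1.0]          # upper bounding CDF
    F_lower = [0.1, 0.3, 0.5, 0.8, 1.0]           # lower bounding CDF
    alpha = 0.9
    # The upper CDF yields the lower VaR and the lower CDF the upper VaR.
    print("lower VaR:", var_from_cdf(support, F_upper, alpha))   # 3
    print("upper VaR:", var_from_cdf(support, F_lower, alpha))   # 4
```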

17:30
Elicitation for Decision Problems under severe uncertainties

ABSTRACT. In this paper, we investigate the problem of eliciting new information, where the assumed uncertainty model is that of coherent upper previsions (or, equivalently, convex sets of probabilities), and where the elicitation goal is to solve, as quickly as possible, a decision problem in which maximality is used as the decision rule. To address this question, we study the potential range of upper bounds an expert may give on a given query taking the form of an uncertain reward, providing new results and a characterisation of this range. We then use this range to devise an elicitation algorithm. We illustrate our findings and proposal with an example.
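To recall the decision rule at play, the following sketch applies maximality to a credal set given by finitely many extreme points: an act is maximal iff no other act has strictly higher expected utility under every probability in the set. The acts and credal set are a toy example of ours; the paper's upper previsions and elicitation procedure are not reproduced.

```python
# Sketch of the maximality decision rule over a credal set represented by
# finitely many extreme points: an act is maximal iff no other act has
# strictly higher expected utility under *every* probability in the set.
# Toy example of ours; not the paper's elicitation procedure.

def expectation(p, utilities):
    return sum(pi * u for pi, u in zip(p, utilities))

def maximal_acts(acts, credal_set):
    """acts: dict name -> utility vector over states; credal_set: list of pmfs."""
    def dominated(a):
        return any(all(expectation(p, acts[b]) > expectation(p, acts[a])
                       for p in credal_set)
                   for b in acts if b != a)
    return [a for a in acts if not dominated(a)]

if __name__ == "__main__":
    acts = {"a1": [10, 0], "a2": [0, 10], "a3": [4, 4], "a4": [1, 1]}
    credal_set = [[0.4, 0.6], [0.6, 0.4]]    # two extreme points over two states
    print(maximal_acts(acts, credal_set))    # a4 is dominated, the rest are maximal
```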

17:50
Judicial Support Tool: Finding the k Most Likely Judicial Worlds

ABSTRACT. Judges sometimes make mistakes. We propose JUST, a logical framework within which judges can record propositions about a case and witness statements in which a witness asserts that certain propositions are true or false. JUST allows the judge/jury to assign a rating of credibility to witness statements. A world is an assignment of true/false to each proposition, which is required to satisfy case-specific integrity constraints. JUST's “explicit” algorithm calculates the k most likely worlds without using independence assumptions between propositions. The judge may use these calculated top-k most likely worlds to make her final decision. For this computation, JUST uses a suite of “combination” functions. We also develop JUST's “implicit” algorithm, which is far more efficient. We test JUST using different combination functions on 5 real-world court cases and 19 TV court cases, showing that JUST works well in practice.
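In the spirit of the “explicit” approach described above, the following brute-force sketch enumerates the worlds satisfying the integrity constraints and ranks them by a simple credibility-weighted score; the combination function used here (credibility if a statement is satisfied, one minus credibility otherwise) is a placeholder of ours, not one of JUST's combination functions.

```python
# Brute-force sketch: enumerate every truth assignment that satisfies the
# integrity constraints and rank worlds by a simple credibility-weighted score.
# The combination function below is a placeholder of ours, not one of JUST's
# suite, and the "implicit" algorithm is not reproduced.
from itertools import product

def top_k_worlds(propositions, constraints, statements, k):
    scored = []
    for values in product([False, True], repeat=len(propositions)):
        world = dict(zip(propositions, values))
        if not all(c(world) for c in constraints):
            continue
        score = 1.0
        for claim, credibility in statements:
            score *= credibility if claim(world) else 1.0 - credibility
        scored.append((score, world))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]

if __name__ == "__main__":
    props = ["at_scene", "owns_weapon", "guilty"]
    constraints = [lambda w: not w["guilty"] or w["at_scene"]]   # guilty -> at_scene
    statements = [
        (lambda w: w["at_scene"], 0.8),          # witness 1: defendant was at the scene
        (lambda w: not w["owns_weapon"], 0.6),   # witness 2: defendant owns no weapon
    ]
    for score, world in top_k_worlds(props, constraints, statements, k=3):
        print(round(score, 3), world)
```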

18:10
Social ranking under incomplete knowledge: elicitation of the lex-cel necessary winners
PRESENTER: Ariane Ravier

ABSTRACT. The problem of social ranking consists in determining a ranking over the elements of a population, based on a ranking over that population's coalitions, with the objective of ranking elements according to their overall influence over coalitions. The lex-cel method, for instance, ranks elements based on the lexicographic comparison of their occurrences in the ranking over coalitions. As the number of coalitions grows exponentially with the size of the population, ranking them may prove cognitively costly for the user. It is therefore interesting to consider a setting of incomplete knowledge over that ranking. In this paper, we introduce two elicitation approaches for determining the lex-cel necessary winners (i.e., the elements that are ranked highest according to lex-cel) in a social ranking problem when the knowledge about preferences over coalitions is incomplete and the initially accessible input is a subset of an existing total preorder. The first approach is preorder-driven and elicits enough of the underlying total preorder to determine the lex-cel necessary winners. The second approach is element-driven and guides comparisons based on strategically located coalitions. Finally, we present experimental results and discuss the performance of each approach depending on various parameters and scenarios.
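For reference, the following sketch computes the lex-cel ranking when the total preorder over coalitions is fully known, given as equivalence classes ordered from best to worst; the paper's contribution, eliciting only enough of that preorder to identify the necessary winners, is not reproduced here.

```python
# Sketch of the lex-cel social ranking on a *fully known* total preorder over
# coalitions, given as equivalence classes ordered from best to worst.  The
# elicitation of necessary winners under incomplete knowledge, which is the
# paper's contribution, is not reproduced here.

def lex_cel_ranking(elements, ordered_classes):
    """Rank elements by lexicographic comparison of their per-class occurrence counts."""
    def occurrence_vector(i):
        return tuple(sum(1 for coalition in cls if i in coalition)
                     for cls in ordered_classes)
    return sorted(elements, key=occurrence_vector, reverse=True)

if __name__ == "__main__":
    elements = ["a", "b", "c"]
    # Total preorder over all coalitions of {a, b, c}, best equivalence class first.
    ordered_classes = [
        [{"a", "b"}],                         # best coalitions
        [{"a"}, {"c"}],
        [{"b"}, {"b", "c"}, {"a", "c"}],
        [{"a", "b", "c"}, set()],             # worst coalitions
    ]
    print(lex_cel_ranking(elements, ordered_classes))   # expected: ['a', 'b', 'c']
```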