SUM 2024: THE 16TH INTERNATIONAL CONFERENCE ON SCALABLE UNCERTAINTY MANAGEMENT
PROGRAM FOR FRIDAY, NOVEMBER 29TH

09:00-09:50 Session 11: Third Invited Speaker
09:00
From de Finetti’s coherence to new theories for reasoning under uncertainty
09:50-10:20 Coffee Break

Quid Restaurant

10:20-12:20 Session 12: 6. Preference Modelling and User-Centric Decision Support
10:20
User's Preference Modelling with Gödel Integral

ABSTRACT. The Gödel integral is a variant of the Sugeno integral. It is an expressive aggregation operator that computes a global evaluation by combining the local values taken on a set of criteria with a capacity, a set function that represents the importance of these criteria and their interactions. A crucial question is then the elicitation of this capacity so as to match the user's preferences. In this paper, we focus on approaches for eliciting such a capacity when user preferences are expressed as an ordering relation between classes of equivalent objects. We place ourselves in an XAI context where objects are described by means of two criteria, as, for instance, in the generation of explanations in the form of counterfactual examples evaluated by a pair of objective and subjective criteria. We first propose a theoretical characterisation of the set of admissible capacities: it allows one to determine lower and upper bounds on the possible capacities when this set is not empty, and it provides explanations when it is empty. We then introduce the algorithm GICEP (Gödel Integral for Capacity Elicitation from Preference relations) to compute these bounds and provide explanations. Experiments conducted on synthetic data, highlighting the relevance and efficiency of GICEP, are also reported.
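
As a point of reference for the aggregation step, here is a minimal Python sketch of the Sugeno integral, of which the Gödel integral is a variant according to the abstract; this is not the paper's GICEP algorithm, and the two-criterion capacity and scores are invented for illustration.

# Illustrative sketch only: Sugeno integral of criterion scores in [0, 1]
# with respect to a capacity mu defined on subsets of criteria.
def sugeno_integral(values, mu):
    """values: dict criterion -> score in [0, 1]; mu: dict frozenset -> weight in [0, 1]."""
    criteria = sorted(values, key=values.get)      # sort criteria by increasing score
    result = 0.0
    for i, c in enumerate(criteria):
        upper = frozenset(criteria[i:])            # criteria scoring at least values[c]
        result = max(result, min(values[c], mu[upper]))
    return result

# Hypothetical two-criterion example, e.g. an objective and a subjective criterion.
values = {"objective": 0.6, "subjective": 0.9}
mu = {
    frozenset(): 0.0,
    frozenset({"objective"}): 0.4,
    frozenset({"subjective"}): 0.7,
    frozenset({"objective", "subjective"}): 1.0,
}
print(sugeno_integral(values, mu))  # max(min(0.6, 1.0), min(0.9, 0.7)) = 0.7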

10:40
Exploring distances for preference-approvals

ABSTRACT. This extended abstract refers to the paper “A family of distances for preference-approvals” by Albano, A., García-Lapresta, J.L., Plaia, A., Sciandra, M., published in 2023 in Annals of Operations Research, https://doi.org/10.1007/s10479-022-05008-4. In social choice theory, preference rankings and approvals are two popular ways to collect the preferences of a group of agents on a set of alternatives. Preference rankings order the alternatives from best to worst without distinguishing between acceptable and unacceptable alternatives. In contrast, the approval voting system [1] consists of separating the set of acceptable alternatives from the set of unacceptable alternatives without considering preferences over either the acceptable or the unacceptable alternatives. In this paper, we focus on preference-approval structures. They combine preferences over the alternatives, through a weak order, and establish which alternatives are acceptable (Brams [2], Chapter 3; Brams and Sanver [3]; Sanver [4]). Within this framework we propose a new distance for preference-approvals, following the approach of the Kemeny distance. We show that using, as an aggregation function, the family of weighted power means (a class of weighted quasi-arithmetic means) brings the benefit of many interesting properties. The final aggregated distance is derived from the sum of the pairwise preference-approval discordances. We show that our distance satisfies the fundamental properties required of a metric and that, under certain assumptions, it has a precise geometric interpretation. Moreover, we highlight that our proposal can be regarded as a generalization of the distance measure of Erdamar et al. [5], with the two coinciding for a specific parameter setting. However, we show that the proposed distance family has some advantages over the existing one, as it is more versatile and performs better in cluster analysis of both simulated and real data. Specifically, using a simulation study and the adjusted Rand index, we show that our metric more accurately reveals the true clustered structure of the data. Additionally, employing a cluster-wise stability index, we demonstrate that it produces more stable clusters in real data examples. Future research will consider extensions to ternary preferences, where voters are allowed to split alternatives into three categories: acceptable, neutral and unacceptable.
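
As a purely illustrative aside, here is a minimal Python sketch of the weighted power mean family mentioned above as the aggregation function; the discordance values and weights are invented and are not taken from the paper.

# Illustrative sketch only: weighted power mean with weights summing to 1 and exponent p != 0.
def weighted_power_mean(xs, ws, p):
    """Weighted power mean of the values xs with non-negative weights ws."""
    assert abs(sum(ws) - 1.0) < 1e-9
    return sum(w * x ** p for x, w in zip(xs, ws)) ** (1.0 / p)

# Hypothetical pairwise preference-approval discordances and weights.
discordances = [0.0, 0.5, 1.0]
weights = [0.2, 0.3, 0.5]
print(weighted_power_mean(discordances, weights, p=1))  # arithmetic mean: 0.65
print(weighted_power_mean(discordances, weights, p=2))  # quadratic mean: about 0.758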

11:00
Integrating User Preferences into Gradual Bipolar Argumentation for Personalised Decision Support
PRESENTER: Antonio Rago

ABSTRACT. Gradual bipolar argumentation has been shown to be an effective means for supporting decisions across a number of domains. Individual user preferences can be integrated into the domain knowledge represented by such argumentation frameworks and should be taken into account in order to provide personalised decision support. This, however, requires a suitable method for handling user-provided preferences in gradual bipolar argumentation, which has not been considered in the previous literature. Towards filling this gap, we develop a conceptual analysis of the role of preferences in argumentation and investigate some basic principles concerning the effects they should have on the evaluation of strength in gradual argumentation semantics. We illustrate an application of our approach in the context of a review aggregation system, which has been enhanced with the ability to produce personalised outcomes based on user preferences.

11:20
Entropic Regularization Schemes for Learning Fuzzy Similarity Measures based on the d-Choquet Integral
PRESENTER: Davide Petturiti

ABSTRACT. We consider the problem of learning one of three possible fuzzy generalizations of the Jaccard similarity measure, based on the d-Choquet integral. Each of the resulting fuzzy similarity measures is parameterized by a capacity and by a real parameter. The capacity describes the weights assigned to groups of attributes and their interactions, while the real parameter is related to the restricted dissimilarity function used to evaluate differences among attributes. To address identifiability issues, and in view of an XAI use of the learned capacity, the parameter set is restricted to the set of (at most) 2-additive completely monotone capacities. Next, under a suitable definition of entropy for completely monotone capacities, we address different entropic regularization schemes to single out interactions between groups of attributes. This is done by taking as reference a local uniform Möbius inverse over sets of attributes with the same cardinality.
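
For orientation, here is a minimal Python sketch of the classical discrete Choquet integral with respect to a capacity; the d-Choquet integral used in the abstract replaces the successive differences by a restricted dissimilarity function. The capacity and attribute scores below are invented for illustration, not taken from the paper.

# Illustrative sketch only: discrete Choquet integral of attribute scores in [0, 1].
def choquet_integral(values, mu):
    """values: dict attribute -> score in [0, 1]; mu: dict frozenset -> weight in [0, 1]."""
    attrs = sorted(values, key=values.get)      # sort attributes by increasing score
    total, previous = 0.0, 0.0
    for i, a in enumerate(attrs):
        upper = frozenset(attrs[i:])            # attributes scoring at least values[a]
        total += (values[a] - previous) * mu[upper]
        previous = values[a]
    return total

# Hypothetical two-attribute example.
values = {"a1": 0.2, "a2": 0.8}
mu = {frozenset(): 0.0, frozenset({"a1"}): 0.3,
      frozenset({"a2"}): 0.6, frozenset({"a1", "a2"}): 1.0}
print(choquet_integral(values, mu))  # 0.2 * 1.0 + (0.8 - 0.2) * 0.6 = 0.56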

11:40
On the Completeness and Complexity of Lifted Temporal Inference

ABSTRACT. For static lifted inference algorithms, completeness, i.e., domain liftability, is extensively studied. However, no domain liftability results for temporal lifted inference algorithms exist so far. In this paper, we contribute the first completeness and complexity analysis for a temporal lifted algorithm, the so-called lifted dynamic junction tree algorithm (LDJT), which is currently the only exact lifted temporal inference algorithm. To handle temporal aspects efficiently, LDJT uses conditional independences to proceed in time, leading to restrictions w.r.t. elimination orders. We show that these restrictions influence the domain liftability results, and that one particular case arising while proceeding in time has to be excluded from FO2. Additionally, for the complexity of LDJT, we prove that, compared to static inference, the lifted width is smaller than the corresponding treewidth in even more cases.

12:00
Accelerate K-Modes Using The Triangle Inequality
PRESENTER: Vu-Linh Nguyen

ABSTRACT. Clustering is an unsupervised machine learning task that aims to discover natural groups in a given dataset. K-Modes algorithms, adaptations of K-means clustering (originally designed for continuous data) to categorical data, are among the most popular algorithms for discovering clusters in categorical data. In this paper, we present some first results on how to accelerate them using the triangle inequality, while still always computing exactly the same result as the original K-Modes. We also provide some empirical evidence to illustrate the potential gains provided by leveraging the triangle inequality. Finally, we envision future work aimed at providing a comprehensive understanding of the use of the triangle inequality in accelerating (other) clustering algorithms for categorical data.
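
To make the kind of speed-up concrete, here is a minimal Python sketch of one pruning rule the triangle inequality permits for categorical data under the Hamming distance; it is an illustrative Elkan-style bound, not the algorithm of the paper, and the toy modes and data point are invented.

def hamming(u, v):
    """Hamming distance between two categorical tuples; it satisfies the triangle inequality."""
    return sum(a != b for a, b in zip(u, v))

def closest_mode(x, modes):
    """Index of a closest mode, skipping candidates ruled out by the triangle inequality."""
    best, best_d = 0, hamming(x, modes[0])
    for j in range(1, len(modes)):
        # If d(modes[best], modes[j]) >= 2 * d(x, modes[best]), then
        # d(x, modes[j]) >= d(modes[best], modes[j]) - d(x, modes[best]) >= d(x, modes[best]),
        # so modes[j] cannot be strictly closer and its distance need not be computed.
        if hamming(modes[best], modes[j]) >= 2 * best_d:
            continue
        d = hamming(x, modes[j])
        if d < best_d:
            best, best_d = j, d
    return best

modes = [("red", "small", "round"), ("blue", "small", "square"), ("blue", "big", "square")]
print(closest_mode(("red", "small", "square"), modes))  # -> 0 (distance 1; the others are pruned)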

12:20-13:00 Session 13: Fourth Tutorial: Michele Tumminello
12:20
Machine learning and network techniques to investigate complex systems with an application to antifraud in the insurance sector

ABSTRACT. Machine learning (ML) systems learn and make decisions based on patterns and insights extracted from data (in this lecture, Statistically Validated Networks [1]). ML encompasses a variety of methods and techniques, ranging from supervised learning, where models are trained using labeled data (e.g., logistic regression and support vector machines), to unsupervised learning, which involves uncovering hidden structures in unlabeled data (e.g., clustering). As a technique that adapts well to a wide range of data types, ML is increasingly used across diverse sectors such as finance, management, economics, healthcare, and the social sciences, introducing a change of paradigm for analyzing and modeling complex systems: from understanding "why" something happens to predicting "how" and "when" it will occur. In this lecture, I will adopt a bottom-up approach to explain how statistical, machine learning, and network techniques can fruitfully be combined to tackle a complex task, such as identifying organized groups of fraudsters from the analysis of the Antifraud Integrated Archive (AIA) managed by IVASS. Indeed, the presented methods allow one to learn about preferential patterns of connectivity [2] among subjects, vehicles, and events by testing the local network structure surrounding each entity against a null hypothesis of random connectivity [3]. As a crucial feature, this null hypothesis suitably takes into consideration the heterogeneity of all the involved entities (individuals, professionals, companies, etc.). One of the most useful outcomes of the procedure is an integrated score of potential fraud and fraudster associated with each car accident-subject pair [4], which is currently evaluated and used by IVASS.

References
[1] Tumminello, M., Miccichè, S., Lillo, F., Piilo, J., Mantegna, R.N. (2011). Statistically validated networks in bipartite complex systems. PLOS ONE, 6(3), e17994.
[2] Tumminello, M., Edling, C., Liljeros, F., Mantegna, R.N., Sarnecki, J. (2013). The phenomenology of specialization of criminal suspects. PLOS ONE, 8(5), e64703.
[3] Tumminello, M., Petruzzella, F., Ferrara, C., Miccichè, S. (2021). Anagraphical relationships and crime specialization within Cosa Nostra. Social Networks, 64, 29-41.
[4] Tumminello, M., Consiglio, A., Vassallo, P., Cesari, R., Farabullini, F. (2023). Insurance fraud detection: A statistically validated network approach. Journal of Risk and Insurance, 90(2), 381-419.

13:00-14:30 Lunch

A'nica Restaurant

14:30-16:30 Session 14: 7. Ontologies, Games, and Advanced Uncertainty Management
14:30
SDF-FuzzIA: a Fuzzy-Ontology based plug-in for the intelligent analysis of geo-thematic data

ABSTRACT. This short paper describes SDF-FuzzIA, a Fuzzy-Ontology LLM-based system for the intelligent analysis of geo-thematic data that serves as a plug-in to the Sustainability Decision Framework (SDF) Decision Support System (DSS). A description of the components implemented in the system is given, followed by an explanation of the interaction between these components and the main system. As this is still a work in progress, future directions and possible hurdles are explored.

14:50
An Ontology-Based Approach for Handling Inconsistency in Explainable and Prioritized Access Control Models
PRESENTER: Ahmed Laouar

ABSTRACT. The development of secure and efficient solutions for access control is an important issue in a variety of applications. One of the main challenges is to avoid situations that make access control decision-making impossible. However, avoiding such situations hampers the evolution of the model, as it means either adding a large set of constraints or dealing with each conflict situation separately using priorities. It is, therefore, important to use methods that deal with conflicts as they arise while providing explanations of the decisions taken. In this work, we develop an ontology to manage dynamic and abstract access control rules based on the OrBAC (Organization Based Access Control) model and integrate an ordering relation over any instance of the ontology. Our method takes advantage of inconsistency-tolerant semantics to resolve conflicts and generate explanations, fostering transparency and trust in the decisions made. Our results show that the approach efficiently preserves the consistency of the decisions taken and provides potentially useful and human-friendly explanations.

15:10
Frank’s triangular norms in Piaget’s logical proportions
PRESENTER: Henri Prade

ABSTRACT. Starting from the Boolean notion of logical proportion in Piaget's sense, which turns out to be equivalent to analogical proportion, this note proposes a definition of analogical proportion between numerical values based on triangular norms (and dual co-norms). Frank's family of triangular norms is particularly interesting from this perspective. The article concludes with a comparative discussion with another very recent proposal for defining analogical proportions between numerical values based on the family of generalized means.
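
For reference, Frank's parametric family of triangular norms, to which the abstract refers, is standardly written (in common notation, not quoted from the paper) as

T^F_s(x, y) = \log_s\left(1 + \frac{(s^x - 1)(s^y - 1)}{s - 1}\right), \quad s \in (0, 1) \cup (1, \infty),

with the limiting cases T^F_0(x, y) = \min(x, y), T^F_1(x, y) = xy (product), and T^F_\infty(x, y) = \max(x + y - 1, 0) (Łukasiewicz).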

15:30
Credibility-Limited Revision for Epistemic Spaces

ABSTRACT. We consider credibility-limited revision in the framework of belief change for epistemic spaces, permitting inconsistent belief sets and inconsistent beliefs. In this unrestricted setting, the class of credibility-limited revision operators does not include any AGM revision operators. We extend the class of credibility-limited revision operators in such a way that all AGM revision operators are included while the original spirit of credibility-limited revision is kept. Extended credibility-limited revision operators are defined axiomatically. Additionally, a semantic characterization of extended credibility-limited revision operators is presented.

15:50
Bel Coalitional Games
PRESENTER: Silvia Lorenzini

ABSTRACT. We introduce Bel coalitional games, which generalize classical coalitional games: uncertainty is modelled through Dempster-Shafer theory and every agent can have different knowledge. We propose the notion of contract in our framework, which specifies how agents divide the values of the coalitions, and we use the Choquet integral to model the agents' preferences between contracts. Next, we study the core at two different moments of the game by defining the ex-ante core and the ex-t-interim core; in the latter, we need the Dempster conditional rule to update the mass functions of the agents. In particular, in the last step of the ex-t-interim case, and when the set of states reduces to a singleton, i.e. when there is no uncertainty, we recover the classical definition of the core. Finally, we show some results about the ex-ante and the ex-t-interim core of Bel coalitional games, following the well-known results about classical coalitional games.

16:10
Lifting Partially Observable Stochastic Games

ABSTRACT. Partially observable stochastic games are a Markovian formalism used to model a set of agents acting in a stochastic environment, in which each agent has its own reward function. As is common with multi-agent decision-making problems, the space and runtime complexity is exponential in the number of agents, which can be prohibitively large. Lifting is a technique that treats groups of indistinguishable instances through representatives where possible, yielding inference that is tractable in the number of objects in a model. This paper applies lifting to partially observable stochastic games, reducing the complexity to polynomial, and presents a lifted solution approach.

16:40-17:10 Farewell Coffee Break

Quid Restaurant