SUM 2024: THE 16TH INTERNATIONAL CONFERENCE ON SCALABLE UNCERTAINTY MANAGEMENT
PROGRAM FOR WEDNESDAY, NOVEMBER 27TH

09:00-09:30 Opening: Program Chairs; Vice-Rector; Head of the DMI Department.

Conference Chair: Giuseppe Sanfilippo

Program Chairs: Sébastien Destercke, Vanina Martinez

Rector: Massimo Midiri (To be confirmed)

Vice-Rector for Education and International Affairs: Fabio Mazzola

Head of the Department of Mathematics and Computer Science: Cinzia Cerroni

 

09:30-10:20 Session 1: First Invited Speaker: Meghyn Bienvenu
09:30
Repair-Based Semantics for Querying Inconsistent Data: From Databases to Knowledge Bases and Back

ABSTRACT. Consistent query answering was introduced twenty-five years ago as a principled means of querying inconsistent databases. It is based upon a simple idea: when it is impossible or infeasible to identify the true consistent database, then define instead a space of possible repairs (consistent databases that `minimally' differ from the input database) and output those query answers that hold w.r.t. every repair. This approach has subsequently inspired an active line of research within the KR community, which has extended and adapted the framework to the case of inconsistent knowledge bases (consisting of a dataset and an ontology). In this talk, I will survey recent advances on repair-based semantics and highlight the insights that have been gained, considering both the database and ontology settings. 
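
The repair-based idea sketched in the abstract can be illustrated with a toy example. The sketch below (relation, names, and data are hypothetical) enumerates repairs of a database violating a key constraint and returns the certain answers, i.e. those holding in every repair:

```python
from itertools import product

# Toy inconsistent database: tuples of relation emp(name, dept), under
# the key constraint that each name has exactly one dept. 'alice' has
# two conflicting dept facts, so the database is inconsistent.
facts = [("alice", "cs"), ("alice", "math"), ("bob", "cs")]

def repairs(facts):
    # Repairs = maximal consistent subsets: under a key constraint,
    # pick exactly one fact per key value.
    by_key = {}
    for f in facts:
        by_key.setdefault(f[0], []).append(f)
    for choice in product(*by_key.values()):
        yield set(choice)

def certain_answers(query, facts):
    # Answers that hold with respect to *every* repair.
    results = [query(r) for r in repairs(facts)]
    return set.intersection(*results)

# Which names appear? Certain in all repairs.
q_names = lambda db: {name for name, _ in db}
print(certain_answers(q_names, facts))  # {'alice', 'bob'}

# Who works in 'cs'? 'alice' is uncertain, 'bob' is certain.
q_cs = lambda db: {name for name, dept in db if dept == "cs"}
print(certain_answers(q_cs, facts))     # {'bob'}
```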

10:20-10:50 Coffee Break

Quid Restaurant

10:50-12:30 Session 2: 1. Classification and Learning under Uncertainty
10:50
Discrete Minimax Binary Relevance Classifier for Imbalanced Multi-label Classification

ABSTRACT. Multi-label classification (MLC) is a supervised learning problem where each instance can be associated with none, one, or multiple labels. MLC has received increasing attention due to its wide range of applications, such as text categorization and medical diagnosis. Despite a rich literature on MLC, handling imbalanced data, often encountered in real-world MLC datasets, has not been tackled satisfactorily. A thorough literature review suggests that existing methods for imbalanced MLC are either hard to couple with sound theoretical guarantees or of limited scalability. This paper discusses the potential (dis)advantages of existing methods for imbalanced MLC when coupled with the Binary Relevance Classifier (BRC), and introduces the Discrete Minimax BRC (DMBRC), a promising attempt to robustify the BRC by leveraging theoretically sound properties of the Discrete Minimax Classifier. We also provide empirical evidence illustrating how DMBRC may be advantageous in balancing the label-wise error rates. Finally, we envision future work on further strengthening DMBRC in both label-wise error rates and conventional MLC evaluation metrics.

11:10
Transfer Learning Approach for High-Imbalance and Multi-Class Classification of Fluorescence Images

ABSTRACT. Recent advances in deep learning have often surpassed human performance in image classification; the ImageNet Large Scale Visual Recognition Challenge is among the most renowned cases. However, challenges persist in complex fields such as medical imaging. An example is the Human Protein Atlas, which maps all human proteins in more than 171,000 images, posing a computational challenge due to high class imbalance. To address these challenges from a green perspective, we propose a transfer learning approach using Convolutional Neural Networks (CNNs) pre-trained on the ImageNet dataset. We use CNN layers as feature extractors, feeding the extracted features into a Support Vector Machine with a linear kernel. Our method combines both image-level and cell-level perspectives. At the cell level, we segment nuclei and extract the surrounding nuclear membrane area. The ensemble classification shows promising performance with limited computational effort.

11:30
Robust Discrete Bayesian Classifier under Covariate and Label Noise
PRESENTER: Wenlong Chen

ABSTRACT. In this paper, we focus on the Discrete Bayesian Classifier (DBC), which discretizes the input space into regions where class probabilities are estimated. We investigate fuzzy partitioning as an alternative to the hard partitioning classically used to discretize the space. We show that our approach not only boosts the DBC’s performance and resilience to noise, but also mitigates the loss of information due to discretization. The benefits of soft partitioning are demonstrated experimentally on several synthetic and real datasets.

11:50
r-ERBFN: an Extension of the Evidential RBFN Accounting for the Dependence Between Positive and Negative Evidence
PRESENTER: Serigne Diène

ABSTRACT. Recently, it was shown that a radial basis function network (RBFN) with a softmax output layer amounts to pooling, by Dempster's rule, positive and negative evidence for each class, and to approximating the resulting belief function by a probability distribution using the plausibility transform. This so-called latent belief function offers richer uncertainty quantification than the probabilistic output of the RBFN. In this paper, we show that there actually exists a set of latent belief functions for an RBFN. This set is obtained by considering all possible dependence structures, described by correlations, between the positive and negative evidence for each class. Furthermore, we show that performance can be enhanced by optimizing these correlations.

12:10
Extended Boltzmann Machine - Generative Model
PRESENTER: Maria Rifqi

ABSTRACT. The increase in computing power in recent years has brought generative models and the use of synthetic data back to the fore to solve a variety of previously unsolved problems, in particular in fields subject to constraints linked to the sensitivity of the information processed. This article proposes a modified version of restricted Boltzmann machines (RBMs), known as Bernoulli machines, to improve their ability to handle non-binary data without making the methodology more complex to understand and manipulate. To assess the performance of our algorithm, we compare it with various generative models that are well documented in the scientific literature and have repeatedly proven their effectiveness in a variety of contexts. We also chose to use a large number of open-source datasets with different features, in terms of both their structure and the purpose they address, in order to verify the generalization of our results to different scientific areas.

12:30-13:10 Session 3: First Tutorial: Andrea Capotorti
12:30
The Milestone of Coherent Probabilities in the AI and Big Data pathway

ABSTRACT. After a review of the main contributions to AI and Expert Systems given by the so-called “subjectivist” community inspired by de Finetti's work, the tutorial will focus on the peculiarities of the coherent approach to probabilities, both conditional and unconditional, and on how they can guide us toward an original paradigm for Artificial Intelligence, Big Data analysis, and Machine Learning algorithms.

Specifically, attention will be given to: partial knowledge and the consequent partial initial evaluations; logical (i.e. structural) constraints among events/variables; conditioning, including on unexpected scenarios (formalized in the so-called "zero-layers" structure); and inference through the generalized de Finetti theorem of previsions.

Some prototypical examples will guide the exposition.

13:10-14:40 Lunch Break

A'nica Restaurant

14:40-16:30 Session 4: 2. Logic and Reasoning Frameworks for Uncertainty
14:40
Towards a logical framework for reasoning with stereotypes

ABSTRACT. Stereotypes are necessary for human cognition. Indeed, our limited computational capabilities and our need for quick decision making require using shortcuts for reasoning. In this work, we discuss how to formalize reasoning with stereotypes using uncertain default rules with an anchorage degree.

15:00
Boolean weighting in possibilistic logic
PRESENTER: Henri Prade

ABSTRACT. Propositional possibilistic logic handles pairs made of a proposition and a level expressing a degree of certainty; these levels belong to a totally ordered scale. This basic possibilistic logic only allows for the conjunction of possibilistic formulas, in agreement with the min decomposability of necessity measures for this connective. Generalized possibilistic logic extends this formalism to negation and disjunctions of weighted pairs of formulas. In this paper, we consider a class of possibilistic logics where propositions are labeled by elements of a Boolean algebra. We first consider the example where propositions are associated with groups of agents that believe in them. This multiagent logic is then extended by attaching degrees of necessity to pairs (proposition, set of agents), and a multiagent counterpart of generalized possibilistic logic is proposed as well. Other examples of Boolean-valued formulas are discussed, where the Boolean labels represent time intervals, or yet other propositional formulas representing reasons to believe propositions.

15:20
Possibilistic provenance
PRESENTER: Henri Prade

ABSTRACT. Provenance calculus, based on two operations forming a semiring, enables the combination and propagation of annotations associated with data. This note emphasizes that this calculus, if based on the max and min operations, corresponds exactly to query evaluation when data are labeled with levels of certainty in the sense of possibility theory.
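
The (max, min) semiring evaluation mentioned in the abstract can be sketched on a toy Datalog-style rule (relation names and facts below are hypothetical): within one derivation, certainty levels combine by min; across alternative derivations of the same tuple, they combine by max, matching the possibilistic reading of certainty-labeled data:

```python
# Each fact carries a certainty level in [0, 1].
parent = {("ann", "bob"): 0.9, ("bob", "carl"): 0.6,
          ("ann", "dora"): 0.4, ("dora", "carl"): 0.8}

def grandparent(parent):
    # grandparent(x, z) :- parent(x, y), parent(y, z).
    out = {}
    for (x, y1), a in parent.items():
        for (y2, z), b in parent.items():
            if y1 == y2:
                # min within a derivation, max across derivations
                out[(x, z)] = max(out.get((x, z), 0.0), min(a, b))
    return out

# Two derivations of grandparent(ann, carl):
#   via bob:  min(0.9, 0.6) = 0.6
#   via dora: min(0.4, 0.8) = 0.4
# so the certainty of the answer is max(0.6, 0.4) = 0.6.
print(grandparent(parent))  # {('ann', 'carl'): 0.6}
```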

15:40
Counterfactuals as Modal Conditionals, and Their Probability
PRESENTER: Giuliano Rosella

ABSTRACT. Counterfactual conditionals are statements like “if a were the case, then b would be the case”, and they are crucial for various fields like logic, linguistics, and AI. In this work we introduce a novel modal algebraic framework to analyze Lewis counterfactuals that allows us to combine logical, algebraic, and probabilistic approaches.

16:00
On Decisive Revision Operators and Totally Ordered Information
PRESENTER: Jérome Gaigne

ABSTRACT. This paper focuses on decisive belief revision operators, i.e. operators leading to totally informed situations. Such situations can be represented in the Katsuno-Mendelzon (KM) revision framework by a complete propositional formula, from which either any formula or its negation can be entailed. From a semantic point of view, this kind of operator leads to a single, most plausible interpretation. Despite their prevalence in decision theory, this class of operators has not been previously studied in the context of the Katsuno-Mendelzon (KM) revision framework.

We propose in this paper to characterize decisive operators by a set of postulates. We also provide a representation theorem leading to linear orders on interpretations. Finally, we exhibit a concrete operator family satisfying this new set of postulates by combining KM revision operators with tie-breaking functions.

16:20-16:50 Coffee Break

Quid Restaurant

16:50-17:30 Session 5: Second Tutorial: Niki Pfeifer
16:50
Artificial and human cognition under uncertainty

ABSTRACT. This tutorial presents applications of coherence-based probability logic (CPL) to selected problems of philosophy and psychology, as well as to the field of nonmonotonic reasoning. In particular, it illustrates how CPL can be used as a rationality framework for artificial and human reasoning under uncertainty. Normatively, CPL prescribes how ideal cognition works under uncertainty. Descriptively, CPL allows for deriving testable psychological hypotheses.

What is probability logic? In a nutshell, probability logic is about propagating uncertainties from the premises to the conclusion in a rational way. For example, the premises of the Probabilistic Modus Ponens, $P(B|A)$ and $P(A)$, constrain the conclusion $P(B)$ by a lower bound $P(B|A)P(A)$ and an upper bound $P(B|A)P(A)+1-P(A)$; it can be shown that these are the best possible, and hence rational, bounds on the conclusion, and each probabilistic assessment within these bounds is coherent. Thus, a key inference problem is finding rational bounds on the conclusion in the light of the premise set. For coherence-based probability logic, the key rationality criterion for the assessment of the premises and for the propagation to the conclusion is coherence. Coherence, going back to Bruno de Finetti, refers to the subjective approach to probability, which interprets probabilities as degrees of belief. The avoidance of bets that lead to sure loss (i.e., Dutch books) justifies coherence: probability assessments that satisfy this rationality criterion are coherent and hence (formally) rational, whereas violations of coherence are irrational and yield bad inferences.
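
These Modus Ponens bounds can be checked numerically; a small sketch with hypothetical premise values:

```python
# Probabilistic Modus Ponens: given P(B|A) and P(A), coherence forces
#   P(B|A)*P(A) <= P(B) <= P(B|A)*P(A) + 1 - P(A).
p_b_given_a, p_a = 0.8, 0.5  # hypothetical premise assessments

lower = p_b_given_a * p_a            # B holds only when A does
upper = p_b_given_a * p_a + 1 - p_a  # B also holds whenever A is false
print(f"P(B) in [{lower:.2f}, {upper:.2f}]")  # P(B) in [0.40, 0.90]

# Sanity check: sweeping the free parameter P(B | not A) over [0, 1]
# generates every coherent joint; each yields a P(B) inside the interval,
# and the endpoints are attained at P(B|not A) = 0 and 1.
for i in range(101):
    p_b_given_not_a = i / 100
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    assert lower - 1e-9 <= p_b <= upper + 1e-9
```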

After a brief characterisation of CPL, I illustrate why CPL provides a unified rationality framework for investigating problems in philosophy and psychology. Specifically, I show that coherence allows for a better understanding of many philosophical problems in logic, like the understanding and proper treatment of conditionals (if-then constructions), nonmonotonic reasoning (i.e., reasoning systems which allow for retracting conclusions in the light of new evidence), and Aristotelian syllogisms (one of the oldest logic systems in Western Europe). Moreover, I argue that CPL makes many interesting psychological predictions, some of which have been validated experimentally in recent years. For example, most people interpret beliefs in conditionals as conditional probabilities, draw coherent conclusions, and reason nonmonotonically and connexively.

The aim of my tutorial is to provide an overview on how CPL can serve as a unified rationality framework for studying diverse problems in different disciplines including the philosophy of logic, formal epistemology, and the psychology of reasoning.