| 11:10 | Efficient computation and estimation of generalized cumulants via complementary set partitions ABSTRACT. Generalized cumulants provide a powerful framework for the analysis of non-linear statistical quantities, playing a central role in problems involving ratios of quadratic forms, saddlepoint approximations, likelihood expansions, and bootstrap procedures. They naturally arise as intermediate objects between joint moments and joint cumulants, allowing one to express the cumulants of polynomial functions of random variables in a systematic way \cite{ref_article1}. Despite their theoretical relevance and wide range of applications, generalized cumulants remain underused in practice. The main obstacle is the severe computational burden associated with their evaluation, which relies on the enumeration of a specific class of set partitions known as complementary set partitions \cite{ref_article2}. In practice, the lack of efficient computational tools has led to a widespread reliance on precomputed tables of complementary set partitions, most notably those reported in McCullagh’s monograph \cite{ref_book2}. While these tables remain a fundamental reference, they are necessarily incomplete and become impractical to extend as the order increases. Existing computational approaches are mostly graph-theoretic or algebraic in nature and are typically confined to symbolic software, severely limiting their accessibility in widely used numerical environments such as {\tt R}. This work addresses these limitations by introducing a novel combinatorial and algorithmic approach for the efficient computation of complementary set partitions. The proposed method is based on two-block partitions and avoids the traditional use of connected graphs, Laplacian matrices, or symbolic algebra. By exploiting simple combinatorial constructions, the algorithm identifies all non-complementary partitions and recovers the complementary ones by set difference. 
This strategy leads to a procedure that is both conceptually simple and computationally scalable. A new implementation in {\tt R} is developed, filling a gap in the available software landscape. Since alternative methods are not currently implemented in {\tt R}, computational comparisons are carried out in Maple, where the proposed algorithm consistently outperforms existing techniques in terms of execution time. From a theoretical perspective, we extend the classical definition of generalized cumulants to include more complex dependence structures. This extension is formulated using multiset subdivisions and multi-index partitions, which provide a natural framework for handling powers of random variables. Within this setting, generalized multivariate cumulants are defined as intermediate quantities between multivariate moments and multivariate cumulants, and explicit closed-form expressions are derived in terms of products of multivariate cumulants. Finally, we propose a novel approach to the unbiased estimation of generalized cumulants. Building on the theory of $k$-statistics and multivariate polykays \cite{ref_article3}, the proposed estimators exploit dummy variables, labeling rules, and tailored transformations between multi-index partitions and set partitions. This strategy substantially reduces computational complexity compared to traditional plug-in or symbolic methods, making the practical use of generalized cumulants feasible in high-dimensional statistical applications. |
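As a minimal illustration of the role of complementary set partitions (a standard identity from \cite{ref_book2}, not a new result of this talk; the $\kappa$'s denote joint cumulants): for the grouping $\tau = 12|3$, only partitions $\sigma$ whose join with $\tau$ connects all indices contribute, giving

```latex
\[
  \operatorname{cum}(X_1 X_2,\, X_3)
  \;=\; \kappa_{1,2,3} \;+\; \kappa_{1,3}\,\kappa_{2} \;+\; \kappa_{2,3}\,\kappa_{1},
\]
```

where the partitions $12|3$ and $1|2|3$ are excluded because their join with $\tau$ fails to connect $\{1,2\}$ with $\{3\}$.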
| 11:35 | Correlation Structure for Random Waves PRESENTER: Riccardo Maffucci ABSTRACT. Arithmetic Random Waves are the eigenfunctions of the Laplacian on the torus in $d\geq 2$ dimensions. Their geometry has been extensively investigated in the last two decades, starting from the seminal papers \cite{ref_article2,ref_article3}. We will discuss \cite{ref_article1} the correlation of various functionals including nodal length, boundary length of excursion sets, and the number of intersections of nodal sets with deterministic curves in different classes; the amount of correlation depends in a subtle fashion on the values of the thresholds considered and on the symmetry properties of the deterministic curves. In particular, we prove the existence of \emph{resonant pairs} of threshold values where the asymptotic correlation is full, that is, at such values one functional can be perfectly predicted from the other in the high energy limit. We focus mainly on $d=2$ with some extensions to $d=3$. We will also briefly discuss a related problem of independent interest in geometry, concerning the characterisation of certain special classes of curves and surfaces that naturally come into play. |
| 12:00 | An Analogue Fr\'echet-Shohat Moments Convergence Theorem for Indeterminate Moment Problems PRESENTER: Pier Luigi Novi Inverardi ABSTRACT. Originally established in 1931, the Fr\'echet-Shohat Theorem is a fundamental result in the method of moments. It provides sufficient conditions under which the convergence of a sequence of moments $\{m_{k, n}\}_{n=1}^{\infty}$ to a limit sequence $\{m_k\}$ ensures the weak convergence of the associated distribution functions $\{G_n\}$ to a limit $G$. A critical requirement of the classical theorem is the determinacy of the underlying moment problem; that is, $G$ must be the unique distribution characterized by $\{m_k\}$. This study extends the foundational Fr\'echet-Shohat framework to the setting of indeterminate Hamburger and Stieltjes moment problems, where the classical theorem traditionally fails due to the non-uniqueness of the limiting measure. We demonstrate that by imposing an entropic constraint on the sequence $\{G_n\}$ - specifically, convergence in Shannon entropy - one can recover a unique limit entropy-distinguishable distribution, $G_{hmax}$, from the indeterminate class having the given moments. This result facilitates a unified treatment of the Fr\'echet-Shohat theorem across both determinate and indeterminate frameworks. The proposed approach provides a rigorous probabilistic foundation for the application of Maximum Entropy methods within statistical inference. Indeed, it strictly adheres to Jaynes' principle of objective inference by ensuring that the derived distribution is uniquely determined by the prescribed moment constraints, thereby precluding the imposition of unjustified or unwarranted assumptions. This methodology operationalizes the MaxEnt desideratum that "{\it \dots we should not assume more than what we know}", maintaining informational parsimony in the presence of indeterminacy. |
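For orientation, the classical determinate-case statement being extended can be summarized as follows (standard formulation, paraphrased; not part of the talk's new results):

```latex
\[
  m_{k,n} = \int x^k \,\mathrm{d}G_n(x) \xrightarrow[n\to\infty]{} m_k
  \ \text{ for every } k,
  \quad \{m_k\} \text{ determinate}
  \;\Longrightarrow\;
  G_n \Rightarrow G,
\]
```

where $G$ is the unique distribution function with moment sequence $\{m_k\}$; the talk concerns precisely the case where this uniqueness fails.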
| 12:25 | The moment problem beyond finite dimensions PRESENTER: Maria Infusino ABSTRACT. In this talk we present a new approach to the following general instance of the infinite dimensional moment problem: when can a linear functional on an infinitely generated algebra $A$ be represented as an integral with respect to a Radon measure on the space $X(A)$ of all characters of $A$? Our approach is based on projective limit techniques, which allow us to exploit the results for the classical finite dimensional moment problem in the infinite dimensional case. In fact, we prove that under the so-called Prokhorov condition, the infinite dimensional moment problem on $A$ is solvable if and only if for any finitely generated subalgebra $S$ of $A$ the corresponding finite dimensional moment problem is solvable. Among other applications, we present a new characterization of all linear functionals $L$ on $A$ representable as an integral w.r.t. a compactly supported Radon measure solely in terms of a growth condition on $L$, which permits us to identify the compact support exactly. This is particularly surprising, as the other characterizations available in the literature only show that the support of the representing Radon measure is contained in a compact set and so do not provide exact support descriptions. |
| 11:10 | Hierarchical Random Measures without Tables PRESENTER: Marta Catalano ABSTRACT. Bayesian multilevel models provide an effective framework to borrow information between different data sources through the sharing of common features. In a nonparametric setting, a classic example is the hierarchical Dirichlet process, whose generative model can be described through a set of latent variables, commonly referred to as tables in the popular restaurant franchise metaphor. The latent tables greatly simplify the expression of the posterior and allow for the implementation of a Gibbs sampling algorithm to approximately draw samples from it. However, managing their assignments can become computationally expensive, especially as the size of the dataset and the number of levels increase. In this talk, we identify a prior for the concentration parameter of the hierarchical Dirichlet process that (i) induces a quasi-conjugate posterior distribution, and (ii) removes the need for tables, leading to more interpretable expressions for the posterior, with both a scalable and an exact algorithm to sample from it. This construction extends beyond the Dirichlet process, leading to a new framework for defining normalized hierarchical random measures and a new class of algorithms to sample from their posteriors. |
| 11:35 | Bayesian calculus and predictive characterizations of extended feature allocation models PRESENTER: Lorenzo Ghilotti ABSTRACT. We introduce and study a unified Bayesian framework for extended feature allocations which flexibly captures interactions -- such as repulsion or attraction -- among features and their associated weights. We provide a complete Bayesian analysis of the proposed model and specialize our general theory to noteworthy classes of priors. This includes novel priors based on (i) determinantal point processes, which yield promising results in a spatial statistics application, and (ii) shot noise Cox processes, illustrated on genetics and ecological examples. Within the general class of extended feature allocations, we further characterize those priors that yield predictive probabilities of discovering new features depending either solely on the sample size or on both the sample size and the number of distinct observed features. These predictive characterizations, known as ``sufficientness'' postulates, have been extensively studied in the literature on species sampling models starting from the seminal contribution of the English philosopher W.E. Johnson for the Dirichlet distribution. Within the feature allocation setting, existing predictive characterizations are limited to very specific examples; in contrast, our results are general, providing practical guidance for prior selection. |
| 12:00 | Bayesian Nonparametric Community Detection in Stochastic Block Models with Structural Constraints PRESENTER: Martina Amongero ABSTRACT. Network-structured data are becoming increasingly common across many fields, including the social sciences, biology, physics, and computer science. A central task in network analysis is community detection, which involves partitioning nodes into groups so that nodes within the same group exhibit similar connectivity patterns. A generative model well suited to capturing such communities is the stochastic block model (SBM). Recent work has applied Bayesian nonparametric methods to jointly infer both community structures and the number of communities in the SBM by placing a prior on the number of blocks and estimating block assignments via collapsed Gibbs samplers. However, efficiently incorporating structural community constraints through the prior remains an open challenge. In this work, we address this gap by studying the effect of enforcing weak and strong assortativity as well as core–periphery structure on Bayesian nonparametric community detection for the SBM. We identify scenarios in which these constraints improve performance over the standard SBM and illustrate our results using benchmark datasets. |
| 12:25 | Exchangeable random permutations with an application to Bayesian graph matching PRESENTER: Francesco Gaffi ABSTRACT. We introduce a general Bayesian framework for graph matching grounded in a new theory of \emph{exchangeable random permutations}. Leveraging the cycle representation of permutations and the literature on exchangeable random partitions, we define, characterize, and study the structural and predictive properties of these probabilistic objects. A novel sequential metaphor, the \emph{position-aware generalized Chinese restaurant process}, provides a constructive foundation for this theory and supports practical algorithmic design. Exchangeable random permutations offer flexible priors for a wide range of inferential problems centered on permutations. As an application, we develop a Bayesian model for graph matching that integrates a correlated stochastic block model with our novel class of priors. The cycle structure of the matching is linked to latent node partitions that explain connectivity patterns, an assumption consistent with the homogeneity requirement underlying the graph matching task itself. Posterior inference is performed through a node-wise blocked Gibbs sampler directly enabled by the proposed sequential construction. To summarize posterior uncertainty, we introduce \emph{perSALSO}, an adaptation of SALSO to the permutation domain that provides principled point estimation and interpretable posterior summaries. Together, these contributions establish a unified probabilistic framework for modeling, inference, and uncertainty quantification over permutations. |
| 11:10 | Massive Particle Systems, Wasserstein Brownian Motions, and the Dean--Kawasaki SPDE ABSTRACT. Let W be a conservative, ergodic Markov diffusion on some arbitrary state space M, converging exponentially fast to equilibrium. We consider: (1) Systems of up to countably many massive particles in M, with finite total mass. Each particle is subject to an independent instance of the noise W, with volatility the inverse of the mass carried by the particle. We prove that the corresponding infinite system of SDEs has a unique solution, for every starting configuration and every distribution of the masses in the infinite simplex. (2) Solutions to the Dean--Kawasaki SPDE with singular drift, driven by the generator L of W. We prove that the equation may be given rigorous meaning on M, and that it has a unique `distributional' solution. This extends Konarovskyi--Lehmann--von Renesse's `ill-posedness vs. triviality' dichotomy to the case of infinitely many massive particles. (3) Diffusions with values in the space P of all probability measures on M, driven by the geometry induced by L. (4) In the case when M is a manifold, differential-geometric and metric-measure Brownian motions on P induced by the geometry of optimal transportation and reversible for a normalized completely random measure. We show that all these objects coincide. Based on arXiv:2411.14936 |
| 11:35 | A semiconcavity approach to stability of entropic plans and exponential convergence of Sinkhorn's algorithm - Part 2 PRESENTER: Giacomo Greco ABSTRACT. We study stability of optimizers and convergence of Sinkhorn's algorithm for the entropic optimal transport problem. In the special case of the quadratic cost, our stability bounds imply that if one of the two entropic potentials is semiconcave, then the relative entropy between optimal plans is controlled by the squared Wasserstein distance between their marginals. When employed in the analysis of Sinkhorn's algorithm, this result gives a natural sufficient condition for its exponential convergence, which does not require the ground cost to be bounded. By controlling from above the Hessians of Sinkhorn potentials in examples of interest, we obtain new exponential convergence results. For instance, for the first time we obtain exponential convergence for log-concave marginals and quadratic costs for all values of the regularization parameter, based on semiconcavity propagation results. These optimal rates are also established in situations where one of the two marginals does not have sub-Gaussian tails. Other interesting results will be presented in this joint talk. |
| 12:00 | A semiconcavity approach to stability of entropic plans and exponential convergence of Sinkhorn’s algorithm - Part 1 PRESENTER: Luca Tamanini ABSTRACT. We study stability of optimizers and convergence of Sinkhorn’s algorithm for the entropic optimal transport problem. In the special case of the quadratic cost, our stability bounds imply that if one of the two entropic potentials is semiconcave, then the relative entropy between optimal plans is controlled by the squared Wasserstein distance between their marginals. When employed in the analysis of Sinkhorn’s algorithm, this result gives a natural sufficient condition for its exponential convergence, which does not require the ground cost to be bounded. By controlling from above the Hessians of Sinkhorn potentials in examples of interest, we obtain new exponential convergence results. For instance, for the first time we obtain exponential convergence for log-concave marginals and quadratic costs for all values of the regularization parameter, based on semiconcavity propagation results. These optimal rates are also established in situations where one of the two marginals does not have sub-Gaussian tails. Other interesting results will be presented in this joint talk. |
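As a purely illustrative complement to the two talks above (a discrete toy version, not the continuous setting they study; all numerical choices below are hypothetical), Sinkhorn's algorithm alternately rescales the rows and columns of the Gibbs kernel $K = e^{-C/\varepsilon}$ until both marginals are matched:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Alternate marginal scalings of the Gibbs kernel K = exp(-C/eps);
    the returned plan P = diag(u) K diag(v) solves discrete entropic OT."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # scale columns to match the second marginal
        u = a / (K @ v)     # scale rows to match the first marginal
    return u[:, None] * K * v[None, :]

# toy example: uniform marginals on 3 points, squared-distance cost
a = b = np.ones(3) / 3
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)
# at convergence the plan's marginals match a and b
assert np.allclose(P.sum(axis=1), a, atol=1e-6)
assert np.allclose(P.sum(axis=0), b, atol=1e-6)
```

The exponential convergence rates discussed in the talks quantify how fast the iterates above approach the fixed point, uniformly in the regularization parameter $\varepsilon$ under semiconcavity assumptions.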
| 12:25 | Properties of diffusion transport maps via creation of log-semiconcavity along heat flows PRESENTER: Katharina Eichinger ABSTRACT. Finding regular transport maps between measures is an important task in generative modelling and a useful tool to transfer functional inequalities. The most well-known result in this field is Caffarelli’s contraction theorem, which shows that the optimal transport map from a Gaussian to a uniformly log-concave measure is globally Lipschitz. Note that for our purposes optimality of the transport map does not play a role. This is why several works investigate other transport maps, such as those derived from diffusion processes, as introduced by Kim and Milman. Here, we establish a lower bound on the log-semiconcavity along the heat flow for a class of what we call asymptotically log-concave measures. We will see that this implies Lipschitz bounds for the heat flow map introduced by Kim and Milman. I will also comment on its implication for stability of these maps. Based on a joint work with Louis-Pierre Chaintron and Giovanni Conforti. |
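As background for the contraction theorem mentioned above (standard statement, with $\gamma$ the standard Gaussian and $T_{\gamma\to\mu}$ the optimal (Brenier) map; not part of the talk's new results):

```latex
\[
  \mu = e^{-V}\,\mathrm{d}x, \qquad
  \nabla^2 V \succeq \kappa\,\mathrm{Id}, \quad \kappa > 0
  \;\Longrightarrow\;
  \operatorname{Lip}\bigl(T_{\gamma\to\mu}\bigr) \;\le\; \kappa^{-1/2}.
\]
```

The talk's heat-flow maps replace the optimal map $T_{\gamma\to\mu}$ while targeting Lipschitz bounds of the same flavour for asymptotically log-concave targets.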
| 11:10 | A probabilistic approach for optimal stopping mean-field games PRESENTER: Andrea Cosso ABSTRACT. We propose a probabilistic formulation of optimal-stopping mean field games by introducing a new class of BSDEs, termed McKean-Vlasov reflected backward stochastic differential equations. An equilibrium is characterized by a quadruple $(Y,Z,A,L)$, where $L$ is a $[0,1]$-valued, non-increasing càdlàg process, representing a randomized stopping strategy. Two additional Skorokhod-type conditions involving the process $L$ enforce the optimality of the stopping rule at equilibrium. We prove the existence of equilibria $(Y,Z,A,L)$ by applying the Kakutani–Fan–Glicksberg fixed-point theorem to a set-valued best-response map; and, under alternative assumptions, we also obtain existence via Tarski's fixed-point theorem. Furthermore, we establish uniqueness under specific conditions. We also show that the mean field equilibrium induces an approximate Nash equilibrium for the associated $N$-player stopping game. Finally, we connect our probabilistic formulation to the analytical approach, which is characterized by a system of constrained partial differential equations. |
| 11:35 | Mean-Field Optimal Control Approach to Deep Learning ABSTRACT. In this talk I will explain how the training phase of certain deep neural networks can be modeled, in some asymptotic regimes, as a mean-field optimal control problem. I will then explain how this viewpoint allows one to address uniqueness and stability properties of the optimal distribution of parameters and what can be said regarding the associated gradient descent. This is based on joint works with F. Delarue. |
| 12:00 | Mean field games with option to buy information PRESENTER: Markus Fischer ABSTRACT. We introduce a class of continuous time finite horizon mean field games where the objective function of the representative player depends on a hidden state, in addition to position, control, and the population distribution. While acting on the position dynamics, the agent has the option to pay for seeing the hidden state. We connect the original formulation of our model with a mean field model of optimal control with discretionary stopping and discuss questions of existence and characterization of solutions. For a class of N-player games with compatible information structure, we show that approximate Nash equilibria can be constructed starting from a solution to the limit model. |
| 12:25 | Uniqueness for Finite-State Mean Field Games with Non-Separable Hamiltonians PRESENTER: Nicola Fraccarolo ABSTRACT. Mean field games (MFGs), introduced independently by Lasry and Lions and by Huang, Malham\'e and Caines, describe strategic interactions among a large population of agents through the coupled evolution of a value function and of the distribution of a representative player. A fundamental issue in this theory is the uniqueness of equilibria, which is essential both for modelling purposes and for the stability of numerical approximations. In the classical framework, uniqueness is usually obtained under the Lasry-Lions monotonicity condition, which relies on a separable structure of the Hamiltonian with respect to the state and the population distribution. In this work we study finite-state continuous-time mean field games with distribution-dependent jump intensities, leading to Hamiltonians that are genuinely non-separable. The state of a representative player evolves in a finite set $\Sigma=\{1,\ldots,d\}$ and, when the player is in state $x$, the transition rate towards a different state $y$ is of the form \[ \alpha_y(t,x)+b(x,\mu(t)), \] where $\alpha$ is the control and $b$ is a nonnegative interaction term depending on the population distribution $\mu(t)$. This structure naturally arises in models with congestion, network effects or endogenous transition mechanisms. For a fixed flow of measures, the associated Hamilton-Jacobi-Bellman equation involves the Hamiltonian \[ H(x,\mu,p)=\sum_{y\neq x}\left(\frac12 (p_y)_{-}^{2}-b(x,\mu)p_y\right), \] which couples the population variable and the finite differences of the value function in a non-separable way. The mean field equilibrium is characterised by a forward-backward system consisting of this Hamilton-Jacobi-Bellman equation and a Kolmogorov equation with distribution-dependent transition rates. 
We provide a uniqueness result for this class of non-separable finite-state mean field games, valid on arbitrary finite time horizons. Uniqueness is established under a combination of a strong monotonicity condition on the running cost, a standard monotonicity condition on the terminal cost, and Lipschitz continuity of the interaction term $b$ with respect to the population distribution. In contrast with the classical Lasry-Lions theory, the monotonicity of the costs alone is not sufficient: the dependence of the dynamics on the distribution generates additional coupling terms which must be controlled by explicit quantitative conditions. Our results highlight the precise balance between cost monotonicity and the strength of the distribution-dependent transition rates required to recover uniqueness in non-separable mean field game models. |
| 11:10 | Multivariate additive subordination with applications in finance PRESENTER: Giovanni Amici ABSTRACT. We introduce a tractable multivariate pure jump process in which the trading time is described by an additive subordinator. The multivariate process retains the additivity property, and therefore is time-inhomogeneous, i.e., its increments are independent but non-stationary. We provide the theoretical framework of our process, perform a sensitivity analysis with respect to the time inhomogeneity parameters, and design a Monte Carlo scheme to simulate the trajectories of the process. We then employ the model in the context of option pricing in the FX market. We take advantage of the specific features of currency triangles to extract the joint dynamics of FX log-rates. Extensive tests based on observed market data show that our model outperforms well established pure jump benchmarks. Moreover, we explore applications of our stochastic process to financial optimization problems and propose state-of-the-art derivative-free adaptive sampling algorithms to efficiently compute solutions. |
| 11:35 | Parametric local volatility: Exact prices lead to sound continuous Markovian models PRESENTER: Michele Azzone ABSTRACT. Local volatility models are generally seen as insufficient for handling the many nuances of modern derivative markets. By reverse-engineering a family of no-arbitrage call price functions, this research questions a number of claims in this direction. We introduce a class of continuous Markovian asset pricing models with closed-form option prices, leading -- by construction -- to identifiable risk-neutral marginal distributions, and then specialize to a significant instance where the SDE well-posedness can be shown, the generalized beta local volatility (GBLV) model. The GBLV finite-dimensional distributions coincide with those of a known discontinuous martingale model that exhibits an at-the-money implied volatility skew divergence. These findings contrast with the commonly accepted wisdom that LV is unsuitable for capturing the implied volatility surface's singular behavior as time-to-maturity approaches zero, and that option prices from jump models cannot be fitted to continuous Markov models. Such claims, typically regarded as valid for the \emph{whole} local volatility class, ultimately hinge on auxiliary assumptions, most notably, regularity of the diffusion coefficient at initial time. By directly embedding in the risk-neutral distributions the desirable properties an implied volatility surface should have, an LV model is freed from the constraints that make it unsuitable for capturing certain phenomena. As a consequence, the GBLV model does not suffer from several of the commonly exposed drawbacks of continuous Markovian models. |
| 12:00 | A Double Jump Stochastic Volatility model based on a Compound CARMA(p,q)-Hawkes PRESENTER: Edit Rroji ABSTRACT. In this paper we introduce a stochastic volatility model with correlated jumps, incorporating a self-exciting effect in the intensity dynamics. First we derive a pricing formula based on the compound CARMA(p, q)-Hawkes framework, where the stochastic volatility is influenced by the quadratic variation of the counting process in the log-price dynamics. Additionally, we construct a simulation algorithm for the jump term based on the thinning algorithm, which relies on the existence of a Hawkes intensity with exponential kernel that serves as an upper bound for the CARMA(p, q)-Hawkes intensity. Finally, we present numerical and empirical analyses. |
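The thinning step can be sketched for a plain Hawkes process with exponential kernel (an illustrative sketch only, not the compound CARMA(p,q)-Hawkes scheme of the paper; the parameters mu, alpha, beta are hypothetical):

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Between events the intensity decays, so its current value
    is a valid upper (dominating) bound until the next event."""
    rng = random.Random(seed)
    events, t, lam_bar = [], 0.0, mu
    while True:
        t += rng.expovariate(lam_bar)  # candidate from dominating rate
        if t > T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)
            lam_bar = lam_t + alpha  # intensity jumps at the new event
        else:
            lam_bar = lam_t          # tighten the bound and continue
    return events
```

In the paper's setting the exponential dominating intensity plays exactly this role of upper bound, with the CARMA(p, q)-Hawkes intensity in place of `lam_t`.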
| 12:25 | Functional PCA for Risk-Neutral densities in Bayes space PRESENTER: Anna Maria Gambaro ABSTRACT. In this work, we investigate the main drivers of risk-neutral densities of quoted stocks, using the functional principal component analysis (FPCA). To this end, we first construct a historical series of risk-neutral densities corresponding to quoted option prices with fixed time to maturity, using exponential expansions of orthogonal polynomials. Then, we apply the centered log-ratio transformation (CLRT) to the extracted densities and we perform the FPCA in the Bayes–Hilbert space. The CLRT provides an isometric isomorphism between the Bayes space of square log-integrable densities and the classical Hilbert space of square-integrable functions. As a result, the projected data onto the principal component basis correspond to the CLRT-transformed densities, and the application of the inverse CLRT yields proper density functions. Furthermore, by modeling the historical series of FPCA scores as a stochastic process, we exploit the FPCA representation for forecasting purposes. Finally, we discuss extensions of this framework to cross-asset analyses and to the modeling of option price surfaces. |
| 11:10 | Some results on general $\Lambda$-quantiles PRESENTER: Felix-Benedikt Liebrich ABSTRACT. Lambda-quantiles are a generalisation of classical quantiles and were originally introduced in the financial literature by Frittelli et al.~\cite{ref_article1}. They are obtained by replacing the fixed probability level $\lambda \in [0,1]$ in the usual definition of a quantile with a functional parameter $\Lambda \colon \mathbb{R} \to [0,1]$. When $\Lambda$ is decreasing, $\Lambda$-quantiles are known to share many properties with classical quantiles, and they have thus received growing attention in recent years in financial and insurance applications as well as from a decision-theoretic perspective. In this talk, we advocate the use of general, possibly non-monotonic functional parameters~$\Lambda$. Under minimal assumptions, we examine how the choice of~$\Lambda$ affects the mathematical properties of the resulting functional. In particular, we study aggregation behaviour, weak continuity, mixture representations, and generalised ordinal covariance properties. Additionally, we show that the latter also provides an axiomatic characterisation of a broad class of~$\Lambda$-quantiles, even when the functional parameter is not monotone. |
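For reference, one common form of the definition (one sign/inequality convention among several in the literature):

```latex
\[
  q^-_{\Lambda}(F) \;=\; \inf\{\, x \in \mathbb{R} \,:\, F(x) > \Lambda(x) \,\},
\]
```

which reduces to the classical lower $\lambda$-quantile when $\Lambda \equiv \lambda$ is constant.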
| 11:35 | Ordering and measuring the complexity of lotteries ABSTRACT. We model complexity by introducing a complexity order that ranks lotteries by their Wasserstein distances from degenerate lotteries, which carry no risk. The resulting relation is a continuous incomplete preorder whose properties reflect the geometry of the outcome space. We relate it to the convex order, showing that they coincide for univariate monetary lotteries, while this equivalence fails in higher dimensions. To address incompleteness, we introduce a complexity measure defined by how well a lottery can be approximated by a degenerate one. This measure provides a natural completion of the complexity order and inherits many of its properties. It enables comparative statics for mixtures of lotteries and yields explicit maximally complex lotteries in several cases. Finally, we apply these notions to choice under risk. Combining the complexity order with first-order stochastic dominance yields a choice criterion that, for monetary lotteries, is equivalent to second-order stochastic dominance. Using our complexity measure, we define Complexity-Sensitive Expected Utility (CSEU) preferences. For this class of preferences, we analyze how complexity aversion interacts with risk aversion and, in particular, prove that complexity aversion is a component of risk aversion. |
| 12:00 | Event Valence and Subjective Probability ABSTRACT. This paper introduces the signed subjective expected utility (SSEU) model in which an individual’s willingness-to-bet (WTB) on an event reflects not only the event's subjective likelihood but also its ``valence''---a measure of intrinsic attractiveness or aversiveness of the event. As a result, an event's WTB may be greater than $1$ or less than $0$. Our model directly extends the subjective expected utility (SEU) model by weakening the Monotonicity axiom. We show that SSEU accounts for behavioral phenomena such as hedging aversion, the conjunction fallacy, coexistence of insurance and gambling, the choice of dominated actions in strategy-proof mechanisms, and the home equity bias puzzle. Finally, we show how to extend SSEU to allow for a stake-dependent (and non-additive) WTB. This extension accommodates recent experimental evidence showing that subjects jointly violate monotonicity and independence. |
| 12:25 | Disappointment aversion and expectiles PRESENTER: Fabio Bellini ABSTRACT. The central result of the theory of choice under uncertainty is Von Neumann and Morgenstern's expected utility theorem, which states that an economic agent whose preference relation among discrete probability measures satisfies suitable rationality axioms is represented by an expected utility, i.e. by the expected value of a monetary utility function of outcomes. A remarkable extension of expected utility is Gul's (1991) theory of disappointment aversion, based on a slight weakening of the independence axiom of the vNM theory. Our first contribution is to point out the connection between the representing functional of Gul's preferences and the probabilistic notion of expectile, a one-parameter family of functionals introduced by Newey and Powell (1987) for asymmetric least squares regression. Indeed, it turns out that Gul's functional is an expectiled utility, depending on two parameters: a vNM utility function $u$ and a disappointment-aversion parameter $\beta$. Further, we recast Gul's theory in a Savage framework where the preference is defined over acts with general, possibly non-monetary outcomes, relying on the notion of subjective mixture of acts with general outcomes introduced by Ghirardato et al. (2003). We introduce a novel axiom of disappointment hedging, a stronger version of the ambiguity-hedging axiom of Ghirardato et al. (2003), and we show in our main result that a preference relation over Savage acts satisfies probabilistic sophistication, invariant biseparability, and disappointment hedging if and only if it is represented by an expectiled utility. |
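For readers unfamiliar with expectiles, a minimal numerical sketch (ours, not the speaker's code): the $\tau$-expectile of $X$ is the unique $e$ solving $\tau\,\mathbb{E}[(X-e)_+] = (1-\tau)\,\mathbb{E}[(e-X)_+]$, and since the imbalance between the two sides is decreasing in $e$, it can be found by bisection:

```python
import numpy as np

def expectile(sample, tau, tol=1e-10):
    """tau-expectile of an empirical distribution: the unique e solving
    tau * E[(X - e)+] = (1 - tau) * E[(e - X)+] (Newey & Powell, 1987)."""
    x = np.asarray(sample, dtype=float)
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        gain = np.mean(np.maximum(x - e, 0.0))   # expected upside beyond e
        loss = np.mean(np.maximum(e - x, 0.0))   # expected downside below e
        if tau * gain > (1 - tau) * loss:
            lo = e        # e too small: the weighted upside still dominates
        else:
            hi = e
    return 0.5 * (lo + hi)

print(expectile([0.0, 10.0], 0.5))  # 5.0: the 0.5-expectile is the mean
```

Expectiles are increasing in $\tau$, which is the channel through which the disappointment-aversion parameter enters the representing functional.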
| 11:10 | Can the introduction of resetting expedite the first passage of a diffusion process? ABSTRACT. We show that the introduction of resetting is able to expedite the first passage of a diffusion process. To this end, we address the problem of minimizing the expected first-passage time (FPT) and the expected first-exit time (FET) of a one-dimensional diffusion process with Poissonian resetting, with respect to the resetting rate $r$. We first derive a general analytical relationship that expresses the Laplace transform (LT) and the expected value of the FPT (and FET) for the process with resetting in terms of the LT of the FPT (and FET) of the underlying diffusion without resetting. This framework is then applied to determine the optimal resetting rate $r$ that minimizes the expected FPT (and FET). We provide explicit results for drifted Brownian motion and the Ornstein-Uhlenbeck (OU) process. For Brownian motion, we extend the existing literature by considering the case where the initial position $x$ differs from the resetting position $x_R$, providing a comprehensive parametric analysis. For the OU process, we provide new insights into the minimization of the expected FPT, a case that has remained largely unexplored. Our results demonstrate how a strategic choice of the resetting rate can effectively regularize and accelerate search processes across one or two boundaries. |
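The renewal structure behind results of this kind can be illustrated numerically (a generic sketch under the standard simplifying assumption that the initial and resetting positions coincide; the talk treats the more general case $x \neq x_R$): the mean FPT with resetting rate $r$ is $(1-\tilde f(r))/(r\tilde f(r))$, where $\tilde f$ is the LT of the reset-free FPT density, and the optimal $r$ can be found by direct minimization:

```python
import math

def mean_fpt_with_resetting(lt_fpt, r):
    """Mean first-passage time under Poissonian resetting at rate r,
    assuming start = reset position: E[T_r] = (1 - f~(r)) / (r * f~(r)),
    where f~ is the Laplace transform of the reset-free FPT density."""
    f = lt_fpt(r)
    return (1.0 - f) / (r * f)

# Drift-free Brownian motion, diffusivity D, absorbing target at 0, start x0:
# f~(r) = exp(-x0 * sqrt(r/D)); without resetting the mean FPT is infinite.
D, x0 = 1.0, 1.0
lt_bm = lambda r: math.exp(-x0 * math.sqrt(r / D))

# crude grid search for the optimal resetting rate
rates = [0.01 * k for k in range(1, 2000)]
r_star = min(rates, key=lambda r: mean_fpt_with_resetting(lt_bm, r))
# the known optimum is r* = z^2 * D / x0^2 with z ~ 1.5936 solving z/2 = 1 - exp(-z)
print(r_star)
```

The same two-line routine applies to any process whose reset-free FPT Laplace transform is available, which is the sense in which the framework of the talk is general.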
| 11:35 | Master Equations for Continuous-Time Random Walks with Stochastic Resetting PRESENTER: Gianni Pagnini ABSTRACT. We study a general continuous-time random walk (CTRW), including non-Markovian cases and Lévy flights, under complete stochastic resetting to the initial position with an arbitrary law, which can be power-law as well as Poissonian. We provide three linked results. First, we show that the random walk under stochastic resetting is a CTRW with the same jump-size distribution as the original non-reset CTRW but a different counting process. Second, we derive the condition for a CTRW with stochastic resetting to be a meaningful displacement process at large elapsed times, i.e., the probability of jumping to any site is higher than the probability of being reset to the initial position; we call this condition the zero-law for stochastic resetting. This law joins the other two laws for reset random walks, which concern the existence and non-existence of a non-equilibrium stationary state. Finally, we derive master equations for CTRWs when the resetting law is a completely monotone function. The talk is based on the recent paper [1]. [1] Colantoni F., Pagnini G.: Master equations for continuous-time random walks with stochastic resetting. Proc. R. Soc. A 481, 20250641 (2025) |
| 12:00 | On some drift-based transformations of multidimensional diffusion processes and their applications PRESENTER: Verdiana Mustaro ABSTRACT. We investigate a class of drift-based transformations for multidimensional diffusion processes. The approach aims to construct a transformed diffusion whose transition probability density function (p.d.f.) admits a product-form representation with respect to the p.d.f. of the original process. In particular, the ratio between the transformed and original transition densities reduces to a simple expression involving a weight function w. The framework is formulated in terms of stochastic differential equations, from which the weight function w is obtained. Moreover, we establish general conditions under which the transformed p.d.f. remains analytically tractable in the multidimensional setting. Specific choices of the weight function yield mixture representations of the transformed density, revealing structural properties such as bimodality and modified stochastic ordering. The analysis also shows how the product-form relation persists under Poissonian resetting mechanisms, leading in certain cases to explicit stationary distributions and offering insight into diffusions evolving in potential fields. Two fundamental case studies are examined in detail, based on transformations of the Wiener and Ornstein--Uhlenbeck processes. For these models, explicit expressions of the weight function, potential structure, and transition densities are derived. Special attention is devoted to the two-dimensional setting, for which the conditions and behaviors of the transformed processes are analyzed in depth. \par Beyond its theoretical relevance, the representation suggests practical applications in simulation. In particular, it naturally supports rejection sampling schemes, where the original transition density serves as a proposal distribution.
Under suitable boundedness conditions, the acceptance probability can be expressed directly in terms of the weight function, resulting in an efficient and implementable algorithm. The results highlight the flexibility of drift-based transformations as a tool for constructing analytically tractable diffusions. While the present work focuses on prototypical Gaussian models, the methodology suggests several possible extensions, including more general diffusion classes, alternative resetting mechanisms, and further analytical and computational developments. This contribution is based on [1]. |
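The rejection-sampling scheme described in the abstract can be made concrete as follows (a generic illustration with an assumed Gaussian toy model, not the authors' processes): if the transformed density is proportional to $w(x)\,p(x)$ and $w$ is bounded, one draws from the original density $p$ and accepts with probability $w(x)/\sup w$:

```python
import random, math

def sample_transformed(sample_original, w, w_max, rng):
    """Rejection sampler for a transformed density proportional to
    w(x) * p(x), using the original density p as proposal.  Assumes the
    weight function w is bounded by w_max, so that w(x) / w_max is a
    valid acceptance probability."""
    while True:
        x = sample_original()
        if rng.random() < w(x) / w_max:
            return x

# toy illustration (ours, not the speakers' model): the original transition
# is standard normal; the weight w(x) = exp(-x^2/2), bounded by 1, tilts it
# toward the origin, giving a N(0, 1/2) target up to normalization
rng = random.Random(1)
w = lambda x: math.exp(-x * x / 2.0)
draws = [sample_transformed(lambda: rng.gauss(0.0, 1.0), w, 1.0, rng)
         for _ in range(20000)]
var = sum(d * d for d in draws) / len(draws)
print(var)   # empirical variance of the accepted draws, close to 0.5
```

The acceptance probability depends only on the weight function, mirroring the boundedness condition stated in the abstract.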
| 12:25 | Modeling Monkeypox transmission through stochastic dynamics with self-excitation PRESENTER: Barbara Martinucci ABSTRACT. The transmission of Mpox, a zoonotic Orthopoxvirus with rodents as primary reservoirs, exhibits marked clustering during mass gatherings and superspreader events, a feature overlooked by existing models \cite{Rahman2025}. We introduce a stochastic compartmental model incorporating Hawkes processes \cite{Hoks} to capture these self-exciting dynamics in human populations, complemented by Brownian noise for environmental fluctuations in both human and rodent compartments \cite{DiNunno}. \par We prove global existence, uniqueness, and positivity of solutions. Furthermore, we derive the basic reproduction number and establish explicit persistence-in-the-mean conditions for both infected rodents and humans. Numerical simulations are provided to illustrate the impact of self-exciting jumps on epidemic trajectories, highlighting how Hawkes dynamics significantly enhance the predictive capacity of Mpox modeling compared to classical stochastic approaches. Our results suggest that incorporating temporal dependence in jump processes is essential for evaluating the effectiveness of public health interventions, such as quarantine and public awareness campaigns, in the face of clustered transmission patterns. |
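As a hedged illustration of the self-exciting ingredient (a textbook Hawkes simulation via Ogata's thinning method, with made-up parameters, not the epidemic model of the talk), each event temporarily raises the intensity, producing the clustered transmission patterns the abstract refers to:

```python
import random, math

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Ogata's thinning algorithm for a Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)) on [0, horizon].
    The intensity decays between events, so its value at the current time
    is a valid upper bound for the thinning step."""
    t, events = 0.0, []
    while True:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)          # candidate event time
        if t > horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:    # accept with prob lam_t/lam_bar
            events.append(t)

rng = random.Random(7)
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=200.0, rng=rng)
# subcritical case alpha/beta < 1: stationary mean rate mu / (1 - alpha/beta) = 1.5
print(len(ev))
```

The branching ratio $\alpha/\beta$ plays a role analogous to a reproduction number: when it approaches one, event clusters become long, which is precisely why temporal dependence matters for intervention assessment.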
| 11:10 | Large-sample asymptotics of coalescent importance sampling algorithms PRESENTER: Jere Koskela ABSTRACT. The coalescent is a foundational model of latent genealogical trees under neutral evolution, but suffers from intractable sampling probabilities. Methods for approximating these sampling probabilities either introduce bias or fail to scale to large sample sizes. We identify a class of functionals of the coalescent which describe the variance of estimators from classical importance sampling algorithms, and which have tractable infinite-sample limits. These functionals provide the first mathematical descriptions of the performance of some seminal coalescent inference methods, and reveal that the behaviour of coalescent importance sampling differs markedly from that of (sequential) importance samplers in more standard settings, with or without resampling. |
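For context, the variance functionals in question are of the generic importance-sampling form $\mathbb{E}_q[(h\,p/q)^2]$; a minimal sketch in a standard, non-coalescent setting (an assumed Gaussian toy target, purely for orientation):

```python
import random, math

def is_estimate(h, log_p, log_q, sample_q, n):
    """Generic importance sampling: estimate E_p[h(X)] from draws of q with
    weights p/q.  The estimator's quality is governed by the second-moment
    functional E_q[(h * p/q)^2], the kind of quantity studied in the talk
    (this is a textbook sketch, not the coalescent-specific algorithms)."""
    total = total_sq = 0.0
    for _ in range(n):
        x = sample_q()
        v = h(x) * math.exp(log_p(x) - log_q(x))
        total += v
        total_sq += v * v
    mean = total / n
    return mean, total_sq / n - mean * mean   # empirical mean and variance

# toy: P(X > 3) for a standard normal, with the proposal shifted into the tail
rng = random.Random(3)
log_norm = lambda x, m: -0.5 * (x - m) ** 2 - 0.5 * math.log(2 * math.pi)
est, var = is_estimate(h=lambda x: 1.0 if x > 3.0 else 0.0,
                       log_p=lambda x: log_norm(x, 0.0),
                       log_q=lambda x: log_norm(x, 3.0),
                       sample_q=lambda: rng.gauss(3.0, 1.0),
                       n=50000)
print(est)  # close to the true tail probability 1 - Phi(3) ~ 0.00135
```

In the coalescent setting the proposal is a backward-in-time genealogy sampler, and it is the large-sample limit of the analogous second-moment functional that the talk characterizes.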
| 11:35 | Gamma duality and a tractable transition density for the Wright-Fisher diffusion with selection PRESENTER: Jaromir Sant ABSTRACT. The transition function of the Wright--Fisher diffusion with selection is central to understanding non-neutral evolution but, unlike in the neutral case, is not available in a form that is straightforward to evaluate: duality-based approaches typically lead to dual processes with intractable rates, while spectral methods rely on truncation whose computational burden grows rapidly with the strength of selection and model complexity. We develop a \emph{gamma duality} framework for a multi-allelic Wright--Fisher diffusion with parent-independent mutation and genic selection, based on an exponential augmentation of polynomial duality. We show that the resulting birth-and-death dual process has tractable infinitesimal rates, identify its stationary distribution, and describe its small-time behavior. This dual yields an explicit representation of the transition function as a mixture of standard Dirichlet components, with mixing weights characterized by the dual started from an entrance boundary. The representation supports computation for arbitrary numbers of alleles and selection coefficients, including regimes where existing approaches are unavailable or impractical. We illustrate that our algorithms deliver substantially improved runtimes over specialized methods, when these do apply. |
| 12:00 | Multi-type logistic branching processes with selection: frequency process and genealogy for large carrying capacities ABSTRACT. We present a model for growth in a multi-species population. We consider two types evolving as a logistic branching process with mutation, where one of the types has a selective advantage. We first study the frequency of the disadvantageous type and show that, once the population approaches the carrying capacity, its evolution converges to a Gillespie-Wright-Fisher diffusion process. We then study the dynamics backward in time: we fix a time horizon at which the population is at carrying capacity and we study the ancestral relations of a sample of individuals. We prove that, provided that the advantageous and disadvantageous branching measures are ordered, this ancestral line process converges to the moment dual of the limiting diffusion. This talk is based on joint work with Julian Kern. |
| 12:25 | Dice processes, moment duality, and the propagation of exchangeability ABSTRACT. We introduce the dice process, a probabilistic model describing the evolution of a collection of particles moving on a graph according to random local rules. At each time step, all particles occupying the same site use a common, randomly chosen “dice” to determine their next move. This construction gives rise to a rich class of (partially) exchangeable Markov chains. The first result of the talk establishes that every partially exchangeable collection of Markov chains on a finite state space can be represented as a dice process. As an application, we obtain a natural characterization of multitype Λ-coalescents without restrictions on the migration mechanism (based on joint work with Noemi Kurt, Imanol Nuñes, and José Luis Pérez). We will also briefly discuss a related detour involving the evolutionary rate of plasmid-bearing bacteria. This part of the talk is based on recent experimental work by Paula Ramiro-Martínez, Ignacio de Quinto, Laura Jaraba-Soto, Val F. Lanza, Cristina Herencias-Rodríguez, Rafael Peña-Miller, and Jerónimo Rodríguez-Beltrán. Finally, we discuss a way to construct functions of sequences of exchangeable random variables that preserve exchangeability. Surprisingly, this leads to a transparent connection between moment duality and de Finetti’s theorem. This last part is based on joint work with Arno Siri-Jégousse and Ariel Offenstadt. |