IMPMS 2026: THE 5TH ITALIAN MEETING ON PROBABILITY AND MATHEMATICAL STATISTICS
PROGRAM FOR TUESDAY, JUNE 9TH

11:10-12:50 Session 5A: CS104: SPDEs for physical models
11:10
Strong Feller property and irreducibility for stochastic PDEs with degenerate multiplicative noise
PRESENTER: Luca Scarpa

ABSTRACT. We establish the strong Feller property and irreducibility for the transition semigroup associated with a class of nonlinear stochastic partial differential equations with multiplicative degenerate noise. As a by-product, we prove uniqueness of the invariant measure under very mild assumptions. The drift of the equation diverges exactly where the noise coefficient vanishes, resulting in a competition between the dissipative effects and the degeneracy of the noise. The main idea is to introduce a mathematical method to measure the accumulation of the solution towards the potential barriers, allowing us to give rigorous meaning to the inverse of the noise operator even in the degenerate case. If the singularity of the drift and the degeneracy of the noise are suitably balanced, the dynamics are shown to stabilise for large times. From the mathematical point of view, the results provide a first generalisation of the classical work by Peszat & Zabczyk [1] to the case of degenerate multiplicative diffusions. From the application perspective, the models cover interesting scenarios in physics, in the context of the evolution of relative concentrations of mixtures under the influence of thermodynamically relevant potentials of Flory–Huggins type.

[1] Peszat, S., Zabczyk, J.: Strong Feller property and irreducibility for diffusions on Hilbert spaces. The Annals of Probability, 157–172 (1995).

11:35
Anomalous Regularization and Dissipation for 2D Euler Equations with Rough Kraichnan Noise
PRESENTER: Eliseo Luongo

ABSTRACT. In the 1960s, Robert Kraichnan introduced an idealized model for passive scalar turbulence, in which the scalar field is advected by a Gaussian velocity that is delta-correlated in time and Hölder continuous in space. Despite its simplicity, the corresponding linear stochastic PDE captures key features of turbulent flows, including anomalous dissipation. Renewed interest in this model followed the work of Coghi and Maurelli, which showed that the same transport-type noise restores well-posedness in regimes where the deterministic 2D Euler equations admit non-unique weak solutions. In this talk, we further develop this line of research by investigating additional properties of the solutions constructed by Coghi and Maurelli. In particular, we present new results on anomalous fractional Sobolev regularity and anomalous dissipation of the mean enstrophy for solutions to the 2D Euler equations with rough Kraichnan noise. Time permitting, we will also discuss implications for the well-posedness theory of more singular nonlinear advection models, such as the Surface Quasi-Geostrophic and Incompressible Porous Media equations. This talk is based on ongoing joint work with L. Galeati and U. Pappalettera.

12:00
Stochastic diffuse interface models driven by conservative noise
PRESENTER: Andrea Di Primio

ABSTRACT. In this talk, we deal with a class of stochastic diffuse interface models driven by conservative noise. More precisely, we introduce the Cahn–Hilliard and the conserved Allen–Cahn equations with logarithmic-type potential and conservative noise in a periodic domain. These features ensure, on the one hand, that the order parameter takes its values in the physical range and, on the other, that despite the stochastic nature of the problems the total mass is conserved almost surely in time. Existence and uniqueness of probabilistically-strong solutions are discussed, highlighting the key technical points arising from the structure of the noise. Further directions of research will also be presented.

12:25
Ergodicity of a Stochastic Energy Balance Model for Global Temperature
PRESENTER: Giulia Carigi

ABSTRACT. A simple yet extremely valuable approach to the study of the climate system comes from the use of Energy Balance Models (EBMs). Such models describe the key features of the zonally averaged temperature on the Earth’s surface. The classical EBM can be improved by increasing the vertical resolution. This talk presents a two-layer energy balance model that allows for vertical exchanges between a surface layer and the atmosphere. Considering random perturbations of the model allows us to better study its long-time average behaviour. Thanks to the weak Harris theorem, we establish exponential ergodicity. This is the first step towards studying the model’s dependence on different forcing scenarios via response theory.

11:10-12:50 Session 5B: CS123: Asymptotic properties of Gaussian fields
11:10
Fractional Cointegration of Geometric Functionals
PRESENTER: Anna Vidotto

ABSTRACT. In this talk, we show that geometric functionals (e.g., excursion area, boundary length) evaluated on excursion sets of sphere-cross-time long memory random fields can exhibit fractional cointegration, meaning that some of their linear combinations have shorter memory than the original vector. These results prove the existence of long-run equilibrium relationships between functionals evaluated at different threshold values; as a statistical application, we discuss a frequency-domain estimator for the Adler-Taylor metric factor, i.e., the variance of the field’s gradient. Our results are also illustrated by Monte Carlo simulations.

11:35
Limit theorems for functionals of stationary Gaussian fields

ABSTRACT. In this talk, we explore central and non-central limit theorems for functionals of stationary Gaussian random fields, together with their regularity properties, through the lens of their decomposition into Wiener chaoses. We will emphasize three main settings: the case in which the functional is "concentrated on a single chaos"; the Breuer–Major setting, where all chaotic components contribute equally to the limit; and the challenging scenario in which the dominant contribution to the variance arises from the tail of the chaotic expansion. Throughout the presentation, we will illustrate these phenomena with key examples and discuss some open questions.

This talk is mainly based on joint works with L. Maini, N. Turchi and G. Zheng.

12:00
Statistics on Yau's conjecture
PRESENTER: Michele Stecconi

ABSTRACT. Yau’s conjecture predicts that the nodal volume of Laplace–Beltrami eigenfunctions on a compact Riemannian manifold grows proportionally to the frequency. After more than four decades of progress, this fundamental problem is now close to resolution. In this talk, I will discuss a complementary probabilistic viewpoint: rather than focusing only on deterministic bounds, we seek to understand the typical behavior of nodal sets and, in particular, their fluctuations. Random wave models provide a natural framework for this question, reflecting the rich stochastic structure that high-frequency eigenfunctions are expected to exhibit, as suggested by Berry’s conjecture. I will survey recent developments concerning the variance problem in Yau’s conjecture, relying on new chaotic expansion and covariance asymptotics.

12:25
Berry-Heisenberg random waves
PRESENTER: Marco Carfagnini

ABSTRACT. In the 1970s, Berry argued that in the high-energy limit wave functions locally look like random superpositions of independent plane waves, all having the same wavenumber. He introduced a Gaussian random field whose sample paths are generalized Laplace eigenfunctions. The aim of this talk is to present a similar model for the sub-Laplacian on the Heisenberg group, which is the analogue of the Euclidean space in sub-Riemannian geometry. It combines ideas from PDE, representation theory, and stationary random fields.

11:10-12:50 Session 5C: CS135: Stochastic Systems, Mean-Field Models, and Control
11:10
Mean-field control of heterogeneous systems

ABSTRACT. We study the optimal control of mean-field systems with heterogeneous and asymmetric interactions. This leads to considering a family of controlled Brownian diffusion processes with dynamics depending on the whole collection of marginal probability laws. We prove the well-posedness of such systems and define the control problem together with its related value function. Leveraging tools tailored for this framework, such as derivatives along flows of measures and the associated Itô calculus, we establish that the value function for this control problem satisfies a Bellman dynamic programming equation in an L2-set of Wasserstein space-valued functions. To illustrate the applicability of our approach, we present a linear-quadratic graphon model with analytical solutions, and apply it to a systemic risk example involving heterogeneous banks.

11:35
PDEs driven by Dirichlet-Ferguson Laplacian in Wasserstein-Sobolev spaces
PRESENTER: Mattia Martini

ABSTRACT. In this talk, we discuss linear and nonlinear PDEs defined on the space of probability measures over the flat torus, equipped with the Dirichlet-Ferguson measure. We first present an analytic framework based on the Wasserstein-Sobolev space associated with the Dirichlet form induced by the infinite-dimensional Laplacian acting on functions of measures. Within this setting, we establish existence and uniqueness results for transport-diffusion and Hamilton-Jacobi equations in the Wasserstein space. Our analysis connects the PDE approach with a corresponding interacting particle system, providing a probabilistic (Kolmogorov-type) representation of strong solutions. Finally, we extend the theory to semilinear equations and mean-field optimal control problems, together with consistent finite-dimensional approximations.

12:00
Mean-Field Games in Hilbert Spaces: A Viscosity Approach
PRESENTER: Lukas Wessels

ABSTRACT. We investigate a Mean-Field Game (MFG) posed in an infinite-dimensional Hilbert space and driven by degenerate noise. The associated MFG system consists of a Hamilton–Jacobi–Bellman (HJB) equation coupled with a nonlinear Fokker–Planck (FP) equation, both governed by a degenerate Kolmogorov operator.

The degeneracy of the noise introduces significant analytical challenges. In particular, the HJB equation is treated in the viscosity sense, while the FP equation is interpreted in a suitable weak formulation. A major difficulty stems from the degeneracy of the Kolmogorov operator, which makes the uniqueness of solutions to the FP equation particularly delicate.

Under appropriate structural assumptions, we establish well-posedness of the MFG system. As an application, we consider Mean-Field Games arising from stochastic delay differential equations, highlighting how delay effects naturally lead to infinite-dimensional and degenerate dynamics.

This talk is based on joint work with Andrzej Święch.

12:25
Environmental asset, Optimal control, N-players game, Social Planner, Pigouvian tax, Mean field game

ABSTRACT. The aim of our work is to compare optimal consumption paths under various perceptions of the environment: either only local environmental quality is considered, or both local and global environmental quality are taken into account. We first study a benchmark model in which the representative agent in each locality takes only local environmental quality into account in her welfare function. In this case, the optimal trajectory follows a balanced growth path: environmental quality grows at a constant rate over time. We then consider the consequences for global environmental quality in this benchmark model, and study whether taking global environmental quality into account may improve local environmental quality. To this aim, we study an N-player game in which both the local and the global amenity value of environmental quality are considered. We derive some remarks on the behaviour as the number of agents tends to infinity, and show that local environmental quality does not necessarily improve under this consideration.

11:10-12:50 Session 5D: CS140: Community structure of Complex Networks
11:10
Mixing trichotomy for random walks on directed stochastic block models

ABSTRACT. In this talk, we will analyze the convergence to equilibrium of a simple random walk on a directed version of the classical Stochastic Block Model with $m$ communities. We show that the mixing behavior of the walk exhibits a trichotomy governed by the parameter $\alpha$, which controls the strength of inter-community interactions. In the subcritical regime (large $\alpha$) the dynamics display cutoff at the entropic timescale $T^* \sim \log(n)/\log\log(n)$. In the supercritical regime (small $\alpha$) the mixing is driven by rare inter-community transitions, leading to a metastable behavior. After an abrupt jump at timescale $T^*$, the distance to equilibrium decays smoothly at an exponential rate on the timescale $1/\alpha$. At criticality (when $1/\alpha\sim T^*$), an intermediate behavior emerges, characterized by an interplay between entropic mixing and inter-community transitions. Joint work with G. Passuello and M. Quattropani.

11:35
Homophily within and across groups: a maximum-entropy framework for analyzing different social scales

ABSTRACT. Homophily is the tendency of people to interact more with others who are similar to them (for instance, by age, gender, or opinion). It is often summarized by a single assortativity number for the entire network, but this can hide important structural features. From a probabilistic viewpoint, this is a loss of information: two networks may share the same global homophily yet have very different local organization and large-scale behavior.

A key observation is that social interactions naturally occur at different “scales”. Some interactions are one-to-one, while others occur within small groups (for example, a work team, a classroom, or a recurring social circle). These scales need not display the same mixing pattern: a society may have many cross-group acquaintances but mostly same-group close circles, or vice versa. Standard network statistics typically blend these effects together.

In this talk, we present a modeling framework that makes this multi-scale structure explicit while remaining analytically tractable. The idea is to represent a sparse network as the superposition of group interactions of different sizes. For each group size, we estimate how strongly interactions tend to occur within versus across groups. The model is built using a maximum-entropy principle: among all random networks consistent with basic constraints (such as the overall group proportions and the observed amount of same-group interaction at each scale), we select the least biased distribution. This provides a clean probabilistic baseline, a transparent interpretation of parameters, and a practical route to inference from data.

Empirically, fitting the model to social networks reveals that homophily can be strongly scale-dependent. Networks that appear similar when judged by a single assortativity score can differ markedly once we separate direct contacts from small-group structure. This yields an informative “homophily profile” across scales and helps explain which parts of the network architecture are responsible for perceived segregation or integration.

We also discuss why these distinctions matter for probabilistic questions about connectivity and spreading. In many applications, transmission (of information, behaviors, or infections) occurs through a combination of reinforcement inside groups and occasional bridges between groups. Changing where homophily sits, mostly in direct links versus mostly inside groups, can change the onset and size of large connected components and alter the effectiveness of interventions. Overall, the framework offers a principled way to connect interpretable social mechanisms to the behavior of random graph models at scale.

12:00
Localized geometry detection in scale-free random graphs
PRESENTER: Gianmarco Bet

ABSTRACT. We consider the problem of detecting whether a power-law inhomogeneous random graph contains a geometric community, and we frame this as a hypothesis-testing problem. More precisely, we assume that we are given a sample from an unknown distribution on the space of graphs on $n$ vertices. Under the null hypothesis, the sample originates from the inhomogeneous random graph with a heavy-tailed degree sequence. Under the alternative hypothesis, $k=o(n)$ vertices are given spatial locations and connect following the geometric inhomogeneous random graph connection rule. The remaining $n-k$ vertices follow the inhomogeneous random graph connection rule. We propose a simple and efficient test based on counting normalized triangles to differentiate between the two hypotheses. We prove that our test correctly detects the presence of the community with high probability as $n\to\infty$, and identifies large-degree vertices of the community with high probability.

12:25
node2vec random walks: Regular graphs and recurrence
PRESENTER: Lars Schroeder

ABSTRACT. node2vec random walks are tuneable random walks underlying the popular node2vec algorithm for network embedding. Their transition probabilities depend on the previously visited node and on the triangles that contain the current and the previous node. In the node2vec algorithm, these walks are used to sample a neighborhood for each node of the network, and by comparing these neighborhoods an embedding of the network into a Euclidean space is computed. Since the parameters of the walks can be tuned to create different types of neighborhoods, this approach is very flexible and advantageous over using simple random walks.

Even though the algorithm is widely used in practice, the mathematical properties of node2vec random walks have hardly been investigated, and even basic questions, such as how the stationary distribution depends on the walk parameters and whether the random walk is recurrent, are nearly unexplored. In this talk, we study the behavior of node2vec random walks on regular graphs. By passing to a higher-order state space, the space of directed wedges, we prove a simple expression for the stationary distribution on this space, determined by the transition type of the wedge. We also formalize a pullback mechanism to retrieve the stationary distribution on the original state space. Further, we show that on infinite regular graphs, node2vec random walks are recurrent if and only if the simple random walk is recurrent.
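One step of such a walk can be sketched as follows, using the standard node2vec weighting (a minimal illustration with our own names; the talk's analysis works on the wedge space, not on this simulation): weight 1/p for returning to the previous node, 1 for moving to a common neighbor of the previous and current node (closing a triangle), and 1/q otherwise.

```python
import random

def node2vec_step(adj, prev, curr, p, q):
    """Sample the next node of a node2vec walk (a second-order Markov chain).

    adj maps each node to the set of its neighbors; prev and curr are the
    last two visited nodes; p and q are the return and in-out parameters.
    """
    neighbors = sorted(adj[curr])
    weights = []
    for x in neighbors:
        if x == prev:
            w = 1.0 / p                 # step back to the previous node
        elif x in adj[prev]:
            w = 1.0                     # x, curr, prev form a triangle
        else:
            w = 1.0 / q                 # move further away from prev
        weights.append(w)
    # sample proportionally to the unnormalized weights
    r = random.random() * sum(weights)
    for x, w in zip(neighbors, weights):
        r -= w
        if r <= 0:
            return x
    return neighbors[-1]
```

Setting p = q = 1 recovers the simple random walk, which is the baseline the recurrence result in the talk compares against.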

11:10-12:50 Session 5E: CS145: Frontiers in Infinite-Dimensional Stochastic Control: Theory and Applications
11:10
A Novel Approach to Peng's Maximum Principle for McKean-Vlasov Stochastic Differential Equations

ABSTRACT. We consider the control of a McKean-Vlasov stochastic differential equation (SDE) and present a novel approach to the proof of Peng's maximum principle. The main step is the introduction of a third adjoint equation, a conditional McKean-Vlasov backward SDE: Peng's maximum principle is derived from a second-order Taylor expansion of the cost functional, which in the McKean-Vlasov case, due to the structure of the Lions derivative, introduces quadratic terms that contain independent copies of the variational processes. To accommodate the dualization of these terms, we introduce this third adjoint equation. We only treat SDEs in $\mathbb{R}^d$ but the dependence on the distribution already makes these equations inherently infinite dimensional. Our approach will also be useful in further extensions to the common noise setting in mean-field control and the control of Hilbert space valued McKean-Vlasov SDEs.

11:35
Optimal Control of Infinite-Dimensional Differential Systems with Randomness and Path-Dependence and Stochastic Path-Dependent Hamilton–Jacobi Equations
PRESENTER: Yang Yang

ABSTRACT. This talk is devoted to the stochastic optimal control problem of infinite-dimensional differential systems allowing for both path-dependence and measurable randomness. As opposed to the deterministic path-dependent cases studied by Bayraktar and Keller [J. Funct. Anal. 275 (2018) 2096-2161], the value function turns out to be a random field on the path space and it is characterized by a stochastic path-dependent Hamilton-Jacobi (SPHJ) equation. A notion of viscosity solution is proposed and the value function is proved to be the unique viscosity solution to the associated SPHJ equation.

12:00
Deep Hilbert Galerkin methods for PDEs on Hilbert spaces via derivative-informed operator learning with applications to stochastic optimal control of infinite-dimensional systems
PRESENTER: Jackson Hebner

ABSTRACT. Our previous research (joint with S. Cohen, F. de Feo, J. Sirignano) shows that Hilbert Neural Operators are able to approximate classical solutions of fully nonlinear second-order partial differential equations on Hilbert spaces, such as Hamilton-Jacobi-Bellman and backward Kolmogorov equations. Based on this result, we propose two actor-critic algorithms for solving Hilbert-valued HJB equations and two algorithms for solving Hilbert-valued backward Kolmogorov equations. We then apply these algorithms to the control of the stochastic heat equation, a stochastic delay equation, the stochastic Burgers equation, and a mean-field control problem. To the best of our knowledge, these algorithms are the first methods for solving PDEs directly on their whole Hilbert space domain.

12:25
Derivative-informed Hilbert neural operators solve PDEs on Hilbert spaces and infinite-dimensional optimal control problems
PRESENTER: Filippo de Feo

ABSTRACT. We consider infinite-dimensional partial differential equations (PDEs) on separable Hilbert spaces with unbounded operators. These challenging equations arise in most applied sciences, e.g., as Kolmogorov PDEs and Hamilton-Jacobi-Bellman PDEs related to deterministic, stochastic, and controlled evolution equations (including PDEs and SPDEs, path-dependent deterministic and stochastic DEs, partially observed stochastic systems, and mean field systems) and functional differential equations. While a theoretical framework for these PDEs is well established in the literature, the development of numerical methods is an open area of research. In this talk, we provide a new theoretical analysis that rigorously justifies the development of Deep Galerkin methods, which will be presented in a companion talk. We start by parameterizing the solution of these PDEs via a Hilbert Neural Operator (HNO). We prove that HNOs can accurately represent classical solutions of these PDEs by showing new Universal Approximation Theorems for Fréchet derivatives. Using these preliminary results, we show that HNOs approximately solve these PDEs. Finally, we consider optimal control problems of deterministic and stochastic evolution equations and we derive universal approximation results of optimal feedback controls in terms of our approximate solution HNO. Based on joint work with Samuel Cohen, Jackson Hebner, and Justin Sirignano.

11:10-12:50 Session 5F: CS152: Optimal stopping, stochastic control and stochastic games I
11:10
Optimal Annuitization under Partially Observable Mortality

ABSTRACT. This paper studies the optimal timing of annuitization when individual mortality is only partially observable. Annuities provide insurance against longevity risk by converting wealth into a lifelong income stream, but the decision to annuitize is typically irreversible and depends crucially on one's life expectancy. While insurers price annuities using objective mortality tables, individuals base their decisions on a subjective mortality force. We assume that the individual is uncertain about their mortality and instead relies on partial information about their health status.

Building on recent work on optimal annuitization ([3], [2] and [1]), we consider an individual who invests wealth in a financial fund modelled as a geometric Brownian motion and chooses when to irreversibly convert all wealth into a life annuity. The individual’s mortality force follows a two-state piecewise deterministic process, switching from a low to a high level at an unobservable random time that represents a serious and permanent health deterioration. The individual does not directly observe the change in mortality when it occurs. Instead, they receive noisy information about their health status over time. As a consequence, the individual must form and continuously update beliefs about whether the mortality force has already switched from the low to the high state. Mathematically, this translates into including as a state variable the posterior probability of the occurrence of the change in mortality.

The annuitization problem is formulated as an optimal stopping problem under partial information. The stopping region is shown to be connected and free of isolated boundary points, which ensures continuous differentiability of the value function. The optimal strategy is of threshold type: the state dynamics is two-dimensional, comprising a wealth process and the agent's posterior belief process about the occurrence of the health shock. Annuitization becomes optimal when wealth crosses a belief-dependent threshold, either from below or from above, depending on the model parameters. We then analyse the qualitative behaviour of the free boundary, studying in particular its monotonicity properties with respect to beliefs about deteriorating health. Our results bridge optimal annuitization and quickest detection theory, highlighting how health uncertainty and learning dynamics significantly shape retirement timing decisions.

[1] Buttarazzi, M., De Angelis, T., & Stabile, G. (2025). Optimal annuitization with stochastic mortality: Piecewise deterministic mortality force. arXiv preprint arXiv:2509.13091.
[2] De Angelis, T., & Stabile, G. (2019). On the free boundary of an annuity purchase. Finance and Stochastics, 23(1), 97-137.
[3] Hainaut, D., & Deelstra, G. (2014). Optimal timing for annuitization, based on jump diffusion fund and stochastic mortality. Journal of Economic Dynamics and Control, 44, 124-146.

11:35
An optimal transport foundation for a class of dynamically consistent risk measures
PRESENTER: Max Nendel

ABSTRACT. In this talk, we study a class of dynamically consistent risk measures that robustify a time-homogeneous Markovian reference model by allowing for distributional uncertainty in its transition laws. We start from one-step convex risk evaluations in which ambiguity is captured by penalized worst-case expectations over alternative transition laws. Imposing time consistency then yields a convex monotone semigroup on bounded continuous payoff functions, and this semigroup represents the associated dynamic risk measure. The semigroup is uniquely characterized by its risk generator. Under a lower bound on the family of penalties in terms of suitable optimal transport costs relative to the reference laws, we identify the generator on smooth test functions. For optimal transport bounds with linear small-time scaling, this produces a first-order, drift-type correction given by a convex Hamiltonian acting on the gradient. Under martingale-transport constraints and a different scaling, however, the leading correction is genuinely of second order and is described by a convex monotone functional acting on the Hessian. We illustrate both regimes for Wasserstein and martingale Wasserstein penalizations and derive explicit formulas via convex conjugates of the underlying transport costs.

12:00
Play longer when it matters: optimal match length in knock-out tournaments
PRESENTER: Yuqiong Wang

ABSTRACT. Later-stage matches in sports tournaments, especially semifinals and finals, are often treated as more important and therefore played longer. We ask whether this intuition can be justified from a sequential testing viewpoint. We study a knock-out tournament with $2^n$ players in which each match is modeled by a Brownian motion with an unobservable drift, representing the players' relative abilities. The tournament designer chooses how long each match should be played so that the strongest player wins the tournament with a prescribed probability.

We analyze two design regimes: $(i)$ deterministic designs, where all match lengths are fixed in advance, and $(ii)$ sequential designs, where match duration can adapt to the observed paths. Our main structural result shows that in both regimes, the optimal schedule makes the late-round matches longer than early-round matches, providing a formal statistical justification for common tournament practice.

We then quantify the efficiency gain from allowing sequential decisions, comparing the expected total observation time under optimal sequential designs to that of optimal deterministic schedules achieving the same success probability. We derive explicit bounds on the average reduction in sample size: sequential testing saves at least $36\%$ and at most $75\%$ on average. Moreover, the relative advantage of sequential methods grows as one requires higher precision.
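The effect of match length in this model can be illustrated with a minimal Monte Carlo sketch, under the simplifying assumption that the two drifts are known (in the talk the drift is unobservable; the names and parameter values below are ours):

```python
import math
import random

def match_winner(mu_a, mu_b, t, rng):
    """Winner of one match of length t.

    The observed score is a Brownian motion with drift mu_a - mu_b, so at
    time t it is Gaussian with mean (mu_a - mu_b) * t and variance t; the
    player ahead at time t wins the match.
    """
    diff = (mu_a - mu_b) * t + math.sqrt(t) * rng.gauss(0.0, 1.0)
    return "a" if diff > 0 else "b"

def win_frequency(mu_a, mu_b, t, n_matches=2000, seed=0):
    """Monte Carlo estimate of the probability that player a wins."""
    rng = random.Random(seed)
    wins = sum(match_winner(mu_a, mu_b, t, rng) == "a" for _ in range(n_matches))
    return wins / n_matches
```

Under these assumptions the win probability is Phi((mu_a - mu_b) * sqrt(t)), so lengthening a match directly raises the chance that the stronger player advances, which is the trade-off the optimal schedules in the talk balance round by round.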

11:10-12:50 Session 5G: CS160: Non-Homogeneous Random Graphs
11:10
The grass-bushes-trees process on a scale-free network
PRESENTER: John Fernley

ABSTRACT. The grass-bushes-trees process is a two-type contact process in which one type (the trees), with infection parameter λ1, can invade the other type (the bushes), with infection parameter λ2. We aim to determine which graph parameters lead to the possibility of coexistence versus the necessity of competitive displacement, i.e. metastability of both types or fast extinction of the bushes.

11:35
The spectrum of dense kernel-based random graphs
PRESENTER: Michele Salvi

ABSTRACT. We study a broad class of inhomogeneous spatial random graphs, including long-range and scale-free percolation and preferential attachment-like models. Vertices are placed on the discrete d-dimensional torus and are equipped with heavy tailed random weights. The probability of linking any pair of vertices decays in their distance but increases as a function of the weights. We focus on the adjacency matrix of such graphs in the dense regime and prove that, as the size of the torus goes to infinity, the empirical spectral distribution converges. The corresponding limiting measure is given by an operator-valued semicircle law that we show to be absolutely continuous and to have finite second moment, even when the weights have infinite variance. We also characterize its Stieltjes transform by a fixed point equation in an appropriate Banach space.

12:00
Finite vs Infinite-Mean Heavy-Tailed Fitness: Geometry and Connectivity in Inhomogeneous Random Graphs
PRESENTER: Elena Matteini

ABSTRACT. We consider a class of inhomogeneous random graphs $G_n(\alpha, \varepsilon)$ where $n$ vertices carry i.i.d. Pareto weights $(W_i)_{i\in[n]}$ with tail index $\alpha > 0$. Conditionally on the weights, edges are drawn independently with probability $p_{ij} = \min(\varepsilon W_i W_j, 1)$, where $\varepsilon = \varepsilon_n$ controls sparsity. The behaviour of the model is driven by the tail index $\alpha$, with a sharp structural change at the boundary $\alpha = 1$. The infinite-mean and finite-mean regimes lead to fundamentally different emerging landscapes. Building on recent work of L. Avena, D. Garlaschelli, R.S. Hazra and M. Lalli (Journal of Applied Probability 2025), we analyze the degree asymptotics across the full range $\alpha > 0$ and identify the relevant scalings of $\varepsilon_n$ in each regime for the convergence in distribution of the typical degree. We then characterize the connectivity threshold. In the infinite-mean case $\alpha \le 1$, connectivity is hub-driven and forces a collapse of the diameter to at most two. In the finite-mean regime $\alpha > 1$, connectivity emerges through a collective mechanism at a density scale distinct from that of ultra-small-world behaviour. This is joint ongoing work with Luisa Andreis, Luca Avena and Rajat Hazra.
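The model is fully specified by the abstract, so it admits a very short sampler. The sketch below is purely illustrative (function and parameter names are ours): Pareto weights are drawn by inverse transform, W = U^(-1/alpha), and each edge appears independently with probability min(eps * W_i * W_j, 1).

```python
import random

def sample_graph(n, alpha, eps, seed=0):
    """Sample one realization of the inhomogeneous random graph G_n(alpha, eps).

    Returns the weight vector and the edge list.  Weights are i.i.d.
    Pareto(alpha): W = U^(-1/alpha) with U uniform on (0, 1), so
    P(W > w) = w^(-alpha) for w >= 1.
    """
    rng = random.Random(seed)
    w = [rng.random() ** (-1.0 / alpha) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # conditional edge probability given the weights
            if rng.random() < min(eps * w[i] * w[j], 1.0):
                edges.append((i, j))
    return w, edges
```

Varying alpha across 1 and the scaling of eps = eps_n in such simulations is a quick way to visualize the hub-driven versus collective connectivity mechanisms contrasted in the abstract.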

12:25
Metastability of Glauber dynamics with inhomogeneous coupling disorder

ABSTRACT. I will first introduce a general class of mean-field-like spin systems with random couplings that comprises both the Ising model on inhomogeneous dense random graphs and the randomly diluted Hopfield model. I will then present quantitative estimates of metastability in large volumes at fixed temperatures when these systems evolve according to a Glauber dynamics, i.e. where spins flip with Metropolis rates at inverse temperature $\beta$. The main result identifies conditions ensuring that with high probability the system behaves like the corresponding system where the random couplings are replaced by their averages. More precisely, we prove that the metastability of the former system is implied with high probability by the metastability of the latter. Moreover, we consider relevant metastable hitting times of the two systems and find the asymptotic tail behaviour and the moments of their ratio. Based on a joint work in collaboration with Anton Bovier, Frank den Hollander, Saeda Marello and Martin Slowik.

11:10-12:50 Session 5H: CS191: Selected Topics in Probability and Statistics
11:10
LEARNING THEORY OF SHALLOW NEURAL NETWORKS THROUGH THE LENS OF RKBS
PRESENTER: Lorenzo Fiorito

ABSTRACT. We develop a functional framework for shallow neural networks based on reproducing kernel Banach spaces. This approach enables a nonparametric treatment of neural networks, in direct analogy with kernel methods. A representer theorem shows that finite networks suffice for empirical risk minimization. Estimation and approximation error bounds can then be derived in linear function spaces. As a byproduct, we obtain universality results and approximation bounds showing that neural networks can adapt to latent structure in the problem. Further, we derive complexity estimates based on the Rademacher complexities of RKBS balls, independent of network size.

11:35
Parameter estimation in a fractional neuronal model
PRESENTER: Luigia Caputo

ABSTRACT. In the framework of stochastic modelling of some biological dynamics, fractional calculus is a valid tool for inserting memory effects into widely applied Markov models (see, for instance, \cite{AbundoPirozzi2021}, \cite{LeonenkoPirozzi2025}). Here, we focus on a class of fractional stochastic neuronal models, including those investigated in \cite{Pirozzi2018}, \cite{PirFracModels} and \cite{PirMittag}.

In particular, we apply techniques of parameter estimation for a generalized neuronal model driven by a fractional dynamic and stochastic input. Specifically, the membrane potential $V=\{V_t\}_{t\ge 0}$ is modeled through the Caputo fractional differential equation \[ D^\alpha V_t = A V_t + b + \eta(t), \qquad \alpha \in (0,1), \] where $D^\alpha$ denotes the Caputo derivative of order $\alpha$, $A,b\in\mathbb{R}$, and the latent input process $\eta$ satisfies the Ornstein--Uhlenbeck-type dynamics \[ d\eta(t) = -\Theta \eta(t)\,dt + \sigma\, dG_t, \] with $\Theta,\sigma>0$ and a driving process $G$ with stationary increments. The framework is intrinsically multidimensional, although the estimation methodology is first developed in the univariate case.

The mild solution of the fractional equation is expressed in terms of the Mittag--Leffler functions as follows: \[ V_t = E_\alpha(t^\alpha A) V_0 + \int_0^t s^{\alpha-1} E_{\alpha,\alpha}(s^\alpha A)\bigl(b+\eta(t-s)\bigr)\,ds, \] which provides short- and long-time asymptotics. These asymptotic expansions constitute the basis of a constructive estimation strategy. First, in the case of $b=0,$ exploiting the behavior of $V_t$ as $ t\downarrow 0$ we derive estimators for the fractional order $\alpha$. Alternative difference-based estimators are proposed to mitigate the slow convergence of bias terms involving $\log t$. Once $\widehat{\alpha}$ is obtained, the same asymptotics and large-time expansions of $E_\alpha$ yield consistent estimators of $A$.

After recovering $(\widehat{\alpha},\widehat{A})$, the latent noise is reconstructed via \[ \widehat{\eta}(t)=D^{\widehat{\alpha}}V_t-\widehat{A}V_t, \] and classical methods for Vasicek-type processes are applied to estimate $(\Theta,\sigma)$. The propagation of estimation error from the fractional stage to the second-step inference is analyzed numerically.

A discretization scheme for the Caputo derivative of order $\alpha$ is implemented, leading to an iterative algorithm of computational complexity $O(n^2)$. Simulation studies demonstrate that accurate estimation of $\alpha$ requires observations on a sufficiently fine grid near zero, while reliable inference for $(\Theta,\sigma)$ necessitates long time series. The results confirm the feasibility of the proposed two-step procedure and highlight the interplay between fractional memory effects and stochastic input estimation in generalized neuronal models.
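The abstract does not specify the discretization scheme; a generic $O(n^2)$ possibility on a uniform grid is the Grünwald--Letnikov approximation of the fractional derivative (our own illustration; subtracting the initial value makes the Riemann--Liouville and Caputo derivatives coincide):

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Recursion for the Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def caputo_gl(V, alpha, h):
    """Order-alpha fractional derivative of the sampled path V on a uniform
    grid of step h.  Cost is O(n^2), matching the iterative scheme in the talk."""
    n = len(V) - 1
    w = gl_weights(alpha, n)
    U = np.asarray(V, dtype=float) - V[0]     # remove initial value (Caputo)
    D = np.empty(n + 1)
    for j in range(n + 1):
        D[j] = np.dot(w[: j + 1], U[j::-1]) / h ** alpha  # sum_k w_k U_{j-k}
    return D

# sanity check against the closed form D^alpha t = t^(1-alpha) / Gamma(2-alpha)
t = np.linspace(0.0, 1.0, 201)
D = caputo_gl(t, 0.5, t[1] - t[0])
```

Given estimates $(\widehat{\alpha},\widehat{A})$, the reconstruction step of the abstract then amounts to evaluating $\widehat{\eta}(t)=D^{\widehat{\alpha}}V_t-\widehat{A}V_t$ on the same grid.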

12:00
The Reverse Hypergeometric distribution for attribute concentration in small groups
PRESENTER: Andrea Simonetti

ABSTRACT. In the framework of urn models, we introduce a probability distribution designed to quantify the concentration of attributes among members of small groups. This new distribution addresses a specific occupancy problem, focusing on how particular marbles are allocated to urns. We fully characterize this distribution, referred to as the Reverse Hypergeometric distribution, and propose a statistical test based on it. The model enables testing for excess intra-group similarity against a null hypothesis of random co-occurrence of marbles with the same attribute in the urns. We compare it with established models, including the Multinomial and the Multivariate Hypergeometric distributions. We also provide an asymptotic approximation of the Reverse Hypergeometric distribution by gauging a Multinomial distribution and demonstrate how the model results from urn exchangeability. We illustrate its use through three real-world applications in the domains of network science, social science, and text analysis: investigating the presence of homophily in relationship networks, assessing the excess of same-sex children within households, and analyzing the concentration of sentiment-polarized sentences in the abstracts of scientific papers. Finally, we present a generalization of the model that accommodates groups of varying sizes, enhancing its versatility for different domains and data structures.
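The exact Reverse Hypergeometric distribution is the subject of the talk; as a crude stand-in, the null of random co-occurrence can always be benchmarked by Monte Carlo permutation of attribute labels over groups. The statistic and names below are ours and purely illustrative:

```python
import numpy as np

def within_group_pairs(labels, groups):
    """Number of pairs of attribute carriers falling in the same group (urn)."""
    total = 0
    for g in np.unique(groups):
        k = int(labels[groups == g].sum())   # attribute carriers in group g
        total += k * (k - 1) // 2
    return total

def mc_pvalue(labels, groups, n_sim=2000, rng=None):
    """Monte Carlo p-value for excess intra-group similarity under random
    allocation of the attribute carriers to the urns."""
    rng = rng or np.random.default_rng()
    obs = within_group_pairs(labels, groups)
    hits = sum(within_group_pairs(rng.permutation(labels), groups) >= obs
               for _ in range(n_sim))
    return (1 + hits) / (n_sim + 1)

# five urns of four marbles; all four marked marbles land in the same urn
groups = np.repeat(np.arange(5), 4)
labels = np.array([1, 1, 1, 1] + [0] * 16)
```

For this extreme configuration the p-value is small, flagging excess intra-group similarity; the proposed test replaces the permutation step with the exact distribution.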

12:25
Inference for a concave distribution function under measurement error

ABSTRACT. We study nonparametric inference for a concave distribution function under the measurement error model, where the non-negative variable of interest is perturbed by additive independent noise. We propose a shape-constrained estimator defined as the least concave majorant on the non-negative real half-line of the deconvolution estimator of the cumulative distribution function, and we establish its uniform consistency as well as its square-root convergence in distribution.

To assess the concavity assumption, we introduce a nonparametric test of the null hypothesis that the distribution function is concave on the non-negative real half-line against the alternative that it is not. The test is calibrated via the bootstrap. We show that the test statistic and its bootstrap analogue have the same limiting distribution under the null, while the rejection probability tends to one under the alternative.

The proofs mainly rely on a bootstrap Donsker-type result for the deconvolution estimator of the cumulative distribution function, combined with the functional delta method. Simulation studies illustrate the finite-sample performance of both the estimator and the test.

14:30-16:10 Session 6A: CS124: Infinite Dimensional Analysis and Malliavin Calculus
14:30
Hypercontractivity type property for generalized Mehler semigroups

ABSTRACT. A natural framework for studying semigroups associated with elliptic operators with unbounded coefficients is given by L^p spaces related to invariant measures. This is the case, for instance, of the classical Ornstein-Uhlenbeck semigroup (P(t)), which enjoys many nice properties in L^p(m), where m denotes the standard Gaussian measure that turns out to be the unique associated invariant measure. One of the most relevant properties of the Ornstein-Uhlenbeck semigroup, proved by Nelson, concerns hypercontractivity; that is, for any 1 < p < q < ∞ there exists t_0 > 0 such that

||P(t)f||_{L^q(m)} \le ||f||_{L^p(m)},

for all f \in L^p(m) and t > t_0.

The hypercontractivity of P(t) is strictly connected to the validity of the classical logarithmic Sobolev inequality. Moreover, the above estimate allows one to deduce the asymptotic behavior of P(t) as t tends to infinity.

The Ornstein-Uhlenbeck semigroup can be interpreted as a particular case of a generalized Mehler semigroup and, as is well known, in the general case hypercontractivity fails to hold for such semigroups.

In this talk we consider generalized Mehler semigroups on L^p spaces related to invariant measures and investigate their summability-improving properties. We identify natural subspaces of L^p where hypercontractivity-type estimates are satisfied, providing both examples and counterexamples. The results we prove extend and, in some cases, improve the existing theory. This is joint work with Luciana Angiuli (Università del Salento).

14:55
Bismut-Elworthy type formulae for BSDEs with degenerate noise
PRESENTER: Federica Masiero

ABSTRACT. Joint work of Davide Addona (Department of Mathematical, Physical and Computer Sciences, University of Parma, davide.addona@unipr.it) and Federica Masiero (Department of Mathematics and Applications, University of Milano-Bicocca, federica.masiero@unimib.it; presenter).

Keywords: gradient estimates, degenerate noise, backward stochastic differential equations.

In this talk we present how to derive a Bismut-Elworthy formula under assumptions weaker than non-degeneracy of the noise. By Bismut-Elworthy formula we mean a gradient-type estimate on the transition semigroup of a stochastic differential equation in a possibly infinite-dimensional Hilbert space. We also present a nonlinear version of the Bismut formula for BSDEs, in analogy with what is done in \cite{FT} in the case of non-degenerate noise, and we discuss applications to the solution of semilinear Kolmogorov equations.

Our study is motivated by the regularizing properties of the transition semigroup of the stochastic wave equation, studied in \cite{MP}, and of the stochastic damped wave equation, first studied in \cite{AddBig24} and then also in \cite{AddMas}.


15:20
Second Quantization and Evolution Operators in infinite dimension

ABSTRACT. In an infinite dimensional separable Hilbert space $X$, we study compactness properties and the hypercontractivity of the Ornstein-Uhlenbeck evolution operators $P_{s,t}$ in the spaces $L^p(X,\gamma_t)$, $\{\gamma_t\}_{t\in\mathbb{R}}$ being a suitable evolution system of measures for $P_{s,t}$. Moreover, we study the asymptotic behavior of $P_{s,t}$. Our results are produced thanks to a representation formula for $P_{s,t}$ through the second quantization operator. Among the examples, we consider the transition evolution operator associated to a non-autonomous stochastic parabolic PDE.

15:45
Malliavin Calculus for rough stochastic differential equations
PRESENTER: Michele Coghi

ABSTRACT. In this work we show that rough stochastic differential equations (RSDEs), as introduced by Friz, Hocquet, and Lê (2021), are Malliavin differentiable. We use this to prove existence of a density when the diffusion coefficient satisfies standard ellipticity assumptions. Moreover, when the coefficients are smooth and the diffusion coefficient satisfies a Hörmander condition, the density is shown to be smooth. The key ingredient is to develop a comprehensive theory of linear rough stochastic differential equations, which could be of independent interest.

14:30-16:10 Session 6B: CS137: Stochastic Models in Fluid Dynamics
14:30
Averaging Dynamics and Wong-Zakai approximations for a Fast-Slow Navier-Stokes System Driven by fractional Brownian Motion

ABSTRACT. We study a slow-fast system of coupled two- and three-dimensional Navier-Stokes equations in which the fast component is perturbed by an additive fractional Brownian noise with Hurst parameter $\mathcal{H}>\frac{1}{3}$. The system is analyzed using rough path theory, and the limiting behaviour strongly depends on the value of $\mathcal{H}$. We prove convergence in law of the slow component to a Navier–Stokes system with an additional It\^o-Stokes drift when $\mathcal{H}<\frac{1}{2}$. In contrast, for $\mathcal{H}\in (\frac{1}{2},1)$, the limit equation features only a transport noise driven by a rough path.

14:55
Zero-noise selection and Large Deviations in $L^\infty_t L^p_x$ for the stochastic transport equation beyond DiPerna-Lions

ABSTRACT. We consider $L^\infty_t L^p_x$ solutions of the stochastic transport equation with drift in $L^\infty_t W^{1,q}_x$. We show strong existence and pathwise uniqueness of solutions in a regime of parameters $p,q$ for which non-unique weak solutions of the deterministic transport equation exist. When the intensity of the noise goes to zero, we prove that the solutions of the stochastic transport equation converge to the unique renormalized solution of the transport equation in the sense of DiPerna-Lions. Furthermore, we show that the convergence is governed by a Large Deviations Principle in the space $L^\infty_t L^p_x$. Since the space $L^\infty_t L^p_x$ is not separable, the weak convergence approach to Large Deviations by Budhiraja, Dupuis, and Maroulas is not directly applicable.

15:20
Hookean dumbbell model for polymers, stretching noise and turbulence

ABSTRACT. It is recognized that the addition of polymers is very efficient in reducing friction drag in turbulent regimes. My talk is about the effects of small-scale turbulence on the distribution of polymers, using stochastic scaling and singular limits. Much work has been done in recent years on scaling limits in both the scalar and vector cases. The latter is characterized by the presence of stretching, which adds complications over the scalar case.

In \cite{Art1}, we investigate the stretching mechanism of stochastic models of turbulence acting on a simple model of polymer. Namely, we investigate a scaling limit problem under suitable intensity assumptions. The polymer density equation, initially an SPDE, converges weakly (in a first step) to a deterministic limit equation with a new degenerate term involving a singular parameter. Recently, in \cite{Art2}, we investigated the singular limit in the spirit of hydrodynamic limit techniques. One consequence is that the limiting density shows a power-law decay in the polymer length, which is consistent with physical predictions.

The activities mentioned herein were performed in the framework of the project: EU-HORIZON EUROPE ERC-2021-ADG “Noise in Fluids” (NoisyFluid), no. 101053472.

[Art1] Flandoli, F., Tahraoui, Y.: Stretching of polymers and turbulence: Fokker-Planck equation, special stochastic scaling limit and stationary law. Journal of Differential Equations 452: 113789 (2026)

[Art2] Tahraoui, Y.: Small-scale turbulence limit of Fokker-Planck equation for polymers in turbulent flow. arXiv preprint arXiv:2503.18143 (2025)

14:30-16:10 Session 6C: CS139: Conformal prediction: theory and methods
14:30
Distribution-Free Outlier Detection and Enumeration
PRESENTER: Aldo Solari

ABSTRACT. A flexible, distribution-free framework for collective outlier detection and enumeration is introduced, targeting situations in which the presence of outliers can be detected powerfully even though their precise identification may be challenging due to the sparsity, weakness, or elusiveness of their signals. The methodology builds on recent advances in conformal inference and integrates classical ideas from multiple testing, locally most powerful and adaptive rank tests, and nonparametric large-sample asymptotics.

14:55
A Conformal Prediction Approach to Predict Populations of Graphs
PRESENTER: Matteo Fontana

ABSTRACT. This presentation introduces a conformal prediction methodology for quantifying uncertainty in populations of graph data. While existing literature offers numerous methods for graph prediction, techniques for assessing the uncertainty of these predictions remain scarce. The proposed framework addresses this gap by generating prediction regions for both labelled graphs, which possess a clear correspondence between nodes across observations, and unlabelled graphs, which lack such correspondence.

For unlabelled graphs, the methodology constructs prediction regions embedded within a discrete quotient metric space, referred to as graph space. The approach is model-free and does not rely on distributional assumptions. It achieves finite-sample validity and produces component-wise interpretable prediction regions configured as parallelotopes. Furthermore, the framework incorporates a length modulation mechanism to account for the local variability of specific edge or node attributes.

The theoretical properties and empirical performance of this forecasting technique are evaluated through two simulation studies covering both labelled and unlabelled graph scenarios. Additionally, the practical utility of the method is demonstrated using a real-world dataset of player passing networks from the FIFA 2018 World Cup. This application illustrates the framework's capacity to analyze network topology and quantify prediction uncertainty for football teams categorized by varying performance levels.

15:20
Conformal classification with tight marginal coverage in noisy settings

ABSTRACT. Conformal prediction is a nonparametric method widely applied in regression, classification, and outlier detection, providing valid predictive inference with finite-sample coverage guarantees. Marginal coverage, in particular, is a fundamental objective in conformal inference, ensuring that prediction sets contain the correct label for a predefined proportion of future test points. However, these guarantees rely on the assumption of data exchangeability, which is often violated in real-world applications due to distribution shifts, outliers, and label noise. In Bortolotti et al. (2025), we address the limitations of conformal classification in the presence of label contamination and propose novel adaptive methodologies that automatically adjust for noise to restore marginal coverage. Our theoretical guarantees are derived under the assumption that the contamination mechanism is known. We show how label noise induces a systematic inflation of coverage and, leveraging tools from empirical process theory, we derive correction factors that restore nominal marginal guarantees. The resulting adaptive calibration procedures provide valid and informative prediction sets even in challenging classification settings with many classes or severe class imbalance. To make the framework fully data-driven, we complement our theoretical results with a practical strategy to estimate the contamination process from noisy data. Specifically, we propose a procedure based on the identification of anchor points, i.e., observations for which the conditional probability of a class is close to one. These points allow us to consistently estimate the class-dependent contamination matrix without requiring access to clean data. The estimated contamination mechanism is then plugged into the adaptive calibration step. The effectiveness of our approach is demonstrated through extensive experiments on synthetic and real-world datasets, including CIFAR-10H and BigEarthNet. 
Our findings highlight the importance of accounting for label contamination in conformal classification and provide a robust framework for reliable predictive inference in noisy settings.
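For orientation, the uncontaminated baseline that these adaptive methods modify is standard split-conformal classification. A minimal sketch with the common score 1 − p̂(y|x) follows (our own illustration; the talk's noise-corrected calibration replaces the quantile step):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal prediction sets with score s(x, y) = 1 - p_hat(y | x).
    Returns a boolean (n_test, n_classes) membership matrix with marginal
    coverage >= 1 - alpha under exchangeability (no label noise)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]        # calibration scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)      # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return (1.0 - test_probs) <= q                            # keep classes with small score
```

Label contamination inflates the calibration scores and hence the quantile q, which is the coverage distortion the correction factors in the abstract are designed to undo.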

15:45
Conditional Coverage in Conformal Prediction: Tradeoffs and Insights

ABSTRACT. Reliable decision-making relies on predictive sets that capture the true outcome with a specified probability. In this talk, I will explore Conformal Prediction, a statistical approach that delivers rigorous finite-sample guarantees. While standard conformal methods provide valid marginal coverage, they do not ensure coverage conditional on specific inputs. I will present a framework to quantify conditional miscoverage and discuss strategies to improve conditional reliability, including the tradeoff between set size and conditional coverage. The talk will highlight both theoretical insights and practical implications for building trustworthy predictive systems.

14:30-16:10 Session 6D: CS142: Diffusion Processes in Machine Learning
14:30
Theoretical guarantees for diffusion models — beyond log-concavity
PRESENTER: Gitte Kremling

ABSTRACT. Score-based generative modeling, implemented through probability flow ODEs, has shown impressive results in numerous practical settings. However, most convergence guarantees rely on restrictive regularity assumptions on the target distribution—such as strong log-concavity or bounded support. This work establishes non-asymptotic convergence bounds in the 2-Wasserstein distance for a general class of probability flow ODEs under considerably weaker assumptions: weak log-concavity and Lipschitz continuity of the score function. Our framework accommodates non-log-concave distributions, such as Gaussian mixtures, and explicitly accounts for initialization errors, score approximation errors, and effects of discretization via an exponential integrator scheme. Addressing a key theoretical challenge in diffusion-based generative modeling, our results extend convergence theory to more realistic data distributions and practical ODE solvers. We provide concrete guarantees for the efficiency and correctness of the sampling algorithm, complementing the empirical success of diffusion models with rigorous theory. Moreover, from a practical perspective, our explicit rates might be helpful in choosing hyperparameters, such as the step size in the discretization.

14:55
Adaptive denoising diffusion modelling via random time reversal
PRESENTER: Lukas Trottner

ABSTRACT. We introduce a new class of generative diffusion models that, unlike conventional denoising diffusion models, achieve a time-homogeneous structure for both the noising and denoising processes, allowing the number of steps to adaptively adjust based on the noise level. This is accomplished by conditioning the forward process using Doob's $h$-transform, which terminates the process at a suitable sampling distribution at a random time. The model is particularly well suited for generating data with lower intrinsic dimensions, as the termination criterion simplifies to a first hitting rule. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model. We highlight this point by demonstrating how our generative model may be used as an unsupervised learning algorithm: in high dimensions the model outputs with high probability the metric projection of a noisy observation $y$ of some latent data point $x$ onto the lower-dimensional support of the data---which we do not assume to be analytically accessible, but only to be represented by the unlabeled training data set of the generative model.

15:20
Sampling error bounds for the denoising diffusion probabilistic model via the Föllmer process

ABSTRACT. The Föllmer process is a Brownian motion conditioned to have a pre-specified law at time 1. This process can be interpreted as an "augmented" time-compression of the reverse stochastic differential equation (SDE) corresponding to the denoising diffusion probabilistic model (DDPM). While this fact has been indirectly used to analyze DDPM sampling errors via discretization of the reverse SDE, implications of directly discretizing the Föllmer process have not yet been fully explored. This talk aims to clarify these implications while surveying relevant results from existing work.

15:45
Dimension-free statistical guarantees for fine-tuning of conditional diffusion models via PAC-Bayes bounds
PRESENTER: Shogo Nakakita

ABSTRACT. Reward-guided fine-tuning is an established strategy for aligning pre-trained diffusion models with new objectives, including improving text-to-image quality and aesthetic preference optimization. Existing theoretical treatments of reward-guided alignment of conditional diffusion models often assume that the expected reward of each fine-tuning hypothesis is available. This effectively presumes knowledge of the prompt distribution, which is typically unknown in practice. Consequently, practitioners replace expected rewards with empirical rewards computed from a finite set of sampled prompts, raising a fundamental statistical question: when is this replacement justified, and how large can the resulting generalization error be? Despite the importance of these questions, prior analyses have often relied on naïve model-capacity arguments, which can suffer from the curse of dimensionality. This work establishes non-asymptotic statistical guarantees for reward-guided alignment of conditional diffusion models under empirical rewards. The main result is a uniform concentration bound controlling the discrepancy between the population (expected) reward and the empirical reward, simultaneously over a family of fine-tuning hypotheses. A distinctive feature is that the bound is dimension-free and prompt-space independent: it does not depend on the dimension of the diffusion state, the dimension or cardinality of the prompt space, the architecture of the pre-trained model, the architecture of the fine-tuning hypothesis class, or hidden constants. Instead, the bound is governed by an $\ell_2$-type measure of the magnitude of fine-tuning across diffusion steps, clarifying how "how much the model is changed" determines generalization performance. Technically, we introduce a coupling PAC-Bayes framework tailored to conditional diffusion. 
A key shift in perspective is to regard the pre-trained diffusion model as a prior and the fine-tuned model as a corresponding posterior within a quasi-Bayesian argument. The main challenge is that multiple prompt-conditioned posteriors appear simultaneously in our bounds; we address this by extending PAC-Bayes bounds to coupled joint distributions and by constructing a tight Gaussian coupling of diffusion noises across prompts, which yields complexity control that does not deteriorate with prompt-space size.

14:30-16:10 Session 6E: CS144: Probabilistic Analysis of Complex Engineering Networks
14:30
Large-Scale Analysis of Multi-Scale Queuing Networks: Applications to Car-Sharing Systems
PRESENTER: Alessia Rigonat

ABSTRACT. We develop a probabilistic model for free-floating car-sharing systems. In these systems, shared vehicles occupy the same parking spaces as private cars. The availability of parking spaces across different zones of a city depends on private cars, which are far more numerous than free-floating vehicles. The system dynamics are described by a closed network of queues in which private and free-floating cars move among the same nodes, representing city zones. Nodes have finite capacity, and saturation is handled through a blocking and rerouting policy. We show that these dynamics preserve the product-form structure of the invariant distribution. The model accounts for spatial heterogeneity in both user demand and availability of parking spaces. We identify, in this setting, phase transitions between an overloaded regime where all nodes are saturated and underloaded regimes with saturated and non-saturated nodes. Scaling limits and stochastic averaging methods are used to analyze the behavior of the system when the capacity of the nodes is large. The analysis is performed when the average number of private cars per zone increases linearly with capacity, while the number of free-floating cars remains of smaller order. Our goal is to characterize the macroscopic behavior of the system and provide insights for optimizing vehicle distribution.

14:55
Association in Spatial Queueing-Filtering Networks
PRESENTER: Emanuele Mengoli

ABSTRACT. We study a class of spatially structured stochastic networks that couple queueing-type communication dynamics with sensing-type state estimation. Network nodes are distributed according to a stationary homogeneous Poisson point process $\mathrm{\Phi} \subset \mathbb{R}^2$. Around each node, secondary agents evolve according to localised random motions, yielding a dynamic marked point process. Interactions are induced through shot-noise interference generated by full spectrum reuse, so that both communication and sensing performance depend on the same underlying random field. Communication dynamics are modelled as spatially indexed queues whose service rates are monotone functionals of the instantaneous signal-to-interference-plus-noise ratio (SINR). Sensing dynamics are described by a partially observed stochastic process whose observation noise covariance is itself a functional of the same interference field, leading to a state-dependent filtering problem. This construction induces a non-trivial coupling between a queueing network in random environment and a family of stochastic estimators driven by spatial shot noise. We define system-level performance metrics under the Palm distribution of $\mathrm{\Phi}$. Our main result establishes the association property between communication and sensing functionals at the typical node. Under a general shot-noise interference model, we prove that the queue workload process and the filtering error process are associated, in the sense of increasing functionals. The proof relies on coupling constructions, stochastic monotonicity, and comparison arguments for interacting particle systems in random environment. The results suggest that operating regimes that improve communication performance also improve sensing accuracy. More broadly, this framework provides a probabilistic foundation for the analysis of spatial networks with coupled service and estimation mechanisms. 
It illustrates how tools from stochastic geometry, interacting particle systems, and queueing theory can be combined to analyse large-scale systems where geometry and flow dynamics are intrinsically intertwined.

15:20
The geometry of the stability region of randomly modulated queuing systems

ABSTRACT. We investigate a fundamental object in operations research: the stability region of a randomly modulated scheduling problem. Specifically, we consider a queueing system comprising multiple queues and a single server, which makes scheduling decisions and is influenced by a dynamic, autonomous, random, and stationary environment that modulates the queue capacities. In the setting where the modulation space is finite, we characterise the stability region as a Minkowski sum of de Gua simplices -- structures known in convex geometry as cephoids. Beyond revealing a rich mathematical structure of the stability region, this apparently novel connection yields an explicit description of the region in the two-queue case, and provides a simple iterative scheme to obtain its minimal H- and V-representation in the general case. We further uncover a link between this problem and the max-flow min-cut theorem in electrical networks.

15:45
Poisson Hail on a Wireless Ground
PRESENTER: Ke Feng

ABSTRACT. We introduce a new model which incorporates three key ingredients of a large class of wireless communication systems: (1) spatial interactions through interference, (2) dynamics of the queueing type, with users joining and leaving, and (3) carrier sensing and collision avoidance as used in, e.g., WiFi. In systems using (3), rather than directly accessing the shared resources upon arrival, a customer is considerate and waits to access them until nearby users in service have left. This new model can be seen as a missing piece of a larger puzzle that contains such dynamics as spatial birth-and-death processes, the Poisson hail model, and wireless dynamics as key other pieces. We show that, under natural assumptions, this model can be represented as a Markov process on the space of counting measures.

The main results are then two-fold. The first is on the shape of the stability region and, more precisely, on the characterization of the critical value of the arrival rate that separates stability from instability. We show that, for natural values of the system parameters, the implementation of sensing and collision avoidance stabilizes a system that would be unstable if immediate access to the shared resources were granted. In other words, for these parameters, renouncing greedy access makes sharing sustainable, whereas indulging in greedy access kills the system.

14:30-16:10 Session 6F: CS147: Kernel methods in Bayesian statistics
14:30
Approximate Bayesian computation with kernel Wasserstein distance
PRESENTER: Sirio Legramanti

ABSTRACT. Approximate Bayesian Computation (ABC) is a family of methods that allow sampling from an approximate posterior even when the likelihood is intractable, provided one can simulate from the model and quantify the discrepancy between simulated and observed data. While this discrepancy has traditionally been defined through summary statistics, recent developments in ABC leverage distances between empirical distributions, with the Wasserstein distance emerging as an interpretable and principled choice. However, it has been shown that Wasserstein ABC can be highly sensitive to outlier contamination. We identify that this sensitivity arises from the choice of the cost function rather than from the Wasserstein distance itself. We then propose to replace the usual Euclidean cost with a kernel-based cost, leading to a kernel Wasserstein distance that substantially enhances robustness while preserving ABC posterior concentration under broad conditions. This provides a flexible and theoretically grounded alternative to classical Wasserstein ABC.
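The kernel-cost idea can be illustrated with a minimal ABC rejection sampler. This is not the authors' implementation: the Gaussian toy model, the RBF kernel, the bandwidth, and the tolerance below are all assumptions, and exact optimal transport between equal-size empirical samples is computed as an assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def kernel_cost_matrix(xs, ys, bw=1.0):
    # RKHS-induced squared distance k(x,x) - 2 k(x,y) + k(y,y); for a
    # Gaussian kernel k(x,x) = 1, so the cost replaces the Euclidean one
    # and is bounded by 2.
    sq = np.sum((xs[:, None, :] - ys[None, :, :]) ** 2, axis=-1)
    return 2.0 - 2.0 * np.exp(-sq / (2.0 * bw**2))

def kernel_wasserstein(xs, ys, bw=1.0):
    # exact OT between two equal-size empirical measures = optimal assignment
    C = kernel_cost_matrix(xs, ys, bw)
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

def abc_rejection(y_obs, simulate, prior_sample, eps, n_draws=2000):
    # keep prior draws whose simulated data lie within eps of the observations
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if kernel_wasserstein(simulate(theta), y_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# hypothetical Gaussian location model with a uniform prior on the mean
n = 30
y_obs = rng.normal(2.0, 1.0, size=(n, 1))
post = abc_rejection(
    y_obs,
    simulate=lambda th: rng.normal(th, 1.0, size=(n, 1)),
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    eps=0.5,
)
```

Because the kernel cost is bounded, a single contaminated observation can shift the discrepancy only by a bounded amount, whereas with the Euclidean cost it can shift it arbitrarily — this is the robustness mechanism the abstract refers to.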

14:55
Adaptive variational Gaussian processes
PRESENTER: Dennis Nieman

ABSTRACT. Accurate tuning of hyperparameters is crucial to ensure that models can generalise effectively across different settings. In this talk, we present theoretical guarantees for hyperparameter selection using variational Bayes in the nonparametric regression model. We construct a variational approximation to a hierarchical Bayes procedure, and derive upper bounds for the contraction rate of the variational posterior in an abstract setting. The theory is applied to various Gaussian process priors and variational classes, resulting in minimax optimal rates. Our theoretical results are accompanied by numerical analyses on both synthetic and real-world data sets.

15:20
Computationally efficient hierarchical additive interaction models through hierarchical ANOVA kernels and Gaussian processes
PRESENTER: Francesca Panero

ABSTRACT. Additive Gaussian process (GP) models offer flexible tools for modelling complex non-linear relationships and interaction effects among covariates. While most studies have focused on predictive performance, relatively little attention has been given to identifying the underlying interaction structure, which may be of scientific interest in many applications. In practice, the use of additive GP models in this context has been limited by the cubic computational cost and quadratic storage requirements of GP inference. In this talk, we will present a fast hierarchical additive interaction GP model for multi-dimensional grid data. A hierarchical ANOVA decomposition kernel forms the foundation of our model, which incorporates main and interaction effects under the principle of marginality. Kernel centring ensures identifiability and provides a unique, interpretable decomposition of lower- and higher-order effects. For datasets forming a multi-dimensional grid, efficient implementation is achieved by exploiting the Kronecker product structure of the covariance matrix. Our contribution is to extend Kronecker-based computation to handle any interaction structure within the proposed class of hierarchical additive GP models, whereas previous methods were limited to separable or fully saturated cases.

Joint work with S. Ishida (University of Oxford) and W. Bergsma (London School of Economics). The pre-print of our paper can be found at: https://arxiv.org/abs/2305.07073

15:45
Kernel Quantile Embeddings and Associated Probability Metrics
PRESENTER: Masha Naslidnyk

ABSTRACT. Embedding probability distributions into reproducing kernel Hilbert spaces (RKHS) has enabled powerful non-parametric methods such as the maximum mean discrepancy (MMD), a statistical distance with strong theoretical and computational properties. At its core, the MMD relies on kernel mean embeddings (KMEs) to represent distributions as mean functions in RKHS. However, it remains unclear if the mean function is the only meaningful RKHS representation. Inspired by generalised quantiles, we introduce the notion of kernel quantile embeddings (KQEs), along with a consistent estimator. We then use KQEs to construct a family of distances that: (i) are probability metrics under weaker kernel conditions than MMD; (ii) recover a kernelised form of the sliced Wasserstein distance; and (iii) can be efficiently estimated with near-linear cost. Through hypothesis testing, we show that these distances offer a competitive alternative to MMD and its fast approximations. Our findings demonstrate the value of representing distributions in Hilbert space beyond simple mean functions, paving the way for new avenues of research.

14:30-16:10 Session 6G: CS153: Optimal Stopping, Stochastic Control and Stochastic Games II
14:30
Robust Ergodic Singular Control of Compound–Poisson Jump Diffusions under Drift and Intensity Ambiguity
PRESENTER: Bernardo D'Auria

ABSTRACT. We study an ergodic singular stochastic control problem for a one-dimensional compound–Poisson jump diffusion under model ambiguity. Ambiguity affects both the drift and the jump intensity and is modeled via a $(\kappa,\lambda)$-ignorance framework, leading to a robust control problem formulated as a min–max optimization over admissible control strategies.

We show that the associated robust Hamilton–Jacobi–Bellman equation admits a reduction to a non-ambiguous formulation in which the worst-case drift and jump intensity are of bang-bang type. Under an infinite-horizon average-cost criterion, optimality is characterized by a free-boundary problem with gradient constraints for which we establish a verification theorem.

Focusing on negative and exponentially distributed jump sizes, we obtain more explicit expressions for the bang-bang regions for the drift and the jump intensity. We derive an integro-differential free-boundary problem that can be reduced to a piecewise system of ordinary differential equations whose solutions have to satisfy local and global regularity constraints.

We propose a two-stage numerical scheme combining closed-form expressions with a root-finding procedure to compute the solution. Numerical experiments illustrate the qualitative effects of ambiguity on the optimal policy and confirm the analytical findings.

The associated paper is still in preparation; it will soon be available on arXiv.

14:55
Strategic Focus or Technological Neutrality? On the Optimal Mix of Green Investment and Carbon Capture and Storage Research in a Budget-Constrained World
PRESENTER: Katia Colaneri

ABSTRACT. Major pathways for carbon abatement include a large-scale deployment of renewable energy sources (RES) and investment in carbon capture and storage (CCS) technologies. While RES such as solar and wind power offer clean, sustainable energy, significantly expanding their share in the energy mix necessitates heavy infrastructure investment. This is primarily due to issues of intermittency and the need to upgrade or redesign existing electricity grids to ensure stability and reliability. On the other hand, CCS technologies offer a potential solution to decarbonize existing fossil fuel-based infrastructure. However, CCS remains technologically immature and economically unviable at large scale. Significant research and development (R&D) efforts are required to reach a breakthrough that would make CCS a competitive option. Given limited fiscal capacity, it may be infeasible for societies to simultaneously invest heavily in RES infrastructure and fund foundational CCS research. We explore this trade-off by modeling it as a stochastic optimization problem. We analyze the optimal allocation of a constrained research and investment budget over time, under uncertainty about technological breakthroughs and deployment costs. We study the problem using theoretical and numerical methods.

15:20
Optimal resource extraction with a random threshold

ABSTRACT. We study a problem of resource extraction cast as a stochastic control problem where the depletion time of the resource is modeled by the hitting time for the controlled dynamics of a random (non-observable) threshold. Such a threshold may represent a tipping point, i.e., a critical level below which we expect a drastic disruption of the underlying source, leading to its extinction. Mathematically, this is formulated as a singular control problem with random time-horizon. The underlying stochastic source X is singularly controlled by the cumulative extraction and it is modeled as a time-homogeneous diffusion process subject to general boundary conditions. The random time horizon is modeled by the first time X drops below a random threshold, which is independent of the Brownian motion and distributed according to a cdf $F$. The problem is cast in a Markovian setting by introducing the running infimum of X as an additional state variable, which leads to a 2-dimensional singular control problem with infinite time-horizon. Under some assumptions on $F$, we are able to fully characterize the solution of the problem. That is, we show that the optimal strategy consists of extracting resources in such a way that X reflects along a given boundary, which is expressed as a function of the running infimum. Depending on the chosen distribution $F$, the precise characterization of this boundary requires either solving an auxiliary problem or applying the so-called maximality principle for singular control, borrowed from optimal stopping theory.

15:45
When investors force the green transition: a two-dimensional singular stochastic control problem
PRESENTER: Omar Khattab

ABSTRACT. Traditional corporate compensation schemes inherently discourage sustainable operations, as managerial incentives remain strictly aligned with financial returns rather than environmental outcomes. However, when a firm is backed by a fully informed "green" investor, this misalignment can be overcome. Building on the framework introduced in [1], we investigate the first-best benchmark of this principal-agent interaction. In our setting, the investor can perfectly deduce the manager’s actions and threatens heavy penalties for any deviation from the socially optimal policy. Under this threat, the investor effectively acts as a social planner, directly implementing the optimal greening and investment strategies.

We formulate this benchmark as a two-dimensional singular optimal control problem. The firm’s state is primarily characterized by its production capacity, X, alongside the accumulated abatement effort, R. The investor controls the firm’s dynamics through two forces: injecting external capital (ν) when production capacity is deemed too low, and enforcing abatement (η). Crucially, abatement operates through a pure substitution effect—the cost of greening is fully internalized as a direct reduction in the production capacity X.

In this talk, we formalize the principal’s optimization criterion as a two-dimensional singular stochastic control problem and analyze the properties of the value function via its associated Hamilton-Jacobi-Bellman variational inequality. The core mathematical challenge arises from the interplay between a degenerate diffusion and an oblique reflection driven by the substitution effect. By exploiting the optimal policies derived under a deterministic setting, we explore the geometry of the free boundaries that partition the state space into continuation and action regions. Our preliminary results reveal that the optimal intervention takes the form of a Skorokhod-type reflection along moving boundaries, which are monotonically increasing with respect to the firm’s accumulated abatement effort.

14:30-16:10 Session 6H: CS165: Processes on dynamic random graphs
14:30
The critical percolation window in growing random graphs

ABSTRACT. We describe the critical window for percolation on sparse growing random graphs. In our models, vertices arrive sequentially and connect independently to each earlier vertex with probability proportional to a nonpositive power of its arrival time, continuing until the graph has n vertices. These models include uniformly grown random graphs and inhomogeneous random graphs of preferential attachment type. Whenever the critical percolation threshold is positive, we show that the critical window has width of order (log n)^{-2} and a secondary phase transition at its finite upper boundary. Inside this window the largest component has size of order sqrt(n)/log n, and the susceptibility remains finite and independent of the position in the window. The proofs couple component explorations to branching random walks killed outside an interval of length log n, allowing sharp control of the barely subcritical and critical regimes. The talk is based on joint work with Joost Jorritsma and Pascal Maillard.

14:55
Infection models on dense dynamic random graphs

ABSTRACT. The focus of this talk will be Susceptible-Infected-Recovered (SIR) models on dense dynamic random graphs, in which the joint dynamics of vertices and edges are co-evolutionary, i.e., they influence each other bidirectionally. In particular, edges appear and disappear over time depending on the states of the two connected vertices, on how long they have been infected, and on the total density of susceptible and infected vertices. I will present our main results, which establish functional laws of large numbers for the densities of susceptible, infected, and recovered vertices, jointly with the underlying evolving random graphs in the graphon space. The talk will also include numerical illustrations showing that our model exhibits multiple epidemic peaks, as observed in real-world epidemics.

This talk is based on a joint work with P. Braunsteins, F. den Hollander and M. Mandjes.

15:20
Threshold-Driven Streaming Graph: Expansion and Rumor Spreading
PRESENTER: Flora Angileri

ABSTRACT. We will introduce the Threshold-driven Streaming Graph model, which is obtained by performing a randomized distributed algorithm, called RAES, over a dynamic graph evolving with the streaming node-churn process. This model captures two key features of modern peer-to-peer networks: a local threshold mechanism that bounds the degree of each vertex, and a node-churn process that regulates how vertices join and leave the network in each round.

Our main result proves good expansion properties of this model, with high probability. As a consequence, we will establish a logarithmic upper bound on the completion time of the well-known PUSH and PULL rumor-spreading protocols. Our analysis will also provide an upper bound on the message-communication overhead, showing that the overall number of exchanged messages at every round t is optimal in expectation and O(log n) with high probability.

15:45
Contact process on interchange process
PRESENTER: Daniel Valesin

ABSTRACT. We introduce a model of epidemics among moving particles on any locally finite graph. At any time, each vertex either is empty, occupied by a healthy particle, or occupied by an infected particle. Infected particles recover at rate 1 and transmit the infection to healthy particles at neighboring vertices at rate $\lambda$. In addition, particles perform an interchange process with rate $\mathsf v$, that is, the states of adjacent vertices are swapped independently at rate $\mathsf v$, allowing the infection to spread also through the movement of infected particles. On the $d$-dimensional Euclidean lattice, we start with a single infected particle at the origin and with all the other vertices independently occupied by a healthy particle with probability $p$ or empty with probability $1-p$. We define $\lambda_c(\mathsf v,p)$ as the threshold value for $\lambda$ above which the infection persists with positive probability and analyze its asymptotic behavior as $\mathsf v \to \infty$ for fixed $p$.
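The combined dynamics can be sketched with a Gillespie-type simulation. This is a hedged toy version on a one-dimensional cycle (the talk concerns the $d$-dimensional lattice), and all parameter values, the graph, and the stopping horizon below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=100, lam=2.0, v=1.0, p=0.8, t_max=20.0):
    # vertex states: 0 = empty, 1 = healthy particle, 2 = infected particle
    s = np.where(rng.random(n) < p, 1, 0)
    s[0] = 2                                  # single infected at the origin
    n_particles = int((s != 0).sum())         # conserved by the dynamics
    t = 0.0
    while t < t_max and (s == 2).any():
        inf_idx = np.flatnonzero(s == 2)
        # infected vertices with a healthy right/left neighbour on the cycle
        right = np.flatnonzero((s == 2) & (np.roll(s, -1) == 1))
        left = np.flatnonzero((s == 2) & (np.roll(s, 1) == 1))
        rates = np.array([float(inf_idx.size),            # recoveries, rate 1
                          lam * (right.size + left.size),  # transmissions
                          v * n])                          # edge swaps, rate v
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        u = rng.random() * total
        if u < rates[0]:                                  # recovery: 2 -> 1
            s[rng.choice(inf_idx)] = 1
        elif u < rates[0] + rates[1]:                     # transmission: 1 -> 2
            targets = ([(i + 1) % n for i in right]
                       + [(i - 1) % n for i in left])
            s[targets[rng.integers(len(targets))]] = 2
        else:                                             # swap edge (i, i+1)
            i = rng.integers(n)
            j = (i + 1) % n
            s[i], s[j] = s[j], s[i]
    return s, n_particles

s, n_particles = simulate()
```

Note that recovery and transmission only change a particle's type, while swaps only exchange vertex contents, so the number of occupied vertices is conserved throughout.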

16:40-18:20 Session 7A: CS111: Dynamical Aspects of Stochastic PDEs
16:40
A mild rough Gronwall lemma with applications to non-autonomous evolution equations

ABSTRACT. We derive a Gronwall-type inequality for mild solutions of non-autonomous parabolic rough partial differential equations (RPDEs). This inequality, together with an analysis of the Cameron-Martin space associated with the noise, allows us to obtain the existence of moments of all orders for the solution of the corresponding RPDE and its Jacobian when the random input is given by a Gaussian Volterra process. Applying further the multiplicative ergodic theorem, these integrable bounds entail the existence of Lyapunov exponents for RPDEs. We illustrate these results for stochastic partial differential equations with multiplicative boundary noise. This talk is based on a joint work with Mazyar Ghani Varzaneh and Tim Seitz.

17:05
Reduced stochastic PDE models for collective behaviour

ABSTRACT. Collective behaviour refers to a wide variety of phenomena in which large numbers of interacting particles self-organise from an unordered to an ordered state: examples of such phenomena include cluster formation, flocking, and opinion alignment. In this talk we discuss the derivation, analysis, and numerical simulation of reduced stochastic PDE (SPDE) models tailored for capturing important phenomena of collective behaviour. The model reduction, which is a key feature of these SPDEs, is primarily done in the interest of computational efficiency, and allows for enhanced interpretability due to distinct handling of spatial and kinetic variables.

This talk is based on a series of works (both concluded and in progress) with Ana Djurdjevac (University of Oxford), Sebastian Zimper, Natasa Djurdjevac Conrad (Zuse Institute Berlin (ZIB)), Kamran Arora, and Tony Shardlow (University of Bath).

17:30
(Moment) Lyapunov stability of parabolic SPDEs

ABSTRACT. We study the Lyapunov and moment Lyapunov stability of a class of parabolic SPDEs driven by additive noise, including the stochastic Allen-Cahn equation. To do so, we analyze properties of the associated projective process.

17:55
Invariant measures for the open KPZ equation

ABSTRACT. We provide an analytic proof for celebrated relative density formulas of the open KPZ equation with respect to white noise. The proof relies on a Girsanov transform, a time reversal and a subtle use of the theory of regularity structures to reconstruct the force of the solution to the KPZ equation at the boundary of the domain. This is joint work with A. Dunlap and Y. Gu.

16:40-18:20 Session 7B: CS117: Statistical inference for high-dimensional diffusions
16:40
Linearization of McKean SDEs with application to parameter estimation
PRESENTER: Andrea Zanoni

ABSTRACT. We consider ergodic McKean stochastic differential equations with a unique stationary state and study the linearized (in the sense of McKean) diffusion process obtained by replacing the law of the nonlinear process with its unique invariant measure. We prove that the law of the nonlinear McKean process and its linearized counterpart are exponentially close in time, both in relative entropy and in Wasserstein distance. The analysis, based on entropy estimates and logarithmic Sobolev inequalities, is carried out on both the whole space and the torus. We then show how the resulting linearized diffusion can be used to replace the original nonlinear process for tasks depending on the long-time behavior of the dynamics, with a particular focus on parameter estimation from a single observed long trajectory.
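In symbols (a schematic form under assumed notation, not necessarily the speakers' exact setting), the linearisation replaces the time-marginal law in the McKean-Vlasov drift by the unique invariant measure $\mu_\infty$:

```latex
\begin{align*}
  \mathrm{d}X_t &= b\bigl(X_t, \mathcal{L}(X_t)\bigr)\,\mathrm{d}t
                   + \sigma\,\mathrm{d}W_t
  && \text{(nonlinear McKean process),}\\
  \mathrm{d}\bar{X}_t &= b\bigl(\bar{X}_t, \mu_\infty\bigr)\,\mathrm{d}t
                   + \sigma\,\mathrm{d}W_t
  && \text{(linearised process),}
\end{align*}
```

and the stated result is that the laws of $X_t$ and $\bar{X}_t$ become exponentially close in $t$, in relative entropy and in Wasserstein distance.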

17:05
Nonparametric Estimation of the Diffusive Interaction Function in Particle Systems
PRESENTER: Francisco Pina

ABSTRACT. In this talk, we present a nonparametric estimator for the diffusive interaction function in particle systems, constructed from $Nn$ discrete observations of the trajectories. We comment on its statistical performance and provide theoretical guarantees on the estimation error within suitable function classes and norms. We also discuss the main challenges arising in this setting and comment on optimality properties of the estimator.

17:30
Statistical inference for interacting particle systems driven by the fractional Brownian motion

ABSTRACT. We consider a system of interacting particles with Lipschitz continuous drift functions, driven by additive fractional Brownian motions with Hurst parameter $H \in [1/2, 1)$. For this system, we address the drift parameter estimation problem over a fixed time interval, considering different assumptions for the drift. We propose several estimators, demonstrate their consistency and asymptotic normality as the number of particles tends to infinity, and present a numerical study illustrating our findings.

This talk is based on joint work with Chiara Amorino and Ivan Nourdin, and on ongoing work with Chiara Amorino, Augustin Puel, as well as Yasan Odeh.

17:55
Learning Interaction Networks for High-Dimensional Diffusion Processes

ABSTRACT. We consider the setting where the state dynamics at each node in a network depend on interactions with its neighbors. We model this using the general framework of Network Stochastic Differential Equations (N-SDEs). The evolution at each node arises from three components: intrinsic dynamics (a momentum term), feedback from adjacent nodes (a network term), and a stochastic volatility component driven by Brownian motion. Our goals are twofold: parameter estimation for N-SDE systems and recovery of the underlying graph. The main motivation is to handle very high-dimensional time series by exploiting sparsity in the network structure. We study two settings. i) Known network structure: the graph is given, and we provide identifiability conditions for the parameters, accounting for the fact that the parameter dimension grows with the number of edges. ii) Unknown network structure: the graph must be learned from data; for this case, we propose an iterative procedure based on adaptive Lasso, developed for a particular class of N-SDE models. We focus on oriented graphs, which supports applications to causal inference by allowing the investigation of directed cause–effect relationships in dynamical systems. Using simulations and real data, we illustrate the performance of the proposed estimators across several graph topologies in high-dimensional regimes. We establish non-asymptotic bounds for parametric estimation when the system dimension is large, in two observation schemes: (1) high-frequency data from an ergodic diffusion, and (2) continuous observation in a small-diffusion, not necessarily ergodic, setting. Based on joint works with S.M. Iacus and N. Yoshida.
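A minimal simulation sketch of the modelling setup, under an assumed linear drift specification (the drift form, parameter names, and the sparse chain graph below are illustrative, not the authors' exact N-SDE class): each node feels its own mean reversion (momentum term) plus a sparse directed network term, discretised by Euler-Maruyama.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_nsde(A, theta, sigma, x0, T=5.0, n_steps=5000):
    # Euler-Maruyama for dX^i = (-theta_i X^i + sum_j A_ij X^j) dt + sigma dW^i
    d = len(x0)
    dt = T / n_steps
    X = np.empty((n_steps + 1, d))
    X[0] = x0
    for k in range(n_steps):
        drift = -theta * X[k] + A @ X[k]   # momentum term + network term
        noise = sigma * np.sqrt(dt) * rng.standard_normal(d)
        X[k + 1] = X[k] + drift * dt + noise
    return X

d = 10
A = np.zeros((d, d))                        # sparse oriented adjacency
for i in range(d - 1):
    A[i + 1, i] = 0.5                       # node i drives node i + 1
theta = np.full(d, 1.0)
X = simulate_nsde(A, theta, sigma=0.3, x0=np.ones(d))
```

In this linear special case the sparsity pattern of `A` is exactly the directed graph to be recovered, which is what makes Lasso-type penalisation on the rows of `A` a natural estimation device.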

16:40-18:20 Session 7C: CS133: Optimal Stopping and Applications
16:40
The Wiener Disorder Problem with Random Post-Disorder Drift
PRESENTER: Bruno Buonaguidi

ABSTRACT. We consider a one-dimensional Wiener process with zero drift initially, which changes at some random and unobservable moment, referred to as the disorder time. We observe the evolution of the process in real time with the goal of detecting the disorder time as precisely as possible. Unlike Shiryaev's seminal work from the 1960s on the Wiener disorder problem, which assumes a known and fixed value of the post-disorder drift, we assume that the post-disorder drift is a discrete random variable with a known distribution. This formulation is particularly useful when the post-disorder regime is unknown, but past data and/or expert opinions can be used to construct a prior distribution for the new drift. Under the additional assumptions that (a) the disorder time is exponentially distributed and (b) the disorder time, the initial Wiener process with zero drift, and the post-disorder drift are independent, we show that the solution to our problem can be expressed in terms of a stopping time which minimizes a linear combination of the probability of a false alarm and the expected detection delay since the onset of the disorder. This stopping time can be characterized as the first moment at which the coordinate processes of the posterior probability that the disorder has already occurred - given the observed path of the Wiener process - enter a region shaped by a curved boundary, where the latter is the unique solution to a certain integral equation.

17:05
The Dubins Constants for Walsh's Spider Process

ABSTRACT. A long-standing open problem of L. E. Dubins seeks to determine the maximal expected range of Walsh's spider process on $n$ edges relative to the square root of the expected stopping time. The solution is known for $n=1$ (1988) and $n=2$ (2009). In this talk we present the solution for $n \ge 3$.

17:30
Continuous-Time Dynamic Contracting With Limited Commitment
PRESENTER: Andrea Bovo

ABSTRACT. This talk studies a problem arising in a Principal-Agent framework, analysed from the Principal’s perspective. The problem is formulated as a finite-horizon optimal stochastic control problem for a possibly degenerate process with absorption at the boundary. The controlled process represents the contract offered by the Principal, whose objective is to maximise over all admissible contracts offered to the Agent. Properties of the value function are obtained using both probabilistic and analytical techniques. In particular, we establish the existence of a classical solution of the related Hamilton-Jacobi-Bellman equation, which allows us to characterise explicitly the optimal contract offered by the Principal. Finally, we underline properties of the optimal contract and discuss their economic implications.

17:55
Optimal autonomous trading strategies in models based on Ornstein-Uhlenbeck processes with mean-reverting levels

ABSTRACT. We present closed-form solutions to autonomous trading problems in a model in which the logarithm of the asset price is described by the observation process from the extended Kalman-Bucy filtering model with generalised Ornstein-Uhlenbeck processes having mean-reverting levels. One can consider the cases in which the mean-reverting levels are either observable (full information) or unobservable (partial information). The optimal trading times are shown to be the first hitting times of the risky asset price process at either upper or lower boundaries, which are either stochastic and depend on the running filtering values (full information) or time-dependent (partial information). The method of proof consists of embedding the initial problems into optimal double-stopping problems for either two-dimensional time-homogeneous (full information) or one-dimensional time-inhomogeneous (partial information) continuous Markov diffusion processes. The latter are solved as the equivalent elliptic-type free-boundary problems (full information) or the equivalent parabolic-type free-boundary problems (partial information). We show that the resulting optimal trading boundaries provide unique solutions to the associated systems of nonlinear Fredholm-type integral equations.
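The hitting-time structure can be sketched numerically. This toy version uses constant boundaries and a fully observable, constant mean-reverting level (the talk's boundaries are stochastic or time-dependent; every parameter value below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

def ou_first_exit(x0, mu, kappa, sigma, lower, upper, dt=1e-3, t_max=50.0):
    # Euler scheme for dX = kappa (mu - X) dt + sigma dW, stopped the first
    # time X leaves the band (lower, upper): a "buy low / sell high" rule.
    x, t = x0, 0.0
    while lower < x < upper and t < t_max:
        x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x

t_hit, x_hit = ou_first_exit(x0=0.0, mu=0.0, kappa=1.0, sigma=0.5,
                             lower=-0.8, upper=0.8)
```

The time discretisation slightly biases the exit time (the path can cross and return between grid points), which is one reason the talk's integral-equation characterisation of the boundaries is valuable.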

16:40-18:20 Session 7D: CS149: Modeling the Unseen: Theory and Methods for Extreme Events
16:40
Self-normalization of sums of dependent random variables

ABSTRACT. see pdf file

17:05
Accurate Bayesian inference for tail risk extrapolation in time series
PRESENTER: Simone Padoan

ABSTRACT. Accurately quantifying tail risks—rare but high-impact events such as financial crashes or extreme weather—is a central challenge in risk management, with serially dependent data. We develop a Bayesian framework based on the Generalized Pareto (GP) distribution for modeling threshold exceedances, providing posterior distributions for the GP parameters and tail quantiles in time series. Two cases are considered: extrapolation of tail quantiles for the stationary marginal distribution under $\beta$-mixing dependence, and dynamic, past-conditional tail quantiles in heteroscedastic regression models. The proposal yields asymptotically honest credible regions, whose coverage probabilities converge to their nominal levels. We establish the asymptotic theory for the Bayesian procedure, deriving conditions on the prior distributions under which the posterior satisfies key asymptotic properties. To achieve this, we first develop a likelihood theory under serial dependence, providing local and global bounds for the empirical log-likelihood process of the misspecified GP model and deriving corresponding asymptotic properties of the Maximum Likelihood Estimator (MLE). Simulations demonstrate that our Bayesian credible regions outperform naïve Bayesian and MLE-based confidence regions across several standard time-series models, including ARMA, GARCH, and Markovian copula models. Two real-data applications—to U.S. interest rates and Swiss electricity demand—highlight the relevance of the proposed methodology.
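As a point of reference, the tail-extrapolation step has a simple frequentist analogue. The sketch below is not the Bayesian procedure of the talk: the Student-$t$ data, the 95% threshold, and the plug-in MLE quantile formula are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)

# heavy-tailed sample (Student-t); threshold at the 95% empirical quantile
x = rng.standard_t(df=4, size=5000)
u = np.quantile(x, 0.95)
exc = x[x > u] - u

# fit the Generalized Pareto distribution to the exceedances (location = 0);
# scipy returns (shape xi, loc, scale beta)
xi, _, beta = genpareto.fit(exc, floc=0.0)

# extrapolated tail quantile at level p, with zeta_u = P(X > u) estimated
# empirically:  q_p = u + (beta / xi) * ((zeta_u / (1 - p))**xi - 1)
zeta_u = exc.size / x.size
p = 0.999
q_p = u + (beta / xi) * ((zeta_u / (1 - p)) ** xi - 1)
```

Under serial dependence, the exceedances are no longer independent, which is exactly why the talk develops a dedicated likelihood theory before establishing posterior asymptotics.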

17:30
Asymptotic theory for the likelihood-based block maxima method in time series
PRESENTER: David Carl

ABSTRACT. In this work, we investigate the block maxima method in the context of stationary time series. We begin by extending aspects of likelihood asymptotic theory for the estimation of the marginal parameters of the Generalized Extreme Value (GEV) distribution from the case of independence to scenarios involving serial dependence. Once the likelihood framework is established at a suitable level of generality, we shift our focus to its Bayesian counterpart, studying the corresponding asymptotic properties. Frequentist and Bayesian inference is then employed to estimate marginal parameters of the GEV, the extremal index, return levels and extreme quantiles of the underlying stationary distribution.
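A hedged sketch of the block maxima pipeline on a serially dependent series (an AR(1) process as a stand-in for a stationary time series; the block size and model are assumptions, and scipy's shape parameter `c` equals $-\xi$ in the usual GEV parametrisation):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# an AR(1) time series (serial dependence), split into disjoint blocks
n, m = 50_000, 500                   # series length, block size
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + eps[t]

maxima = x.reshape(n // m, m).max(axis=1)

# fit the GEV distribution to the block maxima by maximum likelihood
c, loc, scale = genextreme.fit(maxima)

# T-block return level = (1 - 1/T)-quantile of the fitted GEV
r20 = genextreme.ppf(1 - 1 / 20, c, loc=loc, scale=scale)
```

The same point estimates could serve as the centre of a Bayesian analysis; the talk's contribution is precisely the asymptotic justification of both routes when the blocks come from a dependent series.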

17:55
Testing for Multivariate Regular Variation

ABSTRACT. Many statistical methods for analyzing the extreme value behavior of a sample of $d$-dimensional random vectors rely on the assumption that the observed vectors are multivariate regularly varying (perhaps after a marginal transformation). Despite its importance, surprisingly few statistical tests for this hypothesis have been proposed and thoroughly analyzed. Taking up an idea from \cite{DM08}, we discuss a general approach to tackle this problem. The proposed test statistics are based on empirical processes, which give further insight about the type of deviation from regular variation if the test rejects the null hypothesis.

16:40-18:20 Session 7E: CS154: Interplay between statistical physics and probabilistic methods: the case of anomalous diffusion
16:40
Non-local dynamic boundary conditions for sticky Brownian motions on smooth domains

ABSTRACT. Sticky diffusion processes on bounded domains can spend finite time (and finite mean time) on the lower-dimensional space given by the boundary. Once the process hits the boundary, it starts afresh after a random amount of time; while on the boundary, it can stay put or move according to dynamics different from those in the interior. Such processes may be characterized by a time derivative appearing in the boundary condition of the governing problem. We use suitable time changes to describe fractional sticky conditions and the associated boundary behaviours. We show that fractional boundary value problems (involving fractional dynamic boundary conditions) lead to sticky diffusions that are strong Markov in the interior and spend an infinite mean time (but finite time) on the boundary. From the macroscopic point of view, such behaviour can be associated with a trap effect. We provide an example on fractals.

17:05
On some stochastic models of Anomalous Diffusion

ABSTRACT. In this talk we will briefly introduce several stochastic models of anomalous diffusion. In particular, we will focus on random processes subject to trapping effects, diffusion processes subject to reflecting barriers, and kinetic processes subject to random obstacles. For these different models, we will discuss the anomalous diffusive behavior, the (non-)Markov property, the (non-local) PDE connections and, in particular, simulation methods. Particular attention will be devoted to their connection with scaling limits of Continuous Time Random Walks / L\'evy walks.
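As an illustrative aside (a minimal sketch with arbitrary parameter choices, not code from the talk), the trapping mechanism behind subdiffusion can be simulated with a Continuous Time Random Walk whose waiting times have a heavy-tailed, infinite-mean law: the mean squared displacement then grows sublinearly in time, in contrast with the linear growth of ordinary diffusion.

```python
import random

def ctrw_position(t_max, alpha, rng):
    """Position at time t_max of a CTRW with +/-1 jumps and Pareto(alpha)
    waiting times; for alpha < 1 the waiting times have infinite mean."""
    t, x = 0.0, 0
    while True:
        # Pareto waiting time: P(W > w) = w^(-alpha) for w >= 1
        w = rng.random() ** (-1.0 / alpha)
        if t + w > t_max:
            return x  # the walker is stuck in a trap past the horizon
        t += w
        x += rng.choice((-1, 1))

rng = random.Random(1)
for t_max in (10.0, 100.0, 1000.0):
    msd = sum(ctrw_position(t_max, 0.5, rng) ** 2 for _ in range(2000)) / 2000
    # For alpha = 0.5 one expects MSD ~ t^0.5, well below the diffusive ~ t
    print(f"t = {t_max:6.0f}  MSD = {msd:.2f}")
```

The sublinear growth of the printed MSD values mirrors the anomalous scaling of the corresponding CTRW limit.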

17:30
Random Flights and Anomalous Diffusion: A non-Markovian Take on Lorentz Processes

ABSTRACT. A Lorentz process is a model for the motion of a particle among randomly located scatterers, also known as obstacles. It was originally used to describe the transport of electrons through a conductor.

In the classical setting, when the scatterers are distributed according to a Poisson point process, the deterministic dynamics of elastic collisions can be approximated, under the Boltzmann-Grad scaling limit, by a Markovian random flight. The density of this limiting process is governed by the Boltzmann equation. Passing further to the hydrodynamic limit, one recovers Brownian motion as the macroscopic description of the particle’s position.

In this work, we introduce a new class of point processes that generalizes the Poisson process, and we investigate the motion of a particle which collides elastically with obstacles distributed according to such a process. Unlike the classical case, the corresponding limiting random flight process is no longer Markovian. Instead, it exhibits memory effects that lead to superdiffusive behavior. At the macroscopic level, the particle’s position converges to a continuous superdiffusive process.

Within this framework, we derive a non-local analogue of the Boltzmann equation governing the non-Markovian random flight. Moreover, we show that the density of the superdiffusive scaling limit satisfies a fractional heat equation, reflecting the anomalous transport induced by the underlying correlations.

17:55
Nonlocal α-size biasing: Stein characterization and one-sided concentration bound
PRESENTER: Alessandra Meoli

ABSTRACT. We introduce a nonlocal α-size–biased transform for nonnegative random variables, which recovers the classical size bias in the limit α→1. The transform admits a clear sampling interpretation: it corresponds to an infinite-horizon renewal inspection scheme in which the observation mechanism is α-dependently power-biased toward longer waiting-time gaps. This biasing viewpoint provides a direct link to renewal-based CTRW models of anomalous diffusion, where different observation protocols induce biased waiting-time statistics. We characterize the transform via a Stein identity based on the Riemann–Liouville integral and derive a one-sided concentration inequality as an application. Joint work with Antonio Di Crescenzo.
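As a back-of-the-envelope illustration of the classical limit α→1 (a generic sketch with arbitrary choices, not the nonlocal transform of the talk): the ordinary size-biased version $X^s$ of $X$ satisfies $\mathbb{E}[f(X^s)] = \mathbb{E}[X f(X)]/\mathbb{E}[X]$, which can be checked empirically by weighted resampling. For $X \sim \mathrm{Exp}(1)$ the size-biased law is $\Gamma(2,1)$, with mean 2.

```python
import random

rng = random.Random(0)

# Draw from the base law X ~ Exp(1), whose mean is 1
base = [rng.expovariate(1.0) for _ in range(50_000)]

# Size-biased resampling: pick x with probability proportional to x
biased = rng.choices(base, weights=base, k=50_000)

mean_base = sum(base) / len(base)        # should be close to 1
mean_biased = sum(biased) / len(biased)  # should be close to 2
print(f"base mean ~ {mean_base:.3f}, size-biased mean ~ {mean_biased:.3f}")
```

The longer waiting-time gaps are over-represented in the resampled data, which is exactly the inspection-bias phenomenon the abstract alludes to.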

16:40-18:20 Session 7F: CS167: Dynamics and phase transitions on discrete structures
16:40
Competing growth on the configuration model via first-passage percolation and long-range jumps
PRESENTER: Matteo Sfragara

ABSTRACT. We study two-type competing first-passage percolation on random graphs generated by the configuration model with a power-law degree distribution with exponent $\tau \in (1,2)$, corresponding to the infinite-mean regime. In the classical nearest-neighbor setting, the competition is dominated by giant-degree hubs: the type that first reaches a hub rapidly infects the entire network, leading to a "winner takes it all but one" phenomenon. We extend this model by introducing long-range infections: each infected vertex infects a uniformly chosen vertex at rate $\gamma>0$, independently of the edge-based dynamics. This global transmission mechanism competes with the local spread and fundamentally changes the phase diagram.

In cybersecurity terms, this models malware or information campaigns that spread both through local network connections and via global mechanisms, such as phishing, mass email, or broadcast exploits, which can reach arbitrary devices. This provides a natural framework for studying attacks on heterogeneous networks, many of which have heavy-tailed degree distributions with $\tau \in (1,2)$ or $\tau \in (2,3)$.

We identify a sharp threshold for coexistence as a function of $\gamma$. In the subcritical regime, the "winner takes it all" phenomenon arises, with the losing type infecting either finitely many vertices or even infinitely many but a vanishing proportion of the graph. In the supercritical regime, long-range transmission enables macroscopic coexistence, including an extreme case in which the final proportions of the two types converge to a random limit characterized by a Pólya urn.
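The random limiting proportions in the extreme supercritical case can be illustrated with the standard two-color Pólya urn (a generic sketch, not the specific urn arising in the paper): the fraction of one color converges in every run, but the limit itself is random (Beta-distributed for the classical urn).

```python
import random

def polya_fraction(steps, rng):
    """Classical Polya urn: start with one ball of each color; at each step
    draw a ball uniformly and return it together with one extra ball of the
    same color. Returns the final fraction of color 1."""
    a, b = 1, 1
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

rng = random.Random(42)
limits = [polya_fraction(10_000, rng) for _ in range(5)]
# Each run settles near a different value: the limit is itself random
print([round(p, 3) for p in limits])
```

Within a single run the fraction fluctuates less and less, yet re-running the experiment gives a different limit, which is the flavor of the random coexistence proportions in the abstract.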

17:05
Critical density of the Stochastic Sandpile Model

ABSTRACT. The Stochastic Sandpile Model is an interacting particle system introduced in the physics literature in the '90s to study the concept of self-organized criticality, which describes physical systems that spontaneously evolve toward a critical state without the need to fine-tune their parameters. In this talk, I will present the model and discuss some questions related to its behaviour, such as the critical density at which the phase transition occurs, how to exactly sample from the stationary distribution on finite graphs, and the stationary particle density on the complete graph.
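A toy version of the dynamics (a sketch with arbitrary choices, not the exact model or graph of the talk): on a finite interval with open boundaries, any site holding at least two particles topples by sending two particles independently to uniformly chosen neighbors, with particles stepping outside the interval being lost; with open boundaries the configuration stabilizes almost surely.

```python
import random

def stabilize(config, rng):
    """Stochastic sandpile toppling on {0,...,n-1} with open boundaries:
    an active site (>= 2 particles) moves two particles independently to
    uniform nearest neighbors; particles leaving the interval are lost.
    Returns the total number of topplings performed."""
    n = len(config)
    active = [i for i, c in enumerate(config) if c >= 2]
    topplings = 0
    while active:
        i = active.pop(rng.randrange(len(active)))
        if config[i] < 2:
            continue  # stale entry: the site was emptied in the meantime
        config[i] -= 2
        topplings += 1
        for _ in range(2):
            j = i + rng.choice((-1, 1))
            if 0 <= j < n:
                config[j] += 1
                if config[j] >= 2:
                    active.append(j)
        if config[i] >= 2:
            active.append(i)
    return topplings

rng = random.Random(7)
config = [3] * 20  # start well above the critical density
t = stabilize(config, rng)
print(config, t)
```

Starting from a supercritical density, a long avalanche of topplings drives the excess mass out of the boundaries and leaves a stable configuration with at most one particle per site.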

17:30
Node immunization via random forests

ABSTRACT. Consider a viral agent spreading through a network with prescribed infection and healing rates. The multiple-node optimization problem requires identifying a set of $k$ nodes to immunize to minimize the spread of the infection. This problem is computationally hard, and various heuristic methods have been considered to address it. We propose an algorithm that chooses the nodes to immunize as the complement of the set of roots of a suitably sampled random forest. We provide a theoretical description of the algorithm's features and offer numerical evidence that it improves the results of the reference deterministic heuristic while maintaining the same asymptotic computational cost.
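One concrete way to sample the root set of a random spanning forest (a generic sketch of Wilson's algorithm with a killing rate $q$, under arbitrary parameter choices; the paper's actual sampling scheme may differ) is via loop-erased random walks absorbed at an external state:

```python
import random

def forest_roots(adj, q, rng):
    """Sample the roots of a random spanning forest on an undirected graph
    (adjacency lists) via Wilson's algorithm with killing rate q: from each
    vertex the walk is absorbed (becoming a root) with prob. q/(q + deg),
    or moves to a uniform neighbor; loops are erased along the way."""
    n = len(adj)
    in_forest = [False] * n
    succ = [None] * n  # successor pointer in the forest; None marks a root
    for start in range(n):
        u = start
        while not in_forest[u]:
            d = len(adj[u])
            if d == 0 or rng.random() < q / (q + d):
                succ[u] = None  # absorbed: u becomes a root
                break
            succ[u] = adj[u][rng.randrange(d)]  # overwriting erases loops
            u = succ[u]
        u = start  # retrace the loop-erased path and graft it onto the forest
        while not in_forest[u]:
            in_forest[u] = True
            if succ[u] is None:
                break
            u = succ[u]
    return [v for v in range(n) if in_forest[v] and succ[v] is None]

# Example: a 3x3 grid graph; immunize the complement of the sampled roots
adj = [[] for _ in range(9)]
for r in range(3):
    for c in range(3):
        i = 3 * r + c
        if c < 2:
            adj[i].append(i + 1); adj[i + 1].append(i)
        if r < 2:
            adj[i].append(i + 3); adj[i + 3].append(i)
print(forest_roots(adj, q=1.0, rng=random.Random(3)))
```

Larger $q$ kills the walks sooner and produces more roots, so $q$ tunes the expected number of immunized nodes in a scheme of this type.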

17:55
Estimate of the exit time for the Long Range Ising model on random regular graphs

ABSTRACT. We study the metastable behavior of the long-range Ising model on random regular graphs evolving under Glauber dynamics at low temperature. By combining the pathwise approach with refined isoperimetric properties of random regular graphs, such as sharp estimates on the Cheeger constant and the graph diameter, we derive precise asymptotics for the energy barrier and the mean exit time from the metastable phase. Our results show that when the interaction decays exponentially with the graph distance, the barrier height coincides with that of the short-range model up to an explicit multiplicative factor depending only on $r$, thereby extending and sharpening the analysis of Dommers \cite{dommers2017metastability}. For polynomially decaying interactions, we obtain non-trivial bounds that uncover a subtle interplay between the decay exponent and the underlying graph geometry. This provides the first systematic framework for understanding metastability in Ising models with long-range interactions on random graphs, revealing structural mechanisms with no counterpart in lattice settings.

16:40-18:20 Session 7G: CS168: Scaling limits for stochastic processes
16:40
Scaling limits of the one-dimensional facilitated exclusion process

ABSTRACT. I will present some recent results obtained for the facilitated exclusion process in one dimension. This stochastic lattice gas model is subject to strong kinetic constraints that create a continuous phase transition to an absorbing state at a critical particle density value. If the microscopic dynamics is symmetric, its macroscopic behavior (with periodic boundary conditions and in the diffusive time scale) is governed by a nonlinear PDE belonging to free boundary problems (or Stefan problems). One of the major ingredients is to show that the system reaches the “ergodic” component in a subdiffusive time. When the particle system is put in contact with reservoirs (which can either destroy or inject particles at both boundaries), it leads to a Dirichlet boundary-value problem. Starting from a suitable initial condition, the weakly asymmetric case gives rise to a new KPZ-type equation on the half line. All these results rely, to various extent, on mapping arguments (towards auxiliary processes), which completely fail in dimension higher than 1. I will finally discuss some open problems and questions, especially in dimension 2. Based on several joint works with G. Barraquand, O. Blondel, H. Da Cunha, C. Erignoux, M. Sasada and L. Zhao.

17:05
Non-chaotic interacting particle systems

ABSTRACT. Propagation of chaos is a well-known technique, formally introduced in the physics literature by Mark Kac in the 1950s to simplify the study of the Boltzmann equation, giving rise, for instance, to the mathematically more tractable Vlasov-like equations. In the following years, this approach has been repeatedly applied to both deterministic and stochastic particle systems, and it is nowadays part of the standard toolbox used in stochastic processes and statistical mechanics to prove Law of Large Numbers results. However, with the growing interest of current research in complex systems, the assumption of chaotic initial data is too stringent to describe real-world phenomena.

In this talk, I will present recent results on the Law of Large Numbers for the empirical measure without assuming any hypothesis on the initial datum other than its convergence at time zero. The main challenge is to tackle equations with non-linear coefficients and to replace the standard topology on the space of probability measures, induced by the Wasserstein distance, with a weaker notion of convergence that is more suitable for non-chaotic systems.

17:30
Hydrodynamic limits of exploration processes on large random graphs
PRESENTER: Pascal Moyal

ABSTRACT. In this talk, we propose an analysis of a class of exploration processes on large random graphs having a fixed degree distribution, using the "constructing while exploring" approach: the graph is constructed by uniform pairing of half-edges, thus leading to a realization of the configuration model, while it is simultaneously explored.

Under general assumptions, we show how this approach allows one to estimate key characteristics of the exploration process, in the large-graph limit, by solving a system of ordinary differential equations in a space of measures, obtained as the hydrodynamic limits of a (properly scaled) sequence of point-measure-valued continuous-time Markov chains. This procedure thus extends Wormald's differential equation method to a space of infinite dimension.

We will focus on a particular example to illustrate this methodology: the greedy matching problem on general graphs, using a local matching criterion.
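The flavor of this example can be sketched in a few lines (a toy illustration with arbitrary parameters, not the analysis of the talk): pair half-edges uniformly to realize the configuration model, then run a greedy matching over the edges in random order and record the fraction of matched vertices, the kind of quantity whose large-graph limit the ODE method tracks.

```python
import random

def configuration_model(degrees, rng):
    """Uniformly pair half-edges (stubs); the result may contain
    self-loops and multiple edges, as usual for the configuration model."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

def greedy_matching_fraction(n, edges, rng):
    """Scan the edges in uniform random order, matching both endpoints
    whenever both are still free; return the fraction of matched vertices."""
    rng.shuffle(edges)
    matched = [False] * n
    for u, v in edges:
        if u != v and not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
    return sum(matched) / n

rng = random.Random(0)
n = 10_000
edges = configuration_model([3] * n, rng)  # 3-regular degree sequence
print(f"matched fraction ~ {greedy_matching_fraction(n, edges, rng):.3f}")
```

Re-running with larger $n$ shows the matched fraction concentrating around a deterministic value, which is the hydrodynamic-limit phenomenon described above.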

17:55
Stability and renormalization of Jackson networks with non-idling mobile servers

ABSTRACT. A tandem of two queues sharing a pool of servers, where users need time to switch to the second queue, is used to model a typical pathway through an emergency department (ED), where patients undergo two consultations separated by diagnostic tests. In this paper~\cite{Fayolle2026stability}, explicit conditions for ergodicity, transience and null-recurrence are given and proven via Foster’s criterion, using a linear Lyapunov function. This result is extended to a Jackson network, with the key feature that the nodes share a pool of servers, with a non-idling one-limited service policy and Markovian routing for the servers. Furthermore, delay times for customers to move from one node to another are also taken into account. This covers some of the main features of models for emergency departments, namely priorities (triage) between patients.

In the case of the tandem queue, after scaling the arrival rate and the number of servers by $N$ and dividing the process by $N$, we obtain a renormalized process converging to the solution of an ordinary differential equation (ODE) subject to boundary conditions. We give some insight into the solution of this ODE in the ergodic case; in particular, we discuss the long-time behavior, namely convergence to the equilibrium point.

16:40-18:20 Session 7H: CS175: Global and local topological properties of random graphs
16:40
Diameters in preferential attachment model with random initial degrees

ABSTRACT. Given an i.i.d. sequence of random variables $X_i$ with a power-law distribution having finite mean but infinite variance, consider the random graph process built as follows: at each time step a new vertex is added to the graph with initial degree $X_t$, and is then attached to the older vertices of the graph following a preferential attachment rule. In this context we show that, depending on the support of the random out-degree (whether it equals 1 or is at least 2), the resulting graph is a small world or an ultra-small world, namely it has diameter of order $C\log t$ or $C\log\log t$ for some constant $C$. This extends the results for the classical preferential attachment model to the one with random initial degrees (the so-called PARID model).

17:05
Sharp thresholds for higher powers of Hamilton cycles in random graphs
PRESENTER: Tamas Makai

ABSTRACT. For $k \geq 4$, we establish that $p = (e/n)^{1/k}$ is a sharp threshold for the existence of the $k$-th power $H$ of a Hamilton cycle in the binomial random graph model. Our proof builds upon an approach by Riordan based on the second moment method, which previously established a weak threshold for $H$. This method expresses the second moment bound through contributions of subgraphs of $H$, with two key quantities: the number of copies of each subgraph in $H$ and the subgraphs' densities. We control these two quantities more precisely by carefully restructuring Riordan's proof and treating sparse and dense subgraphs of $H$ separately. This allows us to determine the exact constant in the threshold.

17:30
Do random initial degrees suppress concentration in preferential attachment graphs?
PRESENTER: Federico Polito

ABSTRACT. We consider the open problem concerning the possible lack of concentration of the degree distribution in preferential attachment graphs with random initial degrees, when their distribution is characterized by extremely heavy tails of power-law type. We show that the addition of such a large number of edges significantly distorts the degree distribution, leading to its non-concentration. Furthermore, we show that the smallest value of the exponent for which the degree distribution exhibits concentration is 2.

17:55
Exploring the space of graphs with fixed discrete curvatures

ABSTRACT. Discrete curvatures are quantities associated to the nodes and edges of a graph that reflect the local geometry around them. These curvatures have a rich mathematical theory and they have recently found success as a tool to analyze networks across a wide range of domains. We consider the problem of constructing graphs with a prescribed set of discrete edge curvatures, and explore the space of such graphs. In particular, we solve the exact reconstruction problem for the specific case of Forman–Ricci curvature. By leveraging the algebraic theory of Markov bases, we obtain a finite set of rewiring moves that connects the space of all graphs with a fixed discrete curvature. These moves allow us to define a Markov chain to sample from the space of graphs with a given curvature, providing a foundation for generating curvature-constrained null models. Based on joint work with Michelle Roost, Karel Devriendt and Jürgen Jost and ongoing work with Jane Ivy Coons.