COLT 2018: 31ST ANNUAL CONFERENCE ON LEARNING THEORY
PROGRAM

Days: Friday, July 6th · Saturday, July 7th · Sunday, July 8th · Monday, July 9th

Friday, July 6th


09:00-10:00 Session 1
09:00  An Optimal Learning Algorithm for Online Unconstrained Submodular Maximization
09:10  Black-Box Reductions for Parameter-free Online Learning in Banach Spaces
09:20  The Many Faces of Exponential Weights in Online Learning
09:30  Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent
09:40  Online learning over a finite action set with limited switching
09:50  Online Learning: Sufficient Statistics and the Burkholder Method
10:00-10:30 Coffee Break
10:30-12:00 Session 2
10:30  Learning Single Index Models in Gaussian Space
10:40  L1 Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent
10:50  Reducibility and Computational Lower Bounds for Problems with Planted Sparse Structure
11:00  Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models
11:10  Marginal Singularity, and the Benefits of Labels in Covariate-Shift
11:20  Learning Mixtures of Linear Regressions with Nearly Optimal Complexity
11:30  Efficient Algorithms for Outlier-Robust Regression
11:40  Logistic Regression: The Importance of Being Improper
11:50  Restricted Eigenvalue from Stable Rank with Applications to Sparse Linear Regression
12:00-14:00 Lunch Break
14:00-15:00 Session 3

Invited talk: Stéphane Mallat

15:20-16:20 Session 4
15:20  An explicit analysis of the entropic penalty in linear programming
15:30  Lower Bounds for Higher-Order Convex Optimization
15:40  Efficient Convex Optimization with Membership Oracles
15:50  Faster Rates for Convex-Concave Games
16:00  Smoothed Analysis for Efficient Semi-definite Programming
16:10  Convex Optimization with Unbounded Nonconvex Oracles Using Simulated Annealing
16:30-17:30 Session 5
16:30  Local moment matching: A unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance
16:40  Fast and Sample Near-Optimal Algorithms for Learning Multidimensional Histograms
16:50  Learning Patterns for Detection with Multiscale Scan Statistics
17:00  Detecting Correlations with Little Memory and Communication
17:10  Actively Avoiding Nonsense in Generative Models
17:20  Near-Optimal Sample Complexity Bounds for Maximum Likelihood Estimation of Multivariate Log-concave Densities
17:30-19:30 Session 6

Poster Session

Saturday, July 7th


09:00-10:00 Session 7
09:00  The Externalities of Exploration and How Data Diversity Helps Exploitation
09:10  Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification
09:20  Testing Symmetric Markov Chains From a Single Trajectory
09:30  Action-Constrained Markov Decision Processes With Kullback-Leibler Cost
09:40  A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation
09:50  Finite Sample Analysis of Two-Timescale Stochastic Approximation with Applications to Reinforcement Learning
10:00-10:30 Coffee Break
10:30-12:00 Session 8
10:30  A Faster Approximation Algorithm for the Gibbs Partition Function
10:40  The Vertex Sample Complexity of Free Energy is Polynomial
10:50  Optimal Single Sample Tests for Structured versus Unstructured Network Data
11:00  Fundamental Limits of Weak Recovery with Applications to Phase Retrieval
11:10  Non-Convex Matrix Completion Against a Semi-Random Adversary
11:20  Counting Motifs with Graph Sampling
11:30  The Mean-Field Approximation: Information Inequalities, Algorithms, and Complexity
11:40  Breaking the $1/\sqrt{n}$ Barrier: Faster Rates for Permutation-based Models in Polynomial Time
11:50  Geometric Lower Bounds for Distributed Parameter Estimation under Communication Constraints
12:00-13:40 Lunch Break
13:40-14:00 Session 9

Open Problems

14:00-15:00 Session 10

Invited talk: Susan Murphy

15:20-16:50 Session 11
15:20  An Estimate Sequence for Geodesically Convex Optimization
15:30  Averaged Stochastic Gradient Descent on Riemannian Manifolds
15:40  Accelerating Stochastic Gradient Descent for Least Squares Regression
15:50  Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
16:00  Exponential convergence of testing error for stochastic gradient methods
16:10  Cutting plane methods can be extended into nonconvex optimization
16:20  Online Variance Reduction for Stochastic Optimization
16:30  Minimax Bounds on Stochastic Batched Convex Optimization
16:40  Iterate averaging as regularization for stochastic gradient descent
17:00-18:00 Session 12

Business meeting

18:00-20:00 Session 13

Poster Session

Sunday, July 8th


09:00-10:00 Session 14
09:00  Langevin Monte Carlo and JKO splitting
09:10  Log-concave sampling: Metropolis-Hastings algorithms are fast!
09:20  Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
09:30  Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical Viewpoints
09:40  Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability
09:50  Underdamped Langevin MCMC: A non-asymptotic analysis
10:00-10:30 Coffee Break
10:30-12:00 Session 15
10:30  Incentivizing Exploration by Heterogeneous Users
10:40  A General Approach to Multi-Armed Bandits Under Risk Criteria
10:50  Nonstochastic Bandits with Composite Anonymous Feedback
11:00  More Adaptive Algorithms for Adversarial Bandits
11:10  Best of Both Worlds: Stochastic & Adversarial Best-Arm Identification
11:20  Efficient Contextual Bandits in Non-stationary Worlds
11:30  Adaptivity to Smoothness in X-armed bandits
11:40  Small-loss bounds for online learning with partial information
11:50  Information Directed Sampling and Bandits with Heteroscedastic Noise
12:00-14:00 Lunch Break
14:00-15:00 Session 16

Invited talk: Johan Håstad

15:20-16:20 Session 17
15:20  Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations
15:30  Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk
15:40  Size-Independent Sample Complexity of Neural Networks
15:50  Optimal approximation of continuous functions by very deep ReLU networks
16:30-17:30 Session 18
16:30  Subpolynomial trace reconstruction for random strings and arbitrary deletion probability
16:40  Detection limits in the high-dimensional spiked rectangular model
16:50  Hidden Integrality of SDP Relaxations for Sub-Gaussian Mixture Models
17:00  An Analysis of the t-SNE Algorithm for Data Visualization
17:10  Fitting a putative manifold to noisy data
17:20  Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods
17:30-19:30 Session 19

Poster Session

Monday, July 9th


09:00-10:00 Session 20
09:00  Learning from Unreliable Datasets
09:10  Privacy-preserving Prediction
09:20  Calibrating Noise to Variance in Adaptive Data Analysis
09:30  A Data Prism: Semi-verified learning in the small-alpha regime
09:40  Reductions in Fair Learning and Optimization
09:50  Private Sequential Learning
10:00-10:30 Coffee Break
10:30-12:00 Session 21
10:30  Hardness of Learning Noisy Halfspaces using Polynomial Thresholds
10:40  Active Tolerant Testing
10:50  Time-Space Tradeoffs for Learning Finite Functions from Random Tests, with Applications to Polynomials
11:00  Empirical bounds for functions with weak interactions
11:10  A Direct Sum for Information Learners
11:20  Exact and Robust Conformal Inference Methods for Predictive Machine Learning With Dependent Data
11:30  Approximation beats concentration? An approximation view on inference with smooth radial kernels
11:40  Approximate Nearest Neighbors in Limited Space
11:50  Efficient Active Learning of Sparse Halfspaces
12:00-14:00 Lunch Break
12:00-14:00 Session 22

Poster Session