PROGRAM
Days: Thursday, July 6th; Friday, July 7th; Saturday, July 8th; Sunday, July 9th; Monday, July 10th
Thursday, July 6th
11:00-13:30 Lunch Break
15:20-15:50 Coffee Break
Friday, July 7th
09:00-10:00 Session 5: Adaptivity and Human-centric Learning
Chair:
09:00 | Generalization for Adaptively-chosen Estimators via Stable Median
09:20 | Learning Non-Discriminatory Predictors
09:40 | The Price of Selection in Differential Privacy
09:50 | Efficient PAC Learning from the Crowd
10:00-10:20 Coffee Break
10:20-11:20 Session 6: Langevin Dynamics and Non-Convex Optimization
Chair:
10:20 | A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics (Best Paper Award)
10:40 | Non-Convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis
10:50 | Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent
11:00 | Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo
11:10 | Fast Rates for Empirical Risk Minimization of Strict Saddle Problems
11:20-11:35 Coffee Break
12:35-14:30 Lunch Break (including Women in Machine Learning - Theory Lunch)
14:30-15:30 Session 8: Unsupervised Learning
Chair:
14:30 | Sample complexity of population recovery
14:50 | Noisy Population Recovery from Unknown Noise
15:00 | Learning Multivariate Log-concave Distributions
15:10 | Ten Steps of EM Suffice for Mixtures of Two Gaussians
15:20 | The Hidden Hubs Problem
15:30-16:00 Coffee Break
16:00-17:00 Session 9: Bandits I
Chair:
16:00 | Sparse Stochastic Bandits
16:10 | An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits
16:20 | Corralling a Band of Bandit Algorithms
16:30 | Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
16:40 | Towards Instance Optimal Bounds for Best Arm Identification
16:50 | Bandits with Movement Costs and Adaptive Pricing
17:00-17:20 Coffee Break
17:20-17:40 Session 10: Online Learning with Partial Feedback
Chair:
17:20 | Tight Bounds for Bandit Combinatorial Optimization
17:30 | Online Nonparametric Learning, Chaining, and the Role of Partial Feedback
17:40-18:00 Session 11: Open Problems Session
Chair:
17:40 | Open Problem: First-Order Regret Bounds for Contextual Bandits
17:50 | Open Problem: Meeting Times for Learning Random Automata
Saturday, July 8th
09:00-10:00 Session 13: Robustness
Chair:
09:00 | Adaptivity to Noise Parameters in Nonparametric Active Learning
09:20 | Computationally Efficient Robust Estimation of Sparse Functionals
09:30 | Robust Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities
09:40 | Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization
09:50 | Thresholding based Efficient Outlier Robust PCA
10:00-10:20 Coffee Break
10:20-11:20 Session 14: Combinatorial Optimization in Learning
Chair:
10:20 | Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality
10:40 | Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems
10:50 | Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization
11:00 | Submodular Optimization under Noise
11:10 | Correspondence retrieval
11:20-11:35 Coffee Break
11:35-12:35 Session 15: Online Learning
Chair:
11:35 | Online Learning Without Prior Information (Best Student Paper Award)
11:55 | On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities
12:15 | Fast rates for online learning in Linearly Solvable Markov Decision Processes
12:25 | ZIGZAG: A new approach to adaptive online learning
12:35-14:50 Lunch Break
14:50-15:40 Session 17: PAC Learning
Chair:
14:50 | Efficient Co-Training of Linear Separators under Weak Dependence
15:10 | Effective Semisupervised Learning on Manifolds
15:20 | Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes
15:30 | Learning Disjunctions of Predicates
Sunday, July 9th
09:00-10:00 Session 20: Complexity of Learning
Chair:
09:00 | A General Characterization of the Statistical Query Complexity
09:20 | Mixing Implies Lower Bounds for Space Bounded Learning
09:40 | On Learning versus Refutation
09:50 | Inapproximability of VC Dimension and Littlestone's Dimension
10:00-10:20 Coffee Break
10:20-11:20 Session 21: Property Testing and Elicitation
Chair:
10:20 | Memoryless Sequences for Differentiable Losses
10:40 | Multi-Observation Elicitation
10:50 | Testing Bayesian Networks
11:00 | Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing
11:10 | Two-Sample Tests for Large Random Graphs using Network Statistics
11:20-11:35 Coffee Break
12:35-14:30 Lunch Break
14:30-15:30 Session 23: Stochastic Optimization
Chair:
14:30 | Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds
14:50 | Stochastic Composite Least-Squares Regression with convergence rate $O(1/n)$
15:00 | A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
15:10 | Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox
15:20 | The Sample Complexity of Optimizing a Convex Function
15:30-16:00 Coffee Break
16:00-17:00 Session 24: Bandits II
Chair:
16:00 | The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime
16:20 | Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons
16:40 | Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration
16:50 | Thompson Sampling for the MNL-Bandit
17:00-17:20 Coffee Break
Monday, July 10th
09:00-10:00 Session 26: Neural Networks
Chair:
09:00 | On the Ability of Neural Nets to Express Distributions
09:20 | Depth Separation for Neural Networks
09:30 | Surprising properties of dropout in deep networks
09:40 | Reliably Learning the ReLU in Polynomial Time
09:50 | Nearly-tight VC-dimension bounds for neural networks
10:00-10:20 Coffee Break
10:20-11:20 Session 27: Learning with Matrices and Tensors
Chair:
10:20 | Exact tensor completion with sum-of-squares
10:40 | Fast and robust tensor decomposition with applications to dictionary learning
10:50 | Homotopy Analysis for Tensor PCA
11:00 | Fundamental limits of symmetric low-rank matrix estimation
11:10 | Matrix Completion from O(n) Samples in Linear Time
11:20-11:35 Coffee Break
11:35-12:35 Session 28: Statistical Learning Theory
Chair:
11:35 | High-Dimensional Regression with Binary Coefficients: Estimating Squared Error and a Phase Transition
11:55 | Rates of estimation for determinantal point processes
12:05 | Predicting with Distributions
12:15 | A second-order look at stability and generalization
12:25 | Optimal learning via local entropies and sample compression