COLT 2015: 28TH ANNUAL CONFERENCE ON LEARNING THEORY
TALK KEYWORD INDEX

This page contains an index of author-provided keywords for the conference talks.

A
absolutely minimal Lipschitz extension
Acceleration
Active Learning
active learning on graphs
adaptive estimation
adaptive regret
adaptivity
adversarial noise
aggregating algorithm
Agnostic learning
AIXI
approximate policy evaluation
Approximation algorithms
asymptotic optimality
autoencoding
Averaging
B
balanced Pareto optimality
bandit algorithms
Bandit convex optimization
bandit linear optimization
batches
Bounded (a.k.a. Massart) Noise
Bradley-Terry model
C
Censored Block Model
chaining
classification
Closeness testing
cluster assumption
Combinatorial prediction
Community Detection
computational complexity
computational learning theory
Computational lower bounds
Computationally Efficient Algorithms
computationally efficient kernel learning
concentration
concentration inequalities
concentration of measure
conditional probability estimation
Conditional sampling
conjunctive query
consistency
contextual dueling bandits
convergence
convex analysis
Convex calibrated surrogates
convex duality
Convex optimization
convexity
correlation clustering
corrupted inputs
Cortical algorithms
Crowdsourcing
D
data privacy
decay of correlation
deep learning
dictionary learning
Differential Privacy
Dimension Reduction
Discrete point process
dueling bandit problem
E
Eigenvalue spacing
elicitation
empirical processes
empirical risk minimization
Ensemble aggregation
estimation error
excess risk
exp-concavity
expert algorithm
exponential family model
exponentially concave losses
extreme data classification
F
f-divergence
fast mixing
Feed-forward neural networks
Feedback graphs
first order bounds
first-order bounds
follow the perturbed leader
Fourier PCA
G
game theory
Gaussian min-max Theorem
Gaussian Sampling
general reinforcement learning
graph partitioning
graph prediction
grouped clinical trials
H
Halfspaces
Hartigan consistency
heat kernel
Hidden clique
hierarchical clustering
I
identification function
Identity testing
importance sampling
improvement for small losses
Independent Component Analysis
individual sequences
inference
information theory
Interactive Data Analysis
interior point methods
K
k-means
kernel methods
L
labeling
LAD
Lasserre hierarchy
LASSO
latent variable models
learning
learning on graphs
least squares
Legg-Hutter intelligence
Lifelong learning
linear bandits
linear regression
Linear Separators
Lipschitz extension
local algorithms
localization
log-concave measures
low rank matrix estimation
lower bound
lower bounds
Luckiness bounds
M
majority vote
Markov Decision Process
Markov decision processes
Markov random fields
matrix completion
Matrix perturbation theory
Matrix Polynomials
maximum entropy
MCMC
mean-square-error
Mechanism Design
metric distortion
minimax
minimax regret
Minimax risk
mixability
Multi-armed bandit
multi-armed bandit problem
Multi-armed bandit problems
multi-stage allocation
Multiclass classification
Multiple Communities
multitask learning
multivariate extremes
N
neuroidal computation
noise sensitivity
non-additive losses
non-convex functions
non-convex optimization
nonparametric classification
nonparametric regression
nonparametric statistics
NormalHedge
nuclear norm
O
offset Rademacher complexity
on-line learning
Online
online combinatorial optimization
online density estimation
online learning
online local learning
Optimal Mechanism
optimal PAC algorithm
Optimization
Orlicz spaces
overcomplete representations
P
PAC-learning
parameter estimation
Pareto optimality
Partitioning trees
PCA
planted clique
planted dense subgraph
Polynomial approximation
Polynomial regression
polynomial-time approximation scheme
Polynomial-time Reduction
prediction
Prediction with expert advice
prediction with membership queries
predictive join (PJOIN)
Principal Component Analysis
privacy
Probability estimation
proper composite losses
proper loss
Proper losses
Proper scoring rules
property
Property elicitation
Property testing
Q
Quantiles
query complexity of finding a cut
R
Rademacher complexity
random walk
Random Walks
ranking
Regression
regret
regret bounds
Regret Minimization
reinforcement learning
resource-constrained learning
S
saddle points
Sample Complexity
sample size
sample size determination
scale-sensitive capacity control
scoring rule
Second-order
self-concordant barriers
semi-bandit feedback
semi-random model
semi-supervised learning
semidefinite programming
shared representations
shifted power iteration
shifting regret
simulated annealing
Singular Value Decomposition
sleeping expert
sparse coding
sparse regression
sparsity
Spectral Algorithm
spectral algorithms
spectral clustering
Spectral Sparsification
stable tail dependence function
Statistical Estimation
Statistical Query Model
stochastic approximation
Stochastic Block Model
stochastic gradient
stochastic optimization
streaming algorithms
structured prediction
structured signals
submodular functions
substitution functions
Sum of squares
sum-of-squares method
Surrogate risk minimization
SVD
switching cost
T
temporal difference methods
tensor decomposition
Tensors
Thompson sampling
time-varying competitors
transductive
U
unbounded functions
unified framework
Uniform distribution
Unique Dominant Strategy Equilibrium
universal algebra
universal Turing machine
unknown competitors
unsupervised and semi-supervised learning
unsupervised learning
V
variable selection
variance reduction
VC theory
W
weighted average algorithm