COLT 2011 - Program

July 8, Friday

17:00 - 19:00 Registration
19:00 - 20:00 Welcome reception at Danubius Hotel Flamenco

July 9, Saturday

09:00 – 10:15 Recommendation Systems and Matrix Estimation

10:15 – 10:45 Coffee
10:45 – 11:45 Invited talk
11:45 – 12:35 Sparsity
12:35 – 14:35 Lunch break
14:35 – 16:15 Computation and Learning
16:15 – 16:45 Coffee
16:45 – 17:10 Privacy
17:10 – 18:40 Open problems session
20:30 Business meeting

July 10, Sunday

09:00 – 10:40 Learnability
10:40 – 11:10 Coffee
11:10 – 12:50 Statistical Estimation
12:50 – 14:50 Lunch break
14:50 – 16:05 Online Learning, approachability, and calibration
16:05 – 16:35 Coffee
16:35 – 17:25 Control and Reinforcement Learning
17:25 – 18:25 Impromptu
19:00 – 21:00 Banquet at Hemingway Restaurant
(Kosztolányi Dezső tér 2, near the lake)

July 11, Monday

09:00 – 10:40 Bandits
10:40 – 11:10 Coffee
11:10 – 12:10 Invited talk
12:10 – 13:00 Optimization
13:00 – 15:00 Lunch break
15:00 – 16:15 Games
16:15 – 16:45 Coffee
16:45 – 18:25 Aggregation Methods and MDL

July 9, Saturday

09:00 – 10:15

Recommendation Systems and Matrix Estimation

Ohad Shamir and Shai Shalev-Shwartz
Collaborative Filtering with the Trace Norm: Learning, Bounding, and Transducing

Rina Foygel and Nathan Srebro
Concentration-Based Guarantees for Low-Rank Matrix Reconstruction

Cynthia Rudin, Ansaf Salleb-Aouissi, Eugene Kogan and David Madigan
Sequential Event Prediction with Association Rules

10:15 – 10:45 Coffee
10:45 – 11:45

Invited talk

David Hand
Learning in the real world

11:45 – 12:35

Sparsity

Arnak Dalalyan and Laëtitia Comminges
Tight conditions for consistent variable selection in high dimensional nonparametric regression

Sébastien Gerchinovitz
Sparsity regret bounds for individual sequences in online linear regression

12:35 – 14:35 Lunch break

14:35 – 16:15

Computation and Learning

Vitaly Feldman
Distribution-Independent Evolvability of Linear Threshold Functions

Vitaly Feldman, Homin K. Lee and Rocco Servedio
Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas

Alekh Agarwal, John Duchi, Peter Bartlett and Clement Levrard
Oracle inequalities for computationally budgeted model selection

Michael Kallweit and Hans Simon
A Close Look to Margin Complexity and Related Parameters

16:15 – 16:45 Coffee

16:45 – 17:10

Privacy

Kamalika Chaudhuri and Daniel Hsu
Sample Complexity Bounds for Differentially Private Learning

17:10 – 18:40

Open problems session

Elad Hazan and Satyen Kale
A simple multi-armed bandit algorithm with optimal variation-bounded regret

Aleksandrs Slivkins
Monotone multi-armed bandit allocations

Loizos Michael
Missing information impediments to learnability

Jacob Abernethy and Shie Mannor
Does an efficient calibrated forecasting strategy exist?

Peter Grünwald and Wojciech Kotłowski
Bounds on individual risk for log-loss predictors

Wojciech Kotłowski and Manfred Warmuth
Minimax algorithm for learning rotations

20:30 Business meeting

July 10, Sunday

09:00 – 10:40

Learnability

Amit Daniely, Sivan Sabato, Shai Ben-David and Shai Shalev-Shwartz
Multiclass Learnability and the ERM principle

Daniel Vainsencher, Shie Mannor and Alfred Bruckstein
The Sample Complexity of Dictionary Learning

Liu Yang, Steve Hanneke and Jaime Carbonell
Identifiability of Priors from Bounded Sample Sizes with Applications to Transfer Learning

Wei Gao and Zhi-Hua Zhou
On the Consistency of Multi-Label Learning

10:40 – 11:10 Coffee

11:10 – 12:50

Statistical Estimation

Jayadev Acharya, Hirakendu Das, Ashkan Jafarpour, Alon Orlitsky and Shengjun Pan
Competitive Closeness Testing

Philippe Rigollet and Xin Tong
Neyman-Pearson classification under a strict constraint

Ingo Steinwart
Adaptive Density Level Set Clustering

Ping Li and Cun-Hui Zhang
A New Algorithm for Compressed Counting with Applications in Shannon Entropy Estimation in Dynamic Data

12:50 – 14:50 Lunch break

14:50 – 16:05

Online Learning, approachability, and calibration

Jacob Abernethy, Peter Bartlett and Elad Hazan
Blackwell Approachability and No-Regret Learning are Equivalent

Alexander Rakhlin, Karthik Sridharan and Ambuj Tewari
Online Learning: Beyond Regret

Dean Foster, Alexander Rakhlin, Karthik Sridharan and Ambuj Tewari
Complexity-Based Approach to Calibration with Checking Rules

16:05 – 16:35 Coffee

16:35 – 17:25

Control and Reinforcement Learning

István Szita and Csaba Szepesvári
Agnostic KWIK learning and efficient approximate reinforcement learning

Yasin Abbasi-Yadkori and Csaba Szepesvári
Regret Bounds for the Adaptive Control of Linear Quadratic Systems

17:25 – 18:25 Impromptu
19:00 – 21:00 Banquet at Hemingway Restaurant
(Kosztolányi Dezső tér 2, near the lake)

July 11, Monday

09:00 – 10:40

Bandits

Aurélien Garivier and Olivier Cappé
The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond

Odalric-Ambrym Maillard, Gilles Stoltz and Rémi Munos
A Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences

Kareem Amin, Michael Kearns and Umar Syed
Bandits, Query Learning, and the Haystack Dimension

Aleksandrs Slivkins
Contextual Bandits with Similarity Information

10:40 – 11:10 Coffee
11:10 – 12:10

Invited talk

Bill Freeman
Where machine vision needs help from machine learning

12:10 – 13:00

Optimization

Indraneel Mukherjee, Cynthia Rudin and Robert Schapire
The Rate of Convergence of AdaBoost

Elad Hazan and Satyen Kale
Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization

13:00 – 15:00 Lunch break

15:00 – 16:15

Games

Jean-Yves Audibert, Sébastien Bubeck and Gábor Lugosi
Minimax Policies for Combinatorial Prediction Games

Shie Mannor, Vianney Perchet and Gilles Stoltz
Robust approachability and regret minimization in games with partial monitoring

Gábor Bartók, Dávid Pál and Csaba Szepesvári
Minimax Regret of Finite Partial-Monitoring Games in Stochastic Environments

16:15 – 16:45 Coffee

16:45 – 18:25

Aggregation Methods and MDL

Arnak Dalalyan and Joseph Salmon
Optimal aggregation of affine estimators

Tim Van Erven, Mark Reid and Robert Williamson
Mixability is Bayes Risk Curvature Relative to Log Loss

Peter Grünwald
Safe Learning: bridging the gap between Bayes, MDL and statistical learning theory via empirical convexity

Wojciech Kotłowski and Peter Grünwald
Maximum Likelihood vs. Sequential Normalized Maximum Likelihood in On-line Density Estimation