Keynote I: Learning in sequential decision problems

Prof. Peter Bartlett, University of California, Berkeley, USA

Abstract:

Many problems of decision making under uncertainty can be formulated as sequential decision problems in which a strategy's current state and choice of action determine its loss and next state, and the aim is to choose actions so as to minimize the sum of losses incurred. For instance, in internet news recommendation and in digital marketing, optimizing interactions with users to maximize long-term utility requires exploiting the users' dynamics. We consider three problems of this kind: Markov decision processes with adversarially chosen transition and loss structures; policy optimization for large-scale Markov decision processes; and linear tracking problems with adversarially chosen quadratic loss functions. The key challenge in these problems is to develop methods that are effective with large state spaces, yet computationally efficient. We aim to incur a total loss that is not too much worse than the best in some comparison class. Since optimality with respect to the class of all policies is in general unachievable for large-scale problems, we consider more restricted comparison classes. We present algorithms and optimal excess loss bounds for these three problems. We show situations where these algorithms are computationally efficient, and others where hardness results suggest that no computationally efficient algorithm exists. This is joint work with Yasin Abbasi-Yadkori, Varun Kanade, Alan Malek, Yevgeny Seldin and Csaba Szepesvari.
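One way to make "not too much worse than the best in some comparison class" precise is through the excess loss, or regret. A minimal statement in our own notation (not necessarily that of the talk): let \(\ell_t(x_t, a_t)\) be the loss incurred in state \(x_t\) under action \(a_t\) at round \(t\), and let \(L_T(\pi)\) be the cumulative loss that a policy \(\pi\) from the comparison class \(\Pi\) would have incurred over the same \(T\) rounds. Then

% Regret (excess loss) after T rounds, relative to comparison class \Pi:
\[
  \mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell_t(x_t, a_t) \;-\; \min_{\pi \in \Pi} L_T(\pi).
\]

The excess loss bounds mentioned in the abstract control how fast this quantity can grow with \(T\).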

Biography: Peter Bartlett is a professor of Mathematics at the Queensland University of Technology and a professor of Computer Science and Statistics at UC Berkeley. His research interests include machine learning, statistical learning theory, and adaptive control. He has served as an associate editor of Bernoulli, Machine Learning, the Journal of Machine Learning Research, the Journal of Artificial Intelligence Research, the IEEE Transactions on Information Theory, and Mathematics of Control, Signals, and Systems, and on the editorial boards of Machine Learning, JAIR, and Foundations and Trends in Machine Learning. He has been a professor in the Research School of Information Sciences and Engineering at the Australian National University and an honorary professor at the University of Queensland. He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year in Australia in 2001, was an IMS Medallion Lecturer in 2008, and is an Australian Laureate Fellow and a Fellow of the IMS.

Download slides

Keynote II: Making Sense of a Random World through Statistics

Prof. Geoff McLachlan, University of Queensland, Brisbane, Australia

Abstract:

Given the recent growth in data, this talk argues that there is a need for even more statistical methods in data mining. We present some examples in which fairly sophisticated statistical procedures (at least not off-the-shelf methods) must be adopted to avoid misleading inferences being made about patterns in the data that are due merely to randomness. One example concerns the search for clusters in data. Having found an apparent clustering in a dataset, as evidenced by a visualisation of the dataset in some reduced form, the question arises of whether this clustering is representative of an underlying group structure or is merely due to random fluctuations. Another example concerns supervised classification when many variables are measured on only a small number of objects. In this situation, it is possible to construct a classifier based on a relatively small subset of the variables that classifies the data perfectly (that is, its apparent error rate is zero). We discuss how statistics is needed to correct for the optimism in these results due to randomness and to provide a realistic interpretation.
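A minimal, hypothetical sketch of that optimism (ours, not the speaker's code), assuming Python with numpy and scikit-learn: on pure-noise data with many variables and few objects, selecting the "best" variables on the full dataset before cross-validating the classifier makes the error estimate look far better than chance, whereas repeating the selection inside each fold gives an honest, near-chance estimate.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n, p, k = 40, 2000, 10            # few objects, many variables, small subset
X = rng.standard_normal((n, p))   # pure noise: no real group structure
y = np.repeat([0, 1], n // 2)

def top_k_by_correlation(X, y, k):
    """Indices of the k variables most correlated with the class labels."""
    scores = np.abs((y - y.mean()) @ (X - X.mean(axis=0)))
    return np.argsort(scores)[-k:]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Naive protocol: select variables once, using ALL the data (labels included),
# then cross-validate only the classifier -- the selection step leaks.
naive_vars = top_k_by_correlation(X, y, k)
naive_acc, honest_acc = [], []
for train, test in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[np.ix_(train, naive_vars)], y[train])
    naive_acc.append(clf.score(X[np.ix_(test, naive_vars)], y[test]))

    # Honest protocol: redo the variable selection inside each fold,
    # using only the training portion.
    fold_vars = top_k_by_correlation(X[train], y[train], k)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[np.ix_(train, fold_vars)], y[train])
    honest_acc.append(clf.score(X[np.ix_(test, fold_vars)], y[test]))

print(f"naive CV accuracy:  {np.mean(naive_acc):.2f}")   # typically far above 0.5
print(f"honest CV accuracy: {np.mean(honest_acc):.2f}")  # near chance, ~0.5

The gap between the two estimates is the optimism the abstract refers to; only the second protocol treats variable selection as part of the classifier being evaluated.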