Bandit Algorithms (hardback)
Format: Hardback
Language: English
Number of pages: 536
Publication date: 2020-07-16
Publisher: Cambridge University Press
Contributor: Szepesvári, Csaba
Illustrations: Worked examples or Exercises
Dimensions: 252 x 182 x 32 mm
Weight: 1070 g
Number of components: 1
Components: 69:B&W 6.69 x 9.61 in or 244 x 170 mm (Pinched Crown) Case Laminate on White w/Gloss Lam
ISBN: 9781108486828

Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
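To make the multi-armed bandit setting described above concrete, the following is a minimal illustrative sketch (not taken from the book) of a stochastic Bernoulli bandit played with a UCB-style index rule; the arm means, the horizon and the exploration-bonus constant are assumptions chosen purely for the example.

```python
# Illustrative sketch only: a stochastic Bernoulli bandit played with a UCB1-style rule.
# The arm means and horizon below are arbitrary assumptions for this example.
import math
import random

def ucb1(means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # number of times each arm has been played
    totals = [0.0] * k        # sum of observed rewards per arm
    regret = 0.0
    best = max(means)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # play each arm once to initialise the estimates
        else:
            # index = empirical mean + exploration bonus shrinking with the pull count
            arm = max(range(k), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        regret += best - means[arm]   # (pseudo-)regret incurred by this choice
    return regret

print(ucb1(means=[0.9, 0.8, 0.5], horizon=10_000))
```

The cumulative (pseudo-)regret printed here grows only logarithmically with the horizon, which is the kind of guarantee the book's upper confidence bound chapters analyse in detail.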

Reviews in the media

'This year marks the 68th anniversary of 'multi-armed bandits' introduced by Herbert Robbins in 1952, and the 35th anniversary of his 1985 paper with me that advanced multi-armed bandit theory in new directions via the concept of 'regret' and a sharp asymptotic lower bound for the regret. This vibrant subject has attracted important multidisciplinary developments and applications. Bandit Algorithms gives it a comprehensive and up-to-date treatment, and meets the need for such books in instruction and research in the subject, as in a new course on contextual bandits and recommendation technology that I am developing at Stanford.' Tze L. Lai, Stanford University

'This is a timely book on the theory of multi-armed bandits, covering a very broad range of basic and advanced topics. The rigorous treatment combined with intuition makes it an ideal resource for anyone interested in the mathematical and algorithmic foundations of a fascinating and rapidly growing field of research.' Nicolò Cesa-Bianchi, University of Milan

'The field of bandit algorithms, in its modern form, and driven by prominent new applications, has been taking off in multiple directions. The book by Lattimore and Szepesvári is a timely contribution that will become a standard reference on the subject. The book offers a thorough exposition of an enormous amount of material, neatly organized in digestible pieces. It is mathematically rigorous, but also pleasant to read, rich in intuition and historical notes, and without superfluous details. Highly recommended.' John Tsitsiklis, Massachusetts Institute of Technology

Other information

Tor Lattimore is a research scientist at DeepMind. His research is focused on decision making in the face of uncertainty, including bandit algorithms and reinforcement learning. Before joining DeepMind he was an assistant professor at Indiana University and a postdoctoral fellow at the University of Alberta. Csaba Szepesvári is a Professor in the Department of Computing Science at the University of Alberta and a Principal Investigator of the Alberta Machine Intelligence Institute. He also leads the 'Foundations' team at DeepMind. He has co-authored a book on nonlinear approximate adaptive controllers and authored a book on reinforcement learning, in addition to publishing over 200 journal and conference papers. He is an action editor of the Journal of Machine Learning Research.

Table of contents

1. Introduction
2. Foundations of probability
3. Stochastic processes and Markov chains
4. Finite-armed stochastic bandits
5. Concentration of measure
6. The explore-then-commit algorithm
7. The upper confidence bound algorithm
8. The upper confidence bound algorithm: asymptotic optimality
9. The upper confidence bound algorithm: minimax optimality
10. The upper confidence bound algorithm: Bernoulli noise
11. The Exp3 algorithm
12. The Exp3-IX algorithm
13. Lower bounds: basic ideas
14. Foundations of information theory
15. Minimax lower bounds
16. Asymptotic and instance dependent lower bounds
17. High probability lower bounds
18. Contextual bandits
19. Stochastic linear bandits
20. Confidence bounds for least squares estimators
21. Optimal design for least squares estimators
22. Stochastic linear bandits with finitely many arms
23. Stochastic linear bandits with sparsity
24. Minimax lower bounds for stochastic linear bandits
25. Asymptotic lower bounds for stochastic linear bandits
26. Foundations of convex analysis
27. Exp3 for adversarial linear bandits
28. Follow the regularized leader and mirror descent
29. The relation between adversarial and stochastic linear bandits
30. Combinatorial bandits
31. Non-stationary bandits
32. Ranking
33. Pure exploration
34. Foundations of Bayesian learning
35. Bayesian bandits
36. Thompson sampling
37. Partial monitoring
38. Markov decision processes
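As a companion to the contents listed above, here is a hedged sketch of Thompson sampling for Bernoulli rewards (the topic of Chapter 36); the uniform Beta(1, 1) priors, arm means and horizon are assumptions made only for illustration and are not material from the book.

```python
# Illustrative sketch of Thompson sampling for a Bernoulli bandit (cf. Chapter 36).
# Beta(1, 1) priors, the arm means and the horizon are assumptions for this example.
import random

def thompson_sampling(means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(means)
    alpha = [1.0] * k   # Beta posterior parameters: observed successes + 1
    beta = [1.0] * k    # Beta posterior parameters: observed failures + 1
    pulls = [0] * k
    for _ in range(horizon):
        # sample a mean from each arm's posterior and play the arm with the largest sample
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1.0 if rng.random() < means[arm] else 0.0
        alpha[arm] += reward
        beta[arm] += 1.0 - reward
        pulls[arm] += 1
    return pulls

print(thompson_sampling(means=[0.9, 0.8, 0.5], horizon=10_000))
```

The returned pull counts concentrate on the arm with the highest mean as the posterior sharpens, which is the exploration-exploitation behaviour the Bayesian chapters study.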