- Hardback
- Number of pages: XXIII, 497
- 1st ed. 2018
- Springer International Publishing AG
- Bibliography; 139 illustrations (11 color, 128 black and white); 10 color tables
- 257 x 185 x 36 mm
- Number of components: 1 hardback
- 1090 g
Neural Networks and Deep Learning
A Textbook
This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning, which are essential for understanding the design of neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book also discusses a wide range of applications to give the practitioner a flavor of how neural architectures are designed for different types of problems. Applications in many areas are covered, including recommender systems, machine translation, image captioning, image classification, reinforcement-learning-based gaming, and text analytics.

The chapters of this book span three categories:

- The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. The first two chapters emphasize the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. These methods are studied together with recent feature engineering methods like word2vec.
- Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.
- Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks. Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10.

The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.
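The claim that classical models are special cases of neural networks can be made concrete with logistic regression: it is exactly a single neuron with a sigmoid activation, trained by gradient descent on the cross-entropy loss. The sketch below is a minimal illustration of this correspondence (the toy data, learning rate, and iteration count are arbitrary choices, not taken from the book):

```python
import numpy as np

# Logistic regression viewed as a one-neuron "network":
# forward pass = sigmoid(X @ w + b), loss = cross-entropy.
rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs, labeled 0 and 1.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)   # weights of the single neuron
b = 0.0           # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent; the gradient of the cross-entropy loss
# with respect to (w, b) has the familiar (prediction - label) form.
for _ in range(500):
    p = sigmoid(X @ w + b)            # forward pass
    grad_w = X.T @ (p - y) / len(y)   # backward pass for w
    grad_b = np.mean(p - y)           # backward pass for b
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(accuracy)
```

The same template generalizes: swapping the sigmoid for the identity recovers linear regression, and stacking such neurons yields the multilayer networks discussed in the later chapters.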
Reviews
"The book recommends itself as a stepping-stone of the research-intensive area of deep learning and a worthy continuation of the previous textbooks written by the author ... . Thanks to its systematic and thorough approach complemented with the variety of resources (bibliographic and software references, exercises) neatly presented after each chapter, it is suitable for audiences of varied expertise or background." (Irina Ioana Mohorianu, zbMATH 1402.68001, 2019)
Charu C. Aggarwal is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He completed his undergraduate degree in Computer Science from the Indian Institute of Technology at Kanpur in 1993 and his Ph.D. in Operations Research from the Massachusetts Institute of Technology in 1996. He has published more than 350 papers in refereed conferences and journals, and has applied for or been granted more than 80 patents. He is author or editor of 18 books, including textbooks on data mining, machine learning (for text), recommender systems, and outlier analysis. Because of the commercial value of his patents, he has thrice been designated a Master Inventor at IBM. He has received several internal and external awards, including the EDBT Test-of-Time Award (2014) and the IEEE ICDM Research Contributions Award (2015). Aside from serving as program or general chair of many major conferences in data mining, he is an editor-in-chief of ACM SIGKDD Explorations and of the ACM Transactions on Knowledge Discovery from Data. He is a fellow of SIAM, the ACM, and the IEEE, for "contributions to knowledge discovery and data mining algorithms."
1 An Introduction to Neural Networks.- 2 Machine Learning with Shallow Neural Networks.- 3 Training Deep Neural Networks.- 4 Teaching Deep Learners to Generalize.- 5 Radial Basis Function Networks.- 6 Restricted Boltzmann Machines.- 7 Recurrent Neural Networks.- 8 Convolutional Neural Networks.- 9 Deep Reinforcement Learning.- 10 Advanced Topics in Deep Learning.