- Format: Paperback (Häftad)
- Publisher: Addison-Wesley Professional
- Dimensions: 231 x 189 x 24 mm
- Weight: 1017 g
Learning Deep Learning
Theory and Practice of Neural Networks, Computer Vision, Natural Language Processing, and Transformers Using TensorFlow
by Magnus Ekman
NVIDIA's Full-Color Guide to Deep Learning with TensorFlow: All You Need to Get Started and Get Results

Deep learning is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to deep learning with TensorFlow, the #1 Python library for building these breakthrough applications. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others, including those with no prior machine learning or statistics experience.

After introducing the essential building blocks of deep neural networks, Magnus Ekman shows how to use fully connected feedforward networks and convolutional networks to solve real problems, such as predicting housing prices or classifying images. You'll learn how to represent words from a natural language, capture semantics, and develop a working natural language translator. With that foundation in place, Ekman then guides you through building a system that inputs images and describes them in natural language.

Throughout, Ekman provides concise, well-annotated code examples using TensorFlow and the Keras API. (For comparison and easy migration between frameworks, complementary PyTorch examples are provided online.) He concludes by previewing trends in deep learning, exploring important ethical issues, and providing resources for further learning.
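To give a flavor of the gradient-based learning the description mentions (e.g. fitting a simple model that predicts a price from a feature), here is a minimal sketch in plain Python. The toy data and variable names are invented for illustration; this is not code from the book, which uses TensorFlow and Keras:

```python
# Minimal gradient-descent linear regression: fit y = w*x + b to toy
# (feature, price) pairs by repeatedly stepping against the gradient
# of the mean squared error. Illustration only; not code from the book.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b = 0.0, 0.0
lr = 0.02  # learning rate
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# w converges toward roughly 2 and b toward roughly 1, matching the
# slope and intercept used to generate the toy data.
```

The same fit could be written in a few lines of Keras; the point of the sketch is only to show the update rule that frameworks automate.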
- Master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
- See how frameworks make it easier to develop more robust and useful neural networks
- Discover how convolutional neural networks (CNNs) revolutionize classification and analysis
- Use recurrent neural networks (RNNs) to optimize for text, speech, and other variable-length sequences
- Master long short-term memory (LSTM) techniques for natural language generation and other applications
- Move further into natural language processing (NLP), including understanding and translation
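The first bullet's core concept, the Rosenblatt perceptron (also the subject of Chapter 1), can be sketched in a few lines of plain Python. This is a hedged illustration with invented helper names and toy data, not an excerpt from the book:

```python
# Rosenblatt perceptron: a weighted sum plus bias through a step
# activation, trained with the classic perceptron learning rule.
# Function names and data are illustrative, not from the book.
def perceptron_train(samples, epochs=10, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            z = w[0] * x[0] + w[1] * x[1] + b
            y = 1 if z > 0 else 0
            err = target - y  # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Train on the logical AND function, which is linearly separable,
# so the perceptron learning rule is guaranteed to converge.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

After training, `perceptron_predict` reproduces the AND truth table; trying the same on XOR would fail, which is exactly the limitation that motivates the multilayer networks of later chapters.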
- Ships within 7-10 business days.
- Free shipping within Sweden on orders over 159 kr for private individuals.
Magnus Ekman, Ph.D., is a director of architecture at NVIDIA Corporation. His doctorate is in computer engineering, and he is the inventor of multiple patents. He was first exposed to artificial neural networks in the late nineties in his native country, Sweden. After some dabbling in evolutionary computation, he ended up focusing on computer architecture and relocated to Silicon Valley, where he lives with his wife Jennifer, children Sebastian and Sofia, and dog Babette. He previously worked on processor design and R&D at Sun Microsystems and Samsung Research America, and has been involved in starting two companies, one of which (Skout) was later acquired by The Meet Group, Inc. In his current role at NVIDIA, he leads an engineering team working on CPU performance and power efficiency for systems on chips targeting the autonomous vehicle market.

As the deep learning (DL) field has exploded over the past few years, fueled by NVIDIA's GPU technology and CUDA, Dr. Ekman found himself in the middle of a company expanding beyond computer graphics into a DL powerhouse. As part of that journey, he challenged himself to stay up to date with the most recent developments in the field. He considers himself an educator, and in the process of writing Learning Deep Learning (LDL), he partnered with the NVIDIA Deep Learning Institute (DLI), which offers hands-on training in AI, accelerated computing, and accelerated data science. He is thrilled about DLI's plans to add LDL to its existing portfolio of self-paced online courses, live instructor-led workshops, educator programs, and teaching kits.
Preface
Acknowledgments
About the Author
Chapter 1: The Rosenblatt Perceptron
Chapter 2: Gradient-Based Learning
Chapter 3: Sigmoid Neurons and Backpropagation
Chapter 4: Fully Connected Networks Applied to Multiclass Classification
Chapter 5: Toward DL: Frameworks and Network Tweaks
Chapter 6: Fully Connected Networks Applied to Regression
Chapter 7: Convolutional Neural Networks Applied to Image Classification
Chapter 8: Deeper CNNs and Pretrained Models
Chapter 9: Predicting Time Sequences with Recurrent Neural Networks
Chapter 10: Long Short-Term Memory
Chapter 11: Text Autocompletion with LSTM and Beam Search
Chapter 12: Neural Language Models and Word Embeddings
Chapter 13: Word Embeddings from word2vec and GloVe
Chapter 14: Sequence-to-Sequence Networks and Natural Language Translation
Chapter 15: Attention and the Transformer
Chapter 16: One-to-Many Network for Image Captioning
Chapter 17: Medley of Additional Topics
Chapter 18: Summary and Next Steps
Appendix A: Linear Regression and Linear Classifiers
Appendix B: Object Detection and Segmentation
Appendix C: Word Embeddings Beyond word2vec and GloVe
Appendix D: GPT, BERT, and RoBERTa
Appendix E: Newton-Raphson versus Gradient Descent
Appendix F: Matrix Implementation of Digit Classification Network
Appendix G: Relating Convolutional Layers to Mathematical Convolution
Appendix H: Gated Recurrent Units
Appendix I: Setting Up a Development Environment
Appendix J: Cheat Sheets
Index