Adversarial Machine Learning (hardback)
Format
Hardback
Language
English
Number of pages
338
Publication date
2019-02-21
Edition
1st edition
Publisher
Cambridge University Press
Contributors
Nelson, Blaine / Rubinstein, Benjamin I. P. / Tygar, J. D.
Illustrations
37 b/w illus., 8 tables
Dimensions
251 x 241 x 20 mm
Weight
817 g
Number of components
1
ISBN
9781107043466

Adversarial Machine Learning

Hardback, English, 2019-02-21
1021 kr
  • Ships from us within 7-10 business days.
  • Free shipping on orders over 249 kr for private customers in Sweden.
Written by leading researchers, this complete introduction brings together all the theory and tools needed for building robust machine learning in adversarial environments. Discover how machine learning systems can adapt when an adversary actively poisons data to manipulate statistical inference, learn the latest practical techniques for investigating system security and performing robust data analysis, and gain insight into new approaches for designing effective countermeasures against the latest wave of cyber-attacks. Privacy-preserving mechanisms and the near-optimal evasion of classifiers are discussed in detail, and in-depth case studies on email spam and network security highlight successful attacks on traditional machine learning algorithms. Providing a thorough overview of the current state of the art in the field, and possible future directions, this groundbreaking work is essential reading for researchers, practitioners and students in computer security and machine learning, and those wanting to learn about the next stage of the cybersecurity arms race.

Goes well together

Adversarial Machine Learning + Co-Intelligence

Customers who bought this book have often also bought Co-Intelligence by Ethan Mollick (paperback).

Buy both for 1211 kr


Reviews in the media

'Data Science practitioners tend to be unaware of how easy it is for adversaries to manipulate and misuse adaptive machine learning systems. This book demonstrates the severity of the problem by providing a taxonomy of attacks and studies of adversarial learning. It analyzes older attacks as well as recently discovered surprising weaknesses in deep learning systems. A variety of defenses are discussed for different learning systems and attack types that could help researchers and developers design systems that are more robust to attacks.' Richard Lippmann, Lincoln Laboratory, Massachusetts Institute of Technology

'This is a timely book. Right time and right book, written with an authoritative but inclusive style. Machine learning is becoming ubiquitous. But for people to trust it, they first need to understand how reliable it is.' Fabio Roli, University of Cagliari, Italy

Other information

Anthony D. Joseph is a Chancellor's Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was formerly the Director of Intel Labs Berkeley. Blaine Nelson is a Software Engineer in the Counter-Abuse Technologies (CAT) team at Google. He has previously worked at the University of Potsdam and the University of Tübingen. Benjamin I. P. Rubinstein is a Senior Lecturer in Computing and Information Systems at the University of Melbourne. He has previously worked at Microsoft Research, Google Research, Yahoo! Research, Intel Labs Berkeley, and IBM Research. J. D. Tygar is a Professor of Computer Science and a Professor of Information Management at the University of California, Berkeley.

Table of contents

Part I. Overview of Adversarial Machine Learning: 1. Introduction; 2. Background and notation; 3. A framework for secure learning; Part II. Causative Attacks on Machine Learning: 4. Attacking a hypersphere learner; 5. Availability attack case study: SpamBayes; 6. Integrity attack case study: PCA detector; Part III. Exploratory Attacks on Machine Learning: 7. Privacy-preserving mechanisms for SVM learning; 8. Near-optimal evasion of classifiers; Part IV. Future Directions in Adversarial Machine Learning: 9. Adversarial machine learning challenges.