AI Security
The Most Dangerous Cyber-Attacks on Artificial Intelligence
Hardcover, English, 2026
Part of the Cognitive Technologies series
SEK 1,205
Forthcoming
Description
The author provides a rigorous, technically grounded framework for analysing, modelling, and mitigating adversarial threats against artificial intelligence systems. The book focuses on adversarial machine learning and AI-native cyber-attacks, examining how threat actors exploit vulnerabilities in data pipelines, model architectures, training procedures, and inference mechanisms to compromise the integrity, confidentiality, and availability of AI-driven systems.

The significance of this book lies in addressing a structural gap in contemporary cybersecurity practice. Traditional security models were designed for deterministic software and networked systems, not for probabilistic, adaptive, and data-driven AI models. As AI increasingly underpins high-stakes decision-making across finance, healthcare, critical infrastructure, autonomous systems, and defence, adversarial manipulation of AI models has become an operational and strategic risk rather than a theoretical concern. This book responds directly to that risk by reframing cybersecurity through a model-centric, adversarial lens.

The book is organised around the primary classes of AI cyber-attacks, with each chapter analysing a major attack class that subsumes multiple concrete adversarial techniques. Collectively, these chapters cover the most dangerous and operationally relevant attack vectors observed in real-world AI deployments, including adversarial perturbations, data poisoning and backdoors, model extraction and inversion, membership inference, prompt injection and jailbreak attacks on large language models, AI-powered social engineering and deepfakes, federated learning and reinforcement learning attacks, and adversarial malware targeting AI-based security systems.
Key features include lifecycle-based threat modelling, red-teaming methodologies, quantitative risk assessment frameworks, and technical countermeasures such as adversarial training, differential privacy, secure aggregation, cryptographic watermarking, and AI-specific governance controls.

Readers will gain an operational understanding of how AI systems fail under adversarial pressure, how to simulate and test adversarial behaviours, and how to design resilient AI architectures suitable for deployment in high-risk environments. The book assumes prior familiarity with machine learning fundamentals and cybersecurity concepts and is aimed at advanced practitioners, researchers, and postgraduate audiences.