Sijia Liu - Books
Showing all books by the author Sijia Liu. Shop with free shipping and fast delivery.
2 products
824 kr
Ships within 7-10 business days
This book offers an extensive exploration of foundation models, guiding readers through the essential concepts and advanced topics that define this rapidly evolving research area. Designed for those seeking to deepen their understanding and contribute to the development of safer and more trustworthy AI technologies, the book is divided into three parts covering the fundamentals, advanced topics in foundation models, and safety and trust in foundation models:

Part I introduces the core principles of foundation models and generative AI, presents the technical background of neural networks, delves into the learning and generalization of transformers, and finishes with the intricacies of transformers and in-context learning.

Part II introduces automated visual prompting techniques, prompting LLMs with privacy, and memory-efficient fine-tuning methods, and shows how LLMs can be reprogrammed for time-series machine learning tasks. It explores how LLMs can be reused for speech tasks, how synthetic datasets can be used to benchmark foundation models, and elucidates machine unlearning for foundation models.

Part III provides a comprehensive evaluation of the trustworthiness of LLMs, introduces jailbreak attacks and defenses for LLMs, presents safety risks when fine-tuning LLMs, introduces watermarking techniques for LLMs, presents robust detection of AI-generated text, elucidates backdoor risks in diffusion models, and presents red-teaming methods for diffusion models.

Mathematical notations are clearly defined and explained throughout, making this book an invaluable resource for both newcomers and seasoned researchers in the field.
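As a minimal illustration of the in-context learning idea covered in Part I, the sketch below assembles a few-shot prompt for a sentiment task; the demonstrations, labels, and variable names are purely hypothetical and are not taken from the book.

```python
# A minimal sketch of few-shot in-context learning: the model is not
# fine-tuned; labeled examples are placed directly in the prompt and the
# model is expected to infer the pattern at inference time.
# All examples and names here are illustrative assumptions.

demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this product.", "negative"),
]
query = "The service exceeded my expectations."

prompt = ""
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)  # this prompt would be sent to an LLM, which completes the label
```

The key point is that no parameters are updated: the demonstrations embedded in the prompt are the only "training" signal the model sees for the task.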
693 kr
Forthcoming
This book provides a systematic and in-depth introduction to machine unlearning (MU) for foundation models, framed through an optimization–model–data tri-design perspective and complemented by assessments and applications. As foundation models are continuously adapted and reused, the ability to selectively remove unwanted data, knowledge, or model behavior, without full retraining, poses new theoretical and practical challenges. Thus, MU has become a critical capability for trustworthy, deployable, and regulation-ready artificial intelligence.

From the optimization viewpoint, this book treats unlearning as a multi-objective and often adversarial problem that must simultaneously enforce targeted forgetting, preserve model utility, resist recovery attacks, and remain computationally efficient. From the model perspective, the book examines how knowledge is distributed across layers and latent subspaces, motivating modular and localized unlearning. From the data perspective, the book explores forget-set construction, data attribution, corruption, and coresets as key drivers of reliable forgetting.

Bridging theory and practice, the book also provides a comprehensive review of benchmark datasets and evaluation metrics for machine unlearning, critically examining their strengths and limitations. The authors further survey a wide range of applications in computer vision and large language models, including AI safety, privacy, fairness, and industrial deployment, highlighting why post-training model modification is often preferred over repeated retraining in real-world systems.

By unifying optimization, model, data, evaluation, and application perspectives, this book offers both a foundational framework and a practical toolkit for designing machine unlearning methods that are effective, robust, and ready for large-scale, regulated deployment.
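To make the multi-objective framing concrete, here is a minimal sketch, assuming a generic PyTorch classification setup, of an unlearning objective that trades off forgetting against utility preservation. The function name, the weight lam, and the gradient-ascent-on-the-forget-set strategy are illustrative assumptions, not the authors' method.

```python
# A minimal sketch (not the authors' method) of unlearning as a
# multi-objective optimization problem: raise the loss on the forget set
# while keeping the loss on the retain set low, trading the two objectives
# off with a weight lam. The PyTorch setup and all names are assumptions.

import torch
import torch.nn.functional as F

def unlearning_loss(model, forget_batch, retain_batch, lam=1.0):
    """Combined objective: forget targeted data, preserve utility elsewhere."""
    fx, fy = forget_batch
    rx, ry = retain_batch
    forget_loss = F.cross_entropy(model(fx), fy)   # to be maximized, so negated below
    retain_loss = F.cross_entropy(model(rx), ry)   # to be minimized, preserving utility
    return -forget_loss + lam * retain_loss

# One hypothetical update step, given a model, optimizer, and data batches:
# optimizer.zero_grad()
# loss = unlearning_loss(model, forget_batch, retain_batch, lam=1.0)
# loss.backward()
# optimizer.step()
```

Larger lam favors utility preservation, smaller lam favors aggressive forgetting; the tension between these terms is one instance of the trade-offs the optimization viewpoint in this book addresses.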