This book brings normative ethical theory to AI system development, showing how artificial intelligence (AI) systems can be aligned with human values by training them to follow human goals. It specifies the normative ethical and philosophical foundations of this project and then provides techniques for implementing these values using state-of-the-art methods for aligning general-purpose language systems. All of this is introduced in a straightforward way through the book's original concept of dynamic normativity. The book will be useful to advanced students and researchers in the ethics of technology, artificial intelligence, and responsible innovation, as well as to technology professionals and policymakers seeking to direct artificial intelligence towards common virtues and the public good.