Artificial Intelligence has made remarkable progress in recent years, with deep reinforcement learning enabling machines to solve complex decision-making problems across domains such as robotics, healthcare, finance, and autonomous systems. However, as these models become increasingly powerful, they also become more difficult to interpret, often functioning as opaque “black-box” systems.
Illuminating Intelligence: Explainable AI and Interpretability in Deep Reinforcement Learning explores the emerging field of Explainable Artificial Intelligence (XAI) and its role in making advanced AI systems more transparent and understandable. The book introduces the foundations of artificial intelligence and reinforcement learning, and examines key techniques for interpreting complex machine learning models.
Through discussions on model-agnostic explanations, visualization methods, feature attribution, and policy interpretability, the book provides practical insights into analysing and understanding deep reinforcement learning systems.
Designed for students, researchers, and practitioners in artificial intelligence and machine learning, this book offers a clear introduction to the challenges, techniques, and future directions of building transparent, trustworthy, and responsible AI systems.