Human-Centered Artificial Intelligence: Research and Applications
- 5h 42m
- Chang S. Nam, Jae-Yoon Jung, Sangwon Lee
- Elsevier Science and Technology Books, Inc.
- 2022
Human-Centered Artificial Intelligence: Research and Applications presents current theories, fundamentals, techniques and diverse applications of human-centered AI. Sections address the question "Are AI models explainable, interpretable and understandable?", introduce readers to the design and development process, including mind perception and human interfaces, explore various applications of human-centered AI, including human-robot interaction, healthcare and decision-making, and more. As human-centered AI aims to push the boundaries of previously limited AI solutions to bridge the gap between machine and human, this book is an ideal update on the latest advances.
- Presents extensive research on human-centered AI technology
- Provides different methods and techniques used to investigate human-AI interaction
- Discusses open questions and challenges in trust within human-centered AI
- Explores how human-centered AI changes and operates in human-machine interactions
About the Author
Chang S. Nam is currently a Professor of Industrial and Systems Engineering at North Carolina State University (NCSU), USA. He is also an associate faculty member in the UNC/NCSU Joint Department of Biomedical Engineering, the Department of Psychology, and the Brain Research Imaging Center (BRIC) at UNC. He received his PhD from Virginia Tech. His research interests center on brain-computer interfaces, computational neuroscience, neuroergonomics, and human-AI/robot/automation interaction. He is the editor of “Brain-Computer Interfaces Handbook: Technological and Theoretical Advances” (with Drs. Nijholt and Lotte, CRC Press), “Neuroergonomics: Principles and Practices” (Springer), “Mobile Brain-Body Imaging and the Neuroscience of Art, Innovation and Creativity” (with Contreras-Vidal et al., Springer), “Trust in Human-Robot Interaction: Research and Applications” (with Lyons, Elsevier), and “Human-Centered AI: Research and Applications” (with Jung & Lee, Elsevier). He currently serves as the Editor-in-Chief of the journal Brain-Computer Interfaces.
Jae-Yoon Jung is a Professor in the Department of Industrial and Management Systems Engineering (IE) at Kyung Hee University (KHU), Korea, and also an adjunct professor in the Department of Software Convergence (SWCon) at KHU. He is currently the director of the IE Graduate Program and the Smart Factory Program at KHU, and he leads the Industrial AI Lab at KHU. He received his Ph.D., M.S., and B.S. degrees in Industrial Engineering from Seoul National University (SNU) in 2005, 2001, and 1999, respectively. At SNU, he was supervised by Prof. Suk-Ho Kang and Prof. Yeongho Kim in the Intelligent Manufacturing Systems Lab. He later visited the Process Mining Group at Eindhoven University of Technology (TU/e) in the Netherlands, supervised by Prof. Wil van der Aalst. Before joining KHU, he worked for the u-Computing Innovation Center (uCIC), directed by Prof. Jinwoo Park, and also studied in the Information Management Lab at SNU, supervised by Prof. Jonghun Park.
Sangwon Lee is an Associate Professor in the Department of Interaction Science and the Department of Applied Artificial Intelligence at Sungkyunkwan University. He is also the director of the ID Square Lab (Interaction Design and Development Laboratory). He received his BS degree from Korea University, and his MS and PhD degrees from the Pennsylvania State University. His research interests lie in human-AI interaction, user experience, affective computing, user modeling, and explainable artificial intelligence.
In this Book
- Foreword
- Are AI Models Explainable, Interpretable, and Understandable?
- Explanation Using Model-Agnostic Methods
- Explanation Using Examples
- Explanation of Ensemble Models
- Explanation of Deep Learning Models
- AI as an Explanation Agent and User-Centered Explanation Interfaces for Trust in AI-Based Systems
- Anthropomorphism in Human-Centered AI: Determinants and Consequences of Applying Human Knowledge to AI Agents
- Designing a Pragmatic Explanation for the XAI System Based on the User's Context and Background Knowledge
- Interactive Reinforcement Learning and Error-Related Potential Classification for Implicit Feedback
- Reinforcement Learning in EEG-Based Human-Robot Interaction
- Shopping with AI: Consumers' Perceived Autonomy in the Age of AI
- Use of Deep Learning Techniques in EEG-Based BCI Applications
- AI in Human Behavior Analysis
- AI in Nondestructive Condition Assessment of Concrete Structures: Detecting Internal Defects and Improving Prediction Performance Using Prediction Integration and Data Proliferation Techniques
- Ethics of AI in Organizations
- Designing XAI from Policy Perspectives
- Responsible AI and Algorithm Governance: An Institutional Perspective