Robust Machine Learning: Distributed Methods for Safe AI

  • 2h 40m
  • Nirupam Gupta, Rachid Guerraoui, Rafael Pinot
  • Springer
  • 2024

Today, machine learning algorithms are often distributed across multiple machines to leverage more computing power and more data. However, the use of a distributed framework exposes the system to a variety of security threats. In particular, some of the machines may misbehave and jeopardize the learning procedure, whether because of hardware or software bugs, data poisoning, or a malicious player controlling a subset of the machines. This book explains in simple terms what it means for a distributed machine learning scheme to be robust to such threats, and how to build provably robust machine learning algorithms.
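To make the notion of a misbehaving machine concrete, a classical line of defense studied in this literature replaces the plain average of the workers' gradients with a robust aggregation rule such as the coordinate-wise median. The sketch below is not taken from the book; it is a minimal Python/NumPy illustration, with made-up gradient values, of why averaging breaks under a single corrupted worker while the median does not.

```python
import numpy as np

def average_aggregate(gradients):
    """Standard aggregation: average the workers' gradients."""
    return np.mean(gradients, axis=0)

def median_aggregate(gradients):
    """Robust aggregation: coordinate-wise median, which tolerates
    a minority of arbitrarily corrupted gradients."""
    return np.median(gradients, axis=0)

# Toy example: four honest workers report gradients close to the true
# value [1, 1], while one misbehaving worker reports an arbitrary
# (poisoned) gradient.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.0])]
corrupted = [np.array([100.0, -100.0])]
gradients = np.stack(honest + corrupted)

print(average_aggregate(gradients))  # dragged far from [1, 1] by the outlier
print(median_aggregate(gradients))   # stays close to [1, 1]
```

Other robust aggregation rules discussed in this line of work, such as the trimmed mean or Krum, follow the same principle of bounding the influence any single machine can exert on the aggregate.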

Studying the robustness of machine learning algorithms is a necessity, given their ubiquity in both the private and public sectors. Accordingly, the past few years have seen rapid growth in the number of articles published on the robustness of distributed machine learning algorithms. We believe it is time to provide a clear foundation for this emerging and dynamic field. By gathering the existing knowledge and democratizing the concept of robustness, the book lays the groundwork for a new generation of reliable and safe machine learning schemes.

In addition to introducing the problem of robustness in modern machine learning algorithms, the book equips readers with the essential skills for designing distributed learning algorithms with enhanced robustness, and it provides a foundation for future research in this area.

About the Authors

Rachid Guerraoui is a professor of computer science at EPFL, where he leads the Distributed Computing Laboratory. He has previously worked at the École des Mines de Paris, CEA Saclay, HP Labs in Palo Alto, and MIT. An ACM Fellow and a professor at the Collège de France, he was awarded a Senior ERC Grant and a Google Focused Award. He has co-authored several popular books on distributed computing, including Reliable and Secure Distributed Programming and Algorithms for Concurrent Systems.

Nirupam Gupta is a computer science research associate at EPFL. He previously worked as a postdoctoral researcher in the department of computer science at Georgetown University. He has served on the program committees of the dependable and secure machine learning workshops at the IEEE DSN conference and of the Symposium on Reliable Distributed Systems (SRDS), and he currently serves as a reviewer for leading control systems and optimization journals, including Elsevier Automatica, IEEE TAC, and IEEE CONES. He received his PhD from the University of Maryland, College Park, and his bachelor’s degree from the Indian Institute of Technology Delhi.

Rafael Pinot is a junior professor in the department of mathematics at Sorbonne Université, where he holds a chair on the mathematical foundations of computer and data science within the LPSM research unit. He previously worked as a computer science research associate at EPFL and received his PhD from PSL Research University. In 2018, he was awarded a JSPS summer fellowship to join Kyoto University as a visiting researcher. He also received the Dauphine Foundation’s Young Researcher Award (2020) and the Postdoctoral Research Award from EPFL’s EcoCloud Research Center (2021).

In this Book

  • Preface
  • Notation
  • Acronyms
  • Context and Motivation
  • Basics of Machine Learning
  • Federated Machine Learning
  • Fundamentals of Robust Machine Learning
  • Optimal Robustness
  • Practical Robustness
  • Appendix A
  • Appendix B
  • Appendix C