Understanding Large Language Models: Learning Their Underlying Concepts and Technologies
- 1h 38m
- Thimira Amaratunga
- Apress
- 2023
This book will teach you the underlying concepts of large language models (LLMs), as well as the technologies associated with them.
The book starts with an introduction to the rise of conversational AIs such as ChatGPT and how they relate to the broader spectrum of large language models. From there, you will learn about natural language processing (NLP), its core concepts, and how it led to the rise of LLMs. Next, you will gain insight into transformers and how characteristics such as self-attention enhance language modeling, along with the capabilities that make LLMs unique. The book concludes with an exploration of the architectures of various LLMs and the opportunities presented by their ever-increasing capabilities, as well as the dangers of their misuse.
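Self-attention, which the book builds up to, is easiest to appreciate with a small worked example. Below is a minimal, self-contained sketch of scaled dot-product self-attention in NumPy; it is illustrative only and not taken from the book, and the dimensions, random weights, and function name are assumptions for demonstration.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) projection matrices (random here, learned in practice)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # each output is a weighted mix of all values

# Toy usage: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (4, 8)
```

Each row of the output mixes information from every position in the sequence, which is the property that lets transformer-based LLMs model long-range context in parallel.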
After completing this book, you will have a thorough understanding of LLMs and will be ready to take your first steps in implementing them into your own projects.
What You Will Learn
- Grasp the underlying concepts of LLMs
- Gain insight into how the concepts and approaches of NLP have evolved over the years
- Understand transformer models and attention mechanisms
- Explore different types of LLMs and their applications
- Understand the architectures of popular LLMs
- Delve into misconceptions and concerns about LLMs, as well as how to best utilize them
Who This Book Is For
Anyone interested in learning the foundational concepts of NLP and LLMs, as well as recent advances in deep learning
About the Author
Thimira Amaratunga is a Senior Software Architect at Pearson PLC Sri Lanka with over 15 years of industry experience. He is also an inventor, author, and researcher in the areas of AI, machine learning, deep learning in education, and computer vision.
Thimira holds a master's degree in computer science and a bachelor's degree in information technology from the University of Colombo, Sri Lanka. He has filed three patents in the fields of dynamic neural networks and semantics for online learning platforms. He has published three books on deep learning and computer vision.
In This Book
- Preface
- Introduction
- NLP Through the Ages
- Transformers
- What Makes LLMs Large?
- Popular LLMs
- Threats, Opportunities, and Misconceptions