Advanced NLP: Introduction to Transformer Models
Natural Language Processing
| Intermediate
- 12 videos | 40m 55s
- Includes Assessment
- Earns a Badge
With recent advances in affordable GPU compute and natural language processing (NLP) research, companies and researchers have introduced many powerful models and architectures that have taken NLP to new heights. In this course, learn about Transformer models like BERT and GPT and how these models have matured AI across NLP. Next, examine the fundamentals of Transformer models and their architectures. Finally, discover the importance of attention mechanisms in the Transformer architecture and how they help achieve state-of-the-art results in NLP tasks. Upon completing this course, you'll be able to understand different aspects of Transformer architectures, such as the self-attention layer and encoder-decoder models.
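The self-attention layer mentioned above can be sketched as scaled dot-product attention. This is a minimal NumPy illustration, not the course's own code; the shapes and random projection weights are illustrative assumptions (in practice the weights are learned):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token affinities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of value vectors

# Toy example: 4 tokens, model width 8 (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

Each output row mixes information from every token in the sequence, which is what lets Transformers model long-range dependencies without recurrence.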
WHAT YOU WILL LEARN
- Discover the key concepts covered in this course
- Outline and apply sequence-to-sequence (seq2seq) encoder-decoder network architecture
- Recall how to use the attention method to improve NLP model results
- Identify the fundamentals of the Transformer architecture
- Recall the fundamentals of the self-attention layer in the Transformer architecture
- State the fundamentals of multi-head attention in the Transformer architecture
- Identify the encoder block in the Transformer architecture
- Identify the decoder block in the Transformer architecture
- Outline the fundamentals of Transformer models
- Recall industry use cases for Transformer models
- State the challenges for Transformer models
- Summarize the key concepts covered in this course
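Multi-head attention, one of the objectives above, runs several attention operations in parallel over slices of the model dimension and concatenates the results. A minimal sketch, assuming a toy sequence and randomly initialized per-head projections (learned in a real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads=2):
    """Split d_model across heads, attend per head, concatenate the head outputs."""
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    rng = np.random.default_rng(42)
    heads = []
    for _ in range(num_heads):
        # Per-head projections (random here for illustration; learned in practice).
        Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        weights = softmax(Q @ K.T / np.sqrt(d_head), axis=-1)
        heads.append(weights @ V)
    return np.concatenate(heads, axis=-1)  # back to (seq_len, d_model)

X = np.random.default_rng(1).normal(size=(5, 8))
print(multi_head_attention(X).shape)  # (5, 8)
```

Because each head attends over its own lower-dimensional projection, different heads can specialize in different relationships between tokens at the same overall cost as one full-width head.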
IN THIS COURSE
- 1m 20s
- 6m 52s
- 7m 19s
- 1m
- 6m 52s
- 3m 32s
- 1m 56s
- 2m 14s
- 1m 41s
- 4m 3s
- 3m 24s
- 43s
EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE
Skillsoft gives you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.
Digital badges are yours to keep, forever.