Using Out-of-the-Box Transformer Models for Natural Language Processing

NLP    |    Intermediate
  • 10 videos | 1h 29m 47s
  • Includes Assessment
  • Earns a Badge
Rating 5.0 of 2 users (2)
Transfer learning is a powerful machine learning technique in which a model pre-trained on a large dataset is fine-tuned for a related but different task, significantly reducing the need for extensive datasets and computational resources. Transformers are groundbreaking neural network architectures that use attention mechanisms to process sequential data efficiently, enabling state-of-the-art performance across a wide range of natural language processing tasks. In this course, you will discover transfer learning, the TensorFlow Hub, and attention-based models. Then you will learn how to perform subword tokenization with WordPiece. Next, you will examine transformer models, specifically the FNet model, and apply it to sentiment analysis. Finally, you will explore advanced text processing techniques, using the Universal Sentence Encoder (USE) for semantic similarity analysis and the Bidirectional Encoder Representations from Transformers (BERT) model for sentence similarity prediction.
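As a concrete illustration of the subword tokenization covered in the course, the greedy longest-match-first strategy that WordPiece applies at inference time can be sketched in a few lines of plain Python. The tiny vocabulary below is made up for the example; a real WordPiece vocabulary (such as BERT's) has tens of thousands of entries:

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Split a single word into subword pieces using greedy
    longest-match-first lookup, the strategy WordPiece uses at
    inference time. Non-initial pieces carry the '##' prefix."""
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        # Shrink the candidate substring until it appears in the vocabulary.
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            # No piece matches: the whole word maps to the unknown token.
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

# Toy vocabulary, invented for this sketch.
vocab = {"trans", "##form", "##ers", "un", "##believ", "##able"}
print(wordpiece_tokenize("transformers", vocab))  # ['trans', '##form', '##ers']
print(wordpiece_tokenize("unbelievable", vocab))  # ['un', '##believ', '##able']
```

Splitting rare words into frequent subwords like this is what lets a model with a fixed vocabulary handle words it never saw during pre-training.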

WHAT YOU WILL LEARN

  • Discover the key concepts covered in this course
  • Provide an overview of how transfer learning works
  • Use pre-trained embeddings from the TensorFlow Hub
  • Describe attention-based models and transformers
  • Perform subword tokenization with WordPiece
  • Train the FNet model for sentiment analysis
  • Compute text similarity with the Universal Sentence Encoder (USE)
  • Set up a Bidirectional Encoder Representations from Transformers (BERT) model for text similarity
  • Classify sentences with a BERT model
  • Summarize the key concepts covered in this course
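The similarity objectives above (with USE and BERT) ultimately reduce to comparing sentence embedding vectors, most commonly via cosine similarity. A minimal sketch follows, using short made-up vectors in place of the 512- or 768-dimensional embeddings a real encoder would produce:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors:
    the dot product divided by the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented 4-d stand-ins for sentence embeddings; a real USE or BERT
# encoder would produce much higher-dimensional vectors.
emb_cat = [0.9, 0.1, 0.3, 0.0]
emb_kitten = [0.8, 0.2, 0.4, 0.1]
emb_stock = [0.1, 0.9, 0.0, 0.7]
print(cosine_similarity(emb_cat, emb_kitten))  # high (similar meanings)
print(cosine_similarity(emb_cat, emb_stock))   # lower (unrelated meanings)
```

Because a well-trained encoder places semantically similar sentences close together in embedding space, this one similarity score drives both the USE semantic-textual-similarity and BERT sentence-similarity exercises.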

IN THIS COURSE

  • 1m 59s
    In this video, we will discover the key concepts covered in this course.
  • 7m 17s
    After completing this video, you will be able to provide an overview of how transfer learning works.
  • 3.  Using Pre-trained Embeddings from the TensorFlow Hub
    13m 38s
    In this video, you will learn how to use pre-trained embeddings from the TensorFlow Hub.
  • 4.  Attention-based Models and Transformers
    8m 19s
    Upon completion of this video, you will be able to describe attention-based models and transformers.
  • 5.  Performing Subword Tokenization with WordPiece
    12m 42s
    Find out how to perform subword tokenization with WordPiece.
  • 6.  Using the FNet Encoder for Sentiment Analysis
    11m 52s
    During this video, discover how to train the FNet model for sentiment analysis.
  • 7.  Using the Universal Sentence Encoder (USE) for Semantic Textual Similarity
    12m 28s
    Learn how to compute text similarity with the USE.
  • 8.  Structuring Data for Sentence Similarity Prediction Using BERT
    8m 51s
    In this video, find out how to set up a Bidirectional Encoder Representations from Transformers (BERT) model for text similarity.
  • 9.  Using a Fine-tuned BERT Model for Sentence Classification
    9m 39s
    Discover how to classify sentences with a BERT model.
  • 10.  Course Summary
    3m
    In this video, we will summarize the key concepts covered in this course.

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
