SKILL BENCHMARK
NLP and LLMs Proficiency (Advanced Level)
- 25m
- 25 questions
The NLP and LLMs Proficiency (Advanced Level) benchmark measures your knowledge of language translation, summarization, and semantic similarity. You will be evaluated on your skills in fine-tuning models for classification, question answering, language translation, and summarization. A learner who scores high on this benchmark demonstrates expertise in developing NLP and LLM applications and can work on NLP and LLM projects without supervision.
Topics covered
- align NER tags with subword tokens
- compare the performance of fine-tuned T5 and baseline T5 models
- compute text similarity with a tokenizer and model
- compute text similarity with sentence transformers
- evaluate text summaries using ROUGE scores
- fine-tune a BERT classifier
- fine-tune a DistilBERT model for QnA
- fine-tune a DistilGPT2 model for CLM
- fine-tune a DistilRoBERTa model for masked language modeling (MLM)
- fine-tune a T5-small model for summarization
- fine-tune BERT for NER
- fine-tune the T5-small model for translation
- generate BLEU scores for text translation
- generate context-question pairs for QnA
- generate predictions with a fine-tuned model
- load and clean text for named entity recognition (NER)
- load and process data for causal language modeling (CLM)
- perform clustering on sentence embeddings
- perform language translation with the M2M-100 and OPUS-MT models
- perform language translation with the T5 model
- preprocess text for summarization
- preprocess text for translation
- set up data for fine-tuning
- summarize text using BART and T5
- summarize text with a baseline (non-fine-tuned) T5-small model
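Several of the topics above center on automatic evaluation metrics such as ROUGE. As a rough illustration of the idea, the sketch below implements ROUGE-1 (unigram overlap) from scratch; in practice you would use a metrics library rather than this hand-rolled version, and the function name here is purely illustrative.

```python
from collections import Counter

def rouge1(reference: str, candidate: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 via unigram overlap."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Each unigram counts at most as many times as it appears in both texts.
    overlap = sum((ref_counts & cand_counts).values())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Five of the six candidate unigrams also appear in the reference.
scores = rouge1("the cat sat on the mat", "the cat lay on the mat")
# precision = recall = 5/6 ≈ 0.83
```

BLEU, used for the translation topics, follows a similar n-gram-matching idea but is precision-oriented and adds a brevity penalty.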
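The text-similarity topics typically compare sentence embeddings with cosine similarity. The sketch below shows the cosine computation itself over simple bag-of-words vectors as a stand-in; with sentence transformers you would compare dense model-produced embeddings instead, and all names here are illustrative assumptions, not part of the benchmark.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words count vectors of two texts."""
    vec_a = Counter(text_a.lower().split())
    vec_b = Counter(text_b.lower().split())
    # Dot product over the shared vocabulary only.
    dot = sum(vec_a[w] * vec_b[w] for w in vec_a.keys() & vec_b.keys())
    norm_a = math.sqrt(sum(c * c for c in vec_a.values()))
    norm_b = math.sqrt(sum(c * c for c in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Three of four tokens overlap between two length-4 texts: 3 / (2 * 2) = 0.75
sim = cosine_similarity("translate english to french", "translate english to german")
```

The same cosine formula underlies clustering of sentence embeddings: nearby vectors (high cosine similarity) end up in the same cluster.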