Aspire Journeys
AI for Data Science
- 23 Courses | 39h 2m 15s
- 2 Labs | 2h
AI, along with generative AI, is a cutting-edge technology that will transform nearly every business function, ranging from content creation and product design to improving customer experience and marketing new ideas. While the benefits of AI are immense, the technology has its limitations and poses some ethical considerations. In this Journey, designed for front-line learners, you will be introduced to AI concepts and ethical considerations.
AI for Data Science: Activate
In this track, you will be introduced to foundational data science methods.
- 8 Courses | 14h 28m 59s
AI for Data Science: Accelerate
In this track, you will explore generative AI models and transformers.
- 8 Courses | 13h 8m 15s
- 1 Lab | 1h
AI for Data Science: Transform
In this track, you will explore fine-tuning and RAG.
- 7 Courses | 11h 25m 1s
- 1 Lab | 1h
COURSES INCLUDED
An Introduction to Generative AI Concepts
This comprehensive course delves deep into the fascinating world of Generative AI. Through a combination of engaging lectures and hands-on practice, participants will gain an in-depth understanding of what generative models are, how they differ from other AI techniques, and the theories and principles underlying them. You will discover various types of generative models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs), and explore the process involved in training these models. Then you will examine the strengths, limitations, and practical applications of generative models across various domains, such as image generation, text generation, and data augmentation. Next, you will learn how to evaluate the performance of generative models and focus on ethical considerations in generative AI and the potential societal impact of these technologies. Finally, you will have the opportunity to generate synthetic data using generative models for training and testing purposes and investigate the notion of responsible AI in the generative era. Upon course completion, you will be prepared not just to use these powerful tools, but to use them wisely and ethically.
17 videos |
2h 38m
Assessment
Badge
Generative Modeling Foundations
This course dives deep into the world of generative models, providing learners with a comprehensive understanding of various generative techniques and their applications. This course is carefully designed to bridge theoretical concepts with practical applications, demystifying the methods used in popular generative models like generative adversarial networks (GANs), variational autoencoders (VAEs), and more. Through a combination of rich imagery, illustrative examples, and detailed explanations, participants will explore the differences between generative and discriminative modeling, the foundational framework of generative artificial intelligence (AI), and the various evaluation metrics that gauge the success of these models. Whether you're a budding data scientist, an AI enthusiast, or a seasoned researcher, this course offers a deep dive into the cutting-edge techniques that are shaping the future of artificial intelligence.
14 videos |
1h 57m
Assessment
Badge
Getting Started with Large Language Models (LLMs)
Dive deep into the expansive realm of large language models (LLMs), a pivotal cornerstone in today's artificial intelligence (AI)-driven landscape. This course unravels the intricacies of these models, from their architecture and training methods to their profound implications in real-world scenarios. Begin by exploring the significance of LLMs in the world of AI. Then you will examine the architecture of LLMs, evaluate the impact of data on the effectiveness of LLMs, and fine-tune your LLM for a specific task. Next, you will investigate the ethical implications of using LLMs, including potential biases and privacy issues. Finally, you will discover the potential and limitations of LLMs and learn how to stay updated with the latest advancements in this dynamic field.
13 videos |
1h 36m
Assessment
Badge
Leveraging Generative AI for Business
In the modern digital era, generative artificial intelligence (AI) emerges as a game-changer, introducing unprecedented capabilities to the business landscape. This course is tailored for professionals seeking to understand the depth and breadth of generative AI's impact on the business world. Dive into the essentials, from the foundational concepts to ethical ramifications and real-world implementations. You will explore the transformative potential of generative AI on business operations, products, and customer experiences and delve into the algorithms propelling these innovations. Discover the possibilities for the interplay of human expertise with AI, managing data for AI deployments, and navigating legal landscapes. At the end of the course, participants will be adept at assessing the business value of generative AI and equipped with the knowledge to strategically integrate it into their organization's digital evolution.
14 videos |
1h 38m
Assessment
Badge
Artificial Intelligence and Machine Learning
This course will demystify the world of artificial intelligence (AI) and machine learning (ML), taking you from foundational concepts to practical applications. You'll learn to distinguish AI and ML, explore how algorithms learn, and perform common tasks like classification and clustering. You will begin by learning to confidently distinguish between the broad umbrella of AI and the specific subset of ML, understanding how each contributes to the landscape of intelligent systems. Next, you'll explore the milestones that shaped AI. Then you will discover how to classify the diverse approaches of machine learning. Finally, you will explore the practical aspects of common machine learning problems. You'll learn the meaning of regression, classification, and clustering and how they're applied in real-world scenarios. Discover how to evaluate model performance and explore the workings of popular traditional models like linear regression and decision trees. You'll also be introduced to ensemble learning, where the "wisdom of the crowds" fuels even more accurate predictions.
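To make these ideas concrete, here is a minimal sketch, assuming scikit-learn as the toolkit (the course itself does not prescribe one), comparing a single decision tree with a random-forest ensemble on a built-in dataset:

    # Illustrative sketch: one decision tree vs. a random-forest ensemble.
    # The dataset and hyperparameters are illustrative, not course material.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
    forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

    print("single tree:", tree.score(X_test, y_test))   # accuracy of one model
    print("ensemble:   ", forest.score(X_test, y_test)) # usually higher -- the "wisdom of the crowds"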
11 videos |
1h 36m
Assessment
Badge
Deep Learning and Neural Networks
Deep learning and neural networks have revolutionized various fields by enabling computers to automatically learn complex patterns from data. This led to breakthroughs in areas such as image recognition, natural language processing (NLP), and autonomous driving. In this course, you will compare and contrast traditional machine learning (ML) and deep learning models. You will see how deep learning models excel in automated feature extraction from raw data, tackling complex tasks with the power of vast datasets. You will explore the fundamental unit of deep learning, the neuron, and understand how it works. Next, you will explore the diverse neural network architectures designed for specific data types. You will learn how convolutional neural networks (CNNs) extract features from images and how recurrent neural networks (RNNs) are able to extract relationships in time-series data. Finally, you will explore how neural networks handle natural language processing. You will learn how attention-based models help models focus on crucial parts of the input data for enhanced predictions and how generative adversarial networks (GANs) work. You will also explore reinforcement learning, a machine learning technique where agents navigate uncertain environments to maximize rewards.
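As a concrete illustration of the fundamental unit described above, here is a minimal sketch, assuming PyTorch (one of several frameworks that could be used): a neuron computes a weighted sum of its inputs plus a bias, then applies a nonlinear activation.

    # Illustrative sketch of a single artificial neuron in PyTorch.
    import torch
    import torch.nn as nn

    neuron = nn.Sequential(
        nn.Linear(in_features=3, out_features=1),  # weighted sum of inputs + bias
        nn.Sigmoid(),                              # nonlinear activation
    )

    x = torch.tensor([[0.5, -1.2, 3.0]])  # one sample with three features
    print(neuron(x))                      # the neuron's output, squashed into (0, 1)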
11 videos |
1h 20m
Assessment
Badge
An Introduction to GPT Models
Generative Pre-trained Transformer (GPT) models are advanced artificial intelligence (AI) systems designed to understand and generate human-like text based on the information they've been trained on. These models can perform a wide range of language tasks, from writing stories to answering questions, by learning patterns in vast amounts of text data. In this course, you will dive into the world of GPT models and the foundational models that are pivotal to the development of the GPT-n series. You will gain an understanding of the terminology and concepts that make GPT models outstanding in performing natural language processing tasks. Next, you will explore the concept of attention in language models and explore the mechanics of the Transformer architecture, the cornerstone of GPT models. Finally, you will explore the details of the GPT model. You will discover methods used to adapt these models for particular tasks through supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and techniques such as prompt engineering and prompt tuning.
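For a taste of GPT-style generation, here is a minimal sketch using the small public GPT-2 checkpoint via the Hugging Face Transformers library (a library assumed here for illustration; the course does not mandate it):

    # Illustrative sketch: text generation with a small public GPT-style model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Generative Pre-trained Transformers are", max_new_tokens=30)
    print(result[0]["generated_text"])  # the prompt continued by the model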
12 videos |
1h 53m
Assessment
Badge
COURSES INCLUDED
Generative AI Models: Getting Started with Autoencoders
Autoencoders are a class of artificial neural networks employed in unsupervised learning tasks, primarily focused on data compression and feature learning. Begin this course by exploring autoencoders, learning about the functions of the encoder and the decoder in the model. Next, you will learn how to create and train an autoencoder using the Google Colab environment. Then you will use PyTorch to create the neural networks for the autoencoder, and you will train the model to reconstruct high-dimensional, grayscale images. You will also use convolutional autoencoders to work with multichannel color images. Finally, you will make use of the denoising autoencoder, a type of model that takes in a corrupted image with Gaussian noise and attempts to reconstruct the original clean image, thus learning better representations of the input data. In conclusion, this course will provide you with a solid understanding of basic autoencoders and their use cases.
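For a flavor of the model-building involved, here is a minimal sketch of a dense autoencoder in PyTorch (the framework the course uses); the 784-value input corresponds to a flattened 28x28 grayscale image, and the layer sizes are illustrative:

    # Illustrative sketch: encoder compresses the input, decoder reconstructs it.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(          # compress 784 -> 32 features
                nn.Linear(784, 128), nn.ReLU(),
                nn.Linear(128, 32),
            )
            self.decoder = nn.Sequential(          # reconstruct 32 -> 784
                nn.Linear(32, 128), nn.ReLU(),
                nn.Linear(128, 784), nn.Sigmoid(), # pixel values in [0, 1]
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    x = torch.rand(16, 784)                        # a dummy batch of flattened images
    loss = nn.functional.mse_loss(model(x), x)     # reconstruction loss
    loss.backward()                                # gradients for one training step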
14 videos |
2h 15m
Assessment
Badge
Generative AI Models: Generating Data Using Variational Autoencoders
Variational autoencoders (VAEs) represent a powerful variant of traditional autoencoders, designed to address the challenge of generating new and diverse samples from the learned latent space. VAEs introduce probabilistic components, incorporating a probabilistic encoder that maps input data to a distribution in the latent space and a decoder that reconstructs data from samples drawn from this distribution. Begin this course by discovering how variational autoencoders can be used for generating images. Next, you will create and train VAEs in Python and the Google Colab environment. Then you will construct the encoder and decoder. Finally, you will train the VAE on multichannel color images. Upon course completion, you will have a solid understanding of variational autoencoders and their use in generating images.
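The following minimal sketch (PyTorch, as in the course; layer sizes illustrative) shows the probabilistic encoder and the reparameterization trick that keep sampling from the latent distribution differentiable:

    # Illustrative sketch: a VAE encoder outputs a mean and log-variance, and a
    # sample is drawn from that distribution via the reparameterization trick.
    import torch
    import torch.nn as nn

    class VAEEncoder(nn.Module):
        def __init__(self, latent_dim=16):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
            self.mu = nn.Linear(128, latent_dim)      # mean of the latent distribution
            self.logvar = nn.Linear(128, latent_dim)  # log-variance of the distribution

        def forward(self, x):
            h = self.hidden(x)
            mu, logvar = self.mu(h), self.logvar(h)
            eps = torch.randn_like(mu)                # reparameterization trick:
            z = mu + eps * torch.exp(0.5 * logvar)    # the sample stays differentiable
            return z, mu, logvar

    z, mu, logvar = VAEEncoder()(torch.rand(4, 784))
    # KL term that pushes the latent distribution toward a standard normal:
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())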
8 videos |
1h 17m
Assessment
Badge
Generative AI Models: Generating Data Using Generative Adversarial Networks
Generative adversarial networks (GANs) represent a revolutionary approach to generative modeling within the realm of artificial intelligence. Begin this course by discovering GANs, including the basic architecture of a GAN, which involves two neural networks competing in a zero-sum game: the generator and the discriminator. Next, you will explore how to construct and train a GAN using the PyTorch framework to create and train the models. You will define the generator and discriminator separately and then kick off the model training. Finally, you will focus on the deep convolutional GAN, which uses deep convolutional neural networks (CNNs) rather than regular neural networks. CNNs are optimized for working with grid-like data, such as images, and can generate better-quality images than GANs built using dense neural networks. In conclusion, this course will provide you with a strong understanding of generative adversarial networks, their architecture, and their usage scenarios.
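Here is a minimal sketch of the two competing networks in PyTorch (the framework the course uses); all dimensions are illustrative:

    # Illustrative sketch: the generator maps noise to a fake sample, and the
    # discriminator scores samples as real or fake.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(            # noise (64) -> fake "image" (784 values)
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 784), nn.Tanh(),
    )
    discriminator = nn.Sequential(        # sample (784) -> probability it is real
        nn.Linear(784, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

    noise = torch.randn(8, 64)
    fake = generator(noise)
    print(discriminator(fake))            # the generator trains to push this toward 1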
11 videos |
1h 40m
Assessment
Badge
Natural Language Processing Using Deep Learning
Deep learning has revolutionized natural language processing (NLP), offering powerful techniques for understanding, generating, and processing human language. Through deep neural networks (DNNs), NLP models can now comprehend complex linguistic structures, extract meaningful information from vast amounts of text data, and even generate human-like responses. Begin this course by learning how to utilize Keras and TensorFlow to construct and train neural networks. Next, you will build a DNN to classify messages as spam or not. You will find out how to encode data using count vector and term frequency-inverse document frequency (TF-IDF) encodings via the Keras TextVectorization layer. To enhance the training process, you will employ Keras callbacks to gain insights into metrics tracking, TensorBoard integration, and model checkpointing. Finally, you will apply sentiment analysis using word embeddings, explore the use of pre-trained GloVe word vector embeddings, and incorporate convolutional layers to grasp local text context.
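As a small illustration of the encoding step described above, here is a sketch of the Keras TextVectorization layer configured for TF-IDF output (the toy texts are illustrative):

    # Illustrative sketch: learn a vocabulary and IDF weights from sample texts,
    # then encode a new message as a TF-IDF vector.
    import tensorflow as tf

    texts = ["free prize click now", "meeting moved to noon", "claim your free prize"]
    vectorizer = tf.keras.layers.TextVectorization(output_mode="tf_idf")
    vectorizer.adapt(texts)                    # learn the vocabulary and IDF weights
    print(vectorizer(["free prize inside"]))   # TF-IDF encoding of a new message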
14 videos |
1h 55m
Assessment
Badge
Using Recurrent Networks for Natural Language Processing
Recurrent neural networks (RNNs) are a class of neural networks designed to efficiently process sequential data. Unlike traditional feedforward neural networks, RNNs possess internal memory, which enables them to learn patterns and dependencies in sequential data, making them well-suited for a wide range of applications, including natural language processing. In this course, you will explore the mechanics of RNNs and their capacity for processing sequential data. Next, you will perform sentiment analysis with RNNs, generating and visualizing word embeddings through the TensorBoard embedding projector plug-in. You will construct an RNN, employing these word embeddings for sentiment analysis and evaluating the RNN's efficacy on a set of test data. Then, you will investigate advanced RNN applications, focusing on long short-term memory (LSTM) and bidirectional LSTM models. Finally, you will discover how LSTM models enhance the processing of long text sequences and you will build and train a bidirectional LSTM model to process data in both directions and capture a more comprehensive understanding of the text.
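A minimal Keras sketch of a bidirectional LSTM sentiment classifier of the kind built in this course (vocabulary size and dimensions are illustrative):

    # Illustrative sketch: embeddings feed a bidirectional LSTM, which reads the
    # sequence in both directions before a sigmoid produces a sentiment score.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # word embeddings
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),    # both directions
        tf.keras.layers.Dense(1, activation="sigmoid"),             # sentiment score
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    dummy = np.random.randint(0, 10000, size=(2, 20))  # two padded token sequences
    print(model(dummy).shape)                          # (2, 1): one score per sequence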
8 videos |
1h 14m
Assessment
Badge
Using Out-of-the-Box Transformer Models for Natural Language Processing
Transfer learning is a powerful machine learning technique that involves taking a pre-trained model on a large dataset and fine-tuning it for a related but different task, significantly reducing the need for extensive datasets and computational resources. Transformers are groundbreaking neural network architectures that use attention mechanisms to efficiently process sequential data, enabling state-of-the-art performance in a wide range of natural language processing tasks. In this course, you will discover transfer learning, the TensorFlow Hub, and attention-based models. Then you will learn how to perform subword tokenization with WordPiece. Next, you will examine transformer models, specifically the FNet model, and you will apply the FNet model for sentiment analysis. Finally, you will explore advanced text processing techniques using the Universal Sentence Encoder (USE) for semantic similarity analysis and the Bidirectional Encoder Representations from Transformers (BERT) model for sentence similarity prediction.
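To illustrate the semantic similarity workflow, here is a minimal sketch using the publicly hosted Universal Sentence Encoder from TensorFlow Hub (the model URL is the standard public one; sentences are illustrative):

    # Illustrative sketch: embed two sentences and compare them with cosine similarity.
    import numpy as np
    import tensorflow_hub as hub

    embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
    vecs = embed(["The cat sat on the mat.", "A feline rested on the rug."]).numpy()

    # Cosine similarity between the two sentence embeddings:
    sim = np.dot(vecs[0], vecs[1]) / (np.linalg.norm(vecs[0]) * np.linalg.norm(vecs[1]))
    print(sim)  # close to 1.0 for semantically similar sentences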
10 videos |
1h 29m
Assessment
Badge
Attention-based Models and Transformers for Natural Language Processing
Attention mechanisms in natural language processing (NLP) allow models to dynamically focus on different parts of the input data, enhancing their ability to understand context and relationships within the text. This significantly improves the performance of tasks such as translation, sentiment analysis, and question-answering by enabling models to process and interpret complex language structures more effectively. Begin this course by setting up language translation models and exploring the foundational concepts of translation models, including the encoder-decoder structure. Then you will investigate the basic translation process by building a transformer model based on recurrent neural networks without attention. Next, you will incorporate an attention layer into the decoder of your language translation model. You will discover how transformers process input sequences in parallel, improving efficiency and training speed through the use of positional and word embeddings. Finally, you will learn about queries, keys, and values within the multi-head attention layer, culminating in training a transformer model for language translation.
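The queries, keys, and values mentioned above combine in scaled dot-product attention; here is a minimal NumPy sketch (all shapes are illustrative):

    # Illustrative sketch: similarity scores between queries and keys weight the
    # values, producing one context vector per sequence position.
    import numpy as np

    def attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)  # scaled query-key similarity
        scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
        return weights @ V                                # weighted sum of the values

    Q = K = V = np.random.rand(1, 4, 8)     # batch of 1, sequence of 4, dimension 8
    print(attention(Q, K, V).shape)         # (1, 4, 8): one context vector per position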
15 videos |
2h 20m
Assessment
Badge
COURSES INCLUDED
NLP with LLMs: Working with Tokenizers in Hugging Face
Hugging Face, a leading company in the field of artificial intelligence (AI), offers a comprehensive platform that enables developers and researchers to build, train, and deploy state-of-the-art machine learning (ML) models with a strong emphasis on open collaboration and community-driven development. In this course, you will discover the extensive libraries and tools Hugging Face offers, including the Transformers library, which provides access to a vast array of pre-trained models and datasets. Next, you will set up your working environment in Google Colab. You will also explore the critical components of the text preprocessing pipeline: normalizers and pre-tokenizers. Finally, you will master various tokenization techniques, including byte pair encoding (BPE), WordPiece, and Unigram tokenization, which are essential for working with transformer models. Through hands-on exercises, you will build and train BPE and WordPiece tokenizers, configuring normalizers and pre-tokenizers to fine-tune these tokenization methods.
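As a taste of the hands-on exercises, here is a minimal sketch of training a BPE tokenizer with the Hugging Face tokenizers library (the training sentences are toy data):

    # Illustrative sketch: train a tiny BPE tokenizer and inspect the subwords it learns.
    from tokenizers import Tokenizer, models, pre_tokenizers, trainers

    tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()   # split on whitespace first

    trainer = trainers.BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
    tokenizer.train_from_iterator(["low lower lowest", "new newer newest"], trainer)

    print(tokenizer.encode("newest lowdown").tokens)        # the learned subword pieces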
15 videos |
2h 18m
Assessment
Badge
NLP with LLMs: Hugging Face Classification, QnA, & Text Generation Pipelines
Sentiment analysis, named entity recognition (NER), question answering, and text generation are pivotal tasks in the realm of Natural Language Processing (NLP) that enable machines to interpret and understand human language in a nuanced manner. In this course, you will be introduced to the concept of Hugging Face pipelines, a streamlined approach to applying pre-trained models to a variety of NLP tasks. Through hands-on exploration, you will learn how to classify text using zero-shot classification techniques, perform sentiment analysis with DistilBERT, and apply models to specialized tasks, utilizing the power of NLP to adapt to niche domains. Next, you will discover how to employ models to accurately answer questions based on provided contexts and understand the mechanics behind model-based answers, including their limitations and capabilities. Finally, you will discover various text generation strategies such as greedy search and beam search, learning how to balance predictability with creativity in generated text. You will also explore text generation through sampling techniques and the application of mask filling with BERT models.
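Here is a minimal sketch of the pipeline interface for three of these tasks (each pipeline downloads a standard public default checkpoint unless a model is specified):

    # Illustrative sketch: sentiment analysis, question answering, and
    # zero-shot classification via Hugging Face pipelines.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")   # defaults to a DistilBERT checkpoint
    print(sentiment("This course was genuinely useful."))

    qa = pipeline("question-answering")
    print(qa(question="What does NER stand for?",
             context="NER stands for named entity recognition."))

    zero_shot = pipeline("zero-shot-classification")
    print(zero_shot("The GPU market is booming.",
                    candidate_labels=["technology", "sports", "cooking"]))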
13 videos |
1h 50m
Assessment
Badge
NLP with LLMs: Language Translation, Summarization, & Semantic Similarity
Language translation, text summarization, and semantic textual similarity are advanced problems within the field of Natural Language Processing (NLP) that are increasingly solvable due to advances in the use of large language models (LLMs) and pre-trained models. In this course, you will learn to translate text between languages with state-of-the-art pre-trained models such as T5, M2M 100, and Opus. You will also gain insights into evaluating translation accuracy with BLEU scores and explore multilingual translation techniques. Next, you will explore the process of summarizing text, utilizing the powerful BART and T5 models for abstractive summarization. You will see how these models extract and generate key information from large texts and learn to evaluate the quality of summaries using ROUGE scores. Finally, you will master the computation of semantic textual similarity using sentence transformers and apply clustering techniques to group texts based on their semantic content. You will also learn to compute embeddings and measure similarity directly.
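A minimal sketch of two of these tasks, assuming the public t5-small and all-MiniLM-L6-v2 checkpoints (the input texts are illustrative):

    # Illustrative sketch: abstractive summarization plus semantic similarity
    # with sentence transformers.
    from transformers import pipeline
    from sentence_transformers import SentenceTransformer, util

    summarizer = pipeline("summarization", model="t5-small")
    text = ("Large language models can translate, summarize, and compare texts. "
            "Pre-trained checkpoints make these tasks accessible without training "
            "from scratch.")
    print(summarizer(text, max_length=20)[0]["summary_text"])

    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(["How old are you?", "What is your age?"])
    print(util.cos_sim(emb[0], emb[1]))   # high score for paraphrases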
10 videos |
1h 29m
Assessment
Badge
NLP with LLMs: Fine-tuning Models for Classification & Question Answering
Fine-tuning in the context of text-based models refers to the process of taking a pre-trained model and adapting it to a specific task or dataset with additional training. This technique leverages the general language understanding capabilities acquired by the model during its initial extensive training on a large corpus of text and refines its abilities to perform well on a more narrowly defined task or domain-specific data. In this course, you will learn how to fine-tune a model for sentiment analysis, starting with the preparation of datasets optimized for this purpose. You will be guided through setting up your computing environment and preparing a BERT classifier for sentiment analysis. Next, you will discover how to structure text data and align named entity recognition (NER) tags with subword tokenization. You will build on this knowledge to fine-tune a BERT model specifically for NER, training it to accurately identify and classify entities within text. Finally, you will explore the domain of question answering, learning to handle the challenges of long contexts to extract precise answers from extensive texts. You will prepare QnA data for fine-tuning and utilize a DistilBERT model to create an effective QnA system.
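As a sketch of the fine-tuning loop described above, here is a minimal example using the Hugging Face Trainer API; the IMDB dataset slice and hyperparameters are illustrative, not the course's exact setup:

    # Illustrative sketch: adapt a pre-trained BERT checkpoint to binary sentiment
    # classification with a small dataset slice.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)           # new binary classification head

    dataset = load_dataset("imdb", split="train[:200]")  # tiny slice for illustration
    dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                               padding="max_length"), batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1),
        train_dataset=dataset,
    )
    trainer.train()   # refines the pre-trained weights for the sentiment task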
12 videos |
1h 33m
Assessment
Badge
NLP with LLMs: Fine-tuning Models for Language Translation & Summarization
Causal language modeling (CLM), text translation, and summarization demonstrate the versatility and depth of language understanding and generation by artificial intelligence (AI). Fine-tuning models help improve the performance of models for these specific tasks. In this course, you will explore CLM with DistilGPT-2 and masked language modeling (MLM) with DistilRoBERTa, learning how to prepare, process, and fine-tune models for generating and predicting text. Next, you will dive into the nuances of language translation, focusing on translating English to Spanish. You will prepare and evaluate training data and learn to use BLEU scores for assessing translation quality. You will fine-tune a pre-trained T5-small model, enhancing its accuracy and broadening its linguistic capabilities. Finally, you will explore the intricacies of text summarization. Starting with data loading and visualization, you will establish a benchmark using the pre-trained T5-small model. You will then fine-tune this model for summarization tasks, learning to condense extensive texts into succinct summaries.
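Here is a minimal sketch of the kind of pre-trained benchmark the course establishes before fine-tuning: T5 models are steered by text prefixes such as "summarize:" (the example text is illustrative; the course then fine-tunes the T5-small model for English-to-Spanish translation):

    # Illustrative sketch: run the pre-trained t5-small checkpoint on a
    # prefixed summarization prompt to establish a baseline.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    prompt = ("summarize: Fine-tuning adapts a pre-trained model to a task. "
              "It needs far less data and compute than training from scratch.")
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=25)
    print(tokenizer.decode(out[0], skip_special_tokens=True))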
12 videos |
1h 38m
Assessment
Badge
EARN A DIGITAL BADGE WHEN YOU COMPLETE THESE TRACKS
Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.
Digital badges are yours to keep, forever.
SKILL BENCHMARKS INCLUDED
AI Landscape Literacy (Beginner Level)
The AI Landscape Literacy (Beginner Level) benchmark measures your ability to recall and recognize the fundamentals of artificial intelligence (AI) and machine learning (ML). You will be evaluated on your knowledge of how algorithms learn and perform common tasks like classification and clustering, as well as the importance of deep learning models. A learner who scores high on this benchmark demonstrates that they have the basic foundational knowledge of AI.
18m
| 18 questions
SKILL BENCHMARKS INCLUDED
NLP with Deep Learning Competency (Intermediate Level)
The NLP with Deep Learning Competency (Intermediate Level) benchmark measures your ability to identify the structure of neural networks, train a deep neural network (DNN) model, and generate term frequency-inverse document frequency (TF-IDF) encodings for text. You will be evaluated on your ability to train models using pre-trained word vector embeddings, recognize the structure of a recurrent neural network (RNN), and train an RNN for sentiment analysis, including with long short-term memory (LSTM) models. A learner who scores high on this benchmark demonstrates that they have good knowledge and experience in developing NLP applications using deep learning models.
16m
| 16 questions
SKILL BENCHMARKS INCLUDED
NLP and LLMs Competency (Intermediate Level)
The NLP and LLMs Competency (Intermediate Level) benchmark measures your knowledge of working with tokenizers in Hugging Face. You will be evaluated on your recognition of Hugging Face classification, QnA, and text generation pipelines. A learner who scores high on this benchmark demonstrates that they have good experience in developing NLP and LLM applications using Hugging Face with minimal supervision.
19m
| 19 questions