Python is one of the most popular programming languages, and developing AI in it offers many advantages. In this course, you'll learn about the differences between Python and other programming languages used for AI, Python's role in the industry, and cases where using Python can be beneficial. You'll also examine multiple Python tools, libraries, and development environments and recognize the direction in which the language is developing.
In this course, you'll learn about the development of AI with Python, starting with simple projects and ending with comprehensive systems. You'll examine various Python environments and ways to set them up and begin coding, leaving you with everything you need to start building your own AI solutions in Python.
Search algorithms provide solutions for many problems, but those solutions aren't always optimal. Discover how constraint satisfaction algorithms can outperform search algorithms in some cases, and how to use them.
Many problems occur in environments with more than one agent, such as games. Explore techniques used to solve adversarial problems and make agents play games such as chess.
Many problems aren't fully observable and have some degree of uncertainty, which is challenging for AI to solve. Discover how to make agents deal with uncertainty and make the best decisions.
Some problems are too complicated to describe to a computer and to solve with traditional algorithms, which is why reinforcement learning is useful. Explore the fundamentals of reinforcement learning.
Natural language is essential to human communication, which makes the ability to process it an important one for computers. Explore natural language processing and some of its basic tasks.
This 13-video course explores how artificial intelligence (AI) can be leveraged, how to plan an AI implementation from setup to architecture, and the issues surrounding incorporating it into an enterprise for machine learning. Learners will explore the three legs of AI: how it applies intelligence-like behavior to machines. You will then examine how machine learning adds to this intelligence-like behavior, and the next generation with deep learning. This course discusses strategies for implementation of AI, organizational challenges surrounding the adoption of AI, and the need for training of both personnel and machines. Next, learn the role of data and algorithms in AI implementation. Learners continue by examining several ways in which an organization can plan and develop AI capability; the elements organizations need to understand how to assess AI needs and tools; management challenges; and the impact on personnel. You will learn about pitfalls in using AI, and what to avoid. Finally, you will learn about data issues, data quality, training concepts, overfitting, and bias.
In this 12-video course, you will examine the different uses of data science tools and the overall platform, as well as the benefits and challenges of machine learning deployment. The first tutorial explores what automation is and how it is implemented. This is followed by a look at the tasks and processes best suited for automation. This leads learners into exploring automation design, including what Display Status is, and also the Human-Computer Collaboration automation design principle. Next, you will examine the Human Intervention automation design principle; automated testing in software design and development; and also the role of task runners in software design and development. Task runners are used to automate repeatable tasks in the build process. Delve into DevOps and automated deployment in software design, development, and deployment. Finally, you will examine process automation using robotics, and in the last tutorial in the course, recognize how modern robotics and AI designs are applied. The concluding exercise involves recognizing automation and robotics design application.
A cross-platform library, OpenCV facilitates image processing and analysis. In this course, you'll discover fundamental concepts related to computer vision and the basic operations which can be performed on images using OpenCV. You'll begin by outlining how to read images from your file system into your Python source in the form of arrays and then save an image array into a local file. Next, you'll explore color images represented as a combination of blue, green, and red channels, how to convert color images to grayscale, and how grayscale images are defined. Finally, you'll perform basic operations on images by investigating how to combine two images using an add operation and make one of the added images more prominent than the other using a weighted addition. Conversely, you'll also perform a subtract operation using two images.
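The add, weighted-add, and subtract operations described above can be sketched without OpenCV itself: the NumPy snippet below mimics the saturating arithmetic that cv2.add, cv2.addWeighted, and cv2.subtract perform on uint8 images (the arrays and weights are made-up examples):

```python
import numpy as np

# Two tiny 2x2 single-channel "images" (uint8, values 0-255)
a = np.array([[100, 200], [50, 250]], dtype=np.uint8)
b = np.array([[100, 100], [10, 10]], dtype=np.uint8)

# cv2.add saturates instead of wrapping: 200 + 100 -> 255, not 44
added = np.clip(a.astype(np.int16) + b.astype(np.int16), 0, 255).astype(np.uint8)

# cv2.addWeighted(a, 0.7, b, 0.3, 0) makes `a` more prominent than `b`
weighted = np.clip(np.rint(0.7 * a + 0.3 * b), 0, 255).astype(np.uint8)

# cv2.subtract also saturates, at 0: 10 - 50 -> 0, not a negative wraparound
subtracted = np.clip(b.astype(np.int16) - a.astype(np.int16), 0, 255).astype(np.uint8)

print(added)       # [[200 255] [ 60 255]]
print(subtracted)  # all zeros, since every pixel of b <= a
```

Using clipped 16-bit intermediates here reproduces OpenCV's saturation behavior; plain uint8 addition in NumPy would silently wrap around instead.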
In this course, participants will examine chatbot use cases, the technology stack, and popular development and deployment tools with Amazon's Alexa on Amazon Web Services (AWS) and Google's Dialogflow. First, you will learn about chatbots, the categories in which they are used, and the different classifications of chatbots. You will explore the different technologies orchestrated to create chatbots. Then look at the conversational flow of a typical chatbot/human interface. Next, examine Dialogflow building blocks and the elemental building blocks for a typical chatbot built with the AWS Alexa Skills Kit. You will then set up the AWS developer account required for Alexa Skills development and use the account and an AWS Lambda service to develop Alexa Skills. Then explore the components of the Alexa Development Console. Learn how to configure an AWS Lambda function. After setting up a developer account on Google's Dialogflow, you will look into the Dialogflow developer console and its components. In a closing exercise, you will practice what you learned about chatbots and their architecture.
In this course, participants explore the development of chatbots with one of the main chatbot development frameworks, Google's Dialogflow Developer Console. Start by creating an agent for a chatbot and exploring default intents in Dialogflow. Intents map what a user says to what the bot should do. You will then create custom intents in Dialogflow. Participants then examine the important differences between developer and system entities in Dialogflow. Next, you will generate developer entities to extract information from user conversations in Dialogflow. Learn how to generate training phrases, which are expressions a user might say when they want to invoke an intent. You will then work with the actions and parameters associated with each intent. Learn how to write static responses, with which a bot can reply to a user in Dialogflow. Enable the Small Talk feature for a chatbot and test its functionality in Dialogflow. Then learn how to write inline cloud functions to satisfy a fulfillment in Dialogflow. A concluding exercise deals with creating a chatbot in Dialogflow.
In this course, explore the advanced concepts and features for developing and deploying chatbots, working with contexts, integrating with alternate platforms, and deploying fulfillments. Begin by looking at linear and nonlinear human/chatbot conversations. Next, work with input and output contexts. Contexts represent the current state of a user's request in a dialogue. Move on to follow-up intents, which allow you to easily shape a conversation without needing to create and manage contexts manually. Create the entry point for a nonlinear conversation by using contexts, then carry those contexts through a chatbot dialog to produce nonlinear conversations. Explore how to integrate Dialogflow chatbots with other platforms and deploy a fulfillment in Dialogflow. Access and use Actions on Google in Dialogflow and test a chatbot by using Google Assistant. Integrate Dialogflow chatbots with Google Assistant. Learn about Chatfuel building blocks, examining the use of prebuilt flows, text and typing elements, quick replies, images, and send blocks in Chatfuel. In the closing exercise, describe chatbot linear and nonlinear conversations and build a basic chatbot with Chatfuel.
In this course, participants examine the Amazon Web Services (AWS) Alexa Skills Kit, including the use of invocations, intents, utterances, and slots. Testing with Alexa Simulator and Echosim is also covered. Begin by creating a skill in the Alexa Development Console and looking at the use of invocations with the Alexa skill. Then discover how built-in intents are used in the Alexa Development Console. Next, create and use custom intents, utterances, and slots in the Alexa Development Console. To review: an intent is a construct representing an action that fulfills a spoken request, utterances are related spoken phrases mapped to the intent, while slots are optional arguments also related to the intent. You will learn how to build a Lambda function and integrate it with an Alexa skill, then test a skill by using Alexa Simulator and Echosim. You will configure a skill to use DynamoDB for persisting session data. Finally, create an Alexa skill that manages a multistage conversation. The concluding exercise directs you to create a skill by using the Skills Kit in the Alexa Development Console.
OpenAI offers an Application Programming Interface (API) that allows users to create, manipulate, and translate text using its available models and endpoints. Understanding how the API works, its limits, and how to effectively use best practices will help you get the most from the interface. In this course, you will explore OpenAI's API, generate an API key, and learn about the impact of social bias and blindness in models. Then, you will discover the ethical usage policy and safety and privacy concerns of OpenAI. Next, you will examine available models and endpoints. You will create a simple text completion, parse a response, troubleshoot common errors, and apply parameters to improve your results. Finally, you will use the language translation API to translate to and from English and identify organizational best practices when using OpenAI to handle scaling, latency, and limits.
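As a small illustration of parsing a response, the sketch below pulls the generated text out of a dictionary shaped like the completions endpoint's JSON reply. The sample_response values are invented, but the choices[0].text layout follows OpenAI's documented response format:

```python
# A trimmed example of the JSON structure returned by OpenAI's
# completions endpoint (field names per the API docs; values are made up).
sample_response = {
    "id": "cmpl-example",
    "object": "text_completion",
    "choices": [
        {"text": "Paris is the capital of France.", "index": 0, "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 7, "completion_tokens": 8, "total_tokens": 15},
}

def parse_completion(response: dict) -> str:
    """Extract the generated text from a completion response,
    raising a clear error if the payload is malformed."""
    try:
        return response["choices"][0]["text"]
    except (KeyError, IndexError) as exc:
        raise ValueError(f"unexpected response shape: {exc}") from exc

print(parse_completion(sample_response))  # Paris is the capital of France.
```

Wrapping the field access in a try/except like this is one simple way to surface the troubleshooting cases the course covers, rather than letting a bare KeyError bubble up.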
Generative artificial intelligence (AI) focuses on creating models that can generate content such as text, images, or even multimedia. Unlike discriminative models that classify or label existing data, generative models operate by learning patterns from the provided data and producing novel outputs. You'll begin this course with an overview of generative AI. You will explore some notable examples of generative models, including OpenAI's ChatGPT and Google Bard. Next, you will look at the use of prompt engineering when interacting with AI chatbots. Then, you will delve into the history and evolution of generative AI models, including important milestones that culminated in the conversational agents we work with today. Finally, you will explore the risks and ethical considerations associated with generative AI, such as unintentional use of copyrighted data, the use of personal data for training, and the creation of malicious deepfakes using AI. You will also learn how you can mitigate some of these risks while working with generative technologies.
Generative artificial intelligence (AI) has taken the tech and business world by storm. It currently can create stories, text, images, summaries, essays, and much more, with sometimes nothing more than a few words to describe what you want. Unfortunately, it can also be used in ways that can be harmful, such as creating deepfakes and false information. In this course, you will discover the differences between generative AI and general AI and look at the history and future of generative AI. You will explore applications of generative AI and the ethical, safety, security, and privacy concerns associated with its use. Then you will identify common generative AI application programming interfaces (APIs) and best practices when using generative AI. Next, you will find out how to create images and text with generative AI, and you will focus on the challenges of AI integration into processes and workflows. Finally, you will learn how to integrate generative AI APIs to create tools like chatbots.
Google Bard is a generative artificial intelligence (AI) that uses a large language model to facilitate answering questions and creating content for a wide range of topics. Understanding how the model works, its limitations, and what functionality the service provides enables anyone to optimize their use of the service to accomplish a multitude of tasks. In this course, you will explore the Bard interface and learn to use Bard to answer questions and create content while also understanding Bard's limitations, features, and best practices. Additionally, you will explore the ethics, privacy, and security concerns that can come with using a generative AI like Bard.
Google Bard can be used to write creative content, but it also allows you to share that content, adjust content to reflect a tone, and translate text to and from English. These capabilities can be used by almost anyone in virtually any industry to expedite tasks. In this course, you will learn how to use Bard to create poems, stories, lyrics, and other content. You will also learn to create summaries and outlines. Next, you will discover Bard's object recognition and image-finding capabilities. Finally, you will be introduced to Bard's translation capabilities.
Google Bard is a useful tool for content creation, translation, and analysis; however, with the PaLM 2 application programming interface (API), it is possible to integrate Bard directly into your own processes, either through the API itself or through the ready-to-use client libraries. This does require some programming and command line interface (CLI) experience, but even a small amount should be sufficient to follow along. In this course, you will learn about Bard's analytical capabilities, the PaLM 2 API, and how to use the API to accomplish tasks programmatically rather than through the Bard web interface. Additionally, you will explore the PaLM models, supported languages and libraries, and the interfaces used for communicating with PaLM.
Python and Google Bard can be combined to create applications and programs via the PaLM 2 API. These programs can solve problems or integrate Bard into workflows or processes. In this course, you will learn to solve code problems with Bard and how to use the Python Client API library to connect and use PaLM to create applications that integrate Bard. In particular, you will explore how to programmatically check content for appropriate communications, adjust parameters to fine-tune responses, troubleshoot common problems, add security to a process, and create a simple chatbot.
Generative artificial intelligence (GenAI) can create new content, such as text, images, and music. It is powered by machine learning (ML) models that have been trained on massive datasets of existing content. Prompt engineering is the process of designing and crafting prompts that guide generative AI models to produce the desired output. You will start this course by learning how you can leverage prompt engineering to improve your day-to-day and work-related tasks. Next, you will see examples of prompting in action with external generative AI chatbots such as ChatGPT, Google Bard, and Microsoft Bing Chat. As several of these tools may not be supported on many corporate devices, you will not be expected to create accounts on those platforms, but you will be able to apply the learnings and principles to any corporate conversational AI chatbot in similar ways.
The OpenAI Playground is a web-based tool that lets you experiment with large language models (LLMs) to generate text, translate languages, write creative content, and answer your questions in an informative way. With the Playground, you can input text prompts and receive real-time outputs, and you can adjust hyperparameters to control the creativity, randomness, length, and repetition of the model responses. In this course, you will begin by creating an account to use the OpenAI Playground and you will learn how you are billed for its usage. Next, you will explore the different chat modes and models and work with the hyperparameters that allow you to configure creativity, randomness, repetition, and the length of model responses. You will also use stop sequences, which terminate the output when a specific phrase is reached, as well as the frequency and presence penalty, which penalize repetition of words and topics. Finally, you will learn how to view probabilities in generated text and explore how to use presets to share prompts and prompt parameters with other people.
Version control systems allow you to track changes to your code over time and collaborate on projects. They are widely used in software development, but can also be used for other purposes, such as tracking changes to documentation, website code, or other types of files. Git is a popular version control system that has a steep learning curve for beginners, but with help from generative AI tools, you'll find that learning Git is easy and intuitive. In this course, you will start with the basics of Git and learn the difference between local Git repositories and remote repositories on hosting services such as GitHub and GitLab. You will develop prompts with generative AI tools such as ChatGPT and use their responses to guide you while you are exploring Git commands. Next, you will learn how to use Git for version control and how to add files to the staging area. After that, you will commit your files to your repository and view all of the commits. Finally, you will learn how to perform operations such as restoring and modifying staged files and how to use commit hashes to uniquely identify commits and perform operations on them.
In today's rapidly evolving technological landscape, generative artificial intelligence (AI) has gained significant attention for its ability to create intelligent solutions. This path focuses on leveraging the Azure cloud platform to explore and harness the power of generative AI. In this course, you'll explore how generative AI works and types of generative AI models. Then, you'll be introduced to Azure services for generative AI, including Azure OpenAI service, Azure Bot service, and Azure Machine Learning. Finally, you'll learn about privacy and policy considerations for generative AI, chatbot creation, personalized marketing content, new product development, and training and tuning generative AI models.
Artificial intelligence (AI) is being harnessed everywhere today for a myriad of different practical applications. Microsoft Azure's OpenAI service is a key component in the development of AI apps in Azure and has gained significant attention for its ability to create intelligent solutions. In this course, you'll learn about Azure OpenAI service, including models, practical uses, and AI content generation principles with OpenAI. Then, you'll explore integration with Azure OpenAI, text and question answering, OpenAI vs. other generative AI services, and OpenAI pricing. Finally, you'll dig into limitations of OpenAI and what the future holds for Azure OpenAI.
GitHub, in conjunction with Git, provides a powerful framework for collaboration in software development. Git handles version control locally, while GitHub extends this functionality by serving as a remote repository, enabling teams to collaborate seamlessly by sharing, reviewing, and managing code changes. In this course, you will begin by setting up a GitHub account and authenticating yourself from the local repo using personal access tokens. You will then push your code to the remote repository and view the commits. Next, you will explore additional features of Git and GitHub using generative AI tools as a guide. You will also create another user to collaborate on your remote repository, and you'll sync changes made by other users to your local repo. Finally, you will explore how to merge divergent branches. You will discover how to resolve a divergence using the merge method with help from ChatGPT and bring your local repository in sync with remote changes.
Branches are separate, independent lines of development for people working on different features. Once you have finished your work, you can merge all your branches together. You will start this course by creating separate feature branches on Git and pushing commits to these branches. You will use prompt engineering to get the right commands to use for branching and working on branches. You will also explore how to develop your code on the main branch, switch branches, and then ultimately commit to a feature branch. Next, you will explore how you can stash changes to your project to work on them later. Finally, you will discover how to resolve divergences in the branches. You will try out both the merge and rebase methods and confirm that the branch commits are combined properly.
Python is a powerful programming language for data science, and pandas is a popular open-source data manipulation and analysis library in Python. Combined with prompt engineering techniques, working with data in Python is easy and intuitive, which allows you to be more productive and efficient. You will start this course by leveraging prompt engineering to work with pandas. You will explore libraries such as Matplotlib, seaborn, and Plotly, which are used for visualization and charting. With ChatGPT's help, you will read data from a CSV file and inspect the DataFrame. You'll delve into pandas Series objects and explore their creation and manipulation. You will leverage prompt engineering techniques to access elements in a Series by index label or position through the loc, iloc, at, and iat accessors and perform operations like modification and visualization. Finally, you will explore how to use pandas DataFrame objects and create basic DataFrames using lists and dictionaries for data assignment and inspection. You will also generate code to perform basic operations on DataFrames using tools such as ChatGPT and Bard.
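A minimal sketch of the Series access patterns mentioned above, assuming pandas is installed (the data and index labels are arbitrary examples):

```python
import pandas as pd

# A small Series with string index labels
s = pd.Series([10, 20, 30], index=["a", "b", "c"])

# Label-based vs. position-based access
print(s.loc["b"])   # 20  (by index label)
print(s.iloc[2])    # 30  (by integer position)

# at/iat are faster scalar accessors with the same label/position semantics
print(s.at["a"])    # 10
print(s.iat[1])     # 20

# Modification works through the same accessors
s.loc["c"] = 99
print(s.iloc[-1])   # 99
```

loc/at take labels while iloc/iat take positions; asking a chatbot to explain which accessor applies to a given task is exactly the kind of prompt this course practices.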
With DataFrames in pandas you can filter, aggregate, join, pivot, and manipulate data efficiently. These operations enable data analysts and scientists to work with datasets for various data-driven tasks. Prompt engineering tools are adept at generating code to make these tasks simple. You will start this course by exploring the configurations you can apply to read in your data. You'll present your problem statement to ChatGPT and explore the use of arguments to configure various aspects of the file reading, such as defining column names, and specifying which columns to include in the DataFrame. Additionally, you will learn how to read data from different sources, including JSON, Excel, and the Clipboard and write files out to these different formats. Next, you'll delve into common DataFrame operations, examine statistics on your data, rename columns, iterate over, and sort your data. As you encounter issues, you will turn to prompt engineering to help debug them. Finally, you'll explore how you can enhance your data using computed columns. You'll harness the power of two essential functions, apply and map, to transform your records. You will also focus on utilizing generative AI for code generation and you will employ the chain-of-thought prompting method to guide the chatbot in generating code effectively.
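As a small illustration of these read_csv arguments and computed columns, the sketch below (assuming pandas is installed; the CSV data is invented) defines column names for a headerless file, keeps only a subset of columns, and derives new values with apply and map:

```python
import io
import pandas as pd

# Invented headerless CSV data, read from an in-memory buffer
raw = "1,Alice,83\n2,Bob,91\n3,Carol,78\n"

# `names` supplies column headers for a headerless file;
# `usecols` keeps only the columns we care about.
df = pd.read_csv(io.StringIO(raw), names=["id", "name", "score"],
                 usecols=["name", "score"])

# A computed column with `apply` on a Series (element-wise here) ...
df["passed"] = df["score"].apply(lambda s: s >= 80)

# ... and a transformation with `map`
df["name"] = df["name"].map(str.upper)

print(df)
```

The same df object would then support the renaming, sorting, and iteration operations the course walks through.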
Enhancing your security posture is an essential part of protecting yourself against threats that leverage generative artificial intelligence (AI) technologies. Monitoring and leveraging emerging generative AI technologies help organizations stay safe and secure while reducing manual work. In this course, discover why enhancing security posture is important, common security and cyber threats organizations are facing today, and the applications and advantages of using AI in cybersecurity. Next, examine the components of a successful cyberattack defense strategy and how to leverage machine learning (ML) in cybersecurity. Lastly, explore AI cybersecurity considerations and possible future trends of AI in cybersecurity. Upon course completion, you'll recognize how to monitor and utilize emerging generative AI technologies to help improve an organization's overall security posture.
Protecting intellectual property (IP) is an important part of any business. In this course, you will learn the fundamentals of intellectual property including copyright, trademarks, patents, and trade secrets. Discover how to defend against intellectual property infringement in the realm of generative AI, how using generative AI can result in IP infringement, and considerations to have when using generative AI solutions for yourself or your business. Then you will learn how to detect when your intellectual property has been infringed upon, identify risk areas, and determine what detection tools are available. Finally, you will explore legal considerations surrounding intellectual property and AI and investigate future trends of intellectual property in the era of AI. After completing this course, you will be ready to take the crucial steps to detect and protect the intellectual property within your organization.
Generative artificial intelligence (AI) creates a variety of concerns in a number of different ways, so we must establish governance to address these concerns. In this course, you will discover why governance in the realm of generative AI is crucial, how and by whom governance can be achieved, as well as the challenges and opportunities that arise with the use of generative AI. In addition, you will explore how governments and legal bodies can regulate generative AI, and what regulations are already in place in the public sector. Then you will examine AI governance best practices, including engaging stakeholders, managing AI models, and building internal governance structures. Next, you will investigate the benefits of AI auditing and monitoring. Finally, you will learn how to implement a governance approach that includes user education, data and AI risk management, and regulatory compliance.
When using generative artificial intelligence (AI) content within your business, you need to know how to leverage it safely and effectively. In this course, you will learn about techniques used to identify AI-generated content and how to avoid possible misinformation that it can produce. You will investigate deepfakes and learn how to detect them. You will discover how to provide proper attribution when using generative AI content and how to avoid having any copyright issues. Next, you will explore industry use cases for generative AI, and more specifically, discover common use cases for boosting cybersecurity using generative AI. Then you will examine stakeholder considerations and possible generative AI challenges. Finally, you will focus on security protections, security risks, and mitigating risks associated with generative AI. Upon course completion you will be able to confidently take steps to secure your organization when using generative AI solutions.
The introduction of generative AI has created security, privacy, ethical, and moral implications for everyone involved. In this course, you will learn how and why generative AI is being used in security attacks, how to put countermeasures in place to defend against these attacks, and how to prevent such attacks from happening. You will also explore the ethical and moral concerns that generative AI has created, how privacy plays a role in those concerns, and how to manage those risks. Then you will discover how generative AI can be leveraged by threat actors to perform data breaches, create malware threats, and coordinate social engineering attacks. Next, you will investigate how generative AI can be used maliciously to perform model manipulation and data poisoning. Finally, you will examine how to enhance protection controls using processes, governance, and ethics, and focus on common considerations for securing AI systems.
Django is a high-level, open-source web framework for building robust and scalable web applications using the Python programming language. Django comes equipped with a rich set of built-in features, including an object-relational mapping (ORM) system for database interactions, a powerful templating engine, and a secure authentication system. You will start this course by diving into Django and learning the model-template-view (MTV) architecture that Django uses. Next, you will install Django and create a basic app, seeking the help of generative AI tools such as ChatGPT and Google Bard to set up a Django project and explore its basic functionality. Then, you will create your own app within the project, focusing on the uses for and responsibilities of the automatically generated files. Finally, you will build a simple web app using Django, starting with a basic view that renders HTML templates that you can access at a URL path. You will learn to include static assets, such as stylesheets and images, and you will deal with misdirection from generative AI tools.
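To give a taste of the view-and-URL wiring described above, here is a hedged sketch of what two of the generated files might contain in a hypothetical app named myapp; the view and template names are illustrative, and the fragment assumes a standard Django project scaffold around it:

```python
# myapp/views.py -- a minimal view that renders an HTML template.
# "home.html" is a hypothetical template under myapp/templates/.
from django.shortcuts import render

def home(request):
    # The context dict's values become variables inside the template
    return render(request, "home.html", {"title": "Hello, Django"})


# myapp/urls.py -- the URL configuration mapping a path to the view.
# This module would be included from the project's root urls.py.
from django.urls import path
from . import views

urlpatterns = [
    path("", views.home, name="home"),  # serve `home` at the app's root URL
]
```

In the MTV architecture, the view above is the "V", the template it renders is the "T", and any database-backed data passed in the context would come from "M", the models.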
Generative artificial intelligence (AI) has taken the world by storm. Chatbots, photo creation, document writing, and other practical applications are everywhere, and they're gaining in popularity and sophistication. Google Cloud Platform (GCP) has a broad range of powerful generative AI tools that can be used to leverage the power of modern artificial intelligence. In this course, you'll be introduced to generative AI, beginning with GCP and its generative AI offerings. You will discover the advantages and disadvantages of generative AI and features of GCP machine learning (ML). Then you'll learn about the generative AI life cycle, image generation, natural language processing (NLP), and best practices for developing generative AI. Finally, you'll explore GCP privacy, security, and compliance considerations, monitoring and logging with GCP, and some real-world generative AI use cases.
The OpenAI Playground is a dynamic and user-friendly platform that allows individuals to engage with OpenAI's cutting-edge language models, such as GPT-3.5 and GPT-4. The Playground enables users to experiment with natural language processing (NLP) capabilities and parameters to tweak the responses of models. You will start this course by exploring the fundamentals of OpenAI models. Next, you will log into the OpenAI Playground and input basic prompts, observing the responses. You will work with multiple application programming interfaces (APIs), including the recommended chat completions API and the legacy completions API, all of which are accessible via the playground. Finally, you will work with the Assistants API which has access to tools for data retrieval, code interpretation, and function calling and can leverage these to respond to user queries. You will utilize the code interpreter to read and visualize CSV data, generating Python code for charts using libraries like Matplotlib and Seaborn.
OpenAI application programming interfaces (APIs) represent a groundbreaking leap in the accessibility of state-of-the-art natural language processing (NLP) capabilities. These APIs provide developers with a powerful toolset to integrate advanced language models seamlessly into their applications, products, and services. You will start this course by engaging with OpenAI through the command-line, utilizing the OpenAI APIs. You will learn how to authenticate yourself using API keys when programmatically accessing API endpoints using cURL commands. You will explore how to configure context for past interactions with the model and access both chat completions and legacy completions APIs via their respective endpoints. Moving onto Python, you will install the OpenAI library to create a client object for endpoint access. You will configure the API key and send requests to the chat completions endpoint with prompts in the JSON format. You will also explore the legacy completions API using the same client object. You will be introduced to the diverse range of model offerings from OpenAI and learn how to use those models. Finally, you will configure model parameters to adjust the response from the model. You will learn about the seed parameter to receive deterministic responses and how the system fingerprint helps track infrastructure changes on the server. You will explore various parameters, including Top P and Temperature for controlling creativity, max length, and stop sequences for response length, and frequency and presence penalty for word and topic repetition.
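To make those parameters concrete, the sketch below builds the JSON body you would send to the chat completions endpoint (authentication via the Authorization header, and the HTTP request itself, are omitted). The field names follow OpenAI's API reference, while the specific values and model name are illustrative:

```python
import json

# Request body for the chat completions endpoint, showing the main
# sampling parameters discussed above. Values are illustrative only.
payload = {
    "model": "gpt-3.5-turbo",  # pick a model you actually have access to
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name one prime number."},
    ],
    "temperature": 0.7,        # higher -> more creative/random output
    "top_p": 1.0,              # nucleus sampling; usually tune this OR temperature
    "max_tokens": 50,          # cap on response length
    "stop": ["\n\n"],          # generation halts when a stop sequence appears
    "frequency_penalty": 0.0,  # penalize repeated words
    "presence_penalty": 0.0,   # penalize repeated topics
    "seed": 1234,              # best-effort deterministic responses
}

body = json.dumps(payload)  # this string is what gets POSTed
print(body[:60])
```

The same dictionary maps one-to-one onto the keyword arguments of the official Python client's chat completions call, so the parameters carry over directly from cURL to Python.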
Statistics is a branch of mathematics that involves the collection, analysis, interpretation, presentation, and organization of data. It provides a framework for making inferences and drawing generalizable conclusions from observed information and it offers great tools to uncover patterns, trends, and relationships within datasets. Begin this course by exploring two important types of statistics - descriptive and inferential statistics. Next, learn how to compute and interpret descriptive statistics in code, including measures of central tendency and dispersion, mean and median, and range. Then use generative artificial intelligence (AI) tools to help interpret visualizations and understand the nuance between the different statistical measures and when you would choose to use them. After completing this course, you will have a solid understanding of how to calculate, interpret, and visualize descriptive statistics using Python and be able to leverage prompt engineering to help with implementation and interpretation.
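The descriptive statistics named above can be computed in a few lines of standard-library Python; the sample data below is invented for illustration.

```python
# Measures of central tendency and dispersion with the standard library.
import statistics

data = [12, 15, 11, 19, 14, 15, 22, 13]   # made-up sample

mean = statistics.mean(data)              # central tendency: average
median = statistics.median(data)          # central tendency: middle value
stdev = statistics.stdev(data)            # dispersion: sample std deviation
value_range = max(data) - min(data)       # dispersion: max minus min

print(f"mean={mean}, median={median}, stdev={stdev:.2f}, range={value_range}")
```

Comparing the mean (15.125) with the median (14.5) already hints at a slight right skew from the high value 22, which is the kind of nuance the course's visualizations help surface.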
Hypothesis testing is an important part of inferential statistics that involves assessing sample data to draw conclusions about a population parameter. Begin this course by exploring how hypothesis tests work, the results they generate, and how you interpret those results. You will learn how you set up the null and alternative hypotheses for tests and how to interpret the results which includes the test statistic and the p-value. Then you will discover the different types of t-tests, such as one-sample, two-sample, and paired samples. Finally, you will investigate the use of generative artificial intelligence (AI) tools to implement one-sample t-tests and interpret the results. At course completion, you will have a solid understanding of the basics of hypothesis testing and how prompt engineering can help you implement and interpret these statistical tests.
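The one-sample t-test statistic described above can be computed by hand with the standard library; the sample values and hypothesized mean below are invented for illustration.

```python
# t = (sample mean - hypothesized mean) / standard error
import math
import statistics

def one_sample_t(sample, mu0):
    """One-sample t-statistic for H0: population mean equals mu0."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)   # standard error
    return (statistics.mean(sample) - mu0) / se

# H0: the population mean is 50; H1: it is not.
sample = [52, 48, 55, 51, 49, 53, 54, 50]
t_stat = one_sample_t(sample, 50)
# |t| is then compared against the t-distribution with n - 1 degrees of
# freedom (or converted to a p-value) to decide whether to reject H0.
```

In practice a library such as SciPy returns both the test statistic and the p-value in one call; computing the statistic manually makes the formula behind those results concrete.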
T-tests and analysis of variance (ANOVA) are statistical methods used to compare means between groups and assess whether observed differences are statistically significant. In this course, you will perform two-sample t-tests, comparing two independent groups to determine if the difference between their means is statistically significant. You will use ChatGPT and Google Bard to help ensure that your samples meet the assumptions of the t-test. Then you will visualize and interpret the characteristics of your data and run the right variation of the t-test based on your data. Next, you will run a paired sample t-test with help from generative artificial intelligence (AI) tools. Finally, you will use ANOVA to compare multiple samples simultaneously, use prompt engineering to determine when to use ANOVA, and use post-hoc analysis after running ANOVA to identify which groups or categories are significantly different. After completing this course, you will have a solid understanding of t-tests and ANOVA, and be able to leverage generative AI tools to help you with your analysis.
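The ANOVA comparison described above boils down to a single F-statistic: the ratio of between-group variance to within-group variance. A compact standard-library sketch, with three invented groups:

```python
# One-way ANOVA F-statistic: between-group variance / within-group variance.
import statistics

def one_way_anova_f(*groups):
    """F-statistic for comparing the means of several groups at once."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = statistics.mean([x for g in groups for x in g])

    # Sum of squares between groups: how far each group mean sits
    # from the grand mean, weighted by group size.
    ssb = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Sum of squares within groups: spread of each group around its own mean.
    ssw = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)

    return (ssb / (k - 1)) / (ssw / (n_total - k))

f_stat = one_way_anova_f([3, 4, 5], [6, 7, 8], [9, 10, 11])
```

A large F relative to the F-distribution's critical value suggests at least one group mean differs; the post-hoc analysis covered in the course then identifies which groups drive the difference.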
Machine learning involves creating models that dynamically change based on the data from which they are created. Within machine learning, three fundamental problems-regression, classification, and clustering-are the focus of a variety of solution techniques. Begin this course by conducting regression analysis. You will analyze and visualize data to get a sense of the variables with predictive power, split data into training and test sets, and train a model. Then you will interpret the R-squared metric to evaluate how well the regression model has performed. Next, you will create a classification model for predicting categorical targets and split your data into test and training data to train a logistic regression model. You will also explore the impact of training a model on imbalanced data, and with generative artificial intelligence (AI) assistance, see how you can mitigate this by leveraging oversampling and undersampling techniques. Finally, you will perform clustering, train a k-means clustering model, and evaluate it using the silhouette and Davies-Bouldin scores. At course completion, you will have a good understanding of key concepts of machine learning and how to perform regression analysis, classification of data, and clustering.
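The regression workflow described above, fitting a line and scoring it with R-squared, can be sketched by hand in pure Python; the data points below are invented for illustration.

```python
# Least-squares simple linear regression and the R-squared metric.
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    """Share of variance in y explained by the fitted line (1.0 = perfect)."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly linear, invented data
slope, intercept = fit_line(xs, ys)
r2 = r_squared(xs, ys, slope, intercept)
```

A library such as scikit-learn wraps this (plus the train/test split, classification, and clustering steps) in a few method calls, but the R-squared value it reports is exactly this ratio.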
Artificial intelligence (AI) has taken the world by storm, and image generation has become one of AI's most interesting contributions to the modern world. In this course, examine the pros and cons of generative AI (GenAI), the history of image generation, the uses of AI in creative content generation, and generative AI models and methods. Next, discover how to use generative AI to create content, generative AI pipeline components, and use cases for generative AI. Finally, learn about popular generative AI frameworks and tools, ethical considerations of generative AI, how AI influences art, the implications of generative AI, and the possible future of generative AI. After course completion, you'll be able to comprehensively describe the fundamentals of AI-powered image generation.
Generative Pre-trained Transformer (GPT) models are advanced artificial intelligence (AI) systems designed to understand and generate human-like text based on the information they've been trained on. These models can perform a wide range of language tasks, from writing stories to answering questions, by learning patterns in vast amounts of text data. In this course, you will dive into the world of GPT models and the foundational models that are pivotal to the development of the GPT-n series. You will gain an understanding of the terminology and concepts that make GPT models outstanding in performing natural language processing tasks. Next, you will explore the concept of attention in language models and explore the mechanics of the Transformer architecture, the cornerstone of GPT models. Finally, you will explore the details of the GPT model. You will discover methods used to adapt these models for particular tasks through supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and techniques such as prompt engineering and prompt tuning.
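The attention mechanism at the heart of the Transformer can be illustrated in miniature: each query scores every key, the scores are softmax-normalized, and the values are blended by those weights. The vectors below are tiny invented examples, not real model activations.

```python
# Toy scaled dot-product attention over lists of floats.
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weighted average of values, weighted by query-key similarity."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]                  # scaled dot products
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# One query attending over two key/value pairs:
out = attention([1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
# The output leans toward the first value vector because the query
# matches the first key more closely.
```

Real GPT models run this computation in parallel over many heads and layers, with learned projection matrices producing the queries, keys, and values, but the core blending step is the same.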
In the spirit of exploring the exciting possibilities of generative AI, this course was built using several AI technologies alongside Skillsoft's trusted design methodologies. Generative AI was used to draft the curriculum plan and on-screen text, while AI text-to-speech services were used for narration. In addition, generative AI was used to produce the course assessment and AI assistive technologies helped translate the course captions into multiple languages. Begin this course by exploring the various industries that are using ChatGPT, focusing on the impact of ChatGPT on their business processes and the ethical considerations and challenges of using this technology. Then discover the enormous potential of ChatGPT in shaping the future of work and for solving global challenges like climate change. Next, examine how to protect sensitive intellectual property while using ChatGPT. Finally, investigate the role of government and industry in regulating this powerful AI.
In the spirit of exploring the exciting possibilities of generative AI, this course was built using several AI technologies alongside Skillsoft's trusted design methodologies. Generative AI was used to draft the curriculum plan and on-screen text, while AI text-to-speech services were used for narration. In addition, generative AI was used to produce the course assessment and AI assistive technologies helped translate the course captions into multiple languages. In this course, we will define ethical AI and explore the ethical considerations and challenges surrounding advanced AI models. Next, we will examine the potential consequences of using AI and ChatGPT, the impact on society and culture, and the importance of transparency and accountability. Then we will investigate the impact of AI models on privacy and security, the ethical considerations for developing and deploying AI models, and the importance of understanding the risks involved in sharing sensitive intellectual property. Finally, we will discover the roles of government, industry, and society in regulating AI models and how to protect company data.
In the spirit of exploring the exciting possibilities of generative AI, this course was built using several AI technologies alongside Skillsoft's trusted design methodologies. Generative AI was used to draft the curriculum plan and on-screen text, while AI text-to-speech services were used for narration. In addition, generative AI was used to produce the course assessment and AI assistive technologies helped translate the course captions into multiple languages. Begin this course with insightful, forward-thinking conversations on AI's potential influence across sectors including high tech, education, and government. Then you will apply concepts of generative AI to practical scenarios in everyday life. Next, you will critically explore the ethical dimensions of AI. Finally, you will examine the evolving nature of work in an AI-integrated landscape and hypothesize about the progression of generative AI in the next 5 to 10 years. Be prepared for a journey of continuous learning and future thinking that transcends traditional classroom boundaries.
In the spirit of exploring the exciting possibilities of generative AI, this course was built using several AI technologies paired alongside Skillsoft's trusted design methodologies. Generative AI was used to draft the curriculum plan and on-screen text, while AI text-to-speech services were used for the narration. In addition, generative AI was used to produce the course assessment and AI assistive technologies helped translate the course captions into multiple languages. Practical prompt engineering is the process of designing refined input prompts to generate responses for natural language processing applications. This involves consideration of prompt factors and various techniques to improve model performance and accuracy. Through this course, learn about practical prompt engineering and the types of prompts used in ChatGPT. Discover how to write effective prompts for ChatGPT and prompt use in real-world applications. Next, compare prompt types and their uses, explore the ethical considerations of using prompts in ChatGPT, and the impact of prompts on the accuracy of ChatGPT's responses. Finally, examine the relationship between prompts and the training data used to develop ChatGPT. After course completion, you'll be able to apply prompt engineering practices to develop effective prompts for ChatGPT.
In the spirit of exploring the exciting possibilities of generative artificial intelligence (AI), this course was built using several AI technologies alongside Skillsoft's trusted design methodologies. Generative AI was used to draft the curriculum plan and on-screen text, while AI text-to-speech services were used for narration. In addition, generative AI was used to produce the course assessment and AI assistive technologies helped translate the course captions into multiple languages. This course covers important techniques for writing effective ChatGPT prompts. You will use different techniques to write prompts for use in real-world applications. Next, you will explore the use of chain of thought prompting and zero-shot chain of thought prompting. You will discover how to integrate ChatGPT with other technologies and work through various use cases.
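The prompting techniques named above can be illustrated as simple string templates. The cue phrase and template shapes below reflect common practice rather than any official ChatGPT interface:

```python
# Zero-shot chain-of-thought: append a reasoning cue to an ordinary prompt.
def zero_shot_cot(question):
    """Wrap a question with a step-by-step reasoning cue."""
    return f"{question}\n\nLet's think step by step."

# Few-shot prompting: prepend worked examples so the model imitates them.
def few_shot(examples, question):
    """Build a prompt from (question, answer) example pairs."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = zero_shot_cot(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?")
```

The resulting strings are what you would paste into ChatGPT (or send through an API); the course explores when each shape tends to improve the model's answers.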
In the spirit of exploring the exciting possibilities of generative AI, this course was built using several AI technologies paired alongside Skillsoft's trusted design methodologies. Generative AI was used to draft the curriculum plan and on-screen text, while AI text-to-speech services were used for the narration. In addition, generative AI was used to produce the course assessment and AI assistive technologies helped translate the course captions into multiple languages. What is ChatGPT and what can it do? This groundbreaking technology has revolutionized natural language processing and its applications are far-reaching. In this course, you will discover ChatGPT, focusing on its purpose and key features. Then you will compare its abilities to other language models. Next, you will explore the history of language models and artificial intelligence (AI) and the different uses of ChatGPT, including how it can be used in real-world applications. Finally, you will examine the potential benefits and limitations of using ChatGPT, as well as the ethical considerations surrounding its use. By the end of this course, you will be able to describe the purpose of ChatGPT, outline its key features, and consider its potential impact on society and culture.
Wouldn't it be great if you had a personal assistant at work to help you with tasks like summarizing meeting notes or creating reports? Begin this course by exploring how AI can be employed as a personal assistant to manage various tasks, enhance efficiency and free up time for more creative work. You will investigate specific AI use cases to help streamline and improve the quality of work. Then, you will find out how to simplify complex jargon and transform disorganized meeting notes into actionable items with generative AI (GenAI). Take a look at how large language models (LLMs) are trained, in order to get a better idea of how AI works and mitigate the risk of hallucinations as output. Next, you will discover the wider world of AI, going beyond LLMs into the types of GenAI, multimedia GenAI, robotics and reinforcement learning. Finally, you will focus on the practical concepts, techniques, and technologies for effectively collaborating with AI.
Responsible and effective use of AI is based on a set of principles and practices designed to integrate AI into organizational workflows while maintaining ethical standards and mitigating risks. In this course, you will explore the strategic integration of AI in your organization. First, you will dive into the principles of responsible and effective AI use and learn about ethical guidelines for AI implementation. Next, you will identify and mitigate business risks associated with AI, ensuring safe and efficient AI operations. You will also develop strategies to advocate for and innovate with AI, enhancing your role as a leader in the evolving digital landscape. Finally, you will understand how to leverage AI to drive productivity and innovation while maintaining ethical standards and managing potential risks.
AI is transforming customer service by enabling personalized, efficient, and responsive customer interactions. In this course, you will explore the transformative impact of AI on customer service operations, enhancing both customer satisfaction and agent productivity. You will also learn about the diverse applications of generative AI (GenAI), including automated responses, sentiment analysis enhancement, and multilingual support, and how these technologies ensure consistent and high-quality customer experiences across various communication channels. The course also addresses the ethical implications and potential risks of using AI, providing strategies for data privacy, bias mitigation, and maintaining transparency. Additionally, you will examine emerging trends such as emotion recognition and the integration of AI with virtual and augmented reality to stay ahead in the evolving landscape of customer service. By the end of this course, you will be equipped to maximize customer service with AI, making data-driven decisions and creating delightful customer experiences.
In today's fast-paced and competitive market, the integration of AI in product management has become a crucial driver of innovation and efficiency. As AI technologies continue to evolve, product managers must stay ahead of the curve to leverage these tools effectively. This course focuses on how generative AI (GenAI) can transform the product management landscape, making it indispensable for professionals looking to enhance their strategic and operational capabilities. Begin by delving into the role of GenAI in product management, including how it can revolutionize product development with large context window language models. You will identify practical applications of generative AI in various stages of product management, from ideation to market launch. The course also addresses the ethical implications of AI, offering strategies to mitigate potential risks. Additionally, you will gain insights into future trends in AI, equipping you with the knowledge to anticipate and navigate the evolving AI landscape in product management.
AI technology is revolutionizing the world of sales, ushering in an era of enhanced efficiency, customization, and productivity. The rise in generative AI (GenAI) has opened up a landscape of unprecedented personalization and data-driven insights, enabling sales teams to further streamline their processes and spend more time focusing on high-level tasks. In this course, you will discover how to leverage AI technology to benefit sales departments, enhance sales operations, and integrate AI tools into your daily workflow. Additionally, you will explore how to handle ethical considerations and prepare for the transformative future of AI in sales.
AI is revolutionizing how marketing strategies are conceived, executed, and optimized. In this course, you will learn how to leverage AI to enhance marketing efficiency, personalize customer interactions, and foster creative innovation through practical insights and real-world applications. You will explore top use cases of generative AI (GenAI) in marketing, gain practical understanding through a hands-on demo of AI tools for marketing analytics, and learn about ethical practices and risk mitigation strategies for AI implementation. Additionally, this course offers a forward-looking perspective on emerging trends and future directions in AI, preparing marketers to stay ahead in this ever-evolving field. By the end of this course, you will be equipped to harness AI for marketing success, make data-driven decisions, and create impactful campaigns.
The integration of generative AI (GenAI) in Human Resources (HR) is revolutionizing the industry and leading to greater efficiency and effectiveness. This course offers a comprehensive exploration of how AI is transforming HR practices, making it essential for HR professionals to stay updated and proficient in these advanced technologies. In this course, you will learn how GenAI is automating and personalizing HR functions, and you will explore practical applications of AI tools in streamlining HR processes. You will also explore the ethical considerations and potential risks associated with using AI in HR, ensuring you can implement these technologies responsibly. Additionally, you will gain insights into emerging trends in AI, equipping you with the knowledge to stay ahead in the evolving HR landscape.
In today's dynamic business landscape, harnessing the transformative power of artificial intelligence is essential. This introductory course will equip you to navigate the change to AI effectively. The course explores the key forces shaping the AI revolution. We'll delve into how factors like automation, data-driven decision-making, evolving customer demands, ethics, legal aspects, new AI technologies, and new business models are driving organizations to embrace AI. You'll gain a comprehensive understanding of the multifaceted impacts of AI, analyzing both the challenges and opportunities it presents. This includes potential job displacement and the need for reskilling, alongside the exciting possibilities of increased productivity, data-driven insights for better decision-making, and the creation of innovative new business models. Finally, the course introduces you to the power of data-driven insights for navigating AI change. We'll explore the limitations of traditional methods and introduce you to AI-driven data analysis techniques like sentiment analysis. This empowers you to gain deeper insights from various data sources, allowing you to proactively address potential resistance and tailor communication strategies for a smooth and successful AI integration within your organization.
Artifacts are substantial, self-contained pieces of content created by Claude, such as documents or code snippets, that are shared in a separate window for easy modification, reference, or reuse later. In this course, you'll learn how to use Anthropic AI's Artifacts feature to create documents, websites, and interactive components with GenAI. Discover the practical aspects of working with Claude, including how to create and publish Artifacts, use prompt engineering to create markdown documents, and generate SVG graphics and Mermaid.js diagrams by prompting and iterating with Claude. Finally, explore how to create HTML websites and interactive React components with Claude Artifacts. By the end of the course, you'll have a solid understanding of how to work with Claude Artifacts, and be able to use them to create a wide range of digital projects.
Artificial intelligence (AI) is transforming the way businesses and governments are developing and using information. This course offers an overview of AI, its history, and its use in real-world situations; prior knowledge of machine learning, neural network, and probabilistic approaches is recommended. There are multiple definitions of AI, but the most common view is that it is software which enables a machine to think and act like a human, and to think and act rationally. Because AI differs from plain programming, the programming language used will depend on the application. In this series of videos, you will be introduced to multiple tools and techniques used in AI development. Also discussed are important issues in its application, such as the ethics and reliability of its use. You will set up a programming environment for developing AI applications and learn the best approaches to developing AI, as well as common mistakes. Gain the ability to communicate the value AI can bring to businesses today, along with multiple areas where AI is already being used.
This course covers simple and complex types of artificial intelligence (AI) available in today's market. In it, you will explore theory of mind research, self-aware AI, artificial narrow intelligence, artificial general intelligence, and artificial super intelligence. First, learn the ways in which AI is used today in agriculture, medicine, financial services, the military, and government. As a special field of computer science that draws on mathematics, statistics, and the cognitive and behavioral sciences, AI uses unique applications to perform actions based on the data it receives as input, mimicking the activity within the human brain. No data can be 100 percent accurate, which brings a certain degree of uncertainty to any kind of AI application. This course therefore explains how and why AI needs to be developed for a particular use scenario, helping you understand the many aspects involved in AI programming and why AI performance needs to be good enough to complete a given task.
Discover how to implement Robotic Process Automation (RPA) using Python, and explore various RPA frameworks with the practical implementation of UiPath.
Explore the various machine learning techniques and implementations using Java libraries, and learn to identify scenarios where specific algorithms can be applied.
Discover the essential features and capabilities of the Neuroph framework and neural networks, and learn how to work with and implement neural networks using the Neuroph framework.
Explore the concepts of expert systems along with their implementation using Java-based frameworks, and examine the implementation and usage of ND4J and Arbiter to facilitate optimization.
Images often need to be manipulated to extract meaningful portions or to prepare them for a machine learning pipeline. OpenCV can help with this. In this course, you'll investigate a variety of image manipulation operations using OpenCV. You'll begin by recognizing how to filter certain portions of an image using bitwise operations. Next, you'll explore the concept of masks and how to use them while extracting parts of an image. You'll then outline how to apply geometrical operations by resizing an image to specific dimensions and discover challenges that such operations present. You'll finish the course by examining image transformations such as rotations and translations to help orient an image to your requirements. Finally, you'll discover how to flip and warp images to present them from a different perspective.
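The masking operation described above can be illustrated in pure Python: keep the pixels where the mask is nonzero and zero out the rest. A real pipeline would call `cv2.bitwise_and(image, image, mask=mask)` on NumPy arrays; the tiny grayscale "image" below is invented for illustration.

```python
# Extract a region of interest from a grayscale image using a binary mask.
def apply_mask(image, mask):
    """Return image with pixels kept only where mask is nonzero."""
    return [[pixel if m else 0 for pixel, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
mask  = [[0, 255, 255],   # 255 = keep, 0 = discard
         [0, 255,   0],
         [0,   0,   0]]

extracted = apply_mask(image, mask)
# Only the masked region survives; everything else becomes black (0).
```

OpenCV performs the same logic per channel and in optimized C, which is why masks are the standard way to limit any operation to a region of interest.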
Many image processing operations involve complex math, but when using OpenCV, much of that is abstracted from the developer. In this course, you'll gain a high-level understanding of advanced image operations in OpenCV. You'll begin by recognizing how to apply different blur operations to an image. These range from simple blurs to Gaussian and median blurs. While doing so, you'll examine their specific advantages and disadvantages and how to distinguish between them. Moving on, you'll outline how to highlight objects in an image using edge detection and augment images by adding shapes and objects to them. Finally, you'll discover how to work with pre-trained classifiers to detect people in an image and perform morphological transformations to emphasize or suppress specific parts of an image.
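The simplest of the blurs mentioned above is the box (mean) blur: each pixel becomes the average of its neighborhood. OpenCV's `cv2.blur` does this (with border handling) in one call; the hand-rolled sketch below works on a tiny invented grayscale image and leaves the borders untouched for simplicity.

```python
# Mean-filter the interior pixels of a 2D grayscale image with a 3x3 window.
def box_blur(image):
    """Return a copy where each interior pixel is its 3x3 neighborhood mean."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]            # copy; borders left as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighborhood = [image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(neighborhood) / 9  # average of the 3x3 window
    return out

sharp = [[0,  0, 0],
         [0, 90, 0],
         [0,  0, 0]]
blurred = box_blur(sharp)   # the bright center spreads into its average
```

Gaussian and median blurs replace the plain average with a distance-weighted average or the neighborhood median respectively, which is why the median blur is notably better at removing salt-and-pepper noise.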
In developing AI (artificial intelligence) applications, it is important to pay close attention to human-computer interaction (HCI) and design each application for specific users. To make a machine intelligent, a developer uses multiple techniques from an AI toolbox; these tools are actually mathematical algorithms that can demonstrate intelligent behavior. The course examines the following categories of AI development: algorithms, machine learning, probabilistic modeling, neural networks, and reinforcement learning. There are two main types of AI tools available: statistical learning, in which a large amount of data is used to make generalizations that can be applied to new data; and symbolic AI, in which an AI developer must create a model of the environment with which the AI agent interacts and set up the rules. Learn to identify potential AI users, the context in which applications are used, and how to create user tasks and interface mock-ups.
Human-computer interaction (HCI) design is the starting point for an artificial intelligence (AI) program. Overall HCI design is a creative problem-solving process oriented to the goal of satisfying the largest number of customers. In this course, you will cover multiple methodologies used in the HCI design process and explore prototyping and useful techniques for software development and maintenance. First, learn how the anthropomorphic approach to HCI focuses on keeping interaction with computers similar to human interaction, while the cognitive approach pays attention to the capacities of the human brain. Next, learn to use the empirical approach to HCI to quantitatively evaluate interaction and interface designs, and discover how predictive modeling is used to optimize screen space and make interaction with the software more intuitive. Last, you will examine how to continually improve HCI designs for AI applications, develop personas, use case studies, and conduct usability tests.
In this course, you'll explore basic computer vision (CV) concepts and their various applications. You'll examine traditional ways of approaching vision problems and how AI has evolved the field. Next, you'll look at the different kinds of problems AI can solve in vision. You'll explore various use cases in the fields of healthcare, banking, retail, cybersecurity, agriculture, and manufacturing. Finally, you'll learn about different tools that are available in CV.
In this course, you'll explore Computer Vision use cases in fields like consumer electronics, aerospace, automotive, robotics, and space. You'll learn about basic AI algorithms that can help you solve vision problems and explore their categories. Finally, you'll apply hands-on development practices on two interesting use cases to predict lung cancer and deforestation.
To implement cognitive modeling inside AI systems, a developer needs to understand the major differences between commonly used cognitive models and their best qualities. Today, cognitive models are actively utilized in healthcare, neuroscience, manufacturing, and psychology, and their importance compared to other AI approaches is expected to rise. Developing a firm understanding of cognitive modeling and its use cases is essential for anyone involved in creating AI systems. In this course, you'll identify unique features of cognitive models, which help create even more intelligent software systems. First, you will learn about the different types of cognitive models and the disciplines involved in cognitive modeling. Further, you will discover the main use cases for cognitive models in the modern world and learn about the history of cognitive modeling and how it relates to computer science and AI.
Practice plays an important role in AI development and helps you become familiar with commonly used tools and frameworks. Knowing which methods to apply and when is critical to completing projects quickly and efficiently. Based on the code examples provided, you will be able to quickly learn important cognitive modeling libraries and apply this knowledge to new projects in the field. In this course, you'll learn the essentials of working with cognitive models in a software system. First, you will get a detailed overview of each type of learning used in cognitive modeling. Further, you will learn about the toolset used for cognitive modeling with Python and recall the role cognitive models play in AI and business. Finally, you will go through various cognitive model implementations to develop the skills necessary to implement cognitive modeling in the real world.
An Artificial Intelligence (AI) Architect works and interacts with various groups in an organization, including IT Architects and IT Developers. It is important to differentiate between the work activities performed by these groups and how they work together. This course will introduce you to the AI Architect role. You'll discover what the role is, why it's important, and who the architect interacts with on a daily basis. We will also examine and categorize their daily work activities and will compare those activities with those of an IT Architect and an IT Developer. The AI Architect helps many groups within the organization, and we will examine their activities within those groups as well. Finally, we will highlight the roles the AI Architect plays in the organizations which they are a member of.
In this course, you'll be introduced to the concepts, methodologies, and tools required for effectively and efficiently incorporating AI into your IT enterprise planning. You'll look at enterprise planning from an AI perspective, and view projects in tactical/strategic and current, intermediate, or future state contexts. You'll explore how to use an AI Maturity Model to conduct an AI Maturity Assessment of the current and future states of AI planning, and how to conduct a gap analysis between those states. Next, you'll learn about the components of a discovery map, project complexity, and a variety of graphs and tables that enable you to handle complexity. You'll see how complexity can be significantly reduced using AI accelerators and how they affect specific phases of the AI development lifecycle. You'll move on to examine how to create an AI enterprise roadmap using all of the artifacts just described, plus a KPIs/Value Metrics table, and how both of these can be used as inputs to an analytics dashboard. Finally, you'll explore numerous examples of AI applications of different types in diverse business areas.
The inner workings of many deep learning systems are complicated, if not impossible, for the human mind to comprehend. Explainable Artificial Intelligence (XAI) aims to provide AI experts with transparency into these systems. In this course, you'll describe what Explainable AI is, how to use it, and the data structures behind XAI's preferred algorithms. Next, you'll explore the interpretability problem and today's state-of-the-art solutions to it. You'll identify XAI regulations, define the "right to explanation", and illustrate real-world examples where this has been applicable. You'll move on to recognize both the Counterfactual and Axiomatic methods, distinguishing their pros and cons. You'll investigate the intelligible models method, along with the concepts of monotonicity and rationalization. Finally, you'll learn how to use a Generative Adversarial Network.
Adopting the foundational techniques of natural language processing (NLP), together with the Bidirectional Encoder Representations from Transformers (BERT) technique developed by Google, allows developers to integrate NLP pipelines into their projects efficiently and without the need for large-scale data collection and processing. In this course, you'll explore the concepts and techniques that pave the foundation for working with Google BERT. You'll start by examining various aspects of NLP techniques useful in developing advanced NLP pipelines, namely, those related to supervised and unsupervised learning, language models, transfer learning, and transformer models. You'll then identify how BERT relates to NLP, its architecture and variants, and some real-world applications of this technique. Finally, you'll work with BERT and both Amazon review and Twitter datasets to develop sentiment predictors and create classifiers.
Solid knowledge of the AI technology landscape is fundamental in choosing the right tools to use as an AI Architect. In this course, you'll explore the current and future AI technology landscape, comparing the advantages and disadvantages of common AI platforms and frameworks. You'll move on to examine AI libraries and pre-trained models, distinguishing their advantages and disadvantages. You'll then classify AI datasets and see a list of dataset topics. Finally, you'll learn how to make informed decisions about which AI technology is best suited to your projects.
AI architecture patterns, some of which have been known for many years, have been formally identified as such only in the last couple of years. In this course, you'll identify 12 reusable, standard AI architecture patterns, and 3 AI architecture anti-patterns frequently used to architect common AI applications. You'll learn to differentiate between architecture and design patterns and explore how they're used. Next, you'll examine the structure of an AI architecture pattern, and that of an anti-pattern and its different parts. You'll identify when specific patterns should or can be used, when they need to be avoided, and how to avoid using anti-patterns. You will also learn that even good patterns can become anti-patterns when applied to solve a problem they were not intended for.
Designing successful and competitive AI products involves thorough research on existing applications of AI in various markets. Most large-scale businesses use AI in their workflows to optimize business operations. AI Architects should be aware of all possible applications of AI so they can look at market trends and come up with the most appropriate, novel, and useful AI solutions for their industry. In this course, you'll explore examples of standard AI applications in various industries like Finance, Marketing, Sales, Manufacturing, Transportation, Cybersecurity, Pharmaceutical, and Telecommunications. You'll examine how AI is utilized by leading AI companies within each of these industries. You'll identify which AI technologies are common across all industries and which are industry-specific. Finally, you'll recognize why AI is imperative to the successful operation of many industries.
AI Practitioner is a cross-industry, advanced AI Developer role that is in growing demand in the modern world. Candidates for this role need to demonstrate proficiency in optimizing and tuning AI solutions to deliver the best possible performance in the real world. AI Practitioners require more advanced knowledge of algorithm implementations and should have a firm knowledge of the latest toolsets available. In this course, you'll be introduced to the AI Practitioner role in the industry. You'll examine an AI Practitioner's skillset and responsibilities in relation to AI Developers, Data Scientists, and ML and AI Engineers. Finally, you'll learn about the scope of work for AI Practitioners, including their career opportunities and pathways.
Optimization is required for most AI models to deliver reliable outcomes in real-world use cases. AI Practitioners use their knowledge of optimization techniques to choose and apply various solutions and improve the accuracy of existing models. In this course, you'll learn about advanced optimization techniques for AI development, including multiple optimization approaches such as Gradient Descent, Momentum, Adam, AdaGrad, and RMSprop. You'll examine how to determine the preferred optimization technique to use and the overall benefits of optimization in AI. Lastly, you'll have a chance to practice implementing optimization techniques from scratch and applying them to real AI models.
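Two of the optimizers named above can be sketched from scratch in a few lines. This is a minimal illustration on a toy one-dimensional quadratic, not a production implementation; all function names and constants below are invented for the example.

```python
# Toy objective: f(x) = (x - 3)^2, whose minimum is at x = 3.
def grad(x):
    """Gradient of f(x) = (x - 3)^2."""
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    """Plain gradient descent: step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum(x0, lr=0.1, beta=0.9, steps=300):
    """Momentum: accumulate a velocity term to damp oscillation."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + lr * grad(x)
        x -= v
    return x

print(round(gradient_descent(0.0), 4))
print(round(momentum(0.0), 4))
```

Both runs converge toward the minimum at 3; Adam, AdaGrad, and RMSprop extend the same loop with per-parameter adaptive step sizes.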
Any aspiring AI Developer has to clearly understand the responsibilities and expectations of entering the industry in this role. AI Developers can come from various backgrounds, but there are clear distinctions between this role and others like Software Engineer, ML Engineer, Data Scientist, or AI Engineer. Therefore, any AI Developer candidate has to possess the required knowledge and demonstrate proficiency in certain areas. In this course, you will learn about the AI Developer role in the industry and compare the responsibilities of AI Developers with those of other engineers involved in AI development. After completing the course, you will recognize the mindset required to become a successful AI Developer and become aware of multiple paths for career progression and future opportunities.
A working knowledge of multiple AI development frameworks is essential to AI developers. Depending on your particular focus, you may settle on a preferred framework. However, various companies in the industry tend to use different frameworks in their products, so knowing the basics of each framework is quite helpful to the aspiring AI Developer. In this course, you will explore popular AI frameworks and identify key features and use cases. You will identify the main differences between AI frameworks and work with Microsoft CNTK and Amazon SageMaker to implement a model flow.
Robots can utilize machine learning, deep learning, reinforcement learning, as well as probabilistic techniques to achieve intelligent behavior. This application of AI to robotic systems is found in the automotive, healthcare, logistics, and military industries. With increasing computing power and sophistication in small robots, more industry use cases are likely to emerge, making AI development for robotics a useful AI developer skill. In this course, you'll explore the main concepts, frameworks, and approaches needed to work with robotics and apply AI to robots. You'll examine how AI and robotics are used across multiple industries. You'll learn how to work with commonly used algorithms and strategies to develop simple AI systems that improve the performance of robots. Finally, you'll learn how to control a robot in a simulated environment using deep Q-networks.
Cognitive modeling can provide additional human qualities to AI systems. It is traditionally used in cognitive machines and expert systems. However, with extra computing power, it can be applied to more profound AI approaches like neural networks and reinforcement learning systems. Knowledge of cognitive modeling applications is essential to any AI developer aspiring to design AI architectures and develop large-scale applications. In this course, you'll examine the role of cognitive modeling in AI development and its possible applications in NLP, image recognition, and neural networks. You'll outline core cognitive modeling concepts and significant industry use cases. You'll list open source cognitive modeling frameworks and explore cognitive machines, expert systems, and reinforcement learning in cognitive modeling. Finally, you'll use cognitive models to solve real-world problems.
The world of technology continues to transform at a rapid pace, with intelligent technology incorporated at every stage of the business process. Intelligent information systems (IIS) reduce the need for routine human labor and allow companies to focus instead on hiring creative professionals. In this course, you'll explore the present and future roles of intelligent informational systems in AI development, recognizing the current demand for IIS specialists. You'll list several possible IIS applications and learn about the roles AI and ML play in creating them. Next, you'll identify significant components of IIS and the purpose of these components. You'll examine how you would go about creating a self-driving vehicle using IIS components. Finally, you'll work with Python libraries to build high-level components of a Markov decision process.
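The Markov decision process mentioned above can be sketched at a high level with only the standard library. The tiny two-state MDP below (states, actions, transition probabilities, rewards) is invented for illustration, and the solver is plain value iteration rather than any specific library's API.

```python
# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "cool": {"go":   [(0.9, "cool", 2.0), (0.1, "hot", 2.0)],
             "stay": [(1.0, "cool", 1.0)]},
    "hot":  {"go":   [(1.0, "hot", -10.0)],
             "stay": [(0.5, "cool", 0.0), (0.5, "hot", 0.0)]},
}

def value_iteration(gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality update until values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
print({s: round(v, 2) for s, v in V.items()})
```

The same components (state space, action space, transition model, reward function) underpin far larger systems such as the self-driving example in the course.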
Bidirectional Encoder Representations from Transformers (BERT), a natural language processing technique, takes the capabilities of language AI systems to great heights. Google's BERT reports state-of-the-art performance on several complex tasks in natural language understanding. In this course, you'll examine the fundamentals of traditional NLP and distinguish them from more advanced techniques, like BERT. You'll identify the terms attention and transformer and how they relate to NLP. You'll then examine a series of real-life applications of BERT, such as in SEO and masking. Next, you'll work with an NLP pipeline utilizing BERT in Python for various tasks, namely, text tokenization and encoding, model definition and training, and data augmentation and prediction. Finally, you'll recognize the benefits of using BERT and TensorFlow together.
Bidirectional Encoder Representations from Transformers (BERT) can be implemented in various ways, and it is up to AI practitioners to decide which one is the best for a particular product. It is also essential to recognize all of BERT's capabilities and its full potential in NLP. In this course, you'll outline the theoretical approaches to several BERT use cases before illustrating how to implement each of them. In full, you'll learn how to use BERT for search engine optimization, sentence prediction, sentence classification, token classification, and question answering, implementing a simple example for each use case discussed. Lastly, you'll examine some fundamental guidelines for using BERT for content optimization.
Tuning hyperparameters when developing AI solutions is essential, since the same model might behave quite differently with different parameter settings. AI Practitioners recognize multiple hyperparameter tuning approaches and can quickly determine the best set of hyperparameters for a particular model using their AI toolbox. In this course, you'll learn advanced hyperparameter tuning techniques for AI development. You'll examine how to recognize the hyperparameters in ML and DL models. You'll learn about multiple hyperparameter tuning approaches and when to use each approach. Finally, you'll have a chance to tune hyperparameters for a real AI project using multiple techniques.
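The simplest tuning approach, exhaustive grid search, can be sketched as follows. The scoring function here is a stand-in invented for illustration; in practice you would train and validate a real model for each combination of hyperparameters.

```python
from itertools import product

def validation_score(lr, batch_size):
    # Hypothetical validation metric that peaks at lr=0.01, batch_size=32.
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 32) / 100

grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}

def grid_search(grid, score_fn):
    """Evaluate every combination in the grid and keep the best one."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = grid_search(grid, validation_score)
print(best)
```

Random search and Bayesian optimization follow the same evaluate-and-compare loop but choose candidate points differently, which matters when each evaluation is an expensive training run.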
In recent times, natural language processing (NLP) has seen many advancements, most of which are in deep learning models. NLP as a problem is very complicated, and deep learning models can handle that scale and complexity with many different variations of neural network architecture. Deep learning also has a broad spectrum of frameworks that support NLP problem solving out of the box. Explore the basics of deep learning and different architectures for NLP-specific problems. Examine other use cases for deep learning NLP across industries. Learn about various tools and frameworks used, such as spaCy, TensorFlow, PyTorch, and OpenNMT. Investigate sentiment analysis and explore how to solve a problem using various deep learning steps and frameworks. Upon completing this course, you will be able to use the essential fundamentals of deep learning for NLP and outline its various industry use cases, frameworks, and fundamental sentiment analysis problems.
Natural language processing (NLP) is constantly evolving with cutting edge advancements in tools and approaches. Neural network architecture (NNA) supports this evolution by providing a method of processing language-based information to solve complex data-driven problems. Explore the basic NNAs relevant to NLP problems. Learn different challenges and use cases for single-layer perceptron, multi-layer perceptron, and RNNs. Analyze data and its distribution using pandas, graphs, and charts. Examine word vector representations using one-hot encodings, Word2vec, and GloVe and classify data using recurrent neural networks. After you have completed this course, you will be able to use a product classification dataset to implement neural networks for NLP problems.
In the journey to understand deep learning models for natural language processing (NLP), the subsequent iterations are memory-based networks, which are much more capable of handling extended context in languages. While basic neural networks are better than machine learning (ML) models, they still lack in more significant and large language data problems. In this course, you will learn about memory-based networks like gated recurrent unit (GRU) and long short-term memory (LSTM). Explore their architectures, variants, and where they work and fail for NLP. Then, consider their implementations using product classification data and compare different results to understand each architecture's effectiveness. Upon completing this course, you will have learned the basics of memory-based networks and their implementation in TensorFlow to understand the effect of memory and more extended context for NLP datasets.
The essential aspect of human intelligence is our learning processes, constantly augmented with the transfer of concepts and fundamentals. For example, as children, we learn the basic alphabet, grammar, and words, and through the transfer of these fundamentals, we can then read books and communicate with people. This is what transfer learning helps us achieve in deep learning as well. This course will help you learn the fundamentals of transfer learning for NLP, its various challenges, and use cases. Explore various transfer learning models such as ELMo and ULMFiT. Upon completing this course, you will understand the transfer learning methodology of solving NLP problems and be able to experiment with various models in TensorFlow.
Get down to solving real-world GitHub bug prediction problems in this case study course. Examine the process of data and library loading and perform basic exploratory data analysis (EDA) including word count, label, punctuation, and stop word analysis. Explore how to clean and preprocess data in order to use vectorization and embeddings and use counter vector and term frequency-inverse document frequency (TFIDF) vectorization methods with visualizations. Finally, assess different classifiers like logistic regression, random forest, or AdaBoost. Upon completing this course, you will understand how to solve industry-level problems using deep learning methodology in the TensorFlow ecosystem.
Enterprises across the world are creating large amounts of language data. There are many different kinds of data with language components, including reports, Word documents, operational data, emails, reviews, SOPs, and legal documents. This course will help you develop the skills to analyze this data and extract valuable and actionable insights. Learn about the various building blocks of natural language processing to help in understanding the different approaches used for solving NLP problems. Examine machine learning and deep learning approaches to handling NLP issues. Finally, explore common use cases that companies are approaching with NLP solutions. Upon completion of this course, you will have a strong foundation in the fundamentals of natural language processing, its building blocks, and the various approaches that can be used to architect solutions for enterprises in NLP domains.
Without fundamental building blocks and industry-accepted tools, it is difficult to achieve state-of-the-art analysis in NLP. In this course, you will learn about linguistic features such as word corpora, tokenization, stemming, lemmatization, and stop words and understand their value in natural language processing. Begin by exploring NLTK and spaCy, two of the most widely used NLP tools, and understand what they can help you achieve. Learn to recognize the difference between these tools and understand the pros and cons of each. Discover how to implement concepts like part of speech tagging, named entity recognition, dependency parsing, n-grams, spell correction, segmenting sentences, and finding similar sentences. Upon completion of this course, you will be able to build basic NLP applications on any raw language data and explore the NLP features that can help businesses take actionable steps with this data.
Sometimes, a business wants to find similar-sounding words, specific word occurrences, and sentiment from the raw text. Having learned to extract foundational linguistic features from the text, the next objective is to learn the heuristic approach to extract non-foundational features which are subjective. In this course, learn how to extract synonyms and hypernyms with WordNet, a widely used tool from the Natural Language Toolkit (NLTK). Next, explore the regex module in Python to perform NLTK chunking and to extract specific required patterns. Finally, you will solve a real-world use case by finding sentiments of movies using WordNet. After completing this course, you will be able to use a heuristic approach of natural language processing (NLP) and to illustrate the use of WordNet, NLTK chunking, regex, and SentiWordNet.
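The pattern-extraction step described above can be sketched with Python's built-in re module, which NLTK's regex chunker also builds on. The sample text and patterns below are invented for illustration.

```python
import re

text = "Contact us at support@example.com or sales@example.org by 2024-05-01."

# Pull out email-like tokens with a simple (not RFC-complete) pattern.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

# Pull out ISO-style dates (YYYY-MM-DD).
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)

print(emails)
print(dates)
```

NLTK chunking applies the same idea at the grammatical level, matching regex-style patterns over part-of-speech tags instead of raw characters.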
Machine learning (ML) is one of the most important toolsets available in the enterprise world. It gives predictive powers to data that can be leveraged to investigate future behaviors and patterns. It can help companies proactively improve their business and help optimize their revenue. Learn how to leverage machine learning to make predictions with language data. Explore the ML pipelines and common models used for Natural Language Processing (NLP). Examine a real-world use case of identifying sarcasm in text and discover the machine learning techniques suitable for NLP problems. Learn different vectorization and feature engineering methods for text data, perform exploratory data analysis on text, build and evaluate models that predict from text data, and tune those models to achieve better results. After completing this course, you'll be able to illustrate the use of machine learning to solve NLP problems and demonstrate the use of NLP feature engineering.
There are many tools available in the Natural Language Processing (NLP) tool landscape. A single tool can accomplish many tasks quickly. However, using multiple state-of-the-art tools together, you can solve many problems and extract multiple patterns from your data. In this course, you will discover many important tools available for NLP such as polyglot, Gensim, TextBlob, and CoreNLP. Explore their benefits and how they stand against each other for performing any NLP task. Learn to implement core linguistic features like POS tags, NER, and morphological analysis using the tools discussed earlier in the course. Discover defining features of each tool such as multiple language support, language detection, topic models, sentiment extractions, part of speech (POS) driven patterns, and transliterations. Upon completion of this course, you will feel confident with the Python tool ecosystem for NLP and will be able to perform state-of-the-art pattern extraction on any kind of text data.
Using natural language processing (NLP) tools, an organization can analyze their review data and predict the sentiments of their customers. In this course, we'll learn how to implement NLP tools to solve a business problem end-to-end. To begin, learn about loading, exploring, and preprocessing business data. Next, explore various linguistic features and feature engineering methods for data and practice building machine learning (ML) models for sentiment prediction. Finally, examine the automation options available for building and deploying models. After completing this course, you will be able to solve NLP problems for enterprises end-to-end by leveraging a variety of concepts and tools.
With recent advancements in cheap GPU compute power and natural language processing (NLP) research, companies and researchers have introduced many powerful models and architectures that have taken NLP to new heights. In this course, learn about Transformer models like BERT and GPT and the maturity of AI in NLP areas due to these models. Next, examine the fundamentals of Transformer models and their architectures. Finally, discover the importance of attention mechanisms in the Transformer architecture and how they help achieve state-of-the-art results in NLP tasks. Upon completing this course, you'll be able to understand different aspects of Transformer architectures like the self-attention layer and encoder-decoder models.
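The attention mechanism at the heart of the Transformer can be sketched from scratch: each output row is a weighted average of the value vectors, with weights given by softmax(QKᵀ/√d). The tiny 2×2 matrices below are invented for illustration; real models use large tensors and a deep learning framework.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention for lists-of-lists matrices."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)   # weights over the value rows, sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Each query attends most strongly to the key it aligns with, so the first output row sits closer to the first value row and the second closer to the second.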
In every domain of artificial intelligence, there is one algorithm that transforms the entire field into an industry-matured tool to be used across a broad spectrum of use cases. BERT is that algorithm for natural language processing (NLP). In this course, explore the fundamentals of BERT architecture, including variations, transfer learning capabilities, and best practices. Examine the Hugging Face library and its role in sentiment analysis problems. Practice model setup, pre-processing, sentiment classification training, and evaluating models using BERT. Finally, take a critical look to recognize the challenges of using BERT. Upon completing this course, you'll be able to demonstrate how to solve simple sentiment analysis problems.
Generative Pre-trained Transformer (GPT) models go beyond classifying and predicting text behavior to helping actually generate text. Imagine an algorithm that can produce articles, songs, books, or code - anything that humans can write. That is what GPT can help you achieve. In this course, discover the key concepts of language models for text generation and the primary features of GPT models. Next, focus on GPT-3 architecture. Then, explore few-shot learning and industry use cases and challenges for GPT. Finally, practice decoding methods with greedy search, beam search, and basic and advanced sampling methods. Upon completing this course, you will understand the fundamentals of the GPT model and how it enables text generation.
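The decoding strategies mentioned above can be illustrated with a toy "language model": a hand-written next-token probability table, invented for this sketch. With a real GPT model the loop has the same shape, but the distribution comes from the network.

```python
import random

# Hypothetical next-token distributions: next_token[context] -> {token: prob}
next_token = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "a":   {"dog": 0.7, "end": 0.3},
    "cat": {"end": 1.0},
    "dog": {"end": 1.0},
}

def greedy_decode(start="<s>", max_len=5):
    """Greedy search: always pick the single most likely next token."""
    tokens, current = [], start
    for _ in range(max_len):
        dist = next_token[current]
        current = max(dist, key=dist.get)
        if current == "end":
            break
        tokens.append(current)
    return tokens

def sample_decode(start="<s>", max_len=5, seed=0):
    """Basic sampling: draw the next token from the distribution."""
    rng = random.Random(seed)
    tokens, current = [], start
    for _ in range(max_len):
        dist = next_token[current]
        current = rng.choices(list(dist), weights=list(dist.values()))[0]
        if current == "end":
            break
        tokens.append(current)
    return tokens

print(greedy_decode())
```

Greedy search deterministically yields the highest-probability path, while sampling trades determinism for diversity; beam search sits between the two by tracking several candidate paths at once.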
Translating from one language to another is a common task in Natural Language Processing (NLP). The transformer model works by passing multiple words through a neural network simultaneously and is one of the newest models propelling a surge of progression, sometimes referred to as transformer AI. In this course, you will solve real-world machine translation problems, translating from English to French. Explore machine translation problem formulation, notebook setup, and data pre-processing. Then, learn to tokenize and vectorize data into a sequence of integers, where each integer represents the index of a word in a vocabulary. Discover transformer encoder-decoder and see how to produce input and output sequences. Finally, define the attention layer and assemble, train, and evaluate the translation model end to end. Upon completing this course, you will be able to solve industry-level problems using deep learning methodology in the TensorFlow ecosystem.
Keeping up with current events can be challenging, especially when you live or work in a country where you do not speak the language. Learning a new language can be difficult and time-consuming when you have a busy schedule. In this course, you will learn how to scrape news articles written in Arabic from websites, translate them into English, and then summarize them. First, focus on the overall architecture of your summarization application. Next, discover the Transformers library and explore its role in translation and summarization tasks. Then, create a user interface for the application using Gradio. Upon completion of this course, you'll be able to use an application to scrape data written in Arabic from any URL, translate it into English, and summarize it.
Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer often is a span in the document. Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. This course focuses on the architecture of the Q&A pipeline. First, install the Transformers library and import a text comprehension model to create your Q&A pipeline. Then, use Gradio to develop a user interface for answering questions about a given article. Upon completion, you'll be able to develop an application that can answer questions asked by a user about a given article.
An AI chatbot is a program within a website or app that simulates human conversations using natural language processing (NLP). Chatbots are programmed to address users' needs independently of a human operator. Common chatbot functions include answering frequently asked questions and helping users navigate a website or app. In this course, explore the AI chatbot application flow and learn about data loading and text preprocessing. Next, discover how to transform the data into numeric values and perform one-hot data encoding. Finally, practice creating and training models, loading a trained model, defining a response function, and setting test questions. Upon completion, you'll be able to develop a simple chatbot using transformers that will automatically reply to user questions.
Natural Language Processing (NLP) is a branch of artificial intelligence (AI) that focuses on programmatically working with text or speech - the term 'natural' here emphasizes that the program must work with and be aware of everyday language, grammar, and semantics, rather than structured text data such as might be found in a database or in string processing. In this course, you will learn about the two main branches of NLP, natural language understanding and natural language generation. You will also explore the Natural Language Toolkit (NLTK) and spaCy, two popular Python libraries for natural language processing and analysis. Next, you will delve into common preprocessing steps for natural language data. This includes cleaning and tokenizing data, removing stopwords from your text, performing stemming and lemmatization, part-of-speech (POS) tagging, and named entity recognition (NER). Finally, you will get set up with your Python environment and libraries for NLP and explore some text corpora that NLTK offers for working with text.
Tokenization, stemming, and lemmatization are essential natural language processing (NLP) tasks. Tokenization involves breaking text into units (tokens), such as words or phrases, facilitating analysis. Stemming reduces words to a common base form by removing prefixes or suffixes, promoting simplicity in representation. In contrast, lemmatization considers grammatical aspects to transform words into their base or dictionary form. You will begin this course by tokenizing text using the Natural Language Toolkit (NLTK) and spaCy, which involves splitting a large block of text into smaller units called tokens, usually words or sentences. You will then remove stopwords, common words such as "a" and "the" that add little meaning to text. Next, you'll explore the WordNet lexical database, which contains information about the semantic relationship between words. You'll use Synsets to view similar words and explore hypernyms, hyponyms, meronyms, and holonyms. Finally, you'll compare stemming and lemmatization using NLTK and spaCy. You will explore both processes with NLTK and perform lemmatization using spaCy.
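Tokenization and stemming can be sketched with only the standard library. NLTK's and spaCy's implementations are far more complete (the Porter stemmer alone has many rules); the crude suffix-stripper below, invented for this sketch, only illustrates the idea of reducing words to a shared base form.

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Suffixes checked longest-first; the length guard avoids over-stripping.
SUFFIXES = ("ing", "ed", "es", "s")

def crude_stem(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

tokens = tokenize("The runners stopped running.")
print([crude_stem(t) for t in tokens])
```

Note how "runners" and "running" reduce toward a shared stem but the results ("runner", "runn") are not dictionary words; that is exactly the gap lemmatization closes by using grammatical knowledge to return "runner" and "run".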
Sentiment Analysis is a common use-case within the discipline of Natural Language Processing (NLP). Here, a model attempts to understand the contents of a text document well enough to capture the feelings, or sentiments, conveyed by the text. Sentiment Analysis is widely used by political forecasters, marketing professionals, and hedge fund managers looking to spot trends in voter, user, or market behavior. You will start this course by loading and preprocessing your data. You will read in data on movie reviews from IMDB and explore the dataset. You will then visualize the data using histograms and box plots to understand review length distribution. After that, you will perform basic data cleaning on text, utilizing regular expressions to remove elements like URLs and digits. Finally, you will conduct sentiment analysis using the Valence Aware Dictionary and Sentiment Reasoner (VADER) and TextBlob.
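The core idea behind lexicon-based tools like VADER and TextBlob can be sketched in a few lines: score a sentence by summing word-level sentiment values, with a simple negation rule. The tiny lexicon and scoring rule below are invented for illustration and are nowhere near VADER's actual rule set.

```python
import re

# Hypothetical miniature sentiment lexicon: word -> valence score.
LEXICON = {"great": 2.0, "good": 1.0, "boring": -1.5, "terrible": -2.0}
NEGATORS = {"not", "never"}

def sentiment(text):
    """Sum lexicon scores over tokens, flipping polarity after a negator."""
    tokens = re.findall(r"[a-z]+", text.lower())
    score, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            continue
        if tok in LEXICON:
            score += -LEXICON[tok] if negate else LEXICON[tok]
            negate = False
    return score

print(sentiment("The movie was great."))       # positive score
print(sentiment("Not good, quite boring."))    # negative score
```

VADER layers many more heuristics on top of this idea (intensifiers, punctuation emphasis, capitalization), which is why it works well on informal review text.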
When performing sentiment classification using machine learning, it is necessary to encode text into a numeric format because machine learning models can only parse numbers, not text. There are a number of encoding techniques for text data, such as one-hot encoding, count vector encoding, and word embeddings. In this course, you will learn how to use one-hot encoding, a simple technique that builds a vocabulary from all words in your text corpus. Next, you will move on to count vector encoding, which tracks word frequency in each document and explore term frequency-inverse document frequency (TF-IDF) encoding, which also creates vocabularies and document vectors but uses a TF-IDF score to represent words. Finally, you will perform sentiment analysis using encoded text. You will use a count vector to encode your input data and then set up a Gaussian Naïve-Bayes model. You will train the model and evaluate its metrics. You will also explore how to improve the model performance by stemming words, removing stopwords, and using N-grams.
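Count vector and TF-IDF encoding can be sketched from scratch with the standard library (scikit-learn's CountVectorizer and TfidfVectorizer are the usual production tools; the tiny corpus below is invented for illustration).

```python
import math
from collections import Counter

docs = [["the", "cat", "sat"],
        ["the", "dog", "sat"],
        ["the", "cat", "ran"]]

# Build the vocabulary from all words in the corpus.
vocab = sorted({w for d in docs for w in d})

def count_vector(doc):
    """One slot per vocabulary word, holding its count in this document."""
    counts = Counter(doc)
    return [counts[w] for w in vocab]

def tfidf_vector(doc):
    """Term frequency scaled by inverse document frequency (unsmoothed)."""
    counts = Counter(doc)
    n = len(docs)
    vec = []
    for w in vocab:
        tf = counts[w] / len(doc)
        df = sum(1 for d in docs if w in d)
        vec.append(tf * math.log(n / df))
    return vec

print(vocab)
print(count_vector(docs[0]))
```

Notice that "the", which appears in every document, gets a TF-IDF weight of zero: words carrying no discriminative information are automatically downweighted, which is the whole point of the scheme.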
Before training any text-based machine learning model, it is necessary to encode that text into a machine-readable numeric form. Embeddings are the preferred way to encode text, as they capture data about the meaning of text, and are performant even with large vocabularies. You will start this course by working with Word2Vec embeddings, which represent words and terms in feature vector space, capturing the meaning and context of a word in a sentence. You will generate Word2Vec embeddings on your data corpus, set up a Gaussian Naïve-Bayes classification model, and train it on Word2Vec embeddings. Next, you will move on to GloVe embeddings. You will use the pre-trained GloVe word vector embeddings and explore how to view similar words and identify the odd one out in a set. Finally, you will perform classification using many different models, including Naive-Bayes and Random Forest models.
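The similarity queries described above (finding similar words, spotting the odd one out) reduce to cosine similarity between embedding vectors. Real GloVe and Word2Vec vectors have tens to hundreds of dimensions; the 3-dimensional vectors below are invented for this sketch.

```python
import math

# Hypothetical miniature "embedding" table.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.82, 0.15],
    "man":   [0.7, 0.3, 0.2],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def odd_one_out(words):
    """The word least similar, on average, to the others."""
    def total_sim(w):
        return sum(cosine(vectors[w], vectors[o]) for o in words if o != w)
    return min(words, key=total_sim)

print(odd_one_out(["king", "queen", "man", "apple"]))
```

Gensim's `most_similar` and `doesnt_match` helpers implement exactly this logic over pre-trained embedding matrices.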
Deep learning has revolutionized natural language processing (NLP), offering powerful techniques for understanding, generating, and processing human language. Through deep neural networks (DNNs), NLP models can now comprehend complex linguistic structures, extract meaningful information from vast amounts of text data, and even generate human-like responses. Begin this course by learning how to utilize Keras and TensorFlow to construct and train neural networks. Next, you will build a DNN to classify messages as spam or not. You will find out how to encode data using count vector and term frequency-inverse document frequency (TF-IDF) encodings via the Keras TextVectorization layer. To enhance the training process, you will employ Keras callbacks to gain insights into metrics tracking, TensorBoard integration, and model checkpointing. Finally, you will apply sentiment analysis using word embeddings, explore the use of pre-trained GloVe word vector embeddings, and incorporate convolutional layers to grasp local text context.
Recurrent neural networks (RNNs) are a class of neural networks designed to efficiently process sequential data. Unlike traditional feedforward neural networks, RNNs possess internal memory, which enables them to learn patterns and dependencies in sequential data, making them well-suited for a wide range of applications, including natural language processing. In this course, you will explore the mechanics of RNNs and their capacity for processing sequential data. Next, you will perform sentiment analysis with RNNs, generating and visualizing word embeddings through the TensorBoard embedding projector plug-in. You will construct an RNN, employing these word embeddings for sentiment analysis and evaluating the RNN's efficacy on a set of test data. Then, you will investigate advanced RNN applications, focusing on long short-term memory (LSTM) and bidirectional LSTM models. Finally, you will discover how LSTM models enhance the processing of long text sequences and you will build and train a bidirectional LSTM model to process data in both directions and capture a more comprehensive understanding of the text.
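The internal-memory idea behind RNNs can be shown with a single hand-weighted unit. The weights below are illustrative assumptions; a real RNN learns them by backpropagation through time (for example with Keras's SimpleRNN or LSTM layers):

```python
import math

# Hand-picked weights for one recurrent unit (purely illustrative).
w_x, w_h, b = 0.5, 0.8, 0.0

def rnn_forward(sequence):
    h = 0.0          # initial hidden state: the network's "memory"
    states = []
    for x in sequence:
        # Each new state mixes the current input with the previous state.
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

# Even when later inputs are zero, the first input echoes through the states.
states = rnn_forward([1.0, 0.0, 0.0])
print(states)
```

The decaying but nonzero later states are the "memory" that lets RNNs model dependencies across a sequence; LSTMs add gating so that memory survives over much longer spans.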
Transfer learning is a powerful machine learning technique that involves taking a pre-trained model on a large dataset and fine-tuning it for a related but different task, significantly reducing the need for extensive datasets and computational resources. Transformers are groundbreaking neural network architectures that use attention mechanisms to efficiently process sequential data, enabling state-of-the-art performance in a wide range of natural language processing tasks. In this course, you will discover transfer learning, the TensorFlow Hub, and attention-based models. Then you will learn how to perform subword tokenization with WordPiece. Next, you will examine transformer models, specifically the FNet model, and you will apply the FNet model for sentiment analysis. Finally, you will explore advanced text processing techniques using the Universal Sentence Encoder (USE) for semantic similarity analysis and the Bidirectional Encoder Representations from Transformers (BERT) model for sentence similarity prediction.
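Subword tokenization of the WordPiece kind works by greedy longest-match against a learned vocabulary. The tiny vocabulary below is a hand-made assumption for illustration; a real WordPiece vocabulary, such as BERT's, is learned from a large corpus:

```python
# Hand-made vocabulary; "##" marks a subword that continues a word.
vocab = {"un", "##aff", "##able", "aff", "##ord", "afford", "##ordable"}

def wordpiece(word):
    """Greedy longest-match subword tokenization, WordPiece style."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub      # continuation pieces carry the ## prefix
            if sub in vocab:
                piece = sub           # longest match found
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]          # no subword matches at this position
        tokens.append(piece)
        start = end
    return tokens

print(wordpiece("unaffordable"))
```

Splitting rare words into known subwords is what lets transformer models keep a fixed-size vocabulary without resorting to unknown-token fallbacks for most input.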
Attention mechanisms in natural language processing (NLP) allow models to dynamically focus on different parts of the input data, enhancing their ability to understand context and relationships within the text. This significantly improves the performance of tasks such as translation, sentiment analysis, and question-answering by enabling models to process and interpret complex language structures more effectively. Begin this course by setting up language translation models and exploring the foundational concepts of translation models, including the encoder-decoder structure. Then you will investigate the basic translation process by building a transformer model based on recurrent neural networks without attention. Next, you will incorporate an attention layer into the decoder of your language translation model. You will discover how transformers process input sequences in parallel, improving efficiency and training speed through the use of positional and word embeddings. Finally, you will learn about queries, keys, and values within the multi-head attention layer, culminating in training a transformer model for language translation.
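The queries, keys, and values described above combine in scaled dot-product attention, which can be sketched on tiny hand-made matrices (an assumption; real models compute Q, K, and V with learned projections and run many heads in parallel):

```python
import math

def softmax(xs):
    m = max(xs)                        # subtract max for numeric stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                       # one query
K = [[1.0, 0.0], [0.0, 1.0]]           # two keys
V = [[10.0, 0.0], [0.0, 10.0]]         # two values
print(attention(Q, K, V))
```

The query aligned with the first key draws most of its output from the first value vector, which is exactly the "dynamic focus" attention provides.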
Hugging Face, a leading company in the field of artificial intelligence (AI), offers a comprehensive platform that enables developers and researchers to build, train, and deploy state-of-the-art machine learning (ML) models with a strong emphasis on open collaboration and community-driven development. In this course, you will discover the extensive libraries and tools Hugging Face offers, including the Transformers library, which provides access to a vast array of pre-trained models and datasets. Next, you will set up your working environment in Google Colab. You will also explore the critical components of the text preprocessing pipeline: normalizers and pre-tokenizers. Finally, you will master various tokenization techniques, including byte pair encoding (BPE), Wordpiece, and Unigram tokenization, which are essential for working with transformer models. Through hands-on exercises, you will build and train BPE and WordPiece tokenizers, configuring normalizers and pre-tokenizers to fine-tune these tokenization methods.
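The BPE training loop the course covers can be sketched on a toy word list: repeatedly find the most frequent adjacent symbol pair and merge it. The word list is an assumption for illustration; the Hugging Face `tokenizers` library implements the production version over a full corpus:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn byte pair encoding merge rules from a list of words."""
    # Each word starts as a tuple of single characters.
    corpus = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in corpus.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere it occurs.
        merged = Counter()
        for symbols, freq in corpus.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] += freq
        corpus = merged
    return merges

print(learn_bpe(["low", "low", "lower", "lowest"], 2))
```

After two merges the shared stem "low" becomes a single token, which is how BPE compresses frequent character sequences into subwords.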
Sentiment analysis, named entity recognition (NER), question answering, and text generation are pivotal tasks in the realm of Natural Language Processing (NLP) that enable machines to interpret and understand human language in a nuanced manner. In this course, you will be introduced to the concept of Hugging Face pipelines, a streamlined approach to applying pre-trained models to a variety of NLP tasks. Through hands-on exploration, you will learn how to classify text using zero-shot classification techniques, perform sentiment analysis with DistilBERT, and apply models to specialized tasks, utilizing the power of NLP to adapt to niche domains. Next, you will discover how to employ models to accurately answer questions based on provided contexts and understand the mechanics behind model-based answers, including their limitations and capabilities. Finally, you will discover various text generation strategies such as greedy search and beam search, learning how to balance predictability with creativity in generated text. You will also explore text generation through sampling techniques and the application of mask filling with BERT models.
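Greedy search, one of the generation strategies mentioned above, can be illustrated with a hand-made next-token probability table (an assumption standing in for a real model's predictions, e.g. from a Hugging Face text-generation pipeline):

```python
# Toy next-token probabilities; a language model would predict these.
next_token_probs = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "a":   {"dog": 0.7, "end": 0.3},
    "cat": {"sat": 0.8, "end": 0.2},
    "dog": {"end": 1.0},
    "sat": {"end": 1.0},
}

def greedy_generate(start="<s>", max_len=10):
    """Greedy decoding: always pick the single likeliest next token."""
    tokens, current = [], start
    for _ in range(max_len):
        probs = next_token_probs[current]
        current = max(probs, key=probs.get)
        if current == "end":
            break
        tokens.append(current)
    return tokens

print(greedy_generate())
```

Greedy search is fast and predictable but can miss globally better sequences; beam search keeps several candidate continuations, and sampling trades predictability for creativity.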
Language translation, text summarization, and semantic textual similarity are advanced problems within the field of Natural Language Processing (NLP) that are increasingly solvable due to advances in the use of large language models (LLMs) and pre-trained models. In this course, you will learn to translate text between languages with state-of-the-art pre-trained models such as T5, M2M 100, and Opus. You will also gain insights into evaluating translation accuracy with BLEU scores and explore multilingual translation techniques. Next, you will explore the process of summarizing text, utilizing the powerful BART and T5 models for abstractive summarization. You will see how these models extract and generate key information from large texts and learn to evaluate the quality of summaries using ROUGE scores. Finally, you will master the computation of semantic textual similarity using sentence transformers and apply clustering techniques to group texts based on their semantic content. You will also learn to compute embeddings and measure similarity directly.
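As a rough sketch of the n-gram-overlap scoring behind BLEU, here is clipped unigram precision in plain Python. Full BLEU combines several n-gram orders with a brevity penalty, and libraries such as sacrebleu or NLTK implement the complete metric:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """BLEU-1-style clipped unigram precision between two sentences."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    clipped = 0
    for word, count in Counter(cand).items():
        # Clip each word's credit to how often it appears in the reference.
        clipped += min(count, ref_counts[word])
    return clipped / len(cand)

score = unigram_precision("the cat sat on the mat", "the cat is on the mat")
print(score)
```

Clipping prevents a candidate from gaming the score by repeating a reference word many times; higher-order n-grams then reward correct word order as well.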
Fine-tuning in the context of text-based models refers to the process of taking a pre-trained model and adapting it to a specific task or dataset with additional training. This technique leverages the general language understanding capabilities acquired by the model during its initial extensive training on a large corpus of text and refines its abilities to perform well on a more narrowly defined task or domain-specific data. In this course, you will learn how to fine-tune a model for sentiment analysis, starting with the preparation of datasets optimized for this purpose. You will be guided through setting up your computing environment and preparing a BERT classifier for sentiment analysis. Next, you will discover how to structure text data and align named entity recognition (NER) tags with subword tokenization. You will build on this knowledge to fine-tune a BERT model specifically for NER, training it to accurately identify and classify entities within text. Finally, you will explore the domain of question answering, learning to handle the challenges of long contexts to extract precise answers from extensive texts. You will prepare QnA data for fine-tuning and utilize a DistilBERT model to create an effective QnA system.
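The NER tag alignment described above can be sketched with a hand-made word-to-subword split (an assumption here; a real tokenizer produces the split, and Hugging Face tokenizers expose `word_ids()` to recover the mapping for you):

```python
# Word-level inputs and their hand-made subword splits (illustrative).
words = ["Ada", "Lovelace", "programmed"]
word_tags = ["B-PER", "I-PER", "O"]
subwords = [["Ada"], ["Love", "##lace"], ["programm", "##ed"]]

def align_tags(subwords, word_tags):
    """Spread each word-level NER tag across that word's subword tokens."""
    aligned_tokens, aligned_tags = [], []
    for pieces, tag in zip(subwords, word_tags):
        for i, piece in enumerate(pieces):
            aligned_tokens.append(piece)
            if i == 0:
                aligned_tags.append(tag)
            else:
                # Continuation subwords repeat the tag as I-; during training
                # they are often masked (label -100) so the loss ignores them.
                aligned_tags.append(tag.replace("B-", "I-"))
    return aligned_tokens, aligned_tags

tokens, tags = align_tags(subwords, word_tags)
print(tokens)
print(tags)
```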
Causal language modeling (CLM), text translation, and summarization demonstrate the versatility and depth of language understanding and generation by artificial intelligence (AI). Fine-tuning models help improve the performance of models for these specific tasks. In this course, you will explore CLM with DistilGPT-2 and masked language modeling (MLM) with DistilRoBERTa, learning how to prepare, process, and fine-tune models for generating and predicting text. Next, you will dive into the nuances of language translation, focusing on translating English to Spanish. You will prepare and evaluate training data and learn to use BLEU scores for assessing translation quality. You will fine-tune a pre-trained T5-small model, enhancing its accuracy and broadening its linguistic capabilities. Finally, you will explore the intricacies of text summarization. Starting with data loading and visualization, you will establish a benchmark using the pre-trained T5-small model. You will then fine-tune this model for summarization tasks, learning to condense extensive texts into succinct summaries.
Explore the concept of machine learning in TensorFlow, including TensorFlow installation and configuration, the use of the TensorFlow computation graph, and working with building blocks.
Explore how to model language and text with word embeddings and how to use those embeddings in recurrent neural networks (RNNs). Leveraging TensorFlow to build custom RNN models is also covered.
Discover how to construct neural networks for sentiment analysis. How to generate word embeddings on training data and use pre-trained word vectors for sentiment analysis is also covered.
Discover how to differentiate between supervised and unsupervised machine learning techniques. The construction of clustering models and their application to classification problems is also covered.
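Clustering, the unsupervised technique named above, can be illustrated with a bare-bones one-dimensional k-means; the data and starting centers are assumptions for illustration, and scikit-learn's KMeans handles the general multi-dimensional case:

```python
def kmeans_1d(points, centers, iterations=10):
    """Minimal k-means on 1-D points: alternate assignment and update."""
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(kmeans_1d(points, centers=[0.0, 5.0]))
```

No labels are involved: the two groups emerge purely from the structure of the data, which is the defining property of unsupervised learning.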
Explore how to perform dimensionality reduction using powerful unsupervised learning techniques such as Principal Components Analysis and autoencoding.
Data science methods are used across several industries to deliver value to businesses. Machine learning (ML) is a data science method that uses prediction algorithms that find patterns in massive amounts of data, allowing machines to predict future results and make decisions with minimal human intervention. In this course, you will examine what machine learning is, how it is categorized, and some everyday use cases for supervised and unsupervised machine learning. Then you will discover feature engineering and its impact on model performance. Next, you will focus on common types of machine learning tasks, such as clustering, classification, and simple and multiple linear regression. Finally, you will explore various machine learning challenges and how to overcome them. Upon completion, you will be able to define machine learning and methods for using it.
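Simple linear regression, one of the tasks listed, reduces to ordinary least squares, sketched here on toy data (the dataset is an assumption for illustration; scikit-learn's LinearRegression is the usual production route):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # data generated exactly by y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)
```

Because the toy data are noiseless, the fit recovers the generating line exactly; on real data the same formulas give the line minimizing squared error.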
In data science, many statistical and analytical techniques can be used to pull meaningful insights from data. Some advanced data science methods rely on other foundational data science methods, such as text mining. In this course, you will learn about advanced data science methods and their use cases. Begin this course with an exploration of advanced machine learning (ML) methods, such as text mining and graph analysis, and their uses. Next, you will discover the anomaly and novelty detection processes. You will examine association rule mining and neural networks, including their use cases across industries. Then you will focus on common challenges during artificial intelligence (AI) and ML model training, the trade-offs between model complexity and interpretability, and the role of natural language processing (NLP) in text analysis. Finally, you will investigate the potential of computer vision techniques and applications of reinforcement learning.
Artificial intelligence (AI) provides cutting-edge tools to help organizations predict behaviors, identify key patterns, and drive decision-making in a world that is increasingly made up of data. In this course, you will explore the full definition of AI, how it works, and when it can be used, focusing on informative use cases. You will identify the types of data, as well as the tools and technologies AI uses to operate. Next, you will discover a framework for using the AI life cycle and data science process. Then you will examine how data science, machine learning (ML), and AI are relevant in the modern business landscape. Finally, you will investigate the key differences between AI and traditional programming approaches, the benefits and challenges associated with integrating AI and ML into business approaches, and the potential impact of AI on job roles and workforce dynamics. Upon completion of this course, you'll be familiar with common AI concepts and use cases and be able to outline strategies for each part of the AI life cycle.
Understanding model evaluation is crucial for making reliable, accurate, and ethical decisions when using artificial intelligence (AI) and machine learning (ML) in practical scenarios. In this course, you'll explore AI/ML model evaluation and interpretability in depth, gaining a strong grasp of these essential components to make AI/ML work effectively for your organization. The course focuses on the key concepts and metrics needed to assess how well models perform. Upon completing it, you will be well-prepared to make informed decisions and maximize the potential of AI/ML within your organization.
Although OpenAI models can create text, like short stories or ads, they do have some limits. With planning, however, these limitations can be worked around. OpenAI has a rich application programming interface (API) for working with images, including creating images from a description and manipulating just a part of an image. OpenAI can also convert text to speech and even translate spoken language. In this course, you will explore advanced text creation and manipulation concepts in OpenAI. You will work with image generation and modification using an image mask. You will discover object recognition concepts using OpenAI Contrastive Language-Image Pre-Training (CLIP). Finally, you will use the OpenAI API to convert speech to and from another language.
OpenAI's potential to increase productivity really shows when it comes to generating code, completing partially provided code, or fixing already written code. Other handy features are using embeddings to measure or determine relatedness and fine-tuning a model to solve domain-specific problems. In this course, you will explore OpenAI's code generation and completion application programming interface (API). You will discover how to generate code from comments or complete partially provided code and how to use OpenAI to find libraries to solve problems or rewrite code. Next, you will focus on sentiment analysis and the tone of text and how to use embeddings for searching, clustering, recommending, and classifying. Then, you will examine OpenAI fine-tuning. Finally, you will create and fine-tune a customer-facing chatbot to handle specific scenarios.
In our increasingly digital world, the convergence of ethical hacking and generative AI technologies has become a crucial frontier in cybersecurity. As technology advances, so do the methods employed by hackers, making it essential for ethical hackers, or white hat hackers, to stay one step ahead. This course introduces you to the exciting world of ethical hacking and generative AI technologies. You will gain insights into the evolving cybersecurity landscape, learn about the techniques used by both malicious and ethical hackers, and explore how generative AI can be leveraged for both offensive and defensive purposes. By the end of this course, you will be equipped with a solid foundation in ethical hacking and generative AI, enabling you to understand the complex dynamics between security and innovation in the digital age.
In today's rapidly evolving digital landscape, the convergence of ethical hacking and generative AI has emerged as a powerful force in countering cybersecurity threats. As malicious hackers adapt and exploit advanced technologies, the need for innovative defenses becomes paramount. This course explores the cutting-edge domain of generative artificial intelligence (AI) and reconnaissance techniques. In this course, you will explore reconnaissance techniques leveraging AI's potential and apply passive and active reconnaissance techniques. Next, you'll explore the challenges and solutions associated with reconnaissance and generative AI and consider the methods used to protect against it. Finally, you'll explore how ethical hackers can use the information obtained during reconnaissance.
In today's dynamic and evolving digital terrain, the need for robust cybersecurity measures has never been more critical. As technology continues to advance, hackers are constantly adapting their methods, making it imperative for ethical hackers to stay one step ahead. This course explores the convergence of ethical hacking and generative AI, specifically focusing on the scanning and enumeration phase of the ethical hacking process. You will begin by examining the network scanning phase of the ethical hacking process and how generative AI and prompt engineering enhance this process. With a solid foundation, you will then discover ways that prompt engineering can optimize scanning, the risks and benefits of using AI-assisted scanning, and how generative AI can boost various scanning techniques. Finally, you will take a look at realistic examples involving scanning and enumeration techniques, focusing on the role that generative AI plays in these scenarios.
In a world increasingly reliant on interconnected systems, safeguarding them from malicious actors is critical. Ethical hackers are key to maintaining a robust cybersecurity posture, and must commit to continuous improvement to stay ahead of malicious actors. This course provides an immersive journey into the techniques and methodologies employed by ethical hackers to protect and secure systems while focusing on generative artificial intelligence (AI) as a transformative resource in the hacking toolchain. You will start by exploring the fundamental concepts and methodologies that underpin system hacking and how generative AI can enhance hacking techniques and countermeasures. With this foundation in place, you'll investigate how to enhance attacks with generative AI, protect against system hacking by detecting vulnerabilities, use AI to drive penetration tests, and recognize the impact generative AI has on system hacking. Finally, you'll explore real-world system hacking scenarios and reflect on the transformative potential of generative AI.
In the dynamic digital landscape, cybersecurity is more important than ever. Malicious actors are well aware that people are often the weakest link in an organization's cybersecurity posture. Ethical hackers devote significant effort to penetration testing and strategizing to combat malware and social engineering attacks executed by threat actors. This course journeys deep into the techniques and methodologies ethical hackers employ to uncover malware and educate users in the ongoing battle against social engineering, and it shows how generative artificial intelligence (AI) can assist in this pursuit. You'll start by exploring the types of malware and social engineering principles and techniques. With this foundation, you'll explore how generative AI can be used to help detect malware and social engineering attacks. You'll discover potential damage that can result from such attacks along with countermeasures that can be taken and then you'll explore proactive efforts to train users about social engineering attacks. You'll investigate how to use generative AI and prompt engineering to counter malware and social engineering, and you'll proceed to explore how malware and social engineering evolve over time. Lastly, you'll examine some real-world case studies that feature malware and social engineering attacks.
In the digital era, boundaries are constantly being redefined. Our networks, once clear and tangible, have been blurred by the advent of cloud technology, Internet of Things (IoT), and a myriad of interconnected systems. As these boundaries shift, the methods employed to breach them also evolve. To fortify our defenses and safeguard ever-expanding perimeters, it becomes critical for organizations to harness cutting-edge advancements, such as generative artificial intelligence (AI), to envision, predict, and counteract these emerging threats. Begin by exploring the fundamentals of network and perimeter hacking, typical network attack techniques, and how these attacks can be enhanced using generative AI. You will investigate AI-assisted countermeasures, examine how AI impacts both attack and defensive posture, and assess the effectiveness of network security hardware and software. You will use generative AI for enhanced network vulnerability discovery of wired and wireless networks and conduct a simulated AI-driven attack on a wired network. Finally, you will discover why network and perimeter security is so important and assess real-world case studies involving network and perimeter hacking.
This course is a comprehensive journey into the tools and techniques available to ethical hackers to discover vulnerabilities and weaknesses in web applications and databases. Particular attention will be paid to how generative artificial intelligence (AI) can help in the discovery and mitigation of attacks on these technologies, including those enhanced by integrating AI in the attack. You'll start by exploring common web application and database vulnerabilities and how they are exploited and mitigated. You'll also examine common database hacking techniques. Next, you will explore how generative AI can be applied to enhance the security of web applications and databases and you will conduct a generative AI-assisted SQL injection attack in a simulated environment. Then, you'll explore strategies to secure web applications and databases and use generative AI and prompt engineering to detect and address vulnerabilities in a web application. You'll explore the implications of AI-enhanced web and database hacking and consider the importance of regular updates and patches. Lastly, you'll review and assess real-world case studies of web application and database hacking and focus on the potential impact of generative AI on such attacks.
As the digital revolution marches forward, cloud computing and the Internet of Things (IoT) have taken center stage in transforming our world into a seamlessly interconnected ecosystem. This course dives deep into the interplay between cloud computing, IoT, and generative artificial intelligence (AI), shedding light on their collective vulnerabilities and potential and demonstrating how generative AI can be both a formidable adversary and a powerful ally. You will begin by exploring common cloud service and IoT device vulnerabilities and how generative AI can help in either exploiting their respective vulnerabilities or protecting against such weaknesses. Then, you will conduct a simulated hack against a cloud-based service with the assistance of generative AI. Next, you'll identify security best practices for cloud and IoT devices and evaluate security measures for cloud services and IoT devices. You'll employ generative AI to identify and mitigate IoT device vulnerabilities and you'll discover trends and advancements in cloud and IoT security and the role of generative AI. Finally, you will explore real-world case studies involving cloud and IoT hacking focusing on the potential impact of generative AI on these attacks.
Mobile devices are involved in vast facets of our daily lives, and as these devices and AI evolve, they become powerful tools and vulnerable gateways. With AI-driven functionalities at the forefront of mobile technology innovation, understanding the security implications and nuances is paramount. In this course, explore common mobile platform vulnerabilities and utilize generative AI to bolster mobile security measures. Next, learn how to execute mobile attacks with generative AI, evaluate mobile security measures, and use generative AI to identify and patch app vulnerabilities. Finally, examine the impact of AI-enhanced mobile hacking, the importance of regular updates and patches, and real-world mobile hacking case studies. Upon completion, you'll be able to outline the intricate landscape of mobile security and the implications of integrating generative AI.
As cyber threats become more sophisticated, the art of evading detection, covering one's tracks, and maintaining access becomes an essential craft for every ethical hacker. The potential of artificial intelligence (AI) to revolutionize clandestine operations in cybersecurity cannot be overlooked. This course examines the fusion of traditional stealth techniques in hacking with the cutting-edge capabilities of generative AI, looking at how generative AI can be harnessed to leave no digital footprint, ensuring prolonged access to compromised systems. You'll begin by exploring the techniques used by hackers to maintain access and the role of generative AI in detecting and mitigating attempts to cover tracks. You'll orchestrate a generative AI-assisted operation to cover tracks in a simulated environment and use generative AI to identify signs of advanced track-covering activities. Next, you'll identify the implications of failing to cover tracks, and of maintaining prolonged access to compromised systems. Lastly, you'll analyze and evaluate real-world case studies focusing on the potential impact of generative AI.
Python and Google Bard can be combined to create applications and programs via the PaLM 2 API. These programs can solve problems or integrate Bard into workflows or processes. In this course, you will learn to solve code problems with Bard and how to use the Python Client API library to connect and use PaLM to create applications that integrate Bard. In particular, you will explore how to programmatically check content for appropriate communications, adjust parameters to fine-tune responses, troubleshoot common problems, add security to a process, and create a simple chatbot.
Different types of prompts serve distinct purposes when interacting with language models. Each type enables tailored interactions, from seeking answers and generating code to engaging in creative storytelling or eliciting opinions. In this course, you will learn the four elements of a prompt: context, instruction, input data, and output format. Next, you will explore prompt categories, such as open-ended, close-ended, multi-part, scenario-based, and opinion-based. Finally, you will look at different types of prompts based on the output that they provide. You will use prompts that generate objective facts, abstractive and extractive summaries, classification and sentiment analysis, and answers to questions. You'll tailor prompts to perform grammar and tone checks, ideation and roleplay, and mathematical and logical reasoning.
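The four prompt elements named above can be assembled with a simple template. The field labels and wording here are illustrative assumptions, not a fixed standard; any chat-model API would accept the resulting string as a user message:

```python
def build_prompt(context, instruction, input_data, output_format):
    """Combine the four prompt elements into one prompt string."""
    return "\n\n".join([
        f"Context: {context}",
        f"Instruction: {instruction}",
        f"Input: {input_data}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    context="You are a support assistant for a software product.",
    instruction="Classify the sentiment of the customer message.",
    input_data="The update broke my login and nobody has replied.",
    output_format="One word: positive, negative, or neutral.",
)
print(prompt)
```

Keeping the elements separate makes it easy to swap the instruction or output format while reusing the same context and input data.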
Data generation prompts instruct language models to generate synthetic data, useful for creating datasets. Code generation prompts are used to produce code snippets or entire programs, aiding developers in coding tasks. Zero-shot prompts challenge models to respond to unfamiliar tasks, relying on their general knowledge. Few-shot prompts provide limited context to guide models in addressing specific tasks, enhancing their adaptability. You will start this course by working with data generation and code generation prompts. You will explore how to use starter code prompts, convert code from one language to another, and prompt models to explain a piece of code. Next, you will see how to leverage generative AI to debug your code and generate complex bits of code with step-by-step instructions. Finally, you will explore techniques to improve prompt performance.
This comprehensive course delves deep into the fascinating world of Generative AI. Through a combination of engaging lectures and hands-on practice, participants will gain an in-depth understanding of what generative models are, how they differ from other AI techniques, and the theories and principles underlying them. You will discover various types of generative models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs), and explore the process involved in training these models. Then you will examine the strengths, limitations, and practical applications of generative models across various domains, such as image generation, text generation, and data augmentation. Next, you will learn how to evaluate the performance of generative models and focus on ethical considerations in generative AI and the potential societal impact of these technologies. Finally, you will have the opportunity to generate synthetic data using generative models for training and testing purposes and investigate the notion of responsible AI in the generative era. Upon course completion, you will be prepared not just to use these powerful tools, but to use them wisely and ethically.
This course dives deep into the world of generative models, providing learners with a comprehensive understanding of various generative techniques and their applications. It is carefully designed to bridge theoretical concepts with practical applications, demystifying the methods used in popular generative models like generative adversarial networks (GANs), variational autoencoders (VAEs), and more. Through a combination of rich imagery, illustrative examples, and detailed explanations, participants will explore the differences between generative and discriminative modeling, the foundational framework of generative artificial intelligence (AI), and the various evaluation metrics that gauge the success of these models. Whether you're a budding data scientist, an AI enthusiast, or a seasoned researcher, this course offers a deep dive into the cutting-edge techniques that are shaping the future of artificial intelligence.
Dive deep into the expansive realm of large language models (LLMs), a pivotal cornerstone in today's artificial intelligence (AI)-driven landscape. This course unravels the intricacies of these models, from their architecture and training methods to their profound implications in real-world scenarios. Begin by exploring the significance of LLMs in the world of AI. Then you will examine the architecture of LLMs, evaluate the impact of data on the effectiveness of LLMs, and fine-tune your LLM for a specific task. Next, you will investigate the ethical implications of using LLMs, including potential biases and privacy issues. Finally, you will discover the potential and limitations of LLMs and learn how to stay updated with the latest advancements in this dynamic field.
It's important to understand the intricacies of large language models (LLMs) and their pivotal role in the realm of generative artificial intelligence (AI). This course offers an exploration of the architecture, training, and fine-tuning of LLMs. Begin by learning how to implement various techniques tailored for specific generative tasks and delve into the integration of multimodal AI approaches, combining text and visuals. This course not only stresses the technical aspects but also confronts the ethical dilemmas, spotlighting bias and fairness in AI applications. Next, through a blend of theory, demonstrations, and emerging research discussions, explore how the full potential of LLMs can be harnessed and how to prepare for the next wave of AI innovations.
In the modern digital era, generative artificial intelligence (AI) emerges as a game-changer, introducing unprecedented capabilities to the business landscape. This course is tailored for professionals seeking to understand the depth and breadth of generative AI's impact on the business world. Dive into the essentials, from the foundational concepts to ethical ramifications and real-world implementations. You will explore the transformative potential of generative AI on business operations, products, and customer experiences and delve into the algorithms propelling these innovations. Discover the possibilities for the interplay of human expertise with AI, managing data for AI deployments, and navigating legal landscapes. At the end of the course, participants will be adept at assessing the business value of generative AI and equipped with the knowledge to strategically integrate it into their organization's digital evolution.
The Azure Cloud platform's generative artificial intelligence (AI) solutions are robust and mature. Because they encompass multiple services and support rich integration with other Azure services, it's relatively easy to get up and running with generative AI solutions using Azure. In this course, you'll dive deep into generative AI with Azure OpenAI, beginning with an introduction to Azure OpenAI architecture, an overview of OpenAI Studio, and a hands-on demonstration of how to customize models in Azure OpenAI. Then you'll learn how to build a custom app, fine-tune OpenAI models, and deploy models with Azure OpenAI. You'll explore text generation, translation, question answering, image generation, and troubleshooting. Finally, you'll discover Azure OpenAI service integrations, AI Search (formerly Cognitive Search), privacy and compliance issues, and quotas and collaboration with Azure OpenAI.
The chatbot is a popular example of how generative artificial intelligence (AI) is being used today, in all industries and environments. In this course, you will learn about the Azure Bot service and its features, including how generative AI can be leveraged to build intelligent and interactive chatbot applications. You will begin with an introduction to Azure Bot service, generative models and bots, and a high-level discussion of how to build a basic bot. Then you will explore ethical issues surrounding AI-driven bots, find out how to get started with bot creation in Copilot Studio, and learn how to train and deploy models for conversational language understanding (CLU) projects. Next, you will discover Azure Cognitive Services and scaling, and monitoring and troubleshooting bot performance. Finally, you will examine data security, usage auditing, and cost tracking.
GitHub, in conjunction with Git, provides a powerful framework for collaboration in software development. Git handles version control locally, while GitHub extends this functionality by serving as a remote repository, enabling teams to collaborate seamlessly by sharing, reviewing, and managing code changes. In this course, you will begin by setting up a GitHub account and authenticating yourself from the local repo using personal access tokens. You will then push your code to the remote repository and view the commits. Next, you will explore additional features of Git and GitHub using generative AI tools as a guide. You will also create another user to collaborate on your remote repository, and you'll sync changes made by other users to your local repo. Finally, you will explore how to merge divergent branches. You will discover how to resolve a divergence using the merge method with help from ChatGPT and bring your local repository in sync with remote changes.
Branches are separate, independent lines of development for people working on different features. Once you have finished your work, you can merge all your branches together. You will start this course by creating separate feature branches on Git and pushing commits to these branches. You will use prompt engineering to get the right commands to use for branching and working on branches. You will also explore how to develop your code on the main branch, switch branches, and then ultimately commit to a feature branch. Next, you will explore how you can stash changes to your project to work on them later. Finally, you will discover how to resolve divergences in the branches. You will try out both the merge and rebase methods and confirm that the branch commits are combined properly.
Python is a general-purpose programming language used for web development, machine learning, game development, and education. It is known for its simplicity, readability, and large community of users and resources. Begin this course by exploring Python with the help of AI tools like ChatGPT, focusing on the importance of prompt engineering. You will install Python using the Anaconda distribution and write code in Python using Jupyter notebooks, allowing you to quickly prototype and get feedback. Next, you will discover variables and data types and work with strings, integers, and floats. You will encounter errors as you program and learn how to debug your code using generative AI tools. Finally, you will examine Python data structures such as lists, tuples, sets, and dictionaries. This course will provide you with a solid knowledge of the basics of Python and confidence in using generative AI tools to help you with your learning journey.
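To make the basics concrete, here is a minimal sketch of the data types and structures listed above; the sample values are illustrative, not taken from the course.

```python
# Core Python data structures: tuple, list, set, and dictionary.
point = (3, 4)                      # tuple: immutable, ordered sequence
langs = ["Python", "SQL"]           # list: mutable, ordered sequence
langs.append("Go")
unique_tags = {"ai", "ml", "ai"}    # set: duplicates collapse automatically
course = {"title": "Python Basics", "hours": 2.5}  # dict: key-value pairs

# Strings, integers, and floats interoperate via f-strings.
summary = f"{course['title']} runs {course['hours']} hours"
```

Experimenting with these in a Jupyter notebook, as the course suggests, gives immediate feedback on how each structure behaves.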
Loops and functions are two of the most important concepts in Python programming. Loops allow you to repeat a block of code a certain number of times, or until a condition is met. Functions allow you to group together related code and reuse it throughout your program. Begin this course by exploring shallow and deep copies of data structures and performing string splicing operations in Python. Then you will use comparison and logical operators to check whether a condition is satisfied. You will evaluate conditions using prompt engineering to find solutions to your problem scenarios. Next, you will iterate over sequences using for and while loops and use the break and continue keywords. You will create and use functions in Python, discovering how to prompt generative AI tools to write functions for you in response to stubs that you provide. Finally, you will focus on first class functions and lambdas.
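The copy, loop, and function concepts above can be sketched in a few lines; the data and function names here are illustrative, not code from the course.

```python
import copy

# Shallow vs. deep copies: a shallow copy shares nested objects,
# while a deep copy duplicates them.
matrix = [[1, 2], [3, 4]]
shallow = copy.copy(matrix)
deep = copy.deepcopy(matrix)
matrix[0][0] = 99   # visible through shallow (shared inner list), not deep

# A for loop using the continue keyword, wrapped in a reusable function.
def first_even(nums):
    """Return the first even number in nums, or None if there is none."""
    for n in nums:
        if n % 2 != 0:
            continue    # skip odd values
        return n
    return None

# A lambda: a small anonymous function, useful as a throwaway argument.
square = lambda x: x * x
```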
Classes and objects are fundamental concepts in Python programming, while data analysis and visualization are two of the most important applications of Python. You will begin this course by creating a simple class in Python and instantiating objects of that class. You will see how you can use prompts to define what you want your classes to do and then leverage generative AI tools to write the code you need for your classes, member variables, and methods. Next, you will perform real-world tasks like data analysis and visualization using Python libraries. Then you will use prompt engineering help to make HTTP requests to REST APIs to retrieve and add data. You will write and execute basic code in Python and accept input from the command line. To round off this course, you will import your script into another file the same way you would import a library such as Pandas.
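A class along the lines described above might look like this; the `Course` class, its member variables, and its method are hypothetical examples, not code from the course.

```python
# A simple class with member variables set in __init__ and a method
# that operates on them.
class Course:
    def __init__(self, title, hours):
        self.title = title
        self.hours = hours

    def describe(self):
        return f"{self.title} ({self.hours}h)"

# Instantiating an object of the class.
intro = Course("Python Classes", 3)
```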
Generative artificial intelligence (AI) is cutting-edge technology that is commonly used to optimize content creation, product design, and customer experience enhancement for everyday businesses. In this course, you will explore strategies that help leverage the power of generative AI to reshape your organization's cybersecurity solutions and processes. Discover basic concepts of artificial intelligence and discuss the effectiveness of various AI-based security measures based on real-life case studies. Consider the strengths and weaknesses of AI and generative AI in security-related scenarios and discover how to classify types of threats that AI and generative AI can help mitigate. Learn how AI can be used for threat classification, detection, and prevention, and explore ethical considerations when employing AI in cybersecurity. Lastly, look at implementing AI in hypothetical cybersecurity scenarios, and discover future trends at the intersection of AI and cybersecurity, based on current industry advancements.
Generative AI is artificial intelligence technology commonly geared towards creating content; however, it also has the potential to impact everyday business activities in many areas. In this course, you'll learn about commonly used tools in cybersecurity, their primary functions, and the potential benefits of enhancing cybersecurity tools with AI and generative AI. Explore the limitations of traditional security tools that can be addressed with AI and how to classify various types of AI models suitable for different security tools and contexts. Finally, discover AI integration challenges, tool maintenance, and migration planning, and explore how AI can enhance the predictive capabilities of common security tools, leading to more robust and proactive cybersecurity measures.
Identity security is used to secure access to digital information or services based on the authenticated identity of an entity. Many identity security solutions can leverage artificial intelligence (AI) to help streamline processes and provide actionable insights to administrators and users. In this course, you'll discover common use cases for identity security, including securing DevOps, enabling access, and enforcing privilege. You will explore the key benefits of identity security, the challenges that AI can help address, and the ethical and privacy considerations of using AI for identity security. Next, you will discover strategies for integrating AI into existing identity security frameworks and how to predict future trends in AI-powered identity security based on current industry advancements. Lastly, you will explore the potential of AI in preventing identity theft and other related security threats.
Organizations go to great lengths to protect email accounts and communications from unauthorized access, loss, or compromise. Prioritizing email security helps ensure confidentiality, data protection, business continuity, and malware defense. In this course, you will explore common email threats to organizations including URL phishing, spear phishing, brand impersonation, malware, and spam. You'll discover how email security enhanced by artificial intelligence (AI) can help reduce administrative efforts and strengthen an organization's security posture. Next, you will explore various AI approaches suitable for enhancing email security, and discover challenges and risks associated with an ever-evolving email threat landscape. Then, you will learn the benefits of automating email security, including greater cost savings and cyber resilience, and discover privacy implications and potential drawbacks when using AI for email security. Lastly, you will explore future trends in AI-powered email security based on current industry advancements and discover the importance of education and awareness when it comes to email security.
Data protection principles are fundamental guidelines that ensure the security, privacy, and integrity of data. As organizations increasingly rely on data analytics to extract valuable insights, it becomes crucial to prioritize data protection. In this course, you'll learn how artificial intelligence (AI) can be leveraged to enhance data protection and validation. Explore the principles of data protection including data availability, data life cycle management, and information life cycle management, and discover key areas where AI can enhance user data protection and validation. Next, consider the potential benefits and risks of leveraging AI for user data protection and validation and discover how to protect sensitive data and AI models. Lastly, explore common AI security risks such as AI model attacks, data security risks, code maintainability, and supply chain complexity, and consider how data integrity measures and AI can work hand in hand to improve an organization's security posture.
Authentication refers to the process of validating the identity of a registered user or process to ensure a subject is who they claim to be. In this course, you will explore the role of artificial intelligence (AI) in authentication and how it can be leveraged to confirm a user's identity through a selfie, fingerprint, or voice recognition. Discover common authentication vulnerabilities, including SQL injection, username enumeration, and weak login credentials, and investigate how behavioral biometrics can analyze a user's physical and cognitive behavior to determine threats. Learn about other areas to consider when building a secure authentication solution, including anomaly detection, adaptive authentication, voice recognition, and facial recognition. Finally, find out how to detect security threats using AI and examine how AI models can streamline a common authentication process.
Intrusion detection systems (IDSs) can help organizations monitor networks or systems for malicious activity or policy violations and alert them when such activity is discovered. In this course, you will explore the key roles artificial intelligence (AI) plays in cybersecurity, including prediction, detection, and response. Then you will discover key differences between intrusion detection systems, vulnerability management systems, behavioral analytics, and security auditing systems. Next, you will investigate intrusion detection system types like network intrusion, network node, host intrusion, protocol-based, and application protocol-based intrusion detection systems. Examine intrusion detection system methods, such as signature-based, anomaly-based, and hybrid intrusion detection, and take a look at AI-based intrusion detection benefits and challenges. Finally, focus on how AI can detect and prevent security threats, and dig into possible future trends related to security threats.
Structured query language (SQL) is a powerful query language designed for managing and manipulating relational databases. Its declarative nature allows users to interact with databases by specifying the desired result, leaving the system to determine the optimal method of execution. Begin this course with an introduction to SQL, including the features of SQL and how and where SQL is used. Then, you will install and operate MySQL, utilizing the assistance of generative artificial intelligence (AI) chatbots ChatGPT and Bard. You will work with the MySQL Workbench and learn to create tables, insert data into tables, and update and delete records in tables. Next, you will find out how to apply constraints on tables, use NOT NULL constraints to prohibit missing values, and use unique constraints, which ensure distinct values in columns. Finally, you will create and work with primary key constraints that are used to uniquely identify records in a table.
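The table operations described above can be sketched as follows. The course works in MySQL; this example uses Python's built-in sqlite3 module so it is self-contained, but the SQL shown is standard. Table and column names are illustrative.

```python
import sqlite3

# An in-memory database stands in for a MySQL server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE TABLE with a primary key and NOT NULL / UNIQUE constraints.
cur.execute("""
    CREATE TABLE students (
        id    INTEGER PRIMARY KEY,       -- uniquely identifies each record
        email TEXT    NOT NULL UNIQUE,   -- no missing or duplicate emails
        name  TEXT    NOT NULL
    )
""")

# Insert, update, and read back a record.
cur.execute("INSERT INTO students (email, name) VALUES (?, ?)",
            ("ada@example.com", "Ada"))
cur.execute("UPDATE students SET name = ? WHERE email = ?",
            ("Ada L.", "ada@example.com"))
row = cur.execute("SELECT name FROM students").fetchone()
```

Attempting to insert a second row with the same email would raise an integrity error, which is exactly what the UNIQUE constraint is for.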
In structured query language (SQL), filtering and grouping are fundamental operations that enable precise control and analysis of data. Begin this course by filtering your data using SQL queries. You will learn how to use the WHERE clause with comparison and logical operators to create predicates and how to use the LIKE and IN keywords for filtering. Then you will discover how to read data from a CSV file into a MySQL table using the MySQL Workbench. Next, you will work with subqueries to help execute queries using data from multiple tables, and you will explore foreign-key constraints to maintain referential integrity. You will examine grouping and aggregation of your data, including simple numeric aggregations like COUNT, SUM, and AVG. Finally, you will perform basic string manipulation and math operations on table columns, divide your data by categorical columns and perform aggregations for each category, and filter the results of grouped and aggregated tables.
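A minimal sketch of the filtering and grouping clauses described above, again using sqlite3 for a self-contained example (the course itself uses MySQL); the sample data is made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (customer TEXT, region TEXT, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("alice", "east", 120.0),
    ("bob",   "west",  80.0),
    ("alice", "east",  40.0),
])

# WHERE with comparison and logical operators, plus LIKE for patterns.
big_east = cur.execute(
    "SELECT customer FROM orders WHERE amount > 100 AND region LIKE 'ea%'"
).fetchall()

# GROUP BY with COUNT and SUM, filtered after aggregation with HAVING.
totals = cur.execute(
    "SELECT customer, COUNT(*), SUM(amount) FROM orders "
    "GROUP BY customer HAVING SUM(amount) > 100"
).fetchall()
```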
Joins in SQL combine data from multiple tables, linking related information based on common columns. Views offer a virtual representation of data, simplifying complex queries. Triggers execute predefined actions in response to specific events, and transactions safeguard against data inconsistencies in the event of failures. Begin this course by creating indexes on SQL tables to enhance query performance. Next, you will explore joining two tables in SQL, use prompt engineering to specify how you want to combine data, and have ChatGPT and Bard generate the queries you need. Then you will create and query simple views and perform normalization so that your data adheres to principles that reduce redundancy and maintain data integrity. You will create triggers for row insertions, updates, and deletes, all with generative artificial intelligence (AI) help. Finally, you will work with transactions, use prompt engineering to state your problem statement, and have generative AI write your queries for you.
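An inner join and a simple view, as described above, can be sketched like this; sqlite3 stands in for MySQL here, and all table names and data are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE books (title TEXT, author_id INTEGER)")
cur.execute("INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace')")
cur.execute("INSERT INTO books VALUES ('Notes', 1), ('Compilers', 2)")

# INNER JOIN links rows via the shared author id.
joined = cur.execute("""
    SELECT books.title, authors.name
    FROM books
    INNER JOIN authors ON books.author_id = authors.id
    ORDER BY books.title
""").fetchall()

# A view is a stored query you can SELECT from like a table.
cur.execute("CREATE VIEW book_authors AS "
            "SELECT title, name FROM books JOIN authors ON author_id = id")
view_count = cur.execute("SELECT COUNT(*) FROM book_authors").fetchone()
```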
HTML templates and database access play integral roles in modern web development, providing tools to enhance website structure, presentation, and responsiveness. HTML templates serve as the backbone for web pages, defining the document structure and content while database integration serves to make your app serve dynamic, personalized content. Begin this course by adding life to your Django apps with dynamic content. Real apps invariably rely on user input, so you will learn to weave data from Python functions into your HTML templates using template parameters. Then you will use dynamic named URLs to add links between your app's pages. You will use generative AI tools and template inheritance to reduce boilerplate HTML code. Next, you will style and theme your application using Bootstrap templates. Finally, you will dive into data models with Django, using Django's built-in SQLite database and object-relational mapper. You will create a class representing a table and evolve your database schema using migrations.
In Django, data models are the essential building blocks for defining the structure and behavior of a web application's database. Utilizing an object-relational mapping (ORM) system, Django abstracts the database schema into Python classes, allowing developers to interact with the database using high-level, Pythonic syntax. Begin this course by logging into the Django Admin interface, your control center for modifying models, adding users, and performing other administrative actions. You will create a superuser and add a regular user. Next, you will use HTML templates and the associated Python code to dynamically pull data from the models based on user input. Then you will focus on deletion and deactivation of resources. Finally, you will add and modify data. You will create forms in your HTML templates and configure your code to modify resources based on the input. By the end of this journey, you will be well-versed in Django, with skills ranging from creating a user interface using HTML templates to managing data stored in backend databases.
Data manipulation involves getting your data in the right format to generate further insights. Prompt engineering allows you to specify your problem statements in natural language and generate code to meet your needs. You will begin this course by applying filters to your DataFrames in pandas. You will use logical and comparison operators to specify filter predicates and filter based on datetime data. Next, you will group and aggregate your DataFrames. You will use prompt engineering to explain your grouping and aggregation requirements and tweak generated code to tailor your solutions. Additionally, you will learn about the split-apply-combine method, a step-by-step technique for grouping and aggregation. You will then tackle data cleaning. You will remove rows with duplicate records and deal with missing values and other inaccuracies in your data. Finally, you will explore the use of pivot tables, which help rearrange and reshape data into a format more suitable for analysis.
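A short pandas sketch of the filtering and split-apply-combine ideas above; the DataFrame contents are illustrative, not data from the course.

```python
import pandas as pd

df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "amount": [120.0, 80.0, 40.0, 200.0],
})

# Filter with a boolean predicate built from a comparison operator.
large = df[df["amount"] > 100]

# Split-apply-combine: split rows by region, apply sum, combine results.
totals = df.groupby("region")["amount"].sum()
```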
Combining data is a key data manipulation technique and is well supported in the pandas library. Exploratory data analysis involves data visualization to understand the relationships that exist in your data. Prompt engineering can help you pick the right visualization for viewing and understanding relationships between variables and can also generate code for these visuals. You will start this course by combining data in DataFrames, learning techniques to join DataFrames using constructs such as the inner join, left and right joins, and the full outer join. Next, you will delve into time-series analysis and visualization. You will use prompt engineering help to visualize your time series data to identify trends and patterns. Finally, you will explore data visualization in Python. You will begin by crafting univariate visualizations that display information about a single variable. You'll see that tools such as ChatGPT and Bard can help you pick the right visualizations for different use cases. You will explore bivariate visuals and use Plotly to generate interactive visualizations, which are more user-friendly and intuitive.
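The join constructs above can be sketched with pandas `merge`; the two frames here are toy data invented for illustration.

```python
import pandas as pd

customers = pd.DataFrame({"cust_id": [1, 2, 3],
                          "name": ["Ada", "Bob", "Cy"]})
orders = pd.DataFrame({"cust_id": [1, 1, 3],
                       "amount": [10.0, 20.0, 5.0]})

# Inner join: keeps only customers that appear in both frames.
inner = customers.merge(orders, on="cust_id", how="inner")

# Left join: keeps every customer; Bob has no orders, so his
# amount comes back as NaN.
left = customers.merge(orders, on="cust_id", how="left")
```

Swapping `how="left"` for `"right"` or `"outer"` gives the right and full outer joins mentioned above.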
Google Cloud Platform (GCP) has a broad range of powerful generative AI tools that can be used to leverage the power of modern artificial intelligence (AI). A main component of that toolbox is Google's Vertex AI platform, which can aid organizations in streamlining machine learning (ML) workflows and deliver better machine learning models. In this course, you'll be introduced to Vertex AI, beginning with an overview of its features, model deployment, and end-to-end workflow. Then you'll explore pros and cons of Vertex AI, how it can be used to accelerate development, and model selection, training, and evaluation considerations. Finally, you'll learn about Vertex AI integration, data preparation, feature engineering, model evaluation, and Vertex AI success stories.
A main component of the Google Cloud Platform (GCP) machine learning offering is Google's Vertex AI Platform, which provides tools that can aid organizations in leveraging Vertex AI for generative artificial intelligence (GenAI) projects. This course investigates the unique support and capabilities offered by Vertex AI specifically for generative AI models, empowering learners to unlock their creative potential. In this course, you will dig into harnessing generative AI using Vertex AI, beginning with an overview of generative AI support in Vertex AI, Vertex AI's generative AI models, and the features and uses of Google's Generative AI Studio. Then you will discover the Vertex AI API, explore Vertex AI Model Garden, and create custom models in Vertex AI. Finally, you will learn about responsible AI with safety filters and find out how to use Vertex AI Search and Conversation. After completing this course, you will be able to confidently leverage Vertex AI to build generative AI models.
Generative artificial intelligence (AI) has taken the world by storm, and Google Cloud Platform (GCP) has a broad range of powerful generative AI tools that can be used to leverage the power of modern artificial intelligence. Google's Generative AI Studio is a cloud console tool for rapid prototyping and testing of generative AI models and leverages the power of Vertex AI, allowing developers to interact, tune, and deploy large-scale AI models. In this course, you will take a deep dive into Generative AI Studio, beginning with generative AI prompt design, model training, and tuning language models. Then you will explore model performance evaluation and generative AI deployment. Next, you will focus on case studies and practical examples to discover how organizations use GCP generative AI. Finally, you will learn how to convert speech to text and prototype language applications.
Google Cloud Platform (GCP) has a wide range of powerful generative artificial intelligence (AI) services that can leverage the power of modern AI. Using these services, developers can build powerful applications with GCP, and with the proper use of various technologies, developers new to GCP can gain the knowledge and skills necessary to develop and deploy cutting-edge generative AI applications. In this course, you will create a generative AI-powered app in GCP, beginning with planning and building a generative AI model, GCP environment setup, and Python environment creation. Then you will learn how to build and train a generator and a discriminator with TensorFlow. Next, you will build and run a training loop, and dockerize the training script. Finally, you will deploy your finished model to an application programming interface (API) endpoint and test that model.
Amazon Web Services (AWS) provides both AI and generative AI services that can be used for developing and deploying business solutions. In this course, we will begin by introducing the foundational principles of generative artificial intelligence (GenAI). We will explore the benefits AWS offers for generative AI development and deployment. Next, we will delve into the intricacies of foundational models, focusing on their pivotal role in top AI startups and their relationship with generative models. Then, we will explore generative AI services available from AWS, distinguishing them from standard AI offerings, and gaining insight into the nuances of building generative AI systems on AWS. Finally, we will look at the robust infrastructure AWS provides to support generative AI development.
This comprehensive course introduces learners to the world of generative artificial intelligence (AI) models within machine learning (ML), focusing on Amazon SageMaker as a prime tool for design, training, optimization, deployment, and monitoring. In this course, we will begin with a deep dive into the fundamental concepts and types of generative models like generative adversarial networks (GANs) and variational autoencoders (VAEs). We will explore their pros, cons, and relevant architectures. Next, we will use hands-on tutorials and in-depth discussions to guide you through the steps of designing a GAN architecture, leveraging SageMaker's built-in algorithms, preprocessing data, and distributed training capabilities. We will then delve into optimization techniques, transfer learning, and quality evaluation methods, looking at ways to apply these concepts in real-world scenarios. Lastly, we will introduce deployment strategies in SageMaker, highlighting how to serve models as endpoints for real-time inferences and how to efficiently monitor and troubleshoot your generative models.
Dive deep into the world of Amazon Bedrock and examine its applications in generative artificial intelligence (GenAI) development. Starting with the core concepts, this course takes you on a comprehensive journey through the various ways Amazon Bedrock can be applied and the inherent benefits of using it for generative AI development. You will explore the Bedrock console and playgrounds and learn how to design and develop different generative AI models. Then you will focus on evaluating a generative AI project. Finally, you will design a business application using Bedrock and investigate the platform's strengths and weaknesses. At the end of the course, you will be equipped with the skills to design innovative business applications and develop new generative AI models using Amazon Bedrock.
In the rapidly evolving world of software development, efficiency and productivity are paramount. This course provides an in-depth exploration of Amazon CodeWhisperer, one of the industry's most cutting-edge tools for automating the code generation process. You will begin by investigating the advantages of artificial intelligence (AI) code generation and its impact on productivity. Then you will explore Amazon CodeWhisperer, focusing on its role in automating code generation, its different techniques for code generation, specific use cases, and key features. You will then set up CodeWhisperer for various platforms, including JetBrains and JupyterLab. Next, you will examine the code suggestions features of CodeWhisperer and learn how to configure security scans. Finally, you will discover how CodeWhisperer can optimize software development and you will identify best practices for generating and integrating code with CodeWhisperer.
Delve into the groundbreaking capabilities of Amazon Polly in revolutionizing text-to-speech applications across diverse sectors. Starting with the basics of Polly's text-to-speech technology, you will discover how to preprocess input texts to ensure clarity and effectiveness in speech output. You will learn how to install, design, and use Polly's rich set of voices and languages to create dynamic and engaging audio content for various purposes, from audiobooks to customer service chatbots. Then you will explore Polly's customization options, including adjusting speech rate, pitch, and emphasis to produce natural-sounding speech that fits your specific needs. Next, you will investigate security, speech synthesis, and how to implement Speech Synthesis Markup Language (SSML) tags to add pauses, adjust pronunciation, and control the volume of the speech output, enhancing the listening experience. Finally, you will examine best practices for managing and optimizing your Polly usage to keep costs down while maintaining high-quality speech output.
Autoencoders are a class of artificial neural networks employed in unsupervised learning tasks, primarily focused on data compression and feature learning. Begin this course by exploring autoencoders, learning about the functions of the encoder and the decoder in the model. Next, you will learn how to create and train an autoencoder using the Google Colab environment. Then you will use PyTorch to create the neural networks for the autoencoder, and you will train the model to reconstruct high-dimensional, grayscale images. You will also use convolutional autoencoders to work with multichannel color images. Finally, you will make use of the denoising autoencoder, a type of model that takes in a corrupted image with Gaussian noise, and attempts to reconstruct the original clean image, thus learning better representations of the input data. In conclusion, this course will provide you with a solid understanding of basic autoencoders and their use cases.
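The course builds autoencoders in PyTorch; the following toy numpy sketch shows only the core idea, an encoder compressing input to a small latent code and a decoder mapping it back. The weights here are random rather than trained, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, latent_dim = 64, 8   # compress 64 features down to 8

# Untrained encoder/decoder weight matrices (training would adjust these).
W_enc = rng.normal(size=(input_dim, latent_dim)) * 0.1
W_dec = rng.normal(size=(latent_dim, input_dim)) * 0.1

def encode(x):
    return np.tanh(x @ W_enc)   # (batch, 64) -> (batch, 8) latent code

def decode(z):
    return z @ W_dec            # (batch, 8) -> (batch, 64) reconstruction

x = rng.normal(size=(5, input_dim))
reconstruction = decode(encode(x))

# Reconstruction error: the quantity training would minimize.
mse = np.mean((x - reconstruction) ** 2)
```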
Variational autoencoders (VAEs) represent a powerful variant of traditional autoencoders, designed to address the challenge of generating new and diverse samples from the learned latent space. VAEs introduce probabilistic components, incorporating a probabilistic encoder that maps input data to a distribution in the latent space and a decoder that reconstructs data from samples drawn from this distribution. Begin this course by discovering how variational autoencoders can be used for generating images. Next, you will create and train VAEs in Python and the Google Colab environment. Then you will construct the encoder and decoder. Finally, you will train the VAE on multichannel color images. Upon course completion, you will have a solid understanding of variational autoencoders and their use in generating images.
Generative adversarial networks (GANs) represent a revolutionary approach to generative modeling within the realm of artificial intelligence. Begin this course by discovering GANs, including the basic architecture of a GAN, which involves two neural networks, the generator and the discriminator, competing in a zero-sum game. Next, you will explore how to construct and train a GAN, using the PyTorch framework to create and train the models. You will define the generator and discriminator separately, and then kick off the model training. Finally, you will focus on the deep convolutional GAN, which uses deep convolutional neural networks (CNNs) rather than regular neural networks. CNNs are optimized for working with grid-like data, such as images, and can generate better-quality images than GANs built using dense neural networks. In conclusion, this course will provide you with a strong understanding of generative adversarial networks, their architecture, and their usage scenarios.
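The zero-sum objective described above can be illustrated numerically without any deep learning framework; the discriminator scores below are made-up numbers, not outputs of a trained model.

```python
import numpy as np

# Binary cross-entropy: the loss both players optimize against.
def bce(p, target):
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

d_real = 0.9   # discriminator's probability that a real sample is real
d_fake = 0.2   # discriminator's probability that a generated sample is real

# The discriminator wants real -> 1 and fake -> 0.
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# The generator wants the discriminator to score its fakes as real.
g_loss = bce(d_fake, 1.0)

# If the generator improves and d_fake rises to 0.6, its loss drops
# while the discriminator's job gets harder: the zero-sum dynamic.
g_loss_better = bce(0.6, 1.0)
```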
DALL-E and Whisper are OpenAI's image and audio-based model offerings. DALL-E, an image generation model, demonstrates the ability to create visually striking images based on textual prompts. Whisper represents a state-of-the-art automatic speech recognition (ASR) system. With its high accuracy in transcribing spoken words, Whisper finds utility in various applications, from voice assistants to transcription services. You will begin this course by generating images using OpenAI's DALL-E model. You will generate images using text prompts, create variations of existing images, and perform image inpainting using natural language. Then, you will work with the Whisper model, which caters to speech transcription and translation. You will transcribe and translate audio in different languages and accents, and you will evaluate the performance of these models.
Fine-tuning models is a critical aspect of leveraging pre-trained artificial intelligence models to suit specific tasks or domains. OpenAI allows developers to fine-tune models like GPT-3 and GPT-4, enabling customization for particular applications. You will begin this course by creating prompt-completion pairs for fine-tuning, running a fine-tuning job, and observing the model's performance. You will send prompts based on the training data and examine the model's attempt to answer questions. Next, you will dive into connecting with the Assistants API programmatically. You will create an assistant by providing a role description and model, and you will initiate a thread to simulate user-assistant conversations. You will also upload files and query the assistant based on information contained in the files. Finally, you will explore creating and comparing text embeddings, efficient numerical representations of text that capture the meaning and semantics of the text. You will learn how embeddings of similar words are numerically close to one another and how embeddings can be used as a preprocessing technique to represent text for other machine learning applications such as clustering and classification.
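As a toy illustration of the closing idea (hand-made vectors standing in for real model embeddings, which would come from an embeddings endpoint), cosine similarity is the usual way to check that embeddings of related words are numerically close:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means same direction, near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented 3-dimensional vectors purely for illustration; real embeddings
# are much higher-dimensional and produced by a trained model.
embeddings = {
    "cat": np.array([0.9, 0.8, 0.1]),
    "kitten": np.array([0.85, 0.75, 0.15]),
    "spreadsheet": np.array([0.05, 0.1, 0.95]),
}

sim_related = cosine_similarity(embeddings["cat"], embeddings["kitten"])
sim_unrelated = cosine_similarity(embeddings["cat"], embeddings["spreadsheet"])
print(sim_related > sim_unrelated)  # True: related words score higher
```

The same comparison scales up directly: for clustering or classification, each document is embedded once and these similarity scores become the distance measure.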
Generative AI (GenAI) is fundamentally altering the way organizations operate, necessitating that project managers become familiar with GenAI and its application in project work. In this course, you will explore the many applications of GenAI in project management and compare different GenAI models. Next, you will examine the application of GenAI in resource allocation, data optimization and programming, decision-making, risk, and project quality. Then you will discover how to identify a GenAI hallucination, recognize GenAI as a possible project risk, and create GenAI prompts. Finally, you will identify ethical and regulatory considerations and evaluate emerging trends and improvements in GenAI for project management.
Dive into the transformative world of chatbots and their pivotal role in revolutionizing customer interactions. This course deciphers the intricacies of Amazon Lex, a premier tool for crafting conversational interfaces, and its seamless integration with Amazon Polly for lifelike speech. Participants will navigate through the creation of intent-driven, context-aware chatbots, delve deep into advanced capabilities like sentiment analysis and multi-turn conversations, and harness Amazon Polly for dynamic user responses. Moreover, attendees will learn to integrate external services to elevate chatbot functionality, ensuring the end products are not only interactive but also adhere to the highest standards of security, privacy, and compliance.
Embark on an enlightening journey into the realm of generative artificial intelligence (AI) and its transformative impact across diverse industries, powered by AWS's robust suite of services. This course unravels the optimal AWS tools for deploying generative AI models, guiding participants through the nuances of configuration, setup, and deployment options. Engage with best practices for data preparation, model training, and optimization using premier AWS services like Amazon SageMaker. Further, participants will refine their expertise in implementing scalable architectures, leveraging AWS's advanced monitoring tools, ensuring rigorous security protocols, and mastering performance optimization strategies. All these culminate in crafting efficient, scalable, and secure generative AI applications on AWS.
In the rapidly evolving landscape of information technology (IT), the integration of generative artificial intelligence (AI) is becoming increasingly essential. Generative AI, a branch of artificial intelligence, generates content, such as text, images, and even code, that is often indistinguishable from content created by humans. This course is designed to provide IT professionals with the knowledge and practical skills to leverage generative AI in the field of IT administration. Begin this course by exploring generative AI concepts and tools. You will learn how to deploy generative AI models and apply generative AI techniques on Azure. Then you will investigate how to create effective prompts and how to integrate your organization's data with Azure OpenAI models. Next, you will focus on analyzing generative AI models, automating tasks, and integrating generative AI tools. Finally, you will discover security concerns with using generative AI, find out how to apply security protocols, and troubleshoot generative AI. At the end of this course, you will be able to apply generative AI models to address real-world IT administration challenges.
Unlock the transformative potential of advanced generative AI (GenAI) models in site reliability engineering (SRE). This course caters to SRE professionals, IT architects, and those eager to harness the full scope of generative AI in the context of SRE and covers the concepts of advanced generative AI techniques to bolster system reliability, scalability, and efficiency in SRE. In this course, we will begin with an overview of advanced GenAI and SRE. You'll explore how to deploy advanced models from the Azure AI model catalog and transition into testing approaches for applications integrating GenAI. Next, you'll deploy a GenAI-based application to Azure Kubernetes Service (AKS), explore the suitability of GenAI models for SRE, and see which SRE tools incorporate GenAI. With that foundation, you'll experiment with various methods for supporting SRE operations with GenAI including chatbots, implementing a backoff mechanism, evaluating model performance, and configuring log analytics for GenAI models on Azure. Lastly, you'll explore GenAI and SRE advancements, and fine-tune a GenAI model in Azure OpenAI Studio.
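The backoff mechanism mentioned above is commonly implemented as exponential backoff with jitter; a minimal, library-agnostic sketch (illustrative parameter values, not Azure-specific code) might look like this:

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=30.0, rng=None):
    """Exponential backoff with jitter: the wait before each retry
    doubles (up to a cap), with randomness added so many clients
    don't retry in lockstep, a common pattern when calling
    rate-limited GenAI APIs."""
    rng = rng or random.Random(0)
    delays = []
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        # "Equal jitter": keep at least half the delay, randomize the rest.
        delays.append(delay / 2 + rng.uniform(0, delay / 2))
    return delays

print(backoff_delays())  # five waits, each at least as long as the previous
```

In a real SRE setting these delays would wrap the actual API call in a retry loop, with the cap and base tuned to the service's documented rate limits.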
The rapid integration of artificial intelligence (AI) in information technology (IT) brings forth ethical responsibilities that demand critical attention. This course delves into the ethical and responsible use of AI in IT, providing IT professionals, developers, and decision-makers with the knowledge and tools needed to ensure AI models are designed and deployed ethically. Begin by exploring ethical considerations and biases and the implications of biased AI models. Then you will learn how to design an AI model while incorporating legal and compliance considerations and use anonymization to ensure privacy. You will configure content safety filters in Azure OpenAI Studio to prevent the generation of harmful content. You will discover strategies, guidelines, and legal and compliance considerations for ethical AI model development, as well as the ethical impact of AI models. Next, you will examine common AI principles and goals and dive into approaches and algorithms to help identify and reduce biases. Finally, you will investigate how ignoring sensitive features when training a predictor does not necessarily address AI model disparities.
As a product manager, mastering generative artificial intelligence (GenAI) can enhance your ability to develop, strategize, and innovate technical products. This course is designed to bridge the gap between traditional product management and the cutting-edge capabilities offered by GenAI. Begin this course by discovering GenAI and its applications in product management. Then you will learn how to leverage different GenAI models to your advantage and explore the integration of GenAI throughout the product life cycle, from initial idea generation to analyzing customer feedback for continuous improvement. Next, you will investigate how GenAI can aid in persona development and product ideation, providing new ways to understand your users and create products that genuinely meet their needs. You will examine the use of GenAI as a dynamic tool for product innovation, helping you to organize product requirements more efficiently and effectively. Finally, you will identify the ethical and regulatory considerations of using GenAI and take a look at emerging trends.
Generative AI (GenAI) is fundamentally altering the way organizations operate, necessitating that program managers become familiar with GenAI and take a proactive stance in establishing governance around its use. In this course, you will explore the applications of GenAI in program management, design a GenAI governance plan, and support its integration across projects. Then you will examine how to develop an AI strategy and assess the adoption readiness of an organization. Next, you will discover how to ensure the accuracy of GenAI outputs, develop effective prompts, and assess the influence of GenAI on employee sentiment and team motivation. Finally, you will recognize GenAI as a program risk, identify ethical and regulatory considerations, and evaluate emerging trends in GenAI for program management.
In a business environment where optimizing operational efficiency is pivotal, the integration of artificial intelligence (AI) automation stands out as a transformative approach to advancing IT operational frameworks. In this course, you will see how AI automation can be integrated with IT operations to design AI automation workflows that enhance IT operations efficiency and automate IT tasks and processes effectively. Next, you will explore how IT automation can be evaluated and optimized using AI tools and continuous improvement strategies. Finally, we will demonstrate how AI tools can be integrated into IT operations.
Practical AI automation integration is not just a technological upgrade. It is a strategic enabler for enhanced system reliability, performance, and business continuity, allowing organizations to be more responsive and adaptive in a dynamic market landscape. In this course, you will see how AI-powered scripts can be used for enhanced IT support and site reliability engineering (SRE) operations, as well as tasks such as server monitoring and log analysis. Next, you will explore strategies and best practices for implementing secure and effective AI-powered IT solutions and evaluating their impact on IT support and SRE operations. Finally, you will discover how to optimize AI automation workflows for better performance in IT operations.
Artificial intelligence (AI) has taken the world by storm, and image generation has become one of AI's most interesting contributions to the modern world. In this course, examine the pros and cons of generative AI (GenAI), the history of image generation, the uses of AI in creative content generation, and generative AI models and methods. Next, discover how to use generative AI to create content, generative AI pipeline components, and use cases for generative AI. Finally, learn about popular generative AI frameworks and tools, ethical considerations of generative AI, how AI influences art, the implications of generative AI, and the possible future of generative AI. After course completion, you'll be able to comprehensively describe the fundamentals of AI-powered image generation.
Amazing, controversial, and game-changing. It's remarkable to think that we're only at the beginning, only starting to see the opportunities offered by artificial intelligence (AI)-powered image generation. Yet here we are, at the start of a technological marvel that's taking the world by storm. In this course, you'll explore image generation frameworks, beginning with variational autoencoders (VAEs) and generative adversarial networks (GANs), and a comparison of the two. Then you'll explore GAN architectures, GAN use cases, GAN training, and the DCGAN, WGAN, CycleGAN, and StyleGAN architectures. Finally, you'll learn about autoregressive models, autoregressive models in comparison to other techniques, diffusion models, diffusion model use cases, and the pros and cons of image generation frameworks.
Artificial intelligence has taken the world by storm over the past few years, and it is remarkable to think that we are only starting to see the opportunities offered by AI-powered image generation. To grasp the inner workings of the technology, an understanding of the mathematical foundations of AI-powered image generation is critical. In this course, you will explore the mathematical foundations of image generation, beginning with the role of generative adversarial networks (GANs) in image generation, basic GAN usage, probability distributions, and generative models. Then you will learn about noise vectors, activation functions in GANs, and loss functions. Next, you will investigate backpropagation, conditional GANs, and style transfer methods. You will discover latent space and adversarial training. Finally, you will create your own GAN-based image generation project.
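One of the topics listed above, conditional GANs, can be illustrated with a tiny sketch: the generator's input is a noise vector concatenated with a one-hot class label, which is what lets you steer generation toward a chosen category (toy dimensions chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

LATENT_DIM = 8   # size of the noise vector z
NUM_CLASSES = 3  # e.g. three image categories to condition on

def conditional_input(label, rng):
    """Build a conditional GAN generator input: a random noise vector
    concatenated with a one-hot class label. The generator learns to
    produce images of the class encoded in the label portion."""
    z = rng.standard_normal(LATENT_DIM)
    one_hot = np.zeros(NUM_CLASSES)
    one_hot[label] = 1.0
    return np.concatenate([z, one_hot])

x = conditional_input(label=1, rng=rng)
print(x.shape)  # (11,) = 8 noise dims + 3 label dims
```

In a full model, this combined vector is what the generator network consumes; the discriminator receives the same label alongside the image so both players are judged on class-consistent realism.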
With artificial intelligence (AI)-powered image generation, you are only limited by your imagination, your understanding of the technology, and the AI's capabilities in what you can accomplish. With a good understanding of generative AI, the sky's the limit. In this course, you'll be introduced to image editing with variational autoencoders (VAEs), beginning with fundamental principles of VAEs, probabilistic encoding and decoding with VAEs, and latent space in generative models. Then you'll be introduced to Keras, the Keras Environment, image editing with VAEs, and VAE training. Finally, you'll explore latent space variables in Keras, how to build an autoencoder in Keras, how to generate images with Keras, and Keras case studies.
Multiple technologies and techniques are used in artificial intelligence (AI)-powered image generation, and this broad field is growing so much every day that you would be forgiven for feeling overwhelmed by the opportunities. In many ways, AI image generation is in the very early stages, so it is staggering to think about what the future holds. In this course, you will be introduced to image generation with Stable Diffusion in Keras, beginning with an overview of Stable Diffusion and diffusion models. Then you will examine variations of diffusion models and uses for diffusion models. You will explore high-resolution image creation, realism, and detail enhancement with Stable Diffusion. Next, you will learn how to implement Stable Diffusion with Keras. Finally, you will investigate methods for prompt creation, inpainting, latent diffusion, and Stable Diffusion models.
Image generation is one of the most interesting contributions artificial intelligence (AI) is making to the modern world, and we have only scratched the surface of what is possible. Because AI image generation is still in its early stages, we have to wonder what the future holds. In this course, you will be introduced to high-resolution image generation with AI, beginning with Stable Diffusion use cases, model construction, and model training. Then you will delve into fine-tuning strategies, denoising diffusion models with Keras, and high-resolution image generation in Keras. Next, you will explore the advantages of high-resolution image generation in Keras, the inner workings of image generation, and the impact that Stable Diffusion has on AI-powered image generation. Finally, you will investigate the steps involved in moving an image generator to a cloud provider.
Generative Pre-trained Transformer (GPT) models go beyond classifying and predicting text behavior to helping actually generate text. Imagine an algorithm that can produce articles, songs, books, or code - anything that humans can write. That is what GPT can help you achieve. In this course, discover the key concepts of language models for text generation and the primary features of GPT models. Next, focus on GPT-3 architecture. Then, explore few-shot learning and industry use cases and challenges for GPT. Finally, practice decoding methods with greedy search, beam search, and basic and advanced sampling methods. Upon completing this course, you will understand the fundamentals of the GPT model and how it enables text generation.
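The decoding methods named above can be illustrated with a toy next-token distribution (invented probabilities standing in for a real model's softmax output):

```python
import random

# Toy next-token distribution at one decoding step, a stand-in for
# the softmax output of a GPT-style language model.
next_token_probs = {"the": 0.5, "a": 0.3, "every": 0.15, "zebra": 0.05}

def greedy_decode(probs):
    """Greedy search: always pick the single most likely token."""
    return max(probs, key=probs.get)

def sample_decode(probs, temperature=1.0, rng=None):
    """Basic sampling: draw a token in proportion to its probability.
    Raising p to the power 1/temperature flattens (T > 1) or sharpens
    (T < 1) the distribution, trading variety against predictability."""
    rng = rng or random.Random(0)
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(greedy_decode(next_token_probs))  # always "the"
print(sample_decode(next_token_probs))  # usually "the", sometimes others
```

Beam search extends the greedy idea by keeping several candidate continuations at each step instead of one; the sampling variant above is what makes generated text non-repetitive.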
Successful DevOps has to monitor the entire life cycle and use tools to facilitate collaboration to maximize the speed, quality, and rapid deployments necessary for today's business environment. Artificial intelligence (AI) can be a fundamental tool in achieving these goals, allowing teams to focus on core work while leaving monitoring and other tasks to AI. This course will introduce how AI can play a role in DevOps, including the benefits and limitations of using AI in this way. You will explore the ethics of using AI and how AI automation can impact DevOps efficiency and accuracy. Next, you will discover how to use generative AI in DevOps automation and how it can generate, complete, document, and explain relevant code. You will then explore how generative AI can impact continuous deployment, including integrating with tools like Bard, Jenkins, and ChatGPT. Finally, you will see how you can use generative AI to improve the handling and triaging of customer feedback.
Using artificial intelligence (AI)-powered cloud platforms to build, train, and deploy AI models can help combat the increasing complexity and resultant pain points of DevOps implementation. In this course, you will explore AI-powered cloud platforms like Amazon SageMaker, Google Cloud AI Platform, Microsoft Azure AI, and IBM Watson. Then you will assess the scalability, performance, and monitoring benefits of AI platforms in DevOps. Next, you will explore integration, delivery, and deployment strategies for AI-powered cloud platforms, learn how to integrate IBM's Watson for handling customer support, and work with Amazon CodeGuru to analyze code for security issues. You will also investigate Microsoft Azure AI solutions to create a pipeline using Azure AI DevOps. Finally, you will examine Google Cloud AI tools, including Duet AI, to modernize apps, create application programming interfaces (APIs), and use natural language prompts to analyze data.
As software demands for businesses increase, the need to release quicker and with fewer issues has become paramount. Continuous integration and continuous deployment (CI/CD) have become necessary steps in DevOps to improve software delivery and maintain automation. Now, with artificial intelligence (AI), it is possible to augment these steps to develop even better and faster processes. In this course, you will learn how AI can help in the CI/CD pipeline and some of the AI-based tools available to take advantage of the AI offerings and to make the transition smoother. You will begin the course with an overview of the CI/CD pipeline and look at advantages, disadvantages, and strategies for using AI-powered CI/CD pipelines. You will then explore AI tools that can support CI/CD pipelines and ways to optimize them. Next, you will discover how analytics and automation work with CI/CD pipelines. Finally, you will delve into some security options to fortify your CI/CD pipelines.
Test automation and continuous testing are critical components of DevOps. Testing is a manual and time-consuming task that AI is capable of automating, especially with its visual capabilities to spot issues similar to how a human would. In this course, learn about AI's effect on test automation, the benefits and challenges of AI test automation, and common AI test automation tools. Next, discover how to create an interface model and perform no-code tests with Testim.io, the differences between continuous testing and test automation, and strategies for achieving continuous testing involving AI. Finally, explore how to run tests automatically with Percy and GitHub Copilot and use Percy to catch visual bugs and implement continuous testing in CI/CD processes using automation.
Orchestration in DevOps involves managing workflows, leveraging tools, and using technology to solve business problems. It helps control the complexity of the systems required by managing the workflows and reducing IT pain points. In this course, explore DevOps infrastructure orchestration, beginning with the benefits of using AI to improve infrastructure orchestration and the challenges of integrating AI into processes. Next, discover strategies for leveraging AI in orchestration, compare different AI-powered orchestration tools for various use cases, create and process alerts with Prometheus and Alertmanager, and use Grafana to parse data and create custom dashboards for predictive analysis. Finally, learn how to work with OpenStack, configure self-managing and self-healing infrastructure within OpenStack, and use MLflow and TensorFlow Serving to deploy machine learning models into production environments.
To build robust, secure, and performant systems, it is necessary to monitor and observe the entire DevOps lifecycle. Such monitoring can generate significant amounts of data, and AI tools can be game changers for identifying issues and tracking down the root cause of problems; some tools can even provide real-time metrics or predict future failures based on collected data. In this course, you will learn about monitoring and observability tools that incorporate elements of AI and the benefits that such tools can provide to the DevOps process.
Release management is responsible for managing the software process from initial development to deployment. As releases become smaller and more frequent, managing them requires enormous effort. In this course, you will discover how AI and AI-powered release management tools can enhance and optimize software release management. You will examine the impact that AI can have on release management, including how AI can help reduce errors and optimize workflows. Then you will investigate various AI release management tools. Next, you will explore release management with JFrog Artifactory, Bitbucket, and Jira. Additionally, you will learn how AI-powered release management tools can improve change management and enhance deployment reliability. Finally, you will use the GitLab tool to create, manage, and edit a release.
AI has impacted almost every phase of the DevOps life cycle; however, technologies like generative AI are still relatively new and continually improving. AI will likely continue to have an impact on DevOps well into the future. In this course, you will explore the potential future changes AI will bring to DevOps. You will identify future AI trends that will impact software development and influence test automation. Then you will discover how future AI will transform security, protecting networks and businesses from threats. Next, you will investigate how AI is changing business infrastructure, impacting observability, and influencing release and deployment. You will examine how AI can enhance collaboration, change how businesses make decisions, and transform continuous improvement. Finally, you will take a look at the impact of AI on business and occupations and prepare to integrate these advancements.
Implementing AI Change Management in your organization requires leaders who are ready to take the reins and navigate this exciting transition. This course will provide you with the building blocks to be that leader. It dives into the unique challenges presented by AI integration, such as the rapid pace of change, potential job displacement, and ethical considerations. We'll explore how these challenges differ from traditional change initiatives. You'll identify key areas requiring transformation, including work processes, skill sets, and decision-making strategies. Crucially, the course equips you with AI-specific change management frameworks and explores how these frameworks promote a smooth transition through strategies like reskilling and upskilling your workforce, building trust through data governance, and fostering a culture of continuous learning. Moving beyond frameworks, this course provides practical guidance for developing your own change management strategy. The course also explores how AI will impact different roles within your organization and how to analyze potential skills gaps. Finally, the course concludes with invaluable insights from real-world success stories. We'll showcase compelling case studies from leading organizations that have successfully integrated AI, highlighting the key change management strategies they employed.
As you navigate the transformative power of AI within your organization, fostering a continuous learning culture becomes paramount for long-term success. To achieve this, you will need the strategies and tools to empower your workforce in this dynamic environment. This course delves into the importance of a strong learning culture. You'll explore how it empowers your employees to adapt to new technologies like AI, embrace continuous change, and ultimately drive the long-term success of your organization. Moving beyond the "why," the course equips you with practical strategies. We'll explore how to foster a growth mindset amongst your employees, shifting their perception of AI from fear to excitement for its potential. Next, this course explores innovative learning and development approaches specifically tailored for the AI era. We'll delve into examples like microlearning platforms for on-demand learning, personalized learning paths, gamified experiences to boost engagement, and collaborative learning opportunities that encourage knowledge sharing across your organization. Finally, we'll discuss the importance of measuring your learning culture's effectiveness by exploring key metrics to track. Additionally, we'll explore how relevant key performance indicators (KPIs) and metrics might need to be adjusted in the AI era.
EARN A DIGITAL BADGE WHEN YOU COMPLETE THESE COURSES
Skillsoft is providing you with the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.
Written by an industry expert and teacher, this guide is a practical, hands-on Python book with several projects and case studies to build, and it provides real-world templates that you may repurpose for your own coding projects.
"Pythonic AI" is a book that teaches you how to build AI models using Python. It also includes practical projects in different domains so you can see how AI is used in the real world.
Making AI more accessible than ever, this hands-on book provides a clear overview of the technology, the common misconceptions surrounding it, and a fascinating look at its applications in everything from self-driving cars and drones to its contributions in the medical field.
Presenting an integrated, real-life case study incorporating all of the related managerial aspects, this book covers all facets of artificial intelligence (AI) and expert systems in a lucid and coherent manner.
3h 58m By K Sarukesi, P Gopalakrishnan, V S Janakiraman
Including contributions from leading scholars in a diverse set of fields, this resource is comprised of chapters addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence.
Offering insight into solving some well-known AI problems using the most efficient problem-solving methods by humans and computers, this book discusses the importance of developing critical-thinking methods and skills, and develops a consistent approach toward each problem.
4h 17m By Christopher Pileggi, Danny Kopec, David Ungar, Shweta Shetty
Providing a gentle introduction to the world of artificial intelligence (AI) using the Raspberry Pi as the computing platform, this book explores most of the major AI topics, including expert systems, machine learning (both shallow and deep), fuzzy logic control, and more.
With examples in C# that can easily be applied across a wide range of platforms, including web, desktop, and mobile, this book is your accessible and practical guide to building AI-powered applications using the easy-to-use Cognitive Services APIs from Microsoft.
This book provides a comprehensive explanation of precision (i.e., personalized) healthcare and explores how it can be advanced through artificial intelligence (AI) and other data-driven technologies.
Teaching you how to build practical applications of computer vision using the OpenCV library with Python, this book discusses different facets of computer vision such as image and object detection, tracking and motion analysis and their applications with examples.
Containing real examples that you can implement and modify to build useful computer vision systems, this book will teach you to apply computer vision and machine learning concepts in developing business and industrial applications using a practical, step-by-step approach.
A heavily illustrated, practical introduction to an exciting field, this detailed resource explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry standard OpenCV libraries.
Exploring deep learning applications using frameworks such as TensorFlow and Keras, this book helps you to ramp up your practical know-how in a short period of time and focuses you on the domain, models, and algorithms required for deep learning applications.
Taking you through the history of chatbots, including when they were invented and how they became popular, this book will show how to build a chatbot for your next project using best practices and focusing on the technological implementation and UX.
If you are a developer interested in learning how to build your own conversational bot from scratch, this book is for you. Upon completion, you will be able to build a text-based Facebook Messenger bot and a voice-based custom skill for Amazon's Alexa Voice Service.
1h 13m By Amit Kothari, Joshua Hoover, Rania Zyane
Providing a comprehensive look at all the major bot frameworks available, this book will teach you the basics for each framework helping you to get a clear picture for which one is best for your needs.
Containing real-world examples throughout, this practical book will teach you how to develop bots with zero coding knowledge using the Azure Cognitive QnA Maker service, a GUI cognitive service from Microsoft.
Showing how you can use bots for just about everything, this book teaches you about bot programming, using all the latest and greatest programming languages, including Python, Go, and Clojure, so you can feel at ease writing your Telegram bot in a way that suits you.
Looking at the popular paradigms for chatbot construction, this book will provide a comprehensive source of algorithms and architectures for building chatbots for various domains based on the recent trends in computational linguistics and machine learning.
Introduction to Generative AI guides you through benefits, risks, and limitations of Generative AI technology. You'll discover how AI models learn and think, explore best practices for creating text and graphics, and consider the impact of AI on society, the economy, and the law.
This book provides an introduction to hyperautomation, highlighting its key components and providing guidance on how organizations can implement it to streamline everyday business operations.
4h 39m By Dr. Jagreet Kaur, Navdeep Singh Gill, Suryakant
Reimagine generative AI tools as useful creative prototyping aids that can augment your own creative process and projects. Gain a deeper understanding of how generative AI can elevate your creative future.
Generative AI and ChatGPT have the potential to transform industries and society by improving efficiency, enhancing creativity, and enabling more personalized experiences. If you are someone who is looking to stay ahead of the curve in this rapidly evolving digital age and utilize its potential, this book is for you.
2h 43m By Soumyadeep Roy, Sumit Kumar, Utpal Chakraborty
This book provides a deep dive into the world of generative AI, covering everything from the basics of neural networks to the intricacies of large language models like ChatGPT and Google Bard. It serves as a one-stop resource for anyone interested in understanding and applying this transformative technology and is particularly aimed at those just getting started with generative AI.
ChatGPT For Dummies demystifies the artificial intelligence tool that can answer questions, write essays, and generate just about any kind of text it's asked for. This powerful example of generative AI is widely predicted to upend education and business. In this book, you'll learn how ChatGPT works and how you can operate it in a way that yields satisfactory results.
This book shows how generative technology works and what drives it. It also surveys applications, showing what various startups and large companies are doing in the space, and examines the challenges and risk factors.
Learn how to use the large-scale natural language processing model developed by OpenAI: ChatGPT. This book explains how ChatGPT uses machine learning to autonomously generate text based on user input and explores the significant implications for human communication and interaction.
Making AI more accessible than ever, this hands-on book provides a clear overview of the technology, the common misconceptions surrounding it, and a fascinating look at its applications in everything from self-driving cars and drones to its contributions in the medical field.
Written by world-class researchers and scientists, this book explores how Artificial Intelligence (AI), by leading to an increase in the autonomy of machines and robots, is offering opportunities for an expanded but uncertain impact on society by humans, machines, and robots.
6h 35m By Donald Sofge, Ranjeev Mittu, Stephen Russell (eds), W.F. Lawless
Containing real examples that you can implement and modify to build useful computer vision systems, this book will teach you to apply computer vision and machine learning concepts in developing business and industrial applications using a practical, step-by-step approach.
Teaching you how to build practical applications of computer vision using the OpenCV library with Python, this book discusses different facets of computer vision such as image and object detection, tracking and motion analysis and their applications with examples.
A heavily illustrated, practical introduction to an exciting field, this detailed resource explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry standard OpenCV libraries.
Designed as a self-teaching introduction to the fundamental concepts of artificial intelligence, this book's coverage includes searching processes, knowledge representation, machine learning, expert systems, programming, and robotics.
Presenting an integrated, real-life case study incorporating all of the related managerial aspects, this book covers all facets of artificial intelligence (AI) and expert systems in a lucid and coherent manner.
3h 58m By K Sarukesi, P Gopalakrishnan, V S Janakiraman
Organizations invest substantial resources in developing software that can perform the way a human does. Image classification, object detection and tracking, pose estimation, facial recognition, and sentiment estimation all play a major role in solving computer vision problems.
Providing a detailed explanation of the real world and industry wide natural language processing use-cases, this book is for anyone looking to develop the fundamental concepts in NLP and explore more advanced problems.
This book covers a wide range of topics, including the fundamentals of machine learning, understanding and optimizing hyperparameters, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
Whether you want to be a better leader, manager, negotiator, salesperson, or decision-maker, this book contains numerous examples and practical exercises that will help you use neurolinguistic programming (NLP) to improve your career and achieve success at work, whether in the private or public sector, and regardless of your current role.
This book teaches you how to create practical NLP applications without getting bogged down in complex language theory and the mathematics of deep learning.
This book is for developers who are looking for an overview of basic concepts in Natural Language Processing. It casts a wide net of techniques to help developers who have a range of technical backgrounds.
This book teaches you to create powerful NLP solutions quickly by building on existing pretrained models. This instantly useful book provides crystal-clear explanations of the concepts you need to grok transfer learning along with hands-on examples so you can practice your new skills immediately. As you go, you'll apply state-of-the-art transfer learning methods to create a spam email classifier, a fact checker, and more real-world applications.
This book is for developers who are looking for an introduction to basic concepts in NLP and machine learning. Numerous code samples and listings are included to support myriad topics.
As part of the best-selling Pocket Primer series, this book is designed to introduce beginners to basic machine learning algorithms using TensorFlow 2.
Providing a friendly, easy-to-follow book on TensorFlow, this thorough resource tames this sometimes intimidating technology and explains, in simple steps, how to write TensorFlow applications.
This book is designed so that you can focus on the parts you are interested in. You will explore topics such as regularization, optimizers, optimization, metric analysis, and hyper-parameter tuning.
Covering the basics of Reinforcement Learning with the help of the Python programming language, this book touches on several aspects, such as Q-learning, MDPs, RL with Keras, and OpenAI Gym and its environments, and also covers algorithms related to RL.
Exploring deep learning applications using frameworks such as TensorFlow and Keras, this book helps you to ramp up your practical know-how in a short period of time and focuses you on the domain, models, and algorithms required for deep learning applications.
Introducing readers to the latest version of the TensorFlow library, this book will teach you how to use TensorFlow 2.0 to build machine learning and deep learning models with complete examples.
Intended for software developers who are advanced beginners, this book is designed to prepare programmers for machine learning and deep learning/TensorFlow topics.
You will delve into the theory behind AI and machine learning projects, examining techniques for learning from data, the use of neural networks and why algorithms are so important in the development of a new AI agent or system.
Starting with a basic definition of AI and explanations of data use, algorithms, special hardware, and more, this reference simplifies this complex topic for anyone who wants to understand what operates the devices we can't live without.
This book discusses a number of critical issues, such as AI ethics and privacy, the development of a conscious mind, and autonomous robotics in our daily lives.
Helping with the identification of which business problems and opportunities are right for AI, this book provides the reader with an easy to understand roadmap for how to take an organization through the adoption of AI technology.
Organizations that use AI to improve existing KPIs or create new ones realize more business benefits than organizations that adjust their KPIs without AI.
10m By David Kiron, François Candelon, Michael Chu, Michael Schrage, Shervin Khodabandeh
In I, Human psychologist Tomas Chamorro-Premuzic takes readers on an enthralling and eye-opening journey across the AI landscape. Though AI has the potential to change our lives for the better, he argues, AI is also worsening our bad tendencies, making us more distracted, selfish, biased, narcissistic, entitled, predictable, and impatient.
In the book, you will explore the history of social engineering and social robotics, the psychology of deception, considerations of machine sentience and consciousness, and the history of how technology has been weaponized in the past.
Generative AI: The Insights You Need from Harvard Business Review will help you understand the potential of these new technologies, pick the right Gen AI projects, and reinvent your business for the new age of AI.
3h 4m 59s By David De Cremer, Ethan Mollick, Harvard Business Review, Prabhakant Sinha, Tsedal Neeley
A hands-on guide to evolving your company with ethical AI along with thought-provoking insights and predictions from a variety of well-known industry leaders.
The Python Resource Optimization Awareness benchmark will measure your ability to read images from the file system into Python and analyze and process them using OpenCV and Faust. You will be evaluated on your ability to recognize fundamental concepts related to computer vision and the basic operations performed on images using OpenCV. A learner who scores high on this benchmark demonstrates that they have the skills to perform basic operations such as adding and subtracting two images and to perform stream processing through windowing operations in Faust.
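The image addition and subtraction this benchmark mentions can be sketched without OpenCV installed; the NumPy stand-in below mimics the saturating arithmetic that cv2.add and cv2.subtract perform (the function names here are illustrative, not OpenCV's API):

```python
import numpy as np

def saturating_add(img_a, img_b):
    # Mimics cv2.add: sums are clipped to [0, 255] instead of wrapping around.
    return np.clip(img_a.astype(np.int16) + img_b.astype(np.int16), 0, 255).astype(np.uint8)

def saturating_subtract(img_a, img_b):
    # Mimics cv2.subtract: negative differences are clipped to 0.
    return np.clip(img_a.astype(np.int16) - img_b.astype(np.int16), 0, 255).astype(np.uint8)

a = np.array([[200, 100]], dtype=np.uint8)  # a tiny 1x2 "image"
b = np.array([[100, 150]], dtype=np.uint8)
print(saturating_add(a, b))       # [[255 250]]  (300 saturates to 255)
print(saturating_subtract(a, b))  # [[100   0]]  (-50 saturates to 0)
```

Saturation, rather than uint8 wraparound, is why cv2.add is preferred over a plain `+` on image arrays.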
The Prompt Engineering for Python Literacy (Beginner Level) benchmark evaluates your knowledge of using generative AI tools like ChatGPT, Bard, and Bing to execute basic Python programming actions, including working with variables and comments and creating lists, tuples, sets, and dictionaries. You will be assessed on your ability to perform string operations and work with control flow statements in Python using prompt engineering. Learners who score high on this benchmark demonstrate that they know the basics of Python and how to use generative AI tools.
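As a quick refresher, the core Python constructs this benchmark names (lists, tuples, sets, dictionaries, string operations, and control flow) look like this in a minimal sketch:

```python
# list: ordered and mutable
langs = ["Python", "Go"]
langs.append("Rust")

# tuple: ordered and immutable
point = (3, 4)

# set: unordered, with duplicates removed
unique = {1, 2, 2, 3}

# dict: key-value mapping
ages = {"Ada": 36, "Alan": 41}

# string operations and control flow
greeting = "hello, world".title()
over_40 = [name for name, age in ages.items() if age > 40]
```

These are exactly the kinds of snippets a generative AI tool can produce from a well-phrased prompt, which the benchmark then expects you to read and verify.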
The Prompt Engineering for Python Competency (Intermediate Level) benchmark evaluates your knowledge of creating and using functions in Python, prompting generative AI tools to write functions for you, and working with first class functions and lambdas in Python. You will be assessed on your ability to leverage detailed and relevant prompts for generating code to create classes, instantiate objects, store and visualize data, and execute code using a script. Learners who score high on this benchmark demonstrate that they have the skills to apply prompt engineering to execute Python functions, classes, scripts, and data analysis.
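The first-class functions and lambdas this benchmark covers can be illustrated with a short, self-contained sketch (the helper names `apply` and `make_adder` are our own, for illustration):

```python
def apply(fn, values):
    # Functions are first-class: they can be passed as arguments...
    return [fn(v) for v in values]

square = lambda x: x * x          # a lambda bound to a name
print(apply(square, [1, 2, 3]))   # [1, 4, 9]

def make_adder(n):
    # ...and returned from other functions (a closure capturing n).
    return lambda x: x + n

add5 = make_adder(5)
print(add5(10))                   # 15
```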
The Prompt Engineering Fundamentals for Programmers Literacy (Beginner Level) benchmark evaluates your knowledge and understanding of the basics of prompt engineering. You will be assessed on your understanding of generative AI language models and how their responses can be tuned in the OpenAI Playground. Learners who score high on this benchmark demonstrate that they can apply prompt engineering when working with the OpenAI Playground.
The Prompt Engineering for Git Literacy (Beginner Level) benchmark evaluates your foundational understanding of the basics of Git. You will be assessed on your ability to leverage prompt engineering to navigate your way around Git commands. Learners who score high on this benchmark demonstrate that they have the skills to leverage prompt engineering to execute Git commands.
The Prompt Engineering for Git Competency (Intermediate Level) benchmark evaluates your working knowledge of using remote repositories and leveraging generative AI to navigate Git commands. You will be assessed on your ability to use prompt engineering to execute complex techniques like branching, merging, rebasing, and cloning in Git. Learners who score high on this benchmark demonstrate that they have the skills to leverage prompt engineering with generative AI to work with remote repositories and Git branches.
The Prompt Engineering for Data Science Literacy (Beginner Level) benchmark measures your recognition of the basics of pandas DataFrames and Series. You will be evaluated on your knowledge of how to leverage prompts, the capabilities of pandas DataFrame objects, and how generative AI can help you solve your common data manipulation problems. A learner who scores high on this benchmark demonstrates that they have the knowledge required to start leveraging prompt engineering and generative AI for data science.
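A minimal sketch of the pandas DataFrame and Series basics this benchmark covers, assuming pandas is installed (the column names and values are illustrative):

```python
import pandas as pd

# A small table standing in for the tabular data the benchmark's tasks involve.
df = pd.DataFrame({"name": ["a", "b", "c"], "score": [10, 25, 40]})

high = df[df["score"] > 20]             # boolean-mask filtering
total = df["score"].sum()               # column aggregation
s = pd.Series([1, 2, 3], name="vals")   # a Series is a single labeled column
```

Common data manipulation problems like these filtering and aggregation steps are where prompt engineering with generative AI tends to pay off first.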
The Leveraging Google AI APIs Literacy (Beginner Level) benchmark measures your knowledge of using Bard to answer questions and create content while recognizing Bard's limitations, features, and best practices and the ethics, privacy, and security concerns that can come with using a generative AI technology like Bard. You will be assessed on your ability to unlock creativity with Bard for content creation, summaries, image recognition, and translations. Learners who score high on this benchmark demonstrate that they recognize Google Bard's generative AI features and capabilities.
The Leveraging Google AI APIs Competency (Intermediate Level) benchmark evaluates your knowledge of utilizing Bard's analytical features and the PaLM 2 API for programmatically accomplishing tasks, including using PaLM models, supported languages, libraries, and communication interfaces. You will be evaluated on your skills in solving code problems with Bard, using the Python client API to integrate PaLM seamlessly, and utilizing advanced features like communication adjustments, response fine-tuning, troubleshooting, security enhancements, and chatbot creation. Learners who score high on this benchmark demonstrate that they have the skills to use Bard's analytical features and can leverage the PaLM API for advanced features and application development.
The Leveraging OpenAI APIs Literacy (Beginner Level) benchmark measures your understanding of the OpenAI API and how to generate an API key and work with models and endpoints. You will be assessed on your ability to use the language translation API and identify organizational best practices when using OpenAI to handle scaling, latency, and limits. Learners who score high on this benchmark demonstrate that they have the skills necessary to use OpenAI via its application programming interface (API).
The Leveraging OpenAI APIs Competency (Intermediate Level) benchmark evaluates your knowledge of using advanced OpenAI features to manipulate text, images, and audio. You will be assessed on your ability to use code generation, embeddings, and fine-tuning to solve real-world problems. Learners who score high on this benchmark demonstrate that they have the skills necessary to use advanced OpenAI API features.
The Leveraging Generative AI APIs Literacy (Beginner Level) benchmark evaluates your understanding of the basics of generative AI and the history and future of generative AI. You will be assessed on your ability to recognize the ethical, safety, security, and privacy concerns associated with its use. Learners who score high on this benchmark demonstrate recognition of generative AI functionality and best practices.
The Prompt Engineering for Django Literacy (Beginner Level) benchmark evaluates your knowledge of using generative AI tools like ChatGPT and Google Bard to initiate a Django project and utilize its basic functionality. You will be assessed on your ability to develop a Django web app with a basic view, render HTML templates, incorporate static assets, and navigate through potential misdirection from generative AI tools. Learners scoring high on this benchmark demonstrate that they have a solid foundation for working with Django.
The Leading Security Teams for GenAI Solutions Competency (Intermediate Level) benchmark measures your knowledge of key concepts used to enhance security posture by monitoring and leveraging new technologies. You will be evaluated on your ability to identify the concepts and steps required to protect and detect intellectual property and recall the concepts and steps required to stay secure when using generative artificial intelligence (GenAI) solutions. A learner who scores highly on this benchmark demonstrates the requisite skills and knowledge in generative AI security and can assist security teams with the integration of generative AI solutions.
The Leading Security Teams for GenAI Solutions Proficiency (Advanced Level) benchmark measures your knowledge of the key elements around governance for generative artificial intelligence (GenAI) solutions. You will be evaluated on your recollection of key concepts surrounding the security implications created by GenAI. A learner who scores highly on this benchmark demonstrates the requisite skills and expertise in generative AI security. They can lead security teams in integrating generative AI solutions and make informed decisions to ensure safe navigation in the emerging GenAI landscape.
The OpenAI APIs Literacy (Beginner Level) benchmark measures your knowledge of OpenAI application programming interfaces (APIs) and insight into their functionalities. You will be evaluated on your recognition of how to access and manipulate OpenAI application programming interfaces (APIs) via Python. A learner who scores high on this benchmark demonstrates a solid understanding of OpenAI's APIs, including their functionalities and core concepts.
The AI Foundations Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) concepts and some of its most common tools and practices. You will be evaluated on your knowledge of the utilization of AI in the workplace and the responsible use of AI. A learner who scores high on this benchmark demonstrates an entry-level understanding of AI and generative AI (GenAI) and the best practices that go along with both technologies.
The Harnessing AI for Marketing Success Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) and some of its most common tools and practices in the corporate marketing and messaging domain. You will be evaluated on your knowledge of AI use cases, responsible and ethical AI use, the role of AI in marketing, and applying generative AI (GenAI) in marketing scenarios. A learner who scores high on this benchmark demonstrates an entry-level understanding of AI, GenAI, and the best practices that go along with both as used in marketing and communication practices.
The Maximizing Customer Service with AI Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) and some of its most common tools and practices in the customer service domain. You will be evaluated on your knowledge of AI use cases, responsible and ethical AI use, the effect of generative AI on the customer service industry, and practical uses of AI tools in customer service. A learner who scores high on this benchmark demonstrates an entry-level understanding of AI, generative AI (GenAI), and the best practices that go along with both as used in customer service.
The AI-enhanced Sales Strategies Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) and some of its most common tools and practices in the sales domain. You will be evaluated on your knowledge of AI use cases, responsible and ethical AI use, how generative AI (GenAI) can be used to enhance and streamline sales processes, and the integration of AI tools in a sales team's daily workflow. A learner who scores high on this benchmark demonstrates an entry-level understanding of AI, GenAI, and the best practices that go along with both as used in sales practices.
The Smart AI in Product Management Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) and some of its most common tools and practices with product marketing. You will be evaluated on your knowledge of AI use cases, responsible and ethical AI use, the role of generative AI (GenAI) in product management, and how AI uses large context window language models to enhance product development. A learner who scores high on this benchmark demonstrates an entry-level understanding of AI, GenAI, and the best practices that go along with both as used in product management.
The Leveraging AI in Human Resources Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) and some of its most common tools and practices in the human resources (HR) domain. You will be evaluated on your knowledge of AI use cases, responsible and ethical AI use, how AI is revolutionizing human resources, and leveraging AI tools to streamline HR functions. A learner who scores high on this benchmark demonstrates an entry-level understanding of AI, generative AI (GenAI), and the best practices that go along with both as used in human resources practices.
The AI Change Management Literacy (Beginner Level) benchmark measures your knowledge of reasons for organizational change triggered by AI adoption. You will be evaluated on your recognition of the challenges and opportunities associated with AI integration and how AI-driven data insights can inform and support effective change management strategies. A learner who scores high on this benchmark demonstrates that they have the foundational knowledge of the drivers and impacts of AI adoption in organizations.
The AI Change Management Proficiency (Intermediate Level) benchmark measures your knowledge of the unique challenges of AI integration within a change management framework, strategies to manage resistance and promote employee buy-in, and the necessary skills and training required for workforce AI adoption. You will be evaluated on your recognition of the importance of a learning culture for successful AI integration and long-term success, strategies to foster a growth mindset and embrace new technologies, and strategies to measure effectiveness and assess the impact of AI adoption and organizational performance. A learner who scores high on this benchmark demonstrates that they can lead the transformation in their organizations.
The Generative AI, Prompting and Ethics Awareness benchmark measures your foundational knowledge of generative AI concepts. You will be assessed on generative AI principles, prompting, and ethics. A learner who scores high on this benchmark demonstrates that they have the skills to use generative AI tools on a day-to-day basis.
The Python Resource Optimization Competency (Intermediate Level) benchmark measures your ability to implement a variety of image manipulations using OpenCV to prepare images for end users or machine learning pipelines. You will be evaluated on your knowledge of representing streaming elements using Faust models, processing them using agents, and sending and receiving messages using channels. A learner who scores high on this benchmark demonstrates that they have the skills to manipulate images in OpenCV and perform stream processing using models, agents, and channels in Faust.
The Python Resource Optimization Literacy (Beginner Level) benchmark assesses your ability to use OpenCV to read and write images, explore color scale and grayscale images, and perform basic image transformations. You will be evaluated on your ability to differentiate batch and stream processing, recall the components in a stream processing architecture, and implement stream processing using a Faust application that reads streaming messages from a Kafka topic. A learner who scores high on this benchmark demonstrates that they have the skills to perform basic operations in OpenCV and Faust for image transformation and stream processing.
The Python Resource Optimization Proficiency (Advanced Level) benchmark measures your ability to perform image transformation in OpenCV using advanced image operations to generate augmented or pre-processed images. You will be evaluated on your ability to implement operations for processing Faust stream data and use tables for fault-tolerance and stateful stream processing transformations. You will also be assessed on your knowledge of different windowing operation types, performing windowing operations, exposing app metrics using web views, and the differences between event time, ingestion time, and processing time. A learner who scores high on this benchmark demonstrates that they have the skills to perform advanced image operations using OpenCV, and can maintain state in tables and implement stream processing using windowing operations in Faust.
The Natural Language Processing Awareness (Beginner Level) benchmark measures your exposure to natural language processing (NLP) concepts, such as the fundamentals of NLP, text mining, and analytics. Learners who score high on this benchmark demonstrate that they have the foundational knowledge of natural language processing.
The Natural Language Processing Mastery (Advanced Level) benchmark measures your experience level in using natural language processing (NLP) advanced techniques, such as transformer models, BERT, GPT, and more, to build advanced NLP applications. A learner who scores high on this benchmark demonstrates that they have mastery in developing NLP applications.
The Text Mining and Analytics Literacy (Beginner Level) benchmark measures your exposure to natural language processing (NLP) concepts, such as the fundamentals of NLP, text mining, and analytics, as well as the various libraries and frameworks used for NLP. Learners who score high on this benchmark demonstrate that they have beginner-level knowledge of natural language processing.
The Text Mining and Analytics Competency (Intermediate Level) benchmark measures your experience with natural language processing (NLP) techniques and tools, such as text mining and analytics, spaCy, NLTK, libraries and frameworks for app development, and sentiment analysis. Learners who score high on this benchmark demonstrate that they have a good working knowledge of natural language processing text mining and analytics and can work on NLP text analytics projects with minimal supervision.
The Text Mining and Analytics Proficiency (Advanced Level) benchmark measures your working experience with natural language processing (NLP) techniques and tools, such as text mining and analytics, spaCy, NLTK, NLP libraries, and sentiment analysis. Learners who score high on this benchmark demonstrate that they have good working experience in text mining and analytics using natural language processing and can work on NLP text analytics projects independently.
The Deep Learning for Natural Language Processing Literacy (Beginner Level) benchmark measures your basic understanding of deep learning techniques and concepts for developing natural language processing (NLP) applications. Learners who score high on this benchmark demonstrate that they have a good understanding of deep learning frameworks and techniques used for NLP application development.
The Deep Learning for Natural Language Processing Competency (Intermediate Level) benchmark measures your understanding and working knowledge of deep learning techniques and concepts, neural networks, RNNs, and memory-based networks for developing natural language processing (NLP) applications. Learners who score high on this benchmark demonstrate that they have a good understanding of deep learning frameworks and techniques used for NLP application development and can work on NLP projects with minimal supervision.
The Deep Learning for Natural Language Processing Proficiency (Advanced Level) benchmark measures your working knowledge of deep learning techniques and concepts. You will be evaluated on your ability to work with neural networks, RNNs, memory-based networks, and transfer learning models for developing natural language processing (NLP) applications. Learners who score high on this benchmark demonstrate that they have experience applying deep learning frameworks and techniques used for NLP application development and can work on NLP projects independently.
The Fundamentals of Text Processing in NLP Literacy (Beginner Level) benchmark measures your ability to recall and understand the basics of text preprocessing and cleaning in Natural Language Processing (NLP). You will be evaluated on your knowledge of the Natural Language Toolkit (NLTK), SpaCy, and Python libraries for text processing. A learner who scores high on this benchmark demonstrates that they have a basic understanding of the foundations of text processing in NLP.
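The text preprocessing and cleaning steps this benchmark covers can be sketched without NLTK or spaCy; the library-free version below illustrates lowercasing, tokenization, and stopword removal (the tiny stopword list is illustrative, not NLTK's):

```python
import re

# A tiny illustrative stopword list; NLTK and spaCy ship real ones.
STOPWORDS = {"the", "a", "is", "and", "of"}

def preprocess(text):
    text = text.lower()                     # normalize case
    tokens = re.findall(r"[a-z']+", text)   # simple word tokenizer
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("The quick brown fox is fast."))
# ['quick', 'brown', 'fox', 'fast']
```

Production pipelines would swap in `nltk.word_tokenize` or a spaCy `Doc` for the regex tokenizer, but the shape of the pipeline is the same.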
The Fundamentals of Text Processing in NLP Competency (Intermediate Level) benchmark measures your ability to recognize rule-based models for sentiment analysis in Natural Language Processing (NLP). You will be evaluated on your knowledge of representing text as numeric features and using word embeddings to capture relationships in text. A learner who scores high on this benchmark demonstrates that they have good experience in developing text processing applications using NLP.
The NLP and LLMs Competency (Intermediate Level) benchmark measures your knowledge of working with tokenizers in Hugging Face. You will be evaluated on your recognition of Hugging Face classification, QnA pipelines, and text generation pipelines. A learner who scores high on this benchmark demonstrates that they have good experience in developing NLP and LLM applications using Hugging Face with minimal supervision.
The NLP and LLMs Proficiency (Advanced Level) benchmark measures your knowledge of the concepts of language translation, summarization, and semantic similarity. You will be evaluated on your skills in fine-tuning models for classification and question answering and fine-tuning models for language translation and summarization. A learner who scores high on this benchmark demonstrates that they have expertise in developing NLP and LLM applications and can work on NLP and LLM projects without any supervision.
The NLP with Deep Learning Competency (Intermediate Level) benchmark measures your ability to identify the structure of neural networks, train a Deep Neural Network (DNN) model, and generate term frequency-inverse document frequency (TF-IDF) encodings for text. You will be evaluated on your ability to train models using pre-trained word vector embeddings, recognize the structure of a recurrent neural network (RNN), and train an RNN for sentiment analysis and with long short-term memory (LSTM). A learner who scores high on this benchmark demonstrates that they have good knowledge and experience in developing NLP applications using deep learning models.
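The TF-IDF encoding mentioned above can be sketched from scratch (scikit-learn's TfidfVectorizer is the usual tool, and its exact formula differs slightly from this textbook version):

```python
import math
from collections import Counter

# Three toy "documents", each already tokenized.
docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]

def tf_idf(term, doc, docs):
    tf = Counter(doc)[term] / len(doc)        # term frequency within this doc
    df = sum(1 for d in docs if term in d)    # number of docs containing term
    idf = math.log(len(docs) / df)            # rarer terms score higher
    return tf * idf

# "sat" appears in only one document, so it outweighs the more common "cat".
print(tf_idf("sat", docs[0], docs) > tf_idf("cat", docs[0], docs))  # True
```

The resulting weights are what a deep neural network would consume as input features in place of raw word counts.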
The NLP with Deep Learning Proficiency (Advanced Level) benchmark measures your knowledge of out-of-the-box transformer models for Natural Language Processing (NLP). You will be evaluated on your ability to use attention-based models and transformers for NLP. A learner who scores high on this benchmark demonstrates that they have good experience in developing NLP applications using deep learning models, including transformer architectures, and can work on projects with minimal supervision.
The AI/ML for Non-tech Learners benchmark measures your ability to recognize key terms and concepts related to artificial intelligence (AI) and machine learning (ML). You will be evaluated on your knowledge of the benefits of ML and AI, leveraging AI in projects, and ML methods. Learners who score high on this benchmark demonstrate familiarity with AI and ML terminology and concepts.
The Fundamentals of AI and ML Literacy (Beginner Level) benchmark measures your knowledge of the foundational concepts of data science, artificial intelligence (AI), and machine learning (ML). You will be evaluated on your ability to recognize use cases of AI and ML and outline foundational and advanced data science methods. A learner who scores high on this benchmark demonstrates that they have a basic understanding of AI and ML.
The Fundamentals of AI and ML Competency (Intermediate Level) benchmark measures your knowledge of the key concepts and use cases of advanced data science, artificial intelligence (AI), and machine learning. You will be evaluated on your ability to recall ML methods and algorithms and outline strategies for each part of the AI life cycle. A learner who scores high on this benchmark demonstrates that they have good knowledge of the basics of AI and ML concepts.
The Fundamentals of AI and ML Proficiency (Advanced Level) benchmark measures your knowledge of the key concepts and use cases of artificial intelligence (AI). You will be evaluated on your ability to outline strategies for each part of the AI life cycle, assess the performance of AI/ML models, and recognize the metrics used to measure success. A learner who scores high on this benchmark demonstrates that they have a deeper grasp of the fundamentals of AI and ML and can start working on AI/ML projects.
The AI for Software and Data Engineers Competency (Intermediate Level) benchmark evaluates your familiarity with a comprehensive set of prompting techniques used for work and daily life. You will be assessed on your ability to improve the performance of generative AI models using a variety of prompting techniques, apply prompt engineering to various use cases, and make use of APIs for prompt engineering. Learners who score high on this benchmark demonstrate that they have the skills to apply prompt engineering techniques in various use cases.
The AI for Data Analysts Competency (Intermediate Level) benchmark evaluates your familiarity with a comprehensive set of prompting techniques used for work and daily life. You will be assessed on your ability to improve the performance of generative AI models using a variety of prompting techniques, apply prompt engineering to various use cases, and use prompt engineering for analyzing data. Learners who score high on this benchmark demonstrate that they have the skills to apply prompt engineering techniques in various use cases.
The AI for Data Analysts Competency (Intermediate Level) benchmark evaluates your familiarity with a comprehensive set of prompting techniques used for work and daily life. You will be assessed on your ability to improve the performance of generative AI models using a variety of prompting techniques, apply prompt engineering to various use cases, and establish models with Google Vertex AI. Learners who score high on this benchmark demonstrate that they have the skills to apply prompt engineering techniques in various use cases.
The Prompt Engineering Fundamentals for Programmers Competency (Intermediate Level) benchmark evaluates your familiarity with a comprehensive set of prompting techniques used for work and daily life. You will be assessed on your ability to improve the performance of generative AI models using a variety of prompting techniques. Learners who score high on this benchmark demonstrate that they have the skills to apply prompt engineering techniques in various use cases.
The Prompt Engineering for Data Science Competency (Intermediate Level) benchmark measures your recognition of how to effectively leverage generative AI to filter, aggregate, merge, analyze, and visualize data. A learner who scores high on this benchmark demonstrates that they have the competency to leverage prompt engineering and generative AI for data science tasks.
The Prompt Engineering for Django Competency (Intermediate Level) benchmark measures your skills in implementing dynamic data display, template parameters, inheritance, Bootstrap integration, HTML refactoring, and data model creation with migrations in a Django app. You will be assessed on your ability to create a superuser, log in to Django admin, manage users and permissions, and perform CRUD operations on a database. Learners scoring high on this benchmark demonstrate that they have the skills to leverage prompt engineering to create a Django app with dynamic data, templates, Bootstrap, and models, as well as manage users, permissions, and CRUD operations in Django admin.
The Prompt Engineering for Querying SQL Databases Literacy (Beginner Level) benchmark evaluates your knowledge of working with MySQL with the assistance of generative artificial intelligence (GenAI) chatbots like ChatGPT and Bard. You will be assessed on your ability to work with MySQL tables and constraints using GenAI. Learners scoring high on this benchmark demonstrate that they have the skills necessary to leverage prompt engineering to work with tables and constraints in MySQL.
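The tables and constraints covered here can be sketched with a small example. For portability this uses Python's built-in SQLite driver rather than MySQL (the table and column names are illustrative only), but the DDL is close to what a GenAI chatbot would produce for a MySQL prompt such as "create an employees table with a unique email and a non-negative salary":

```python
import sqlite3

# In-memory database; the constraints below (PRIMARY KEY, NOT NULL,
# UNIQUE, CHECK) carry over to MySQL with nearly identical syntax.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        id     INTEGER PRIMARY KEY,
        email  TEXT NOT NULL UNIQUE,
        salary REAL CHECK (salary >= 0)
    )
""")
conn.execute("INSERT INTO employees (email, salary) VALUES (?, ?)",
             ("ada@example.com", 90000.0))

# The UNIQUE constraint rejects a duplicate email
try:
    conn.execute("INSERT INTO employees (email, salary) VALUES (?, ?)",
                 ("ada@example.com", 80000.0))
except sqlite3.IntegrityError as exc:
    print("Constraint violated:", exc)
```

The failed second insert leaves exactly one row in the table, which is the behavior a learner would verify when checking a chatbot-generated schema.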
The Prompt Engineering for Querying SQL Databases Competency (Intermediate Level) benchmark evaluates your knowledge of and ability to harness the power of generative artificial intelligence (GenAI) tools such as ChatGPT and Bard to filter and group data using SQL. You will be assessed on your ability to use prompt engineering to work with advanced SQL features. Learners scoring high on this benchmark demonstrate that they have the skills to utilize AI tools for filtering and grouping data with SQL and can employ prompt engineering for advanced SQL features.
The Generative AI and Prompt Engineering for Ethical Hacking Awareness (Entry Level) benchmark measures your entry-level exposure to basic security issues and generative AI tools. A learner who scores high on this benchmark demonstrates awareness in basic areas of ethical hacking and generative AI.
The Generative AI and Prompt Engineering for Ethical Hacking Literacy (Beginner Level) benchmark measures your exposure to basic security issues and practices like ethical hacking and generative AI tools. A learner who scores high on this benchmark demonstrates literacy in basic areas of ethical hacking and generative AI and can actively participate in conversations and dialogs about their topics with relative confidence.
The Generative AI and Prompt Engineering for Ethical Hacking Competency (Intermediate Level) benchmark measures your on-the-job experience with basic security issues and generative AI tools. A learner who scores high on this benchmark demonstrates competency in many areas of ethical hacking and generative AI and can work somewhat independently on the topic areas with supervision.
The Generative AI and Prompt Engineering for Ethical Hacking Proficiency (Advanced Level) benchmark measures your advanced working understanding of many security-related issues, procedures, and tools, as well as your extended exposure to generative AI tools and best practices. A learner who scores high on this benchmark demonstrates a proficient understanding of ethical hacking and generative AI in several areas. They are considered a team leader and can work independently with minimal to no supervision.
The Generative AI and Prompt Engineering for Ethical Hacking Mastery (Expert Level) benchmark measures your thought-leadership-level experience with security issues and generative AI tools and practices. A learner who scores high on this benchmark demonstrates mastery in nearly every area of ethical hacking and generative AI and is considered a thought and practice leader.
The Generative AI Foundations for IT Awareness (Entry Level) benchmark measures your familiarity with artificial intelligence (AI) and some of its most common tools and practices in the information technology (IT) domain. You will be evaluated on your knowledge of generative AI fundamentals and concepts, using prompt engineering to enhance generative AI model performance, security considerations when using generative AI, and more. A learner who scores high on this benchmark demonstrates a basic and entry-level understanding of AI, generative AI, and the best practices associated with both when used in IT.
The Generative AI Foundations for IT Literacy (Beginner Level) benchmark measures your exposure to artificial intelligence (AI) and some of its most common tools and practices in the information technology (IT) domain. You will be assessed on your knowledge of applying security protocols using generative AI, advanced generative AI techniques, testing generative AI application reliability, supporting SRE operations with AI chatbots, and more. A learner who scores high on this benchmark demonstrates a beginner-level understanding of AI, generative AI, and the best practices associated with both as used in IT.
The Generative AI Foundations for IT Competency (Intermediate Level) benchmark measures your working experience with artificial intelligence (AI) and common tools and practices in the information technology (IT) domain. You will be evaluated on your skills in improving the reliability of generative AI applications, optimizing and fine-tuning advanced generative AI models, designing AI models with a focus on legal and compliance considerations, and more. A learner who scores high on this benchmark demonstrates an intermediate-level understanding of AI, generative AI, and the best practices associated with both when used in IT. They can work independently with some supervision from a more advanced AI practitioner.
The Generative AI Foundations for IT Proficiency (Advanced Level) benchmark measures your extensive working experience with artificial intelligence (AI) and common tools and practices in the information technology (IT) domain. You will be assessed on your skills in anonymizing sensitive information for AI systems, following ethical guidelines and best practices for developing ethical AI models, reducing potential biases from sensitive features, and more. A learner who scores high on this benchmark demonstrates an advanced-level understanding of AI, generative AI, and the best practices associated with both when used in IT. They can work independently with little or no supervision from a more advanced AI practitioner.
The OpenAI APIs Competency (Intermediate Level) benchmark measures your knowledge of working with OpenAI's diverse models, including those for image, audio, and text-based applications, as well as their capabilities. You will be evaluated on your ability to leverage OpenAI's application programming interfaces (APIs) for use in real-world scenarios. A learner who scores high on this benchmark demonstrates a strong understanding of OpenAI's diverse models. This competency reflects a readiness to innovate and deploy AI tools across various domains within an organization.
The Leveraging OpenAI APIs Competency (Intermediate Level) benchmark evaluates your knowledge of using advanced OpenAI features to manipulate text, images, and audio. You will be assessed on your ability to use code generation, embeddings, and fine-tuning to solve real-world problems. Learners who score high on this benchmark demonstrate that they have the skills necessary to use advanced OpenAI API features.
The Generative AI Models Competency (Intermediate Level) benchmark measures your ability to create an autoencoder model using PyTorch, train an autoencoder and a convolutional autoencoder, and use denoising autoencoders. You will be evaluated on your skills in using variational autoencoders to generate images, recognizing the key concepts of how generative adversarial networks (GANs) work, and creating and viewing the training of a GAN and a deep convolutional GAN. A learner who scores high on this benchmark demonstrates good competency in working with core generative AI models. They are capable of independently creating, training, and evaluating these models using PyTorch with minimal supervision. They can confidently apply their skills to real-world projects involving generative AI techniques.