SKILL BENCHMARK
Advanced Analytics and Machine Learning with Snowflake Competency (Intermediate Level)
- 23m
- 23 questions
The Advanced Analytics and Machine Learning with Snowflake Competency (Intermediate Level) benchmark measures your ability to use the Snowpark ML APIs and the Model Registry, work with the Snowflake Feature Store and Datasets, and build Streamlit apps in Snowflake. You will be assessed on your skills in registering a model with the Snowflake Model Registry, utilizing Snowpark-optimized warehouses and grid search for hyperparameter tuning, creating a feature store and entity using Snowpark APIs, and creating a basic Streamlit app to display table data. Learners who score well on this benchmark demonstrate solid experience performing advanced analytics and machine learning (ML) in Snowflake and can work on such projects with minimal supervision.
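To give a sense of the first assessed skill, registering a model with the Snowflake Model Registry, here is a minimal sketch rather than a model solution. It assumes the snowflake-ml-python package is installed; the connection parameters, model name, and version name are illustrative and not part of the benchmark.

```python
# Minimal sketch: register a scikit-learn model with the Snowflake Model Registry,
# then retrieve and run it by name. Assumes snowflake-ml-python is installed;
# CONNECTION_PARAMETERS and the model/version names below are illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from snowflake.snowpark import Session
from snowflake.ml.registry import Registry

CONNECTION_PARAMETERS = {          # replace with your own account and auth details
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ML_WH",
    "database": "ML_DB",
    "schema": "PUBLIC",
}
session = Session.builder.configs(CONNECTION_PARAMETERS).create()

X, y = load_iris(return_X_y=True, as_frame=True)
clf = LogisticRegression(max_iter=200).fit(X, y)

reg = Registry(session=session)            # defaults to the session's database and schema
reg.log_model(
    clf,
    model_name="IRIS_CLASSIFIER",          # illustrative name
    version_name="V1",
    sample_input_data=X.head(10),          # lets the registry infer the model signature
    comment="Baseline logistic regression",
)

# Consume the registered model by specifying its name.
model_version = reg.get_model("IRIS_CLASSIFIER").version("V1")
predictions = model_version.run(X.head(5), function_name="predict")
```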
Topics covered
- access the model registry and display a dropdown of models and model version numbers in a Streamlit app
- access the Snowflake Model Registry and register a model with it, then consume the model by specifying its name
- add heatmaps, scatter plots, and other seaborn and Matplotlib visualizations to a Streamlit app
- configure a Python virtual environment to run a Jupyter Notebook that uses Snowflake ML APIs
- connect to Snowflake from Jupyter and use the Snowpark API from Python
- create a basic Streamlit app to display data from a table when a checkbox is selected (see the Streamlit sketch after this list)
- create a feature store and entity using Snowpark APIs (see the feature store sketch after this list)
- create a managed feature view and analyze the implementation as a dynamic table
- create an external feature view, join two feature views, and query for all feature views associated with a specific entity
- create a Snowflake ML pipeline for logistic regression
- create tags and associate metrics and a tag with a model
- identify the need for the Snowflake Feature Store and analyze how feature views and entities work
- implement sliders, selection boxes, radio buttons, and other UI controls in Streamlit
- outline how to use the Streamlit library for interactive web applications in Snowflake
- outline support in Snowpark ML for model training using scikit-learn, xgboost, and lightgbm, as well as for hyperparameter tuning
- outline types of feature views and how entities are associated with feature views
- recognize the process and benefits of registering a model version
- recognize the workflow for using feature stores and highlight the benefits of this workflow
- register a model with tuned hyperparameter values with the model registry
- register models and versions, view model artifacts, delete model versions, and invoke model methods dynamically
- share the completed Streamlit app with a different user in view-only mode
- utilize Snowpark-optimized warehouses and grid search for hyperparameter tuning (see the grid search sketch after this list)
- utilize the Snowflake ML APIs to compute correlation matrices, construct pipelines, and fit models
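To make a few of the topics above concrete, the sketches below are minimal, hedged examples rather than model solutions. First, creating a feature store, entity, and managed feature view with the Snowpark ML feature store API. It reuses the `session` from the registry sketch earlier; the database, warehouse, source table, entity, and feature view names (ML_DB, ML_WH, RAW.ORDERS, CUSTOMER, CUSTOMER_ORDER_FEATURES) are illustrative.

```python
# Minimal sketch: create a feature store, register an entity, and register a
# managed feature view. Reuses the Snowpark `session` from the earlier sketch;
# all object names are illustrative, not part of the benchmark.
from snowflake.snowpark import functions as F
from snowflake.ml.feature_store import CreationMode, Entity, FeatureStore, FeatureView

fs = FeatureStore(
    session=session,
    database="ML_DB",                       # illustrative database
    name="CUSTOMER_FEATURES",               # schema that backs the feature store
    default_warehouse="ML_WH",
    creation_mode=CreationMode.CREATE_IF_NOT_EXIST,
)

# An entity defines the join keys that features are organized around.
customer = Entity(name="CUSTOMER", join_keys=["CUSTOMER_ID"])
fs.register_entity(customer)

# A managed feature view is defined by a Snowpark DataFrame; with a refresh
# frequency, Snowflake backs it with a dynamic table it keeps up to date.
feature_df = (
    session.table("RAW.ORDERS")             # illustrative source table
    .group_by("CUSTOMER_ID")
    .agg(
        F.sum("ORDER_TOTAL").alias("TOTAL_SPEND"),
        F.count("ORDER_ID").alias("ORDER_COUNT"),
    )
)
fv = FeatureView(
    name="CUSTOMER_ORDER_FEATURES",
    entities=[customer],
    feature_df=feature_df,
    refresh_freq="1 day",                   # makes this a managed feature view
)
fv = fs.register_feature_view(feature_view=fv, version="1")
```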
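Next, a sketch of hyperparameter tuning with grid search on a Snowpark-optimized warehouse, assuming the snowflake.ml.modeling APIs and the same `session`; the warehouse name, training table, and column names are illustrative.

```python
# Minimal sketch: switch to a Snowpark-optimized warehouse, then run a grid
# search with Snowpark ML. Warehouse, table, and column names are illustrative.
from snowflake.ml.modeling.model_selection import GridSearchCV
from snowflake.ml.modeling.xgboost import XGBClassifier

# Snowpark-optimized warehouses provide more memory per node, which suits
# memory-intensive training workloads such as hyperparameter search.
session.sql(
    "CREATE WAREHOUSE IF NOT EXISTS SNOWPARK_OPT_WH "
    "WITH WAREHOUSE_SIZE = 'MEDIUM' WAREHOUSE_TYPE = 'SNOWPARK-OPTIMIZED'"
).collect()
session.use_warehouse("SNOWPARK_OPT_WH")

train_df = session.table("ML_DB.PUBLIC.CUSTOMER_TRAINING")   # illustrative table

grid = GridSearchCV(
    estimator=XGBClassifier(),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, 5]},
    input_cols=["TENURE", "MONTHLY_SPEND"],                   # illustrative features
    label_cols=["CHURNED"],
    output_cols=["PREDICTED_CHURN"],
)
grid.fit(train_df)                        # training runs inside the warehouse
best_params = grid.to_sklearn().best_params_
```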
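Finally, a sketch of a basic Streamlit in Snowflake app that displays data from a table only when a checkbox is selected; the table name is illustrative, and `get_active_session` supplies the Snowpark session inside Streamlit in Snowflake.

```python
# Minimal sketch of a Streamlit in Snowflake app: show a table's contents
# only when a checkbox is selected. The table name is illustrative.
import streamlit as st
from snowflake.snowpark.context import get_active_session

st.title("Customer orders")

session = get_active_session()   # Streamlit in Snowflake provides the session

if st.checkbox("Show order data"):
    df = session.table("ML_DB.PUBLIC.ORDERS").limit(100).to_pandas()
    st.dataframe(df)
```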