Uncertainty

Artificial Intelligence    |    Beginner
  • 13 videos | 39m 30s
  • Includes Assessment
  • Earns a Badge
Rating: 4.3 (63 users)
Many problems aren't fully observable and involve some degree of uncertainty, which makes them challenging for AI to solve. Discover how to build agents that deal with uncertainty and still make the best possible decisions.

WHAT YOU WILL LEARN

  • Describe uncertainty and how it applies to AI
  • Describe how probability theory is used to represent knowledge to help an intelligent agent make decisions
  • Describe utility theory and how an agent can calculate the expected utility of decisions
  • Describe how preferences are involved in decision making and how the same problem can have different utility functions for different agents
  • Describe how risks are taken into consideration when calculating utility and how attitude toward risk can change the utility function
  • Describe the utility of information gain and how information gain can influence decisions
  • Define Markov chains
  • Define the Markov decision process and how it applies to AI
  • Describe the value iteration algorithm used to decide on an optimal policy for a Markov decision process
  • Define the partially observable Markov decision process and contrast it with a regular Markov decision process
  • Describe how the value iteration algorithm is used with the partially observable Markov decision process
  • Describe how a partially observable Markov decision process can be implemented with an intelligent agent
  • Describe the Markov decision process and how it can be used by an intelligent agent

IN THIS COURSE

  • 3m 29s
    Upon completion of this video, you will be able to describe uncertainty and how it applies to artificial intelligence.
  • 5m 7s
    After completing this video, you will be able to describe how probability theory is used to represent knowledge to help an intelligent agent make decisions. (See the Bayes' rule sketch after the course outline.)
  • Locked
    3.  Utility Theory
    1m 18s
    Upon completion of this video, you will be able to describe utility theory and how an agent can calculate the expected utility of decisions. (See the expected-utility sketch after the course outline.)
  • Locked
    4.  Utility and Preferences
    3m 9s
    Upon completion of this video, you will be able to describe how preferences are involved in decision making and how the same problem can have different utility functions for different agents.
  • Locked
    5.  Utility and Risks
    3m 45s
    After completing this video, you will be able to describe how risks are taken into consideration when calculating utility and how attitude toward risk can change the utility function.
  • Locked
    6.  Value of Information
    2m 29s
    After completing this video, you will be able to describe the usefulness of information gain and how information gain can influence decisions. (See the value-of-information sketch after the course outline.)
  • Locked
    7.  Markov Chains
    3m 20s
    Find out how to define Markov chains. (See the Markov chain sketch after the course outline.)
  • Locked
    8.  Markov Decision Process
    2m 27s
    In this video, you will learn about the Markov Decision Process and how it applies to AI.
  • Locked
    9.  MDP Value Iteration
    2m 29s
    Upon completion of this video, you will be able to describe the value iteration algorithm and how it can be used to decide on an optimal policy for a Markov Decision Process. (See the value iteration sketch after the course outline.)
  • Locked
    10.  Partially Observable Markov Decision Process (POMDP)
    2m 46s
    In this video, you will learn how to define the partially observable Markov Decision Process and how it differs from a regular Markov Decision Process.
  • Locked
    11.  POMDP Value Iteration
    3m 30s
    After completing this video, you will be able to describe how the value iteration algorithm is used with the partially observable Markov Decision Process. (See the belief-update sketch after the course outline.)
  • Locked
    12.  Applying POMDPs
    3m 7s
    Upon completion of this video, you will be able to describe how a partially observable Markov Decision Process can be implemented with an intelligent agent.
  • Locked
    13.  Exercise: Describe the Markov Decision Process
    2m 35s
    Upon completion of this video, you will be able to describe the Markov Decision Process and how it can be used by an intelligent agent.
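
ILLUSTRATIVE CODE SKETCHES

The short Python sketches below are editorial illustrations of the ideas the videos describe, not course material. Every state name, probability, payoff, and sensor reliability in them is an invented assumption.

Probability theory as knowledge representation (videos 1-2): a belief about the world is stored as a probability and revised with Bayes' rule when evidence arrives. The obstacle-sensor numbers are made up.

    # Representing uncertain knowledge as a probability and revising it with
    # Bayes' rule. The sensor reliabilities below are invented for illustration.
    p_obstacle = 0.2                      # prior belief that an obstacle is ahead
    p_ping_given_obstacle = 0.9           # sensor usually pings when something is there
    p_ping_given_clear = 0.1              # but it sometimes false-alarms

    # P(obstacle | ping) = P(ping | obstacle) * P(obstacle) / P(ping)
    p_ping = (p_ping_given_obstacle * p_obstacle
              + p_ping_given_clear * (1 - p_obstacle))
    p_obstacle_given_ping = p_ping_given_obstacle * p_obstacle / p_ping
    print(round(p_obstacle_given_ping, 3))   # belief rises from 0.2 to about 0.692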
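
Expected utility and risk attitudes (videos 3-5): a decision is scored as the probability-weighted utility of its outcomes, and a concave utility function (a square root here, one common textbook choice) makes the same agent risk-averse, so two agents facing the same lottery can prefer different options. The lottery and payoffs are invented.

    import math

    # A "lottery": a list of (probability, monetary outcome) pairs.
    # The numbers are made up purely for illustration.
    gamble = [(0.5, 100.0), (0.5, 0.0)]      # 50/50 chance of 100 or nothing
    sure_thing = [(1.0, 45.0)]               # guaranteed 45

    def expected_utility(lottery, utility):
        """Probability-weighted sum of the utilities of the outcomes."""
        return sum(p * utility(x) for p, x in lottery)

    risk_neutral = lambda x: x               # utility linear in money
    risk_averse = lambda x: math.sqrt(x)     # concave utility flattens large gains

    for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
        eu_gamble = expected_utility(gamble, u)
        eu_sure = expected_utility(sure_thing, u)
        choice = "gamble" if eu_gamble > eu_sure else "sure thing"
        print(f"{name}: EU(gamble)={eu_gamble:.2f}, EU(sure)={eu_sure:.2f} -> {choice}")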
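
Value of information (video 6): comparing the best expected payoff an agent can get while still uncertain with what it could get if the uncertainty were resolved first gives the expected value of perfect information. The umbrella decision and its payoffs are invented.

    # Two actions whose payoff depends on an unknown weather state.
    # All numbers are invented for illustration.
    p_rain = 0.3
    payoff = {                      # payoff[action][state]
        "take_umbrella": {"rain": 8, "sun": 6},
        "leave_it":      {"rain": 0, "sun": 10},
    }

    def expected_payoff(action, p_rain):
        return p_rain * payoff[action]["rain"] + (1 - p_rain) * payoff[action]["sun"]

    # Best the agent can do choosing before the weather is known.
    eu_without_info = max(expected_payoff(a, p_rain) for a in payoff)

    # With a perfect forecast it would pick the best action per state,
    # averaged over how likely each state is.
    eu_with_info = (p_rain * max(payoff[a]["rain"] for a in payoff)
                    + (1 - p_rain) * max(payoff[a]["sun"] for a in payoff))

    print("EVPI =", eu_with_info - eu_without_info)   # prints EVPI = 2.4 here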
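
Markov chains (video 7): the next state depends only on the current state, so a distribution over states can be pushed forward one transition at a time. The two-state weather chain and its probabilities are invented.

    # A two-state weather Markov chain with invented transition probabilities.
    states = ["sunny", "rainy"]
    T = {                               # T[current][next] = P(next | current)
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    # Propagate a distribution over states forward a few steps; only the
    # current distribution matters (the Markov property).
    dist = {"sunny": 1.0, "rainy": 0.0}
    for step in range(1, 4):
        dist = {s2: sum(dist[s1] * T[s1][s2] for s1 in states) for s2 in states}
        print(step, {s: round(p, 3) for s, p in dist.items()})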
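
Markov decision processes and value iteration (videos 8-9): an MDP adds actions and rewards to a Markov chain, and value iteration repeats Bellman backups until the state values settle, after which a greedy policy can be read off. The tiny two-state maintenance MDP below is invented.

    # A toy MDP; all transition probabilities and rewards are invented.
    states = ["ok", "broken"]
    actions = ["wait", "repair"]
    gamma = 0.9                                  # discount factor

    # P[s][a] = list of (probability, next_state); R[s][a] = immediate reward.
    P = {
        "ok":     {"wait": [(0.9, "ok"), (0.1, "broken")], "repair": [(1.0, "ok")]},
        "broken": {"wait": [(1.0, "broken")],              "repair": [(0.8, "ok"), (0.2, "broken")]},
    }
    R = {
        "ok":     {"wait": 5.0, "repair": 2.0},
        "broken": {"wait": 0.0, "repair": -1.0},
    }

    def backup(s, a, V):
        """One Bellman backup: immediate reward plus discounted expected value."""
        return R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])

    V = {s: 0.0 for s in states}
    for _ in range(200):                         # repeat backups until values settle
        V = {s: max(backup(s, a, V) for a in actions) for s in states}

    policy = {s: max(actions, key=lambda a: backup(s, a, V)) for s in states}
    print({s: round(v, 2) for s, v in V.items()}, policy)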
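
Partially observable MDPs (videos 10-12): the agent cannot see the state directly, so it maintains a belief, a probability distribution over states, and updates it with Bayes' rule after each action and observation. Value iteration for a POMDP runs over this belief space rather than over individual states; only the belief update is sketched here, and the door states and noisy "listen" sensor are invented.

    # Belief update for a POMDP agent. The states, the "listen" action, and the
    # noisy observation model are all invented for illustration.
    states = ["door_left", "door_right"]

    def transition(s, action):
        # "listen" does not change the world: the state stays put.
        return {s: 1.0}

    # P(observation | state) when the agent listens.
    obs_model = {
        "door_left":  {"hear_left": 0.85, "hear_right": 0.15},
        "door_right": {"hear_left": 0.15, "hear_right": 0.85},
    }

    def update_belief(belief, action, observation):
        # Predict: push the belief through the transition model.
        predicted = {s2: 0.0 for s2 in states}
        for s, p in belief.items():
            for s2, pt in transition(s, action).items():
                predicted[s2] += p * pt
        # Correct: weight each state by how likely the observation is there,
        # then normalize so the belief sums to 1.
        unnorm = {s: obs_model[s][observation] * predicted[s] for s in states}
        z = sum(unnorm.values())
        return {s: v / z for s, v in unnorm.items()}

    belief = {"door_left": 0.5, "door_right": 0.5}
    for obs in ["hear_left", "hear_left"]:
        belief = update_belief(belief, "listen", obs)
        print({s: round(p, 3) for s, p in belief.items()})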

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
