Machine Learning with R, the tidyverse, and mlr
- 9h 19m
- Hefin I. Rhys
- Manning Publications
- 2020
Machine learning (ML) is a collection of programming techniques for discovering relationships in data. With ML algorithms, you can cluster and classify data for tasks like making recommendations or detecting fraud, and make predictions about sales trends, risk analysis, and other forecasts. Once the domain of academic data scientists, machine learning has become a mainstream business process, and tools like the easy-to-learn R programming language put high-quality data analysis in the hands of any programmer. Machine Learning with R, the tidyverse, and mlr teaches you widely used ML techniques and how to apply them to your own datasets using the R programming language and its powerful ecosystem of tools. This book will get you started!
About the book
Machine Learning with R, the tidyverse, and mlr gets you started in machine learning using RStudio and the awesome mlr machine learning package. This practical guide simplifies theory and avoids needlessly complicated statistics or math. All core ML techniques are clearly explained through graphics and easy-to-grasp examples. In each engaging chapter, you'll put a new algorithm into action to solve a quirky predictive analysis problem, such as predicting Titanic survival odds, filtering spam email, and investigating poisoned wine.
What's inside
- Using the tidyverse packages to process and plot your data
- Techniques for supervised and unsupervised learning
- Classification, regression, dimension reduction, and clustering algorithms
- A statistics primer to fill gaps in your knowledge
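
To give a flavor of the kind of workflow the book teaches, here is a minimal sketch (not taken from the book) that combines tidyverse data handling with an mlr classification task. It uses R's built-in iris dataset and illustrative object names of my own choosing; it assumes the tidyverse and mlr packages are installed.

```r
library(tidyverse)
library(mlr)

# Tidy and plot the data with the tidyverse
irisTib <- as_tibble(iris)
ggplot(irisTib, aes(Petal.Length, Petal.Width, colour = Species)) +
  geom_point() +
  theme_bw()

# Define a classification task and a k-nearest neighbors learner with mlr
irisTask <- makeClassifTask(data = as.data.frame(irisTib), target = "Species")
knnLearner <- makeLearner("classif.knn", k = 5)

# Estimate performance with 10-fold cross-validation, then train a final model
cv <- makeResampleDesc("CV", iters = 10)
resample(knnLearner, irisTask, resampling = cv, measures = acc)
knnModel <- train(knnLearner, irisTask)

# Predict on new cases (here, the first few rows reused for illustration)
predict(knnModel, newdata = as.data.frame(irisTib[1:5, ]))
```

The book follows this task/learner/resample pattern chapter by chapter, swapping in different algorithms and datasets.
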
About the reader
For newcomers to machine learning with basic skills in R.
About the Author
Hefin I. Rhys is a senior laboratory research scientist at the Francis Crick Institute. He runs his own YouTube channel of screencast tutorials for R and RStudio.
In this Book
- About This Book
- About the Cover Illustration
- Introduction to Machine Learning
- Tidying, Manipulating, and Plotting Data with the tidyverse
- Classifying Based on Similarities with k-Nearest Neighbors
- Classifying Based on Odds with Logistic Regression
- Classifying by Maximizing Separation with Discriminant Analysis
- Classifying with Naive Bayes and Support Vector Machines
- Classifying with Decision Trees
- Improving Decision Trees with Random Forests and Boosting
- Linear Regression
- Nonlinear Regression with Generalized Additive Models
- Preventing Overfitting with Ridge Regression, LASSO, and Elastic Net
- Regression with kNN, Random Forest, and XGBoost
- Maximizing Variance with Principal Component Analysis
- Maximizing Similarity with t-SNE and UMAP
- Self-Organizing Maps and Locally Linear Embedding
- Clustering by Finding Centers with k-Means
- Hierarchical Clustering
- Clustering Based on Density: DBSCAN and OPTICS
- Clustering Based on Distributions with Mixture Modeling
- Final Notes and Further Reading