Introduction to Machine Learning
As the primary facilitator of data science and big data, machine learning has garnered much interest from a broad range of industries as a way to increase the value of enterprise data assets. Through techniques of supervised and unsupervised statistical learning, organizations can make important predictions and discover previously unknown knowledge to provide actionable business intelligence. In this guide, we'll examine the principles underlying machine learning based on the R statistical environment. We'll explore machine learning with R from the open source R perspective as well as the more robust commercial perspective using Revolution Analytics' Revolution R Enterprise (RRE) for big data deployments.
Supervised machine learning is typically associated with prediction: for each observation of the predictor measurements (also known as feature variables), there is an associated response measurement (also known as the class label). In supervised learning, a model is fit that relates the response to the predictors, with the aim of accurately predicting the response for future observations. Many classical learning algorithms, such as linear regression and logistic regression, operate in the supervised domain.
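To make this concrete, the following is a minimal sketch of supervised learning in base R. It fits a logistic regression relating a binary response to two predictors; the built-in mtcars dataset and the choice of predictors (mpg and wt) are illustrative assumptions, not part of the guide's own examples.

```r
# Supervised learning sketch: logistic regression with glm().
# The response (am, transmission type) plays the role of the class label;
# mpg and wt are the feature variables.
data(mtcars)

fit <- glm(am ~ mpg + wt, data = mtcars, family = binomial)

# Predict the response for observations (here, the training data itself).
predicted_prob <- predict(fit, newdata = mtcars, type = "response")
head(predicted_prob)
```

In practice the model would be evaluated on held-out observations rather than the data used to fit it, but the pattern of fitting a model that maps feature variables to a known response is the essence of the supervised setting.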
Unsupervised machine learning is a more open-ended style of statistical learning. Instead of using labeled data sets, unsupervised learning is a set of statistical tools intended for applications where there is only a set of feature variables measured across a number of observations. In this case, prediction is not the goal because the data set is unlabeled; that is, there is no associated response variable that can supervise the analysis. Rather, the goal is to discover interesting things about the measurements on the feature variables. For example, you might find an informative way to visualize the data, or discover subgroups among the variables or the observations.
One commonly used unsupervised learning technique is k-means clustering, which allows for the discovery of "clusters" of data points. Another technique, principal component analysis (PCA), is used for dimensionality reduction, i.e. reducing the number of feature variables while maintaining the variation in the data, in order to simplify the data used in other learning algorithms, speed up processing, and reduce the required memory footprint.
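Both techniques are available in base R. The sketch below applies k-means clustering and PCA to the numeric measurements of the built-in iris dataset with the class labels removed; the dataset and the choice of k = 3 clusters are illustrative assumptions.

```r
# Unsupervised learning sketch: k-means clustering and PCA in base R.
# Only the feature variables are used; no response variable supervises the analysis.
features <- iris[, 1:4]

# k-means clustering: discover groups ("clusters") of similar observations.
set.seed(42)
clusters <- kmeans(scale(features), centers = 3)
table(clusters$cluster)

# Principal component analysis: reduce four feature variables to a smaller
# set of components that retain most of the variation in the data.
pca <- prcomp(features, center = TRUE, scale. = TRUE)
summary(pca)          # proportion of variance explained by each component
head(pca$x[, 1:2])    # observations projected onto the first two components
```

The principal component scores (pca$x) can then be passed to other learning algorithms in place of the original feature variables, which is how PCA reduces processing time and memory footprint.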