  • By: Heli Helskyaho
  • 08/09/2018

Machine Learning in a Nutshell

Machine Learning (ML) is a very popular topic at the moment – but what is it all about? Why is it becoming a thing now? Heli Helskyaho on the origins of ML and today’s requirements for using it.

This article first appeared in the bimonthly ORAWORLD e-magazine, an EOUC publication with exciting stories from the Oracle world, technological background articles and insights into other user groups worldwide.

 

ML is a very important part of artificial intelligence. As early as 1959, Arthur Samuel described machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". In other words, machine learning can be described as the systematic study of algorithms and systems that improve their knowledge or performance with experience, that experience coming from algorithms and data.

Why ML? Why now?

Simply for two reasons: First, technology is finally ready for ML. Secondly, with all the data available today, we need ML to be able to understand that data and to make the right decisions based on it. It all comes down to the big V's of Big Data:

* Volume: There is more and more data.
* Variety: There are different data models and formats.
* Velocity: Data is still being loaded while it is already being explored.
* Veracity: Not all data is reliable.
* Value, Viability, Variability: We do not know in advance what we are looking for in the data.
* Visualization: The systems must support non-technical users as well (journalists, investors, politicians).

And since all of this has to be extremely efficient and fast, we have no other option but to use machines as much as possible.

When should we use ML?

The first requirement for using ML is that we have enough data and that the data is of good quality. This is needed for the machine to learn and to make good predictions. Part of the data is used to find the model, while the other part is used to verify that the model works. ML is most useful when the rules and equations are complex (image recognition) and/or constantly changing (fraud detection). Typical examples would be spam filters, log filters and alarms, data analytics, image or speech recognition, medical diagnosis, and robotics.
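To illustrate the idea of keeping one part of the data for finding the model and another part for checking it, here is a minimal sketch in Python. It assumes scikit-learn and NumPy are available; the toy data, variable names, and the 80/20 split ratio are arbitrary choices for the example, not anything prescribed in this article.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Toy data set: 100 observations with 3 measured properties each,
    # plus a label (0 or 1) we would like to predict later.
    X = np.random.rand(100, 3)
    y = np.random.randint(0, 2, size=100)

    # Reserve 20 % of the data to check that the model works;
    # the remaining 80 % is used to find (train) the model.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    print(len(X_train), "rows for training,", len(X_test), "rows for validation")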

The process of using ML might start with defining the "Task": the problem to be solved with ML. To solve the problem, we need an "Algorithm" that produces the "Model"; a Model is the output of ML. There are different kinds of models, for example predictive models ("forecast what might happen in the future"), descriptive models ("what happened"), and prescriptive models, which recommend one or more courses of action and show the likely outcome of each decision.
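To make the Task / Algorithm / Model terminology concrete, here is a small hypothetical sketch in Python with scikit-learn: the task is to predict a label, the algorithm is logistic regression (one of many possible choices, picked only for illustration), and the model is the fitted estimator that can then make predictions. The data is generated artificially.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Task: predict a binary label from four numeric features.
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)

    # Algorithm: logistic regression (an example choice).
    algorithm = LogisticRegression()

    # Model: the output of applying the algorithm to the data.
    model = algorithm.fit(X, y)

    # The predictive model can now forecast labels for new inputs.
    print(model.predict(X[:5]))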

A very important part of ML is choosing features and finding the best set of features for the task. Features, or dimensions, are "individual measurable properties or characteristics of a phenomenon being observed".[1] Deriving features (feature engineering, feature extraction) is one of the most important parts of machine learning: it turns raw data into information that a machine learning algorithm can use.
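As a hypothetical illustration of feature engineering, the sketch below (plain Python) turns raw e-mail texts into a few numeric features that a spam filter could use. The choice of features here (message length, number of exclamation marks, presence of the word "free") is invented for this example only.

    # Deriving simple numeric features from raw e-mail texts.
    emails = [
        "Win a FREE cruise now!!!",
        "Meeting moved to 3 pm, see agenda attached.",
    ]

    def extract_features(text):
        lowered = text.lower()
        return [
            float(len(text)),                    # message length
            float(text.count("!")),              # number of exclamation marks
            1.0 if "free" in lowered else 0.0,   # contains the word "free"?
        ]

    feature_matrix = [extract_features(e) for e in emails]
    print(feature_matrix)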

ML in short:

* Use the right Features
* with the right Algorithms
* to build the right Models
* that achieve the right Tasks.

Unsupervised and Supervised Learning

There are two main methods or techniques in ML: unsupervised and supervised learning. Unsupervised learning is used when the data is unknown and unlabeled: for example, we have data we know nothing about and we want to find out whether it contains any hidden patterns or intrinsic structures. Supervised learning is used when we know the data, i.e. it is labeled: we train a model on known input and output data to predict future outputs for new input data.

Clustering is the most common method for unsupervised learning and is used in exploratory data analysis to find hidden patterns or groupings in data. There are two typical kinds of clustering: hard and soft clustering. In hard clustering, each data point belongs to exactly one cluster, while in soft clustering, each data point can belong to more than one cluster.
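As a minimal sketch of the difference, assuming Python with scikit-learn: k-means is a typical hard-clustering algorithm (each point gets exactly one cluster), while a Gaussian mixture model can be read as soft clustering (each point gets a membership probability for every cluster). The algorithms, the toy data, and the choice of three clusters are example choices, not taken from the article.

    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    # Unlabeled toy data with three hidden groupings.
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    # Hard clustering: every point is assigned to exactly one cluster.
    hard_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Soft clustering: every point gets a membership probability per cluster.
    soft_memberships = GaussianMixture(n_components=3, random_state=0).fit(X).predict_proba(X)

    print(hard_labels[:5])        # one cluster index per point
    print(soft_memberships[:2])   # rows of probabilities summing to 1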

In supervised learning, the process has two phases: a training phase and a prediction phase. In both phases, the data must be pre-processed to ensure good data quality. Typical predictive models for supervised learning are classification and regression. Classification models are trained to classify data into categories: an e-mail is genuine or spam, a tumor is small, medium-sized, or large, a person is creditworthy or not. Regression, on the other hand, is used to predict continuous responses, such as changes in temperature, stock price forecasts, fluctuations in electricity demand, or failure prediction in hardware.
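A brief hypothetical sketch of the two model types in Python with scikit-learn: a decision-tree classifier predicts a category, and a linear regression predicts a continuous value. The data, algorithms, and parameters below are illustrative choices only.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Classification: predict a category (here: 0 = genuine, 1 = spam).
    X_cls = rng.random((100, 3))
    y_cls = (X_cls[:, 0] > 0.5).astype(int)
    classifier = DecisionTreeClassifier().fit(X_cls, y_cls)
    print(classifier.predict(X_cls[:3]))    # a category per input

    # Regression: predict a continuous response (here: a temperature-like value).
    X_reg = rng.random((100, 1))
    y_reg = 20.0 + 5.0 * X_reg[:, 0] + rng.normal(0, 0.5, 100)
    regressor = LinearRegression().fit(X_reg, y_reg)
    print(regressor.predict(X_reg[:3]))     # a continuous value per input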

Continuous Improvement

After the best model with the best features has been found and implemented in the application, you might still need to improve the model. You might want to increase its accuracy and predictive power, improve its ability to distinguish data from noise, increase its performance, or improve some other measure of interest. And of course, to be able to improve something, you must understand what to improve and be able to measure it.
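One common way to measure what should be improved is to evaluate the model on data that was held back from training. The sketch below, assuming Python with scikit-learn, uses accuracy and a confusion matrix as example metrics; the data and the chosen metrics are illustrative only.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

    model = LogisticRegression().fit(X_train, y_train)
    predictions = model.predict(X_test)

    # Measurements on held-back data tell us what to improve.
    print("accuracy:", accuracy_score(y_test, predictions))
    print("confusion matrix:\n", confusion_matrix(y_test, predictions))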

 

Interested in machine learning?

Then come to the DOAG 2018 Conference + Exhibition! The author of this article, Heli Helskyaho, will present the basics of ML in detail at the conference, which will take place in Nuremberg from November 20 to 23.

Further information and registration

 


[1] Bishop, Christopher (2006), Pattern Recognition and Machine Learning.