Machine Learning vs. Deep Learning vs. Artificial Intelligence from Coding compiler. Machine learning is, without a doubt, one of the big industry trends of recent years. Or was it deep learning? Or artificial intelligence? What is the difference?
Deep learning is considered a sub-field of machine learning, which in turn is a branch of artificial intelligence (AI). Machine learning comprises all (sometimes very different) methods of classification or regression that a machine learns itself through human-led training. In addition, machine learning also includes unsupervised methods for data mining in particularly large and diverse amounts of data.
Deep learning is a sub-type of machine learning and does basically nothing else: it is about trained classification or regression. Less commonly, deep learning algorithms are also used as an unsupervised learning mechanism for recognizing patterns (data mining). Deep learning refers to the use of artificial neural networks, which are often superior to other methods of machine learning but come with their own advantages and disadvantages.
This article is the first in the article series “Getting Started in Deep Learning”.
Machine Learning
Machine Learning (ML) is a collection of mathematical methods of pattern recognition. These methods recognize patterns, for example, by decomposing data into hierarchical structures with the best possible entropy (decision trees). Or vectors are used to determine similarities between data sets and to train on them (e.g. k-nearest neighbor, in the following simply abbreviated: k-NN).
In fact, machine learning algorithms are capable of solving many everyday or even very specific problems. In the practice of a machine learning developer, however, problems often arise when there is either too little data or too many dimensions in the data. Entropy-driven learning algorithms such as decision trees become too complex with many dimensions, and vector-space-based algorithms such as k-nearest neighbor are limited in their performance by the curse of dimensionality.
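To make the k-NN idea concrete, here is a minimal sketch in pure Python (the toy data and function name are invented for illustration): a query point is classified by majority vote among its k nearest training points.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (point, label) pairs; points are equal-length tuples."""
    dists = sorted((math.dist(point, query), label) for point, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data set: two well-separated clusters
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_predict(train, (2, 2)))   # → A
print(knn_predict(train, (7, 8)))   # → B
```

Note that every prediction scans the whole training set and computes distances in all dimensions, which is exactly where the curse of dimensionality discussed below starts to bite.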
The curse of dimensionality
Data points are easy to imagine in a two-dimensional space, and it is also easy to imagine filling such a space (e.g. a DIN A5 sheet of paper) with many data points. If we keep the number of data points the same but add more dimensions (at least a third dimension we can still picture), the distances between the points grow. n-dimensional spaces can be huge, so that algorithms like k-NN no longer work well (the n-dimensional space is simply too empty).
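The emptiness of high-dimensional spaces can be measured directly: for a fixed number of random points in a unit cube, the average distance to the nearest neighbor grows as dimensions are added. A small sketch (function name and parameters are my own, for illustration):

```python
import math
import random

def avg_nearest_neighbor_distance(n_points=200, dims=2, seed=0):
    """Average distance from each random point to its nearest neighbor
    in a `dims`-dimensional unit cube."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(dims)) for _ in range(n_points)]
    total = 0.0
    for i, p in enumerate(pts):
        total += min(math.dist(p, q) for j, q in enumerate(pts) if j != i)
    return total / n_points

for d in (2, 10, 100):
    print(d, round(avg_nearest_neighbor_distance(dims=d), 2))
```

With the same 200 points, the nearest neighbor drifts further and further away as dimensions are added: the space empties out, and "nearest" loses its meaning.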
There are, however, some concepts for handling many dimensions better.
Feature engineering
To reduce the number of dimensions, machine learning developers use statistical methods to reduce many dimensions to the (probably) most useful ones: so-called features. This selection process is called feature engineering and requires a solid command of statistics as well as, ideally, some domain knowledge of the subject under investigation.
When developing machine learning for production use, data scientists spend most of their time not on fine-tuning their machine learning algorithms, but on choosing suitable features.
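One very simple statistical criterion for such a selection is correlation with the target variable. The following sketch (the helper names and toy data are invented for illustration) ranks feature columns by absolute Pearson correlation and keeps the strongest:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def top_features(rows, target, k=2):
    """Keep the indices of the k feature columns most strongly
    correlated with the target."""
    scores = [(abs(pearson([r[j] for r in rows], target)), j)
              for j in range(len(rows[0]))]
    return sorted(j for _, j in sorted(scores, reverse=True)[:k])

# Toy data: feature 0 tracks the target, feature 1 is arbitrary,
# feature 2 is the target's negative
rows = [(2, 5, -1), (4, 1, -2), (6, 4, -3), (8, 2, -4), (10, 3, -5)]
target = [1, 2, 3, 4, 5]
print(top_features(rows, target))   # → [0, 2]
```

Real feature engineering uses far richer criteria (mutual information, domain-derived transformations, interaction terms), but the principle is the same: shrink many raw dimensions down to the few that carry signal.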
Deep Learning
Deep Learning (DL) is a discipline of machine learning that uses artificial neural networks. While the ideas behind decision trees, k-NN, or k-means were developed out of a certain mathematical logic, artificial neural networks are modeled on nature: biological neural networks.
An input vector (a series of dimensions) forms a first layer, which is reduced and abstracted over weights across further layers of so-called neurons, until an output layer is reached that produces an output vector (basically a result key identifying, for example, a particular class: e.g. cat or dog). Through training, the weights between the neurons are adjusted so that certain input patterns (such as photos of pets) always result in a particular output pattern (e.g. “the photograph shows a cat”).
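A forward pass through such layers can be sketched in a few lines of pure Python. The weights below are hand-set purely for illustration (a real network would learn them through training); each layer multiplies the incoming vector by a weight matrix and squashes the result with a sigmoid:

```python
import math

def forward(x, layers):
    """Propagate input vector x through each weight matrix,
    applying a sigmoid after every layer."""
    for W in layers:
        x = [1 / (1 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
             for row in W]
    return x

# Hypothetical hand-set weights: 4 inputs -> 3 hidden neurons -> 2 outputs
layers = [
    [[1.0, -1.0, 0.5, 0.0], [0.0, 1.0, -0.5, 1.0], [0.5, 0.5, 0.5, 0.5]],
    [[2.0, -2.0, 1.0], [-2.0, 2.0, -1.0]],
]
classes = ["cat", "dog"]
out = forward([0.9, 0.1, 0.8, 0.2], layers)  # e.g. pixel-derived input
print(classes[out.index(max(out))])           # → cat
```

Training would repeatedly nudge those weight matrices (via backpropagation) so that photos of cats reliably push the "cat" output above the "dog" output.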
The advantage of artificial neural networks is the very deep abstraction of relationships between the input data, between the abstracted neuron values, and the output data. This happens over several layers of the network, which can solve very specific problems. From this depth of layers the umbrella name derives: deep learning.
Deep learning comes into play when other machine learning methods reach their limits, and also when separate feature engineering must be dispensed with, because neural networks can, over several layers, automatically reduce many input dimensions to the features that are necessary to determine the output correctly.
Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are neural networks used primarily for the classification of image data. At their core they are classical neural networks, but with a convolution layer and a pooling layer placed upstream. The convolution layer reads the data input (e.g. a photo) several times in succession, but always only a section of it (in the case of photos, a sector of the photo); the pooling layer then reduces the detail data (for photos: the pixels) to a coarser summary.
CNNs are basically a specialized form of artificial neural networks that handle feature engineering even more skilfully.
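The two upstream layers can be sketched in pure Python (the tiny 4×4 "image" and the edge-detecting kernel below are invented for illustration): the convolution slides a small kernel over the image, and max pooling shrinks each block to its strongest response.

```python
def convolve2d(img, kernel):
    """Slide the kernel over the image (valid padding),
    summing elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[i][j] * img[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(len(img[0]) - kw + 1)]
            for r in range(len(img) - kh + 1)]

def max_pool(img, size=2):
    """Reduce each size x size block to its maximum (downsampling)."""
    return [[max(img[r + i][c + j] for i in range(size) for j in range(size))
             for c in range(0, len(img[0]) - size + 1, size)]
            for r in range(0, len(img) - size + 1, size)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
edge_kernel = [[1, -1]]                  # responds to horizontal intensity changes
fmap = convolve2d(image, edge_kernel)    # 4 x 3 feature map
pooled = max_pool(fmap)                  # 2 x 1 pooled summary
print(pooled)                            # → [[0], [1]]
```

A real CNN stacks many such convolution/pooling pairs with learned kernels, which is precisely how it takes feature engineering off the developer's hands.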
Deep Autoencoder
Currently, most artificial neural networks are algorithm models for supervised machine learning (classification or regression), but they are also used for unsupervised learning (clustering or dimensionality reduction) in the form of so-called deep autoencoders.
Deep autoencoders are neural networks that, in a first step, reduce a large number of input dimensions to comparatively few dimensions. The reduction (encoder) is not abrupt, but gradual over several layers; the reduced dimensions become the feature vector. Then the second part of the neural network is used: through further layers the reduced dimensions are expanded again, and the original dimensions are reconstructed as a more abstract model (decoder). The purpose of deep autoencoders is to create abstract similarity models. A common field of application is, for example, the machine identification of similar images, texts, or acoustic signal patterns.
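The bottleneck shape of an autoencoder is easy to see in a sketch. Below, random (untrained) linear layers are used purely to show the dimension flow, 6 → 4 → 2 for the encoder and 2 → 4 → 6 for the decoder; a real autoencoder would train these weights to minimize reconstruction error:

```python
import random

random.seed(0)

def linear(x, W):
    """One linear layer: multiply vector x by weight matrix W."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# Encoder squeezes 6 input dimensions down to a 2-dimensional feature vector;
# the decoder expands it back to a 6-dimensional reconstruction.
encoder = [rand_matrix(4, 6), rand_matrix(2, 4)]
decoder = [rand_matrix(4, 2), rand_matrix(6, 4)]

x = [0.5, 0.1, 0.9, 0.3, 0.7, 0.2]
code = x
for W in encoder:
    code = linear(code, W)          # gradual reduction to the feature vector
reconstruction = code
for W in decoder:
    reconstruction = linear(reconstruction, W)  # gradual expansion back

print(len(code), len(reconstruction))   # → 2 6
```

After training, two inputs whose 2-dimensional codes lie close together are "similar" in the abstract sense the article describes, which is what makes autoencoders useful for finding similar images, texts, or signals.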
Artificial Intelligence
Artificial Intelligence (AI) is a scientific field that includes machine learning, but also other areas that are needed to build an AI. An artificial intelligence must not only be able to learn; it must also be able to store, classify, and retrieve knowledge efficiently. It must also possess the logic of how to apply knowledge and what it has learned. If we think of biological intelligences, not all abilities are learned: some are already present at birth or exist as so-called instincts.
A single machine learning algorithm would hardly pass a Turing test or let a robot handle complex tasks. Therefore, artificial intelligence has to do much more than learn certain things. The scientific field of artificial intelligence includes at least:
- Machine learning (including deep learning and ensemble learning)
- Mathematical logic
  - Propositional logic
  - Predicate logic
  - Default logic
  - Modal logic
- Knowledge-based systems
  - Relational algebra
  - Graph theory
- Search and optimization procedures
  - Gradient descent
  - Breadth-first search & depth-first search
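Of the search and optimization procedures listed above, gradient descent is the workhorse behind neural network training. A minimal one-dimensional sketch (the function and parameters are my own, for illustration): repeatedly step against the gradient until the minimum is reached.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 3))   # → 3.0
```

The same principle, applied to millions of weights at once via backpropagation, is what adjusts the weights between neurons during deep learning training.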
Artificial Intelligence – Machine Learning – Deep Learning: the fields nest inside one another, in short AI(ML(DL)).