
Deep learning (DL) in AI

Deep learning (DL) is a machine learning method that allows computers to mimic the human brain, usually to complete classification tasks on images or non-visual data sets.

In this post you’ll learn what deep learning is, how it works, why it has become so popular, and how to implement your first deep learning model.

What Is Deep Learning (DL)?

Deep learning (sometimes called deep structured learning) is a type of machine learning algorithm based on Artificial Neural Network (ANN) technology.

Deep learning and other ANN methods allow computers to learn by example in a way similar to the human brain. This is accomplished by passing input data through multiple levels of neural network processing, transforming the data and narrowing the possible predictions at each step along the way.

Deep learning algorithms have powerful advantages over other models:

  • Unmatched accuracy: DL delivers more accurate results and scales better with large data pools than other methods.
  • Unstructured data handling: After being trained with structured data, DL models can automatically make sense of unstructured data.
  • Recognize unexpected patterns: Many models require engineers to select what pattern the ML (Machine Learning) algorithm will look for. Any correlations beyond those directly selected go undetected. Deep learning algorithms can track all correlations, even those not requested by engineers.

DL methods are often used for image recognition, speech recognition, and Natural Language Processing (NLP), because deep learning is best suited to classification tasks that match input data to a learned type.

For example, deep learning has been used to allow self-driving cars to detect signs and obstacles, as seen in the following image:

deep-learning-1.png

How Does DL Work?

DL learns to recognize what features all members of a type have through the analysis of structured training data in the following steps:

  1. Feature extraction: The algorithm analyzes each data point and recognizes similarities between all data points that share the same label. This process is called feature extraction.
  2. Decision boundary: The algorithm selects which of these features form the most accurate criteria for each label. This criterion is called the decision boundary.

Once the program has refined these criteria using all available training data, it uses the learned criteria to classify new, unlabeled input data into the learned labels, as shown in the following figure.

deep-learning-2.png
How Deep Learning Models Learn to Classify Images

Example:

Suppose an engineer passes in 10,000 photos, with 5,000 labeled elephant and another 5,000 labeled not elephant. The model goes through all 10,000 pictures and pulls out features shared by the elephant pictures, such as “four-legged” or “trunk”.

It would learn that many creatures have four legs, so a four-legged creature may or may not be an elephant. Conversely, only elephants have a trunk, so the model can predict that if a pictured animal has a trunk, it is very likely an elephant.

The algorithm can then use the “trunk”, “four-legged”, and other learned features to form a model that assigns elephant or not elephant labels to a different, unlabeled set of animal pictures.
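To make this concrete, here is a minimal, hedged sketch of such a binary classifier using Keras. The folder layout (data/train/elephant and data/train/not_elephant), the image size, and the network shape are illustrative assumptions, not the exact setup described above.

```python
# Minimal sketch of an elephant / not-elephant image classifier with Keras.
# Assumes labeled images in data/train/elephant and data/train/not_elephant (hypothetical paths).
from tensorflow import keras
from tensorflow.keras import layers

# Labels are inferred automatically from the folder names.
train_ds = keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32
)

# The convolutional layers learn features ("trunk", "four-legged", ...) on their own,
# rather than an engineer selecting them by hand.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # output: probability the photo is an elephant
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Once trained, `model.predict` can assign elephant / not-elephant labels to a separate, unlabeled set of pictures.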

Transfer learning: Once the model is formed, it can even be reused as a starting point for another, similar deep learning task. This process of reusing models is called transfer learning.
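As a hedged sketch of transfer learning with Keras, the snippet below reuses a network pretrained on ImageNet (MobileNetV2) as a frozen feature extractor and trains only a small new output layer for the new task. The input size and the single-output head are illustrative assumptions.

```python
# Transfer learning sketch: reuse a pretrained model as the starting point for a new task.
from tensorflow import keras
from tensorflow.keras import layers

# MobileNetV2 trained on ImageNet, without its original classification head.
base = keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the reused layers; only the new head will learn

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # new task-specific output layer
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(new_train_ds, epochs=3)  # train only the new layer on the new task's data
```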

Comprehensive training data: A DL model can only be accurate if it is given training data with enough variety. Incorrect outcomes of a DL model are often caused by the training set rather than by the model itself.

The key to deep learning is the many hidden layers of processing that the input data must pass through.

Each layer contains multiple neurons, or “nodes”, with mathematical functions that collect and classify data. The first layer is the input layer and the final layer is the output layer.

Between them, there are hidden layers with nodes that take the results of previous classifications as input. These nodes run the previous findings through their own classification functions and adjust the weighting of the findings accordingly.

Traditional neural networks before deep learning would only pass data through two or three hidden layers before completion. Deep learning increases that number to as many as 150 hidden layers to improve result accuracy.

deep-learning-3.png
Visualization of a Single Layer Neural Net
  • Input layer: The input layer receives the raw data. It’s roughly classified and sent along to the appropriate hidden layer node.
  • First hidden layer: The first hidden layer contains nodes that classify on the broadest criteria.
  • Subsequent hidden layers: Each subsequent hidden layer’s nodes get more and more specific, narrowing the classification possibilities further via result weighting.
  • Final output layer: The final output layer chooses the most likely classification label out of those that have not been ruled out. (A minimal code sketch of this layered structure follows this list.)
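As a rough sketch (not a prescriptive architecture), the Keras model below shows this input layer, hidden layers, and output layer structure; the layer sizes and the 784-feature input are illustrative assumptions only.

```python
# Sketch of the input -> hidden layers -> output structure described above.
# Layer sizes and the 784-feature input (e.g., a flattened 28x28 image) are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(784,)),               # input layer: raw data
    layers.Dense(128, activation="relu"),    # first hidden layer: broadest criteria
    layers.Dense(64, activation="relu"),     # deeper hidden layers narrow the possibilities
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),  # output layer: picks the most likely of 10 labels
])
model.summary()  # prints each layer and its trainable weight count
```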

Differences between Deep Learning and Machine Learning

Deep learning is a specialized form of machine learning. The main difference between deep learning and machine learning processes is how features are extracted.

Machine learning (ML): An engineer with knowledge of both the model and the subject being classified manually selects which features the ML algorithm will use as a decision boundary. The algorithm then searches for these set features and uses them to classify the data. The following figure shows an ML process.

deep-learning-4.png

Deep learning (DL): Deep learning is a subset of ML that determines target features automatically, without the help of a human engineer. This speeds up results because the algorithm can find and select features faster than a human can. DL also increases accuracy because the algorithm can detect all features rather than just those recognizable to the human eye. The following figure shows the DL process:

deep-learning-5.png
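To make the contrast concrete, here is a small, hedged sketch: in the classic ML case an engineer supplies hand-picked features (such as number of legs, has a trunk) to a simple classifier, while in the DL case raw pixels go straight into a neural network that learns its own features. The feature names, shapes, and random data are purely illustrative.

```python
# Classic ML vs. deep learning on the elephant example (toy data, illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from tensorflow import keras
from tensorflow.keras import layers

# --- Machine learning: the engineer chooses the features ------------------
# Hand-crafted features per photo: [number_of_legs, has_trunk]
X_features = np.array([[4, 1], [4, 0], [2, 0], [4, 1]])
y = np.array([1, 0, 0, 1])               # 1 = elephant, 0 = not elephant
ml_model = LogisticRegression().fit(X_features, y)

# --- Deep learning: the network extracts features from raw pixels ---------
X_raw = np.random.rand(4, 128, 128, 3)   # stand-in for raw images
dl_model = keras.Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
dl_model.compile(optimizer="adam", loss="binary_crossentropy")
dl_model.fit(X_raw, y, epochs=1, verbose=0)
```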

DL avoids the shallow learning plateau encountered by other types of ML. Shallow learning algorithms are ML algorithms that do not gain accuracy beyond a certain amount of training data. DL is not shallow learning; it continues to scale in accuracy even with extremely large training data pools. The following figure shows DL vs. ML accuracy.

deep-learning-6.png
Visualization of Deep Learning vs. Shallow Learning Performance

The downside of deep learning is that it requires a larger pool of labeled training data to get started. It also requires a powerful machine with an efficient GPU to rapidly process each image.

If you don’t have either of these things, other ML algorithms will be a better choice.

Deep learning tools

Deep learning tools allow data scientists to create programs that enable a computer or machine to learn like the human brain, processing data and patterns before executing decisions. Deep learning can be regarded as a catalyst that automates the core of predictive analytics.

Deep Learning Languages

The most popular languages for DL are the following:

  • Python: Python is the most commonly used language for all types of machine learning, not just deep learning. Over 55% of data scientists use Python as their primary language. This is because of Python’s many ML focused libraries and its easy-to-learn syntax.
  • Java: Java is the second most popular language for machine learning, primarily for ML-powered security protocols like classification-based fraud detection. Java is getting more machine learning tools with each version, such as new string and file methods added in Java 11.
  • R: R is a graphics-based language used for statistical analysis and visualization in machine learning. R is a great language for presenting and exploring the results of ML algorithms in a graphical way. It’s especially popular for healthcare technology and biological study presentations. There are, of course, other languages as well, such as C++.

You can learn more about related language technologies in Large Language Models Explained in 3 Levels of Difficulty.

Deep Learning Libraries

The most used libraries for DL are:

  • TensorFlow: TensorFlow is an open-source library that focuses on training deep neural networks. It provides options to deploy ML models to a local device, an on-premises database, or the cloud. TensorFlow is essential to the modern Python data scientist because it provides the tools to build and train ML models using the latest techniques.
  • Scikit-learn: Scikit-learn (sklearn) adds support for a variety of supervised and unsupervised learning algorithms, including multi-layer neural networks. It is the most popular ML library for Python and works well alongside other libraries such as SciPy and Pandas (a small example combining scikit-learn and NumPy follows this list).
  • Keras: Keras is an ML API that provides a Python interface for artificial neural networks (ANNs) and acts as an interface for TensorFlow. It enables fast experimentation with deep neural networks and provides commonly-used neural-network building blocks to speed up development.
  • NumPy: NumPy adds support for multidimensional arrays and matrices as well as complex statistical operations. These are essential for a variety of machine learning models.
  • Theano: Theano is an optimization tool used to manipulate and evaluate matrix-based computations. Theano is great for deep learning models as it automatically optimizes computations to run efficiently on GPUs.
  • PyTorch: PyTorch is an ML library developed by Facebook AI and based on the popular Torch library. PyTorch is primarily used for natural language processing and computer vision at companies like Tesla, Uber, and Hugging Face.
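As a small example of how two of these libraries work together, the hedged snippet below uses NumPy to generate a toy data set and scikit-learn’s MLPClassifier (a basic multi-layer neural network) to classify it; the data and layer sizes are purely illustrative.

```python
# Toy example combining NumPy and scikit-learn: a small multi-layer neural network.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                        # 500 points, 2 features each
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)    # label: inside or outside the unit circle

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```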

Deep Learning Frameworks

The most used Frameworks for DL are:

  • Caffe: Caffe is a deep learning framework designed for image recognition and image segmentation. It’s written in C++ with a Python Interface.
  • Microsoft Cognitive Toolkit (CNTK): CNTK is Microsoft’s deep learning framework that describes neural networks as a series of computation steps driven by a graph. Microsoft is no longer developing CNTK, but it is still used in some older deep learning models.

Conclusion

In this post we explained deep learning, the differences between deep learning and machine learning, DL tools, DL languages, DL libraries, and DL frameworks.

In my next post I am going to explain deep learning in practice with the perceptron and some Python program examples.

This post is part of the AI (Artificial Intelligence) step by step series.

 
