Introduction to Deep Learning:
Deep learning is a branch of machine learning that uses neural networks
with multiple layers.
Join us on a captivating journey as we explore the fascinating world of
Deep Learning Networks, uncovering their historical roots, inner workings,
remarkable features, and the transformative impact they have on various
industries.
What is Deep Learning:
Deep Learning is built on neural networks.
At its core, a Neural Network is a computational model inspired by the
complex network of neurons in the human brain.
It consists of interconnected nodes called artificial neurons or “units”
organized into layers that process and transform input data.
Through a process known as training, Neural Networks learn to recognize
patterns, make predictions, and solve complex problems.
If you want to learn more about Machine Learning, you can read my article at
raktimsingh.com/machine-learning/
If you want to learn more about Artificial Intelligence, you can read my article at
raktimsingh.com/what-is-artificial-intelligence-with-examples/
Interesting Things about Deep Learning:
1. Deep learning is built on Neural Networks. These networks can
learn from data, adapt to new information, and make intelligent
decisions, much like the human brain.
2. With deep learning, we can recognize complex patterns and
relationships within vast amounts of data, enabling applications such
as image recognition, natural language processing, and autonomous
vehicles.
3. Deep learning uses layers of algorithms to process data. Each layer
abstracts meaning from the training data and passes its output to the
next layer, where it serves as input.
4. Deep learning is used to visually recognize objects and understand
various texts, including languages and speech.
5. The word ‘Deep’ in ‘Deep Learning’ refers to the number of layers
through which the data propagates.
6. A Neural Network can learn any nonlinear function. This is done with
the help of activation functions.
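Point 6 can be illustrated with a small NumPy sketch (the weights and shapes here are arbitrary, chosen only for demonstration). Without an activation function, stacked layers collapse into a single linear map; inserting a nonlinearity is what breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first-layer weights (arbitrary)
W2 = rng.normal(size=(2, 4))   # second-layer weights (arbitrary)
x = rng.normal(size=3)         # an arbitrary input vector

# Two linear layers with no activation function...
no_act = W2 @ (W1 @ x)
# ...collapse into one linear layer, so nothing nonlinear can be learned:
collapsed = (W2 @ W1) @ x
print(np.allclose(no_act, collapsed))  # True

# A nonlinearity such as ReLU between the layers breaks this collapse,
# which is what lets the network represent nonlinear functions.
relu = lambda v: np.maximum(v, 0.0)
with_act = W2 @ relu(W1 @ x)
```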
History of Deep learning:
In 1943, Walter Pitts and Warren McCulloch created a computer model
based on the neural network of the brain. They also used ‘threshold logic’,
with the help of various algorithms.
Other milestones include
1. In 1960, Henry J. Kelley developed the basics of the ‘Back
Propagation Model’.
2. In 1980, Kunihiko Fukushima developed the ‘Neocognitron’, an
artificial neural network with a multi-layered, hierarchical design
that used convolution. He also used the concept of ‘weight’, that is,
manually adjusting the weightage of important features.
3. In 1995, Corinna Cortes and Vladimir Vapnik developed the support
vector machine.
4. In 1997, Sepp Hochreiter and Juergen Schmidhuber developed ‘Long
Short-Term Memory’ (LSTM) for recurrent neural networks.
5. There was a resurgence in Deep Learning from 2000 onwards.
6. Visionaries and organizations such as John Hopfield, Yann LeCun,
and companies like Google and Facebook have played instrumental
roles in advancing the field.
Deep learning has become very popular and widely used due
to these reasons:
1. We now have a lot of data available on the internet. The
availability of this ‘digital data’ has helped in training
various models.
2. Cheap computing power along with powerful GPUs.
3. Better and more accurate algorithms.
How Deep Learning works:
In deep learning, Neural Networks are involved. They consist of an input
layer, hidden layers, and an output layer. Each hidden layer
applies its own transformation to the data passing through it.
Each artificial neuron receives input signals, applies a mathematical
transformation, and passes the output to the next layer.
The connections between neurons, known as weights, are adjusted during
training to optimize the network’s performance.
Through forward propagation and backpropagation, Neural Networks
iteratively adjust these weights to minimize errors and improve accuracy.
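As a minimal sketch of this loop (a single linear neuron, a made-up dataset, and an illustrative learning rate of 0.1): forward propagation computes the output, and backpropagation adjusts the weight and bias to shrink the error.

```python
import numpy as np

# Toy data: learn y = 2x + 1 with one linear neuron (weight w, bias b).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    # Forward propagation: compute the neuron's output.
    y_hat = w * x + b
    # Error: how far the output is from the desired result.
    err = y_hat - y
    # Backpropagation: gradient of the squared error w.r.t. w and b.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    # Adjust the weights to minimize the error.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```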
What is an Artificial Neural Network
An Artificial Neural Network, or neural network, contains interconnected nodes.
Its name and structure are inspired by the biological brain.
It contains one input layer, multiple hidden layers, and an output layer. Each
node connects to others and carries a certain weight and threshold.
Data is fed to the input layer and moves towards the hidden layers.
If the output of a node (in a hidden layer) exceeds its threshold, that
node is activated and its output is forwarded to the nodes in the next
layer.
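The threshold behaviour described above can be sketched as follows (the inputs, weights, and thresholds are made up for illustration):

```python
import numpy as np

# One hidden layer with a hard threshold: a node "fires" (outputs 1)
# only when its weighted input exceeds its threshold.
def layer(x, W, thresholds):
    z = W @ x                              # weighted sum for each node
    return (z > thresholds).astype(float)  # 1 if activated, else 0

x = np.array([0.5, 1.0, -0.2])          # values from the input layer
W = np.array([[0.4, 0.6, 0.1],          # weights into 2 hidden nodes
              [-0.3, 0.2, 0.8]])
thresholds = np.array([0.5, 0.0])

# First node's weighted sum is 0.78 (> 0.5, fires); second is -0.11 (does not).
print(layer(x, W, thresholds))  # [1. 0.]
```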
Types of Deep Learning
These are the main types of Deep Learning.
1. Recurrent Neural Networks (RNN)
2. Long Short-Term Memory (LSTM)
3. Convolutional Neural Networks (CNNs)
4. Generative Adversarial Networks (GANs)
5. Radial Basis Function Networks (RBFNs)
6. Multilayer Perceptrons (MLPs)
7. Self Organizing Maps (SOMs)
8. Deep Belief Networks (DBNs)
9. Restricted Boltzmann Machines (RBMs)
10. Autoencoders
What is Back Propagation
Backpropagation in deep learning refers to the use of errors in training
deep learning models.
Here, the result from the output layer is compared with the desired result. If
there is a difference (error), the weights of the neurons are adjusted, and
the next round of training starts.
CNN Deep Learning:
A Convolutional Neural Network (CNN) is a type of artificial neural network. It
is a powerful tool for identifying patterns in images and is used for image
recognition and processing. It works by reading the pixels of an image.
In a typical neural network, each neuron in the input layer is connected to
every neuron in the hidden layer. In a CNN, only certain neurons in the input
layer, called local receptive fields, are connected to a given hidden neuron.
Local receptive fields are mapped to the same feature of an image. All hidden
neurons try to detect the same image feature, such as an edge or contour,
anywhere in the picture. For this reason, the nodes share the same weights
and biases.
One important element of a CNN is its filters, or kernels. These filters
(kernels) extract specific features from the image in a convolution operation.
Interestingly, a CNN learns its filters automatically. They are detected
and learned without being specified explicitly.
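A minimal convolution sketch in NumPy (the tiny image and the hand-written edge-detecting kernel are made up for illustration; a real CNN learns its kernel values during training):

```python
import numpy as np

# Slide a single 3x3 kernel over an image; the same weights are reused
# at every position (weight sharing across the local receptive fields).
def conv2d(img, kernel):
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

img = np.zeros((5, 5))
img[:, 2:] = 1.0                      # a vertical edge in the middle
kernel = np.array([[-1, 0, 1],        # responds to dark-to-bright edges
                   [-1, 0, 1],
                   [-1, 0, 1]])

out = conv2d(img, kernel)
print(out)  # strong responses (3.0) where the edge is, 0.0 elsewhere
```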
Recurrent Neural Network:
A recurrent neural network (RNN) is well suited to analyzing sequential data.
RNNs are good at handling time-series problems in sequential data.
The input to a recurrent neural network consists of the current input and the
previous samples. Each neuron also has an internal memory that keeps
information from the computation on the previous sample.
Here, connections between nodes can create a cycle, allowing the output of
some nodes to affect the subsequent input to the same nodes.
RNN models are widely used in Natural Language Processing.
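One recurrent step can be sketched like this (the shapes and random weights are arbitrary). Note how the hidden state h is fed back in at each step, carrying information from earlier inputs into later ones:

```python
import numpy as np

# A minimal recurrent step: the new hidden state h mixes the current
# input x with a summary of everything seen so far (the previous h).
def rnn_step(x, h, Wx, Wh, b):
    return np.tanh(Wx @ x + Wh @ h + b)

rng = np.random.default_rng(0)
Wx = rng.normal(scale=0.5, size=(4, 3))   # input-to-hidden weights
Wh = rng.normal(scale=0.5, size=(4, 4))   # hidden-to-hidden (the "cycle")
b = np.zeros(4)

h = np.zeros(4)                           # internal memory starts empty
for x in rng.normal(size=(6, 3)):         # a sequence of 6 inputs
    h = rnn_step(x, h, Wx, Wh, b)         # h now reflects all past inputs

print(h.shape)  # (4,)
```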
What is Long Short-Term Memory:
Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network
(RNN). They are capable of learning long term dependencies in sequential
data.
In spoken language, different words in a sentence connote different
meanings based on where each word is placed.
For example, the word ‘run’ can refer to running, a runway, runs in a cricket
match, and so on.
LSTMs are very useful in speech recognition, language translation, etc.
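A single LSTM step can be sketched as follows (random weights and illustrative sizes, following a common formulation). The forget, input, and output gates decide what the long-term cell state c discards, writes, and exposes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM step over the concatenated previous output h and input x.
def lstm_step(x, h, c, W, b):
    z = W @ np.concatenate([h, x]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget/input/output gates
    g = np.tanh(g)                                # candidate values
    c = f * c + i * g                             # update long-term memory
    h = o * np.tanh(c)                            # expose short-term output
    return h, c

rng = np.random.default_rng(0)
hidden, inp = 4, 3
W = rng.normal(scale=0.3, size=(4 * hidden, hidden + inp))
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for x in rng.normal(size=(5, inp)):     # a sequence of 5 inputs
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```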
What is Generative Adversarial Networks:
Here, two neural networks compete with each other. One network
(the Generator) creates some data, say a picture of a dog. The other
network (the Discriminator) tries to find flaws in that picture. In the next
iteration, the generator creates a better picture of a dog, and the
discriminator again tries to find flaws in it. This goes on until the
generator is able to generate data in which the discriminator cannot find
any flaws.
What is an Activation Function
An activation function brings non-linearity into the output of a neuron. For a
neuron, the activation function is applied to the weighted sum of the inputs
plus the required bias.
If this value is greater than the threshold, the neuron is activated
and its value is propagated to the next layer.
With proper use of an activation function, non-useful signals
are filtered out (as they fall below the threshold value).
Important Activation functions
These are some of the important activation functions.
1. Sigmoid activation function
2. Tanh function
3. Rectified Linear Unit (ReLU) function
4. Softmax function
5. Swish function
6. Gaussian Error Linear Unit
7. Scaled Exponential Linear Unit
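Sketches of the first four functions in NumPy (these are common formulations; exact variants differ slightly across libraries):

```python
import numpy as np

def sigmoid(z):                   # squashes any value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):                      # squashes any value into (-1, 1)
    return np.tanh(z)

def relu(z):                      # zero for negatives, identity otherwise
    return np.maximum(z, 0.0)

def softmax(z):                   # turns a vector of scores into probabilities
    e = np.exp(z - np.max(z))     # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-1.0, 0.0, 2.0])
print(relu(z))                    # [0. 0. 2.]
print(softmax(z).sum())           # 1.0
```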
Important Deep learning frameworks
These are some of the important deep learning frameworks.
1. TensorFlow
2. PyTorch
3. Caffe
4. Deeplearning4j (DL4J)
5. Microsoft Cognitive Toolkit (CNTK)
Advantages of Deep Learning:
1. Non-Linearity: Neural Networks can capture non-linear relationships
in data, making them highly effective in solving complex problems
that involve intricate patterns and dependencies.
2. Hierarchical Representations: Neural Networks with multiple hidden
layers have the capacity to learn hierarchical representations of data,
enabling them to extract high-level features and solve more sophisticated
tasks.
3. Generalization: Neural Networks possess the ability to generalize
from training data and make accurate predictions on unseen data, allowing
them to handle real-world scenarios and adapt to new ones.
Applications of Deep Learning:
1. Pattern Recognition: Neural Networks excel at recognizing patterns in
data, enabling tasks like image and speech recognition, fraud detection,
and sentiment analysis.
2. Adaptability: Neural Networks can adapt to changing data and learn
from new information, making them suitable for dynamic environments
where patterns evolve over time.
3. Parallel Processing: Neural Networks can perform computations in
parallel, leveraging the power of modern hardware architectures and
accelerating training and inference tasks.
Deep Learning examples:
1. Facial Recognition: Deep Learning powers facial recognition systems
used in smartphones and security applications, enabling quick and
accurate identification of individuals.
2. Recommendation Systems: Companies like Netflix and Amazon utilize
Deep Learning to analyze user preferences and recommend personalized
content or products, enhancing the user experience.
3. Autonomous Vehicles: Deep Learning plays a critical role in self-driving
cars, processing sensor data and making real-time decisions to navigate
roads safely.
Companies Using Deep Learning:
1. Google: Google utilizes Deep Learning extensively in various
applications, including Google Search, language translation, and image
recognition.
2. Facebook: Facebook employs Deep Learning for tasks such as facial
recognition, content filtering, and targeted advertising, enhancing user
engagement and privacy protection.
3. Tesla: Tesla, an industry leader in autonomous vehicles, relies on
Deep Learning for advanced driver-assistance systems, enabling
their cars to perceive and navigate the environment.
Industries Using Deep Learning:
1. Healthcare: Deep Learning is used in medical imaging for disease
diagnosis, patient monitoring, and drug discovery, aiding in accurate
diagnoses and personalized treatments.
2. Finance: Financial institutions use Deep Learning for credit scoring,
fraud detection, and stock market analysis, enhancing risk management
and decision-making.
3. Manufacturing: Deep Learning facilitates predictive maintenance, quality
control, and demand forecasting in manufacturing, optimizing production
processes and reducing costs.
Industries such as retail, cybersecurity, agriculture, and energy can further
leverage Deep Learning.
They can improve inventory management, detect anomalies in network
traffic, optimize crop yields, and optimize energy consumption, respectively.
Related Technologies:
Understanding Deep learning is enhanced by knowledge of related
technologies such as Neural Network, Convolutional Neural Networks
(CNNs) for image processing, and Recurrent Neural Networks (RNNs) for
sequence-based data, such as text and speech.
When Not to Use Deep Learning:
Deep learning may not be suitable for scenarios with limited data or where
interpretability and explainability are crucial. In cases where simpler models
can achieve comparable performance or when computational resources
are constrained, alternative approaches may be more appropriate.
Future of Deep Learning:
The future of Deep Learning is promising, with advancements in areas
such as explainable AI, lifelong learning, and ethical considerations.
Deep Learning will continue to drive innovation, powering advancements in
robotics, healthcare, personalized virtual assistants, and human-computer
interfaces.
Conclusion:
Deep Learning has emerged as a transformative force, propelling the
capabilities of Artificial Intelligence to new heights.
Inspired by the intricate workings of the human brain, these networks unlock the potential
to solve complex problems, recognize patterns, and make intelligent decisions.
As deep learning continues to evolve and find applications across various
industries, we stand at the threshold of a new era, where intelligent
machines work hand-in-hand with humans, amplifying our capabilities and
shaping a future filled with endless possibilities.
So, join this exhilarating journey of Deep Learning, where the boundaries of
artificial intelligence are pushed, and the extraordinary becomes a reality.