Then comes the training stage, where the preprocessed data is fed into the ANN using forward propagation. What is forward propagation? The data entering the input layer flows from the very beginning of the network to its end. At the start, the algorithm chooses random values for the weights. And if you're wondering what these "weights" are, they are the strengths of the connections between the artificial neurons. If a certain weight is very close to zero, the neuron it feeds is unlikely to fire and remains inactive.
Whether a neuron fires depends on the activation function we are using. Each neuron's input is the sum of the output values of the previous layer's neurons, each multiplied by the corresponding weight. This sum passes through the neuron's activation function to produce the neuron's output, which is then sent to the neurons in the next layer. This process happens at every neuron. Once our data has reached the final layer of neurons, the network produces a prediction. Since this is supervised learning, we know the expected value.
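To make that concrete, here is a minimal sketch of one forward pass in plain Python with NumPy; the layer sizes, the random initialisation, and the sigmoid activation are all illustrative assumptions rather than anything fixed by the method.

import numpy as np

def sigmoid(x):
    # One common activation function: squashes any input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative sizes: 4 input features, a hidden layer of 3 neurons, 1 output neuron
W1 = rng.normal(size=(3, 4))  # weights start out as random values
W2 = rng.normal(size=(1, 3))

def forward(x):
    # Each neuron's input is the weighted sum of the previous layer's outputs
    h = sigmoid(W1 @ x)       # hidden-layer outputs
    y = sigmoid(W2 @ h)       # final layer: the network's prediction
    return y

sample = np.array([0.5, -1.2, 3.0, 0.7])  # one preprocessed input sample
print(forward(sample))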
Then a loss is calculated using a function the ML engineer chooses (for example, mean squared error), and the weights are adjusted using backpropagation. It's a bit confusing, isn't it? But that is how it happens. The loss function is partially differentiated with respect to each weight, and the weights are updated layer by layer from the last layer back to the very first. Yes, from back to front.
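As a hedged illustration of one such update, the sketch below differentiates a squared-error loss with respect to the weights of a single sigmoid neuron; the inputs, target, and learning rate are made-up values.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.0])  # inputs to a single neuron
w = np.array([0.1, 0.3])   # its current weights
t = 1.0                    # the expected value (known, since this is supervised)

z = w @ x                  # weighted sum of the inputs
y = sigmoid(z)             # the neuron's prediction
loss = 0.5 * (y - t) ** 2  # squared-error loss for this one sample

# Chain rule: dL/dw = dL/dy * dy/dz * dz/dw
grad = (y - t) * y * (1 - y) * x   # y * (1 - y) is the sigmoid's derivative

lr = 0.1                   # learning rate, an illustrative choice
w = w - lr * grad          # step the weights against the gradient
print(loss, w)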
Then forward propagation is performed again after backpropagation. After many epochs (passes over the training data, in other words), our loss function converges to a minimum value while our accuracy gets higher. Now our ANN is ready to predict on new data. This is the basic process of a neural network. Nowadays there are many types of ANNs, such as CNNs and RNNs, which are used for different applications.
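Putting forward propagation, the loss, backpropagation, and epochs together, here is a minimal end-to-end training loop for a single neuron on a tiny made-up dataset; the bias term, the learning rate, and the data itself are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny made-up dataset: 4 samples with 2 features each, plus known labels
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([0.0, 1.0, 1.0, 1.0])

w = np.zeros(2)  # weights of a single neuron
b = 0.0          # a bias term, which real networks also learn
lr = 0.5

for epoch in range(2000):             # each epoch is one pass over the data
    Y = sigmoid(X @ w + b)            # forward propagation
    d = (Y - T) * Y * (1 - Y)         # backpropagated error signal
    w -= lr * (X.T @ d) / len(X)      # weight update (gradient descent)
    b -= lr * d.mean()

loss = 0.5 * np.mean((Y - T) ** 2)
print(loss, w, b)                     # the loss has converged towards a minimum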
These deep learning techniques have become common practice in computer vision. Object classification and object detection are common problems whose solutions can readily be found using deep learning. But can't we just use traditional methods such as Support Vector Machines (SVMs) instead of heavy deep learning algorithms such as convolutional neural networks?
Yes, we can. But we should not feed raw images to an SVM. We need to extract features first before using SVMs or other traditional methods. And when we have lots of data, it is usually better to stick with neural networks, since they can then be trained to higher accuracy.
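As a sketch of that traditional pipeline, assuming scikit-learn and scikit-image are available, the snippet below extracts classic HOG features from small example images and only then trains an SVM on them; the built-in digits dataset merely stands in for a real image collection.

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from skimage.feature import hog

# The small built-in 8x8 digit images stand in for a real image dataset
digits = datasets.load_digits()

# Traditional pipeline, step 1: extract hand-crafted features (here, HOG) ...
features = [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
            for img in digits.images]

X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.25, random_state=0)

# ... step 2: feed the features, not the raw pixels, to the SVM
clf = svm.SVC()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))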
“
Deep learning models have already
infiltrated our world, with its applications
offering a range of breakthroughs in major
industries ranging from health care to the
realms of aerospace and defense. And the
deep learning tools will continue to
revolutionize the lives of the people for the
better. Let us wait and watch
what the future holds…
Sahanaka Weerarathna,
Final Year, Department of Electrical and Electronic Engineering,
Faculty of Engineering, University of Peradeniya.