Overview
Before diving into the implementation, I would like to define the terminology that we are going to follow throughout the network.
It is a simpler version of a typical neural network, with two hidden layers of 3 neurons each and an output layer with one neuron. (The bias vectors are not shown in the diagram for the sake of simplicity.)
Here, the first two layers are the hidden layers, and the last layer is the output layer. $X$ is the input matrix, which holds all the input vectors (dimensions will be discussed later), $\hat{Y}$ is the predicted vector, and $Y$ is the true vector.
Our loss function is simply log loss in this case, since we are using a single output neuron for binary classification.
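For a single training example with true label $y \in \{0, 1\}$ and predicted probability $\hat{y}$, log loss is:

$$\mathcal{L}(y, \hat{y}) = -\bigl(y \log \hat{y} + (1 - y)\log(1 - \hat{y})\bigr)$$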
Even if the number of neurons in the output layer is more than one, we can use cross-entropy as our loss function. But cross-entropy for two classes is exactly equivalent to log loss.
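To see the equivalence: with two classes, the targets satisfy $y_2 = 1 - y_1$ and the predicted probabilities satisfy $\hat{y}_2 = 1 - \hat{y}_1$, so the cross-entropy sum collapses to log loss:

$$-\sum_{c=1}^{2} y_c \log \hat{y}_c = -\bigl(y_1 \log \hat{y}_1 + (1 - y_1)\log(1 - \hat{y}_1)\bigr)$$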
Lastly, the connections between two consecutive layers are associated with weights, which we will learn using backpropagation. Instead of representing them as individual weights, we will collect them into one larger matrix per layer, which makes the computation easier during forward and backward propagation.
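Here is a minimal NumPy sketch of this representation for the network described above. The input dimension (4) and number of examples (10) are assumptions for illustration; the actual dimensions are discussed in the next chapter.

```python
import numpy as np

np.random.seed(0)

n_features = 4   # assumed input dimension (not specified here)
n_samples = 10   # assumed number of training examples

# One weight matrix (and bias vector) per layer, instead of individual weights.
# Each matrix maps the previous layer's outputs to the current layer's neurons.
W1 = np.random.randn(n_features, 3)  # input -> hidden layer 1 (3 neurons)
b1 = np.zeros((1, 3))
W2 = np.random.randn(3, 3)           # hidden layer 1 -> hidden layer 2 (3 neurons)
b2 = np.zeros((1, 3))
W3 = np.random.randn(3, 1)           # hidden layer 2 -> output layer (1 neuron)
b3 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward propagation: each layer becomes one matrix multiply plus a bias.
X = np.random.randn(n_samples, n_features)  # input matrix, one row per example
A1 = sigmoid(X @ W1 + b1)       # shape (10, 3)
A2 = sigmoid(A1 @ W2 + b2)      # shape (10, 3)
Y_hat = sigmoid(A2 @ W3 + b3)   # shape (10, 1), predicted probabilities

print(Y_hat.shape)  # (10, 1)
```

Packing the weights into matrices lets the whole forward pass be written as a few matrix multiplications, which is also what makes the backward pass straightforward to express.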
In the next chapter, we will try to make sense of the dimensions of the input matrix, the weight matrices, and the output. Once you understand how these are represented, we can easily finish the forward propagation.