# Back propagation

## Prerequisites

Before moving on, I need to make sure that you really understand why we are finding derivatives and how SGD is used to update the parameters. This video will help.

{% embed url="<https://www.youtube.com/watch?v=IHZwWFHWa-w>" %}
Gradient Descent Intuition
{% endembed %}

After this, you need to understand how we find the derivatives in our complex function. I suggest going through either of these two resources (the video or the notes from cs231n), preferably both.

{% hint style="warning" %}
Do not move ahead until you really understand backpropagation and gradient descent; otherwise you won't be able to focus on the vectorization part of this.
{% endhint %}

{% embed url="<https://www.youtube.com/watch?v=Ilg3gGewQ5U>" %}
Back propagation
{% endembed %}

The notes will explain back propagation using simple computation graphs.

{% embed url="<http://cs231n.github.io/optimization-2/>" %}

## Representing forward prop as a simple computational graph

Let's look at the figure that explains the whole process once again. It looks like a simple computational graph with a few multiplications, additions, and sigmoid functions at each layer.

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M13qzxtrVpHzd-4O4vL%2F-M168zVTVK4lLEU2_1Av%2Fimage.png?alt=media\&token=bc2f2cf4-6d19-4029-a40a-300ba9814a3d)

As mentioned in the **cs231n** article above, sigmoid can be broken down further into operations like exponentials and fractions. So convince yourself that the whole forward propagation is one big computational graph.

Here, instead of showing all those internal operations, we will abstract them away and keep only the high-level operations we already know, like sigmoid.
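To see that decomposition concretely, here is a small sketch (the name `sigmoid_as_graph` is ours, purely for illustration) that evaluates sigmoid node by node, the way the cs231n notes draw it as a circuit:

```python
import numpy as np

def sigmoid_as_graph(z):
    # sigmoid(z) = 1 / (1 + e^(-z)) decomposed into primitive graph nodes.
    a = -z              # negate
    b = np.exp(a)       # exponential
    c = 1.0 + b         # add
    d = 1.0 / c         # fraction (reciprocal)
    return d

def sigmoid(z):
    # The usual one-liner; both compute the same thing.
    return 1.0 / (1.0 + np.exp(-z))
```

During backprop, each of those primitive nodes gets its own local derivative, and the chain rule multiplies them together; abstracting the whole chain as one "sigmoid" node just collapses that product into $$A(1-A)$$.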

### Notation

At this point, you must have understood that we need to find the derivatives of variables like W3, B3, A2, ... with respect to the loss function (L) so that we can perform the gradient update step.

We will denote the derivative of the loss $$\mathbf {(L)}$$ with respect to any intermediate variable with a $$d$$ prefix. For example, we write the derivative of $$L$$ with respect to $$\small \mathbf {Z\_3}$$, ie., $$\large \bold {\frac  {\partial \mathbf L} {\partial Z\_3}}$$, as $$\small  \mathbf {dZ\_3}$$, and the derivative of $$L$$ with respect to $$\small \mathbf {A\_3}$$, ie., $$\large \bold {\frac  {\partial \mathbf L} {\partial A\_3}}$$, as $$\small  \mathbf {dA\_3}$$.

On the other hand, the derivative of any intermediate variable with respect to another intermediate variable is written out in full, e.g., the derivative of $$\small \mathbf {A\_3}$$ with respect to $$\small \mathbf {Z\_3}$$ as $$\large \bold {\frac  {\partial \mathbf {A\_3}} {\partial Z\_3}}$$.

## Back propagation

First, we will build an intuition for calculating the gradients of all the weight and bias matrices, starting from the final layer and going back to the first, without actually computing the derivatives. By the end, you will be able to write a general expression for the gradients at every layer, so that the implementation reduces to a simple for loop. To get familiar with the process, you can watch this video.

{% embed url="<https://www.youtube.com/watch?v=tIeHLnjs5U8>" %}
Math behind back propagation
{% endembed %}

Then we will dive deeper into calculating the gradients. We will show how to calculate the gradient for a single weight entry and then vectorize it for the entire weight matrix at that layer.

### Output Layer

![Computational graph for the Output Layer](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1KlAz-zqxV_hXXXCRy%2F-M1KrM7FVtr4n8Sc1iyE%2Fimage.png?alt=media\&token=0abb7756-f807-4938-ab83-ea2be708db1a)

From this diagram, we can clearly see that $$Z\_3$$ is a function of $$W\_3, A\_2 \ and\  B\_3$$, and $$A\_3$$ is a function of $$Z\_3$$. It is worth mentioning *why sigmoid* is the activation function here. As this is the last layer, the outputs should be probabilities for each class. Since we are dealing with simple binary classification, sigmoid is sufficient.

If we have more than two classes, we should use softmax as the activation function; you get to implement that at the end. Whatever the activation function is, it is enough to say that $$A\_3$$ is a function of $$Z\_3$$. As you already know, the gradients of $$L$$ w\.r.t $$W\_3, A\_2 \ and\ B\_3$$ can be written using the chain rule as follows.

$$
\gdef\deriv#1#2{\frac {\partial #1} {\partial #2}}
\gdef\layer{3}
\gdef\prevlayer{2}

\begin{aligned}

\deriv{L}{W\_\layer} &=  \deriv{Z\_\layer}{W\_\layer}  \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}} = \deriv{Z\_\layer}{W\_\layer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{W\_\layer}  dZ\_\layer
&\implies \boxed {dW\_\layer = \deriv{Z\_\layer}{W\_\layer}  dZ\_\layer}
\\\\

\deriv{L}{A\_\prevlayer} &=  \deriv{Z\_\layer}{A\_\prevlayer}  \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}} = \deriv{Z\_\layer}{A\_\prevlayer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{A\_\prevlayer} dZ\_\layer
&\implies  \boxed {dA\_\prevlayer = \deriv{Z\_\layer}{A\_\prevlayer}  dZ\_\layer}

\\\\

\deriv{L}{B\_\layer} &= {\deriv{Z\_\layer}{B\_\layer}   \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}}} = \deriv{Z\_\layer}{B\_\layer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{B\_\layer} dZ\_\layer
&\implies  \boxed {dB\_\layer= \deriv{Z\_\layer}{B\_\layer}  dZ\_\layer}  \\
\end{aligned}
$$

Take your time and understand the above derivatives, as this is the key to generalizing the computation to all the layers.
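These identities are easy to sanity-check numerically on a scalar version of the output layer. The toy values and the `loss` helper below are illustrative stand-ins, assuming a sigmoid output and the binary cross-entropy loss derived later in this chapter:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, a_prev, b, y):
    # Scalar forward pass: z = w*a_prev + b, a = sigmoid(z),
    # then binary cross-entropy loss.
    a = sigmoid(w * a_prev + b)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

w, a_prev, b, y = 0.7, 0.3, -0.2, 1.0

# Chain rule, exactly as in the boxed equations:
z = w * a_prev + b
a = sigmoid(z)
dLdA = -(y / a) + (1 - y) / (1 - a)   # dL/dA3
dAdZ = a * (1 - a)                    # dA3/dZ3 (sigmoid derivative)
dZ = dAdZ * dLdA                      # dZ3
dW = a_prev * dZ                      # dZ3/dW3 * dZ3
dB = 1.0 * dZ                         # dZ3/dB3 * dZ3

# Finite-difference check: perturb each parameter slightly
# and measure the slope of the loss directly.
eps = 1e-6
dW_num = (loss(w + eps, a_prev, b, y) - loss(w - eps, a_prev, b, y)) / (2 * eps)
dB_num = (loss(w, a_prev, b + eps, y) - loss(w, a_prev, b - eps, y)) / (2 * eps)
```

If the chain-rule products above are right, `dW` and `dW_num` (and `dB` and `dB_num`) agree to several decimal places.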

### Layer - 2 (Second Hidden Layer)

![Computational graph for the Second Hidden Layer](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1Kt2ZCiibdnsFxIJss%2F-M1Kz75c99mSSqBDdd9V%2Fimage.png?alt=media\&token=743c8280-ee86-4fae-977a-506d6e8a905f)

The gradients for the intermediate variables at this layer look as follows. A little sneak peek: we can easily find $$dZ\_2$$ because we already found $$dA\_2$$ previously.

$$
\gdef\deriv#1#2{\frac {\partial #1} {\partial #2}}
\gdef\layer{2}
\gdef\prevlayer{1}

\begin{aligned}

\deriv{L}{W\_\layer} &=  \deriv{Z\_\layer}{W\_\layer}  \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}} = \deriv{Z\_\layer}{W\_\layer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{W\_\layer}  dZ\_\layer
&\implies \boxed {dW\_\layer = \deriv{Z\_\layer}{W\_\layer}  dZ\_\layer}
\\\\

\deriv{L}{A\_\prevlayer} &=  \deriv{Z\_\layer}{A\_\prevlayer}  \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}} = \deriv{Z\_\layer}{A\_\prevlayer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{A\_\prevlayer} dZ\_\layer
&\implies  \boxed {dA\_\prevlayer = \deriv{Z\_\layer}{A\_\prevlayer}  dZ\_\layer}

\\\\

\deriv{L}{B\_\layer} &= {\deriv{Z\_\layer}{B\_\layer}   \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}}} = \deriv{Z\_\layer}{B\_\layer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{B\_\layer} dZ\_\layer
&\implies  \boxed {dB\_\layer= \deriv{Z\_\layer}{B\_\layer}  dZ\_\layer}  \\
\end{aligned}
$$

### Layer - 1 (First Hidden Layer)

![Computational graph for the First Hidden Layer](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1KziApbw8WQ0XY9cFQ%2F-M1L-LMt3wsWgmMbZw_u%2Fimage.png?alt=media\&token=837b6a34-4057-485d-acd5-faee4cb7bc6b)

This is the first hidden layer's graph, and the last stop when propagating backward to compute the gradients. As you have already done this step for the previous layers, it is easy to write the equations for the gradients.

$$
\gdef\deriv#1#2{\frac {\partial #1} {\partial #2}}
\gdef\layer{1}
\gdef\prevlayer{0}

\begin{aligned}

\deriv{L}{W\_\layer} &=  \deriv{Z\_\layer}{W\_\layer}  \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}} = \deriv{Z\_\layer}{W\_\layer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{W\_\layer}  dZ\_\layer
&\implies \boxed {dW\_\layer = \deriv{Z\_\layer}{W\_\layer}  dZ\_\layer}
\\\\

\deriv{L}{B\_\layer} &= {\deriv{Z\_\layer}{B\_\layer}   \overbrace{\deriv{A\_\layer}{Z\_\layer} \deriv{L}{A\_\layer}}^{\text {chain rule}}} = \deriv{Z\_\layer}{B\_\layer}  \deriv{L}{Z\_\layer} = \deriv{Z\_\layer}{B\_\layer} dZ\_\layer
&\implies  \boxed {dB\_\layer= \deriv{Z\_\layer}{B\_\layer}  dZ\_\layer}  \\

\end{aligned}
$$

Weird! Why didn't we calculate the gradient for $$A\_0$$? Because it is the input: it is a constant, and there is nothing to learn for it during training.

{% hint style="info" %}
There are cases where we do want to learn the input as well. For example, with word embeddings, we feed in a random vector for each word and learn those vectors through back propagation. Just wanted you to know.
{% endhint %}

By now, you should have a general idea of how to generalise the gradient computation at each layer, and you should be able to code it out.

## Computing Gradients&#x20;

From the above equations, it is clear that we need to find all the $$dZ\_l\ 's,\ where\ l=layer$$. The computation is different for the output layer than for the intermediate layers: at the final layer, $$Z$$ connects to the loss function, whereas at an intermediate layer, $$Z$$ connects to the activation at that layer. So the generalisation for the gradient of $$Z$$ is the same for all layers except the output layer. This might sound confusing right now; I mention it beforehand just as a heads up, and it will become clear as we go.

First we will look at the derivatives in the first hidden layer instead of the output layer. This is better because it generalises to a layer that has multiple neurons in both the previous and the current layer; in our case, the output layer has only one neuron. So, we go from left to right.

While coding it, there is a dependency in the gradient computation, so it has to go backwards. Here, just for the sake of explanation, we go from left to right. Hope this is clear.

### Layer - 1 (First Hidden Layer)

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1ZUxs5VwF4rOySm92-%2F-M1_a-yAjJMkxK7irNxo%2Fimage.png?alt=media\&token=52bfc1bf-8087-462b-877c-03d4e147ab3d)

{% hint style="info" %}
In the above diagram, a few things are important to note:

* $$dZ\_1$$ is what we have to find.
* $$dA\_1$$ is assumed to have been computed previously, while computing the gradients in the second hidden layer.
* $$\* \rightarrow$$ element-wise multiplication.
{% endhint %}

We still need to find $$\Large {\frac{\partial Z\_1}{\partial W\_1}}$$and $$\Large {\frac{\partial Z\_1}{\partial B\_1}}$$ in order to find $$dW\_1$$and $$dB\_1$$ as $$\boxed {dW\_1 = \frac{\partial Z\_1}{\partial W\_1}dZ\_1}$$and $$\boxed {dB\_1 = \frac{\partial Z\_1}{\partial B\_1}dZ\_1}$$.&#x20;

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1VSh8OrMSMmRypzDOl%2F-M1Vog5v1OLnEfN0Ubh_%2Fimage.png?alt=media\&token=154d0132-633b-426b-86ee-baff956d861f)

$$
\gdef\layer{1}
\gdef\layersize{3}
\gdef\prevlayer{0}
\gdef\prevlayersize{2}
\gdef\w#1#2#3{w^#1\_{#2#3}}
\gdef\a#1#2{a^#1\_#2}
\gdef\b#1#2{b^#1\_#2}
\gdef\z#1#2{z^#1\_#2}

% - derivative
\gdef\d#1#2 {\frac {\partial#1} {\partial#2} }

% - Matrix
\gdef\bmat#1{\begin{bmatrix}#1\end{bmatrix}}
\gdef\mat#1{\begin{matrix}#1\end{matrix}}

% - colors
\gdef\redhash{#D0021B} \gdef\red{\color{\redhash}}
\gdef\brownhash{#8B572A}  \gdef\brown{\color{\brownhash}}
\gdef\greenhash{#7ED321} \gdef\green{\color{\greenhash}}
\gdef\bluehash{#4A90E2}  \gdef\blue{\color{\bluehash}}

% - size
\large
% -------------------
\begin{aligned}

Z\_\layer &=
\bmat{{\red \z11}\\\\{\green \z12}\\\\{\blue \z13}}
= \bmat{{\red \w\layer11}{\brown x\_1} + {\red \w\layer21}{\brown x\_2} + {\red \b\layer1}\\\\
{\green\w\layer12}{\brown x\_1} + {\green \w\layer22}{\brown x\_2} + {\green \b\layer2}\\\\
{\blue \w\layer13}{\brown x\_1} + {\blue \w\layer23}{\brown x\_2} + {\blue \b\layer3}}

\\\\
&\text{When finding the derivative of a weight component, focus on all }\\
&\text{the elements that depend on it, and then sum them up. Here each }\\
&\text{weight appears in just one component of the Z vector, but when }\\
&\text{you deal with batch input, you might want to remember this.}

\\\\
dW\_\layer
&= \bmat{\red d\w\layer11 \hspace{1em}  d\w\layer21 \\\\
\green d\w\layer12 \hspace{1em}  d\w\layer22 \\\\
\blue d\w\layer13 \hspace{1em}  d\w\layer23 }

= \bmat{\red {\Large \d{\z\layer1}{\w\layer11}}d\z\layer1 \hspace{1em}  {\Large \d{\z\layer1}{\w\layer21}} d\z\layer1 \\\\
\green {\Large \d{\z\layer2}{\w\layer12}}d\z\layer2 \hspace{1em}  {\Large \d{\z\layer2}{\w\layer22}} d\z\layer2 \\\\
\blue {\Large \d{\z\layer3}{\w\layer13}}d\z\layer3 \hspace{1em}  {\Large \d{\z\layer3}{\w\layer23}} d\z\layer3
}

= \bmat{\brown x\_1 \red d\z\layer1 \hspace{1em}  \brown x\_2 \red d\z\layer1 \\\\
\brown x\_1 \green d\z\layer2 \hspace{1em} \brown x\_2 \green d\z\layer2 \\\\
\brown x\_1 \blue d\z\layer3 \hspace{1em}  \brown x\_2 \blue d\z\layer3
} \\\\

% -------------------

&=\bmat{\red d\z\layer1 \brown x\_1 \hspace{1em}  \red d\z\layer1 \brown x\_2  \\\\
\green d\z\layer2 \brown x\_1  \hspace{1em} \green d\z\layer2 \brown x\_2 \\\\
\blue d\z\layer3 \brown x\_1 \hspace{1em}  \blue d\z\layer3 \brown x\_2
}

= \bmat{\red d\z\layer1 \hspace{1em} \cdots \hspace{1em} \\\\
\green d\z\layer2 \hspace{1em} \cdots \hspace{1em} \\\\
\blue d\z\layer3 \hspace{1em} \cdots \hspace{1em} }
\bmat{\brown x\_1 \hspace{1em} x\_2 \\\\
\vdots \hspace{1em} \vdots \\\\
\vdots \hspace{1em} \vdots
}

= \bmat{\red d\z\layer1 \\\\
\green d\z\layer2 \\\\
\blue d\z\layer3}\_{\layersize \times 1}
\bmat{\brown x\_1 \hspace{1em} x\_2}\_{1\times\prevlayersize}

\\\\
&= {dZ\_{\layer\_{\ (\layersize \times 1)}}} {(A\_{\prevlayer \_{\ (\prevlayersize \times 1)}})^T}

% -------------------
% dB_1 derivative
% -------------------
\\\\
dB\_\layer
&= \bmat{\red d\b\layer1 \\\\
\green d\b\layer2 \\\\
\blue d\b\layer3 }

= \bmat{\red {\Large \d{\z\layer1}{\b\layer1}}d\z\layer1   \\\\
\green {\Large \d{\z\layer2}{\b\layer2}}d\z\layer2\\\\
\blue {\Large \d{\z\layer3}{\b\layer3}}d\z\layer3
}

= \bmat{\red d\z\layer1   \\\\
\green d\z\layer2\\\\
\blue d\z\layer3
}

= {dZ\_{\layer\_{\ (\layersize \times 1)}}}

\end{aligned}
% -------------------
$$

{% hint style="info" %}

* $$\cdots$$ in the matrices represents matrix broadcasting, which numpy or tensorflow performs internally during the matrix multiplication.
* The shapes given here assume only one input example. If you pass more than a single point, the shapes will change.
* Even for a batch of inputs, the final vectorization stays the same, except that you need to divide the matrix product by the batch size. To see why, try working out the forward and backward propagation with a batch of inputs.
{% endhint %}
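The boxed results translate directly into NumPy. A minimal sketch for a single input example, with random placeholder values standing in for the real `A0` and `dZ1`:

```python
import numpy as np

rng = np.random.default_rng(0)
A0  = rng.standard_normal((2, 1))   # input x, shape (2, 1)
dZ1 = rng.standard_normal((3, 1))   # assumed already computed, shape (3, 1)

dW1 = dZ1 @ A0.T                    # (3, 1) @ (1, 2) -> (3, 2), same shape as W1
dB1 = dZ1                           # (3, 1), same shape as B1
```

Note how the outer product `dZ1 @ A0.T` reproduces every entry of the matrix written out above: row `i`, column `j` is exactly `dz_i * x_j`.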

### Layer - 2 (Second Hidden Layer)

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1ZUxs5VwF4rOySm92-%2F-M1_bYX8AsLU9VNe1Jg0%2Fimage.png?alt=media\&token=9a21841e-f024-45fb-b1a0-8b7584082b15)

We need to find $$\Large {\frac{\partial Z\_2}{\partial W\_2}}$$and $$\Large {\frac{\partial Z\_2}{\partial B\_2}}$$ in order to find $$dW\_2$$and $$dB\_2$$ as $$\boxed {dW\_2 = \frac{\partial Z\_2}{\partial W\_2}dZ\_2}$$and $$\boxed {dB\_2 = \frac{\partial Z\_2}{\partial B\_2}dZ\_2}$$. As mentioned previously, $$dA\_2$$is assumed to be pre-computed. ie., it would've been computed at the Output layer (next layer in the sequence) itself.

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1_ej9h8iUbqhoJQgaS%2F-M1dzhhmByDo9BFGiin5%2Fimage.png?alt=media\&token=e818ab03-8851-4267-a4c7-5e2febc076c6)

$$dW\_2\ and\ dB\_2$$ will be calculated exactly as above. But I would strongly recommend doing it again yourself to verify.

$$
\large \boxed {dW\_2 = dZ\_{2\_{\ (3 \times 1)}}\ {(A\_{1 \_{\ (3 \times 1)}})^T} } \hspace{2em}

\large \boxed {dB\_2 = dZ\_{2\_{\ (3 \times 1)}}} \hspace{2em}

\large \boxed {dA\_1=\ ?}
$$

$$
\gdef\layer{2}
\gdef\layersize{3}
\gdef\prevlayersize{3}
\gdef\prevlayer{1}
\gdef\w#1#2{w^{\layer}\_{#1#2}}
\gdef\a#1{a^{\prevlayer}\_#1}
\gdef\b#1{b^{\layer}\_#1}
\gdef\z#1{z^{\layer}\_#1}

% - derivative
\gdef\d#1#2 {\frac {\partial#1} {\partial#2} }

% - Matrix
\gdef\bmat#1{\begin{bmatrix}#1\end{bmatrix}}
\gdef\mat#1{\begin{matrix}#1\end{matrix}}

% - colors
\gdef\redhash{#D0021B} \gdef\red{\color{\redhash}}
\gdef\brownhash{#8B572A}  \gdef\brown{\color{\brownhash}}
\gdef\greenhash{#7ED321} \gdef\green{\color{\greenhash}}
\gdef\bluehash{#4A90E2}  \gdef\blue{\color{\bluehash}}

% - size
\large
% -------------------
\begin{aligned}
% -------------------
Z\_\layer &=
\bmat{{\red \z1}\\\\{\green \z2}\\\\{\blue \z3}}
= \bmat{{\red \w11}{\brown \a1} + {\red \w21}{\brown \a2} + {\red \w31}{\brown \a3} + {\red \b1}\\\\
{\green\w12}{\brown \a1} + {\green \w22}{\brown \a2} + {\green\w32}{\brown \a3} + {\green \b2}\\\\
{\blue \w13}{\brown \a1} + {\blue \w23}{\brown \a2} + {\blue \w33}{\brown \a3} + {\blue \b3}}

\\\\
&\text{We will find the derivatives for the individual components of }A\_1. \\
&\text{As multiple variables depend on a single component, we will }\\
&\text{just sum up their contributions. }Ex: a^\prevlayer\_1 \text{ appears in } \z1, \z2 \ and\ \z3 \\
&\text{through }\w11, \w12 \ and\ \w13 \text{, so while computing its gradient, we sum those terms.}

% -------------------

\\\\

% -------------------
dA\_\prevlayer
&= {\brown \bmat{d\a1 \\\\ d\a2 \\\\ d\a3}}

= \bmat{ \brown {\Large \d{\z1}{\a1}} \red d\z1 +
\brown{\Large  \d{\z2}{\a1}} \green d\z2 +
\brown{\Large  \d{\z3}{\a1}} \blue d\z3  \\\\

\brown {\Large \d{\z1}{\a2}} \red d\z1 +
\brown{\Large  \d{\z2}{\a2}} \green d\z2 +
\brown{\Large  \d{\z3}{\a2}} \blue d\z3  \\\\

\brown {\Large \d{\z1}{\a3}} \red d\z1 +
\brown{\Large  \d{\z2}{\a3}} \green d\z2 +
\brown{\Large  \d{\z3}{\a3}} \blue d\z3
}

= \bmat{ {\red \w11 d\z1} + {\green \w12 d\z2} + {\blue \w13 d\z3}\\\\
{\red \w21 d\z1} + {\green \w22 d\z2} + {\blue \w23 d\z3}\\\\
{\red \w31 d\z1} + {\green \w32 d\z2} + {\blue \w33 d\z3}
} \\\\

&=\bmat{\red \w11 & \green \w12 & \blue \w13 \\\\
\red \w21 & \green \w22 & \blue \w23 \\\\
\red \w31 & \green \w32 & \blue \w33 }
\bmat{\red d\z1 \\\\ \green d\z2 \\\\ \blue d\z3}

= ({W\_{\layer\_{\ (\layersize \times \prevlayersize)}}})^T \ {dZ\_{\layer\_{\ (\layersize \times 1)}}}

\end{aligned}
% -------------------
$$
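The sum-then-factor step above is exactly a transpose multiply. A quick sketch with placeholder values, checking the element-wise sums against the vectorized form:

```python
import numpy as np

rng = np.random.default_rng(1)
W2  = rng.standard_normal((3, 3))   # layer-2 weights: z_j = sum_k W2[j, k] * a_k
dZ2 = rng.standard_normal((3, 1))   # assumed already computed

# Element-wise: dA1[k] sums the contributions of a^1_k to z^2_1 .. z^2_3.
dA1_manual = np.array([[sum(W2[j, k] * dZ2[j, 0] for j in range(3))]
                       for k in range(3)])

dA1 = W2.T @ dZ2                    # the vectorized form from the derivation
```

The two agree entry for entry, which is the whole point of the factoring step.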

Lastly, we need to calculate a different kind of gradient at the final layer, ie., $$dZ\_3$$.

### Output Layer

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1ecQFGDEXTXHxKUgVP%2F-M1edATkrUq5PJofHw3q%2Fimage.png?alt=media\&token=7873fe4b-f849-43dd-ac77-320f39bb0a67)

Unlike the hidden layers, the derivative $${dZ\_3}$$ is a little different, because $$A\_3$$ connects directly to the loss function; the expression we used previously doesn't hold here.

$$
\gdef\redhash{#D0021B} \gdef\red{\color{\redhash}}
\gdef\greenhash{#417505} \gdef\green{\color{\greenhash}}

\gdef\d#1#2{\frac{\partial#1} {\partial#2}}
\gdef\layer{1}
\gdef\prevlayer{0}
\gdef\black{\textcolor{black}}

\begin{aligned}
\red{dZ\_3} &= \red {\d{A\_3}{Z\_3}}{\d{L}{A\_3}} \\\\

&=  {\red \d{A\_3}{Z\_3} \frac {\partial}{\partial A\_3}}
\begin{Bmatrix}
-log({\green{A\_3}}) \hspace{2.5em} \text{ if, } {Y = 1} \\
-log({\green{1-A\_3}}) \hspace{1em} \text{ if, } {Y = 0}
\end{Bmatrix} \\\\

&=  {\red \d{A\_3}{Z\_3} }
\begin{Bmatrix}
-{\Large\green{\frac{1}{A\_3}}} \hspace{2.5em} \text{ if, } {Y = 1} \\
{\Large\green{\frac{1}{1-A\_3}}} \hspace{1.3em} \text{ if, } {Y = 0}
\end{Bmatrix} \\\\

&=  {\green {A\_3(1-A\_3)} }
\begin{Bmatrix}
-{\Large\green{\frac{1}{A\_3}}} \hspace{2.5em} \text{ if, } {Y = 1} \\
{\Large\green{\frac{1}{1-A\_3}}} \hspace{1.3em} \text{ if, } {Y = 0}
\end{Bmatrix}  \\\\
&= \begin{Bmatrix}
{\green -(1-A\_3)} \hspace{2em} \text{if, } Y=1\\
{\green  A\_3} \hspace{5.3em} \text{if, } Y=0\
\end{Bmatrix} \\\\

&= \begin{Bmatrix}
{\green A\_3-1} \hspace{2em} \text{if, } Y=1\\
{\green  A\_3} \hspace{3.8em} \text{if, } Y=0\
\end{Bmatrix} \\\\

&= \green A\_3-Y

\end{aligned}
$$

The gradients are in red and the inputs are in green. To compute $$\red {dZ\_3}$$, ie., the derivative of the loss w\.r.t $$Z\_3$$, we find the intermediate gradients and multiply them (chain rule of differentiation). The remaining derivatives for $$W\_3,A\_2, \text{and}\ B\_3$$ follow the same pattern as the other layers and are given below.
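As a quick sanity check on the tidy $$A\_3 - Y$$ result (toy scalar values, not the network's actual numbers): nudge $$Z\_3$$ slightly, recompute the loss, and compare the measured slope:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(z3, y):
    # Loss as a function of the pre-activation Z3.
    a3 = sigmoid(z3)
    return -(y * np.log(a3) + (1 - y) * np.log(1 - a3))

z3, y = 0.4, 1.0
dZ3 = sigmoid(z3) - y               # the derived result: A3 - Y

# Central finite difference of the loss w.r.t. Z3.
eps = 1e-6
dZ3_num = (bce(z3 + eps, y) - bce(z3 - eps, y)) / (2 * eps)
```

The analytical and numerical slopes match to several decimal places, for either value of `y`.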

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M1et7BM5SjSp7ugmvmk%2F-M1etPjtohLDyWgK5iW3%2Fimage.png?alt=media\&token=65c80359-ee3b-4f5d-8929-ca16c90dde96)

$$
\large \boxed {dW\_3 = dZ\_{3\_{\ (1 \times 1)}}\ {(A\_{2 \_{\ (3 \times 1)}})^T} } \hspace{0.5em}

\large \boxed {dB\_3 = dZ\_{3\_{\ (1 \times 1)}}} \hspace{0.5em}

\large \boxed { dA\_2=( {W\_{3\_{\ (1 \times 3)}}} )^T  dZ\_{3\_{\ (1 \times 1)}} } \hspace{0.5em}
$$
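Putting all the boxed formulas together, one full backward pass for the 2-3-3-1 network on a single example could look like the sketch below. The randomly initialized parameters and the finite-difference spot check are illustrative stand-ins, not the assignment's actual code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
# Parameters for a 2-3-3-1 network (random stand-ins).
W1, B1 = rng.standard_normal((3, 2)), rng.standard_normal((3, 1))
W2, B2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 1))
W3, B3 = rng.standard_normal((1, 3)), rng.standard_normal((1, 1))

A0 = rng.standard_normal((2, 1))    # one input example
Y  = np.array([[1.0]])

# Forward pass, caching every intermediate.
Z1 = W1 @ A0 + B1; A1 = sigmoid(Z1)
Z2 = W2 @ A1 + B2; A2 = sigmoid(Z2)
Z3 = W3 @ A2 + B3; A3 = sigmoid(Z3)

# Backward pass: the boxed formulas, layer by layer.
dZ3 = A3 - Y                        # output layer is special
dW3 = dZ3 @ A2.T;  dB3 = dZ3
dA2 = W3.T @ dZ3

dZ2 = dA2 * A2 * (1 - A2)           # element-wise sigmoid derivative
dW2 = dZ2 @ A1.T;  dB2 = dZ2
dA1 = W2.T @ dZ2

dZ1 = dA1 * A1 * (1 - A1)
dW1 = dZ1 @ A0.T;  dB1 = dZ1        # dA0 never needed: A0 is the input

# Finite-difference spot check on a single weight entry.
def loss_with(w1_00):
    W1p = W1.copy(); W1p[0, 0] = w1_00
    a1 = sigmoid(W1p @ A0 + B1)
    a2 = sigmoid(W2 @ a1 + B2)
    a3 = sigmoid(W3 @ a2 + B3)
    return (-(Y * np.log(a3) + (1 - Y) * np.log(1 - a3)))[0, 0]

eps = 1e-6
num = (loss_with(W1[0, 0] + eps) - loss_with(W1[0, 0] - eps)) / (2 * eps)
```

Every gradient has the same shape as the parameter it updates, and the numerical slope `num` agrees with `dW1[0, 0]`, which is a good end-to-end check before moving on to the class-based implementation.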

### Pseudo code

```python
class DNN():
    ...
    ...

    def forward(self, X):
        """
        X : input
        returns out (last layer), cache
        """

    def compute_gradients(self, out, cache):
        """
        out : output of the last layer (output layer), ie., A_3
        returns: dictionary that has the gradients for all
        the variables
        """
        grads = {}

        # output layer
        cur_layer = 3
        grads[f'Z{cur_layer}'] = ...  # grads['Z3']
        # compute grads for 'W3', 'B3' and 'A2' as well

        # gradients for the other layers:
        # loop from the last hidden layer to the first hidden layer,
        # compute and store the gradients in the `grads` dictionary.
        # ex: for the second hidden layer, we need
        #     1. `A2` and `dA2` (computed in the prev loop) --> to calculate `dZ2`
        #     2. `dZ2` and `A1` --> to calculate `dW2`
        #     3. `dZ2` --> to calculate `dB2`
        #     4. `W2` and `dZ2` --> to calculate `dA1` (useful in the next loop)
        # so, during forward propagation, we need to cache A2 (output),
        # A1 (input), W2 and Z2

        # for the first hidden layer, computing `dA0` (the input's
        # gradient) is not required.
        return grads

    ...
    ...
```

{% hint style="warning" %}
For batch input, the gradients will change as follows

* $$dB\_3$$ has multiple columns instead of a single column; each column is the gradient for one training example. If the batch size is $$m$$, there will be $$m$$ columns, so to get the overall gradient we sum them up and then divide by $$m$$ to get the average.
* $$dW\_3$$ already accumulates the gradients across examples in each entry during the matrix multiplication, so its shape doesn't change; we just need to divide by $$m$$ to get the average gradient.

You will understand this if you work out an example with more than one input. In this case the input shape is $$2 \times m$$, where $$m$$ is the number of input examples.
{% endhint %}

You need to write the program so that it works on batch input rather than a single input. Even though I explained the changes to the formulas, it's better to work them out with an example before implementing them.
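As a sketch of what the batch-averaged output-layer formulas look like in NumPy (illustrative shapes only, with `m = 5` placeholder examples stacked as columns):

```python
import numpy as np

m = 5                                # batch of 5 examples
rng = np.random.default_rng(7)
A2  = rng.standard_normal((3, m))    # layer-2 activations, one column per example
dZ3 = rng.standard_normal((1, m))    # one gradient column per example

dW3 = (dZ3 @ A2.T) / m               # (1, 3): the matmul already sums over examples
dB3 = dZ3.sum(axis=1, keepdims=True) / m   # (1, 1): sum the columns, then average
```

The weight gradient keeps its single-example shape because the matrix product sums the per-example outer products for you; only the bias gradient needs an explicit column sum.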

## Task

1. Take a batch of inputs of shape (2 x m), where m is the number of input samples, work out the forward and backward propagation, and change the gradient formulas wherever necessary.
2. Go through all the gradient formulas, identify the matrices each layer needs in order to compute its gradients, and take note of them.
3. Modify the forward propagation to store all the intermediate matrices you identified in the above step. (You can use a dictionary with keys of the form `variable+layer_number`, e.g., `W1` for the weight matrix at the first hidden layer, `A2` for the outputs of the second hidden layer, etc.)
4. Write a function `gradients` that calculates the gradients for all the weight and bias matrices at all the layers. The cache you stored during forward propagation will come in handy here.
