# Overview

Before diving into the implementation, let us define the terminology that we are going to follow throughout the network.

It is a simple version of a typical neural network, with two hidden layers of 3 neurons each and an output layer with one neuron. (The bias vectors are not shown in the diagram for the sake of simplicity.)

![](https://3125871907-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LskDNqFNx04llzI1sLA%2F-M0hXWQk4PdPaBijpY_O%2F-M0heNDOxNFP2dT_1Hx-%2Fimage.png?alt=media\&token=1ec0e0e8-82fe-4c38-a79f-d267eb58d054)

Here, $$\bold I \text{ is the input layer}$$, $$\bold H_1 \text{ and } \bold H_2 \text{ are the hidden layers}$$, and the last layer is the output layer, which we call $$\bold O$$.

$$\bold X$$ is the input matrix, which holds all the input vectors (dimensions will be discussed later), $$\bold {\hat{Y}}$$ is the predicted vector, and $$\bold Y$$ is the true vector.

Our loss function $$\bold {L(\hat Y,  Y)}$$ is simply the log loss in this case, since we are using a single output neuron for binary classification.
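As a minimal sketch of what this loss looks like in code (the clipping constant is an assumption to keep `log` numerically stable, not something specified in this chapter):

```python
import numpy as np

def log_loss(y_hat, y, eps=1e-12):
    """Binary cross-entropy (log loss), averaged over the samples.

    y_hat : predicted probabilities in (0, 1)
    y     : true labels in {0, 1}
    """
    y_hat = np.clip(y_hat, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```

For example, a confident correct prediction gives a small loss: `log_loss(np.array([0.9]), np.array([1]))` is about `0.105`, i.e. `-log(0.9)`.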

{% hint style="info" %}
Even if the output layer has more than one neuron, we can use cross-entropy as our loss function. Cross-entropy for two classes is exactly equivalent to log loss.
{% endhint %}
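The equivalence in the hint above can be checked numerically. The snippet below is a sketch: it compares log loss on a single probability against two-class cross-entropy on the probability pair $$[1-p, p]$$ with a one-hot label.

```python
import numpy as np

y_hat = 0.8   # predicted probability of class 1
y = 1         # true label

# Log loss with a single output neuron.
log_loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Cross-entropy with two output probabilities and a one-hot label.
probs = np.array([1 - y_hat, y_hat])   # [P(class 0), P(class 1)]
onehot = np.array([0, 1])              # one-hot encoding of y = 1
cross_entropy = -np.sum(onehot * np.log(probs))

assert np.isclose(log_loss, cross_entropy)
```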

Lastly, the connections between consecutive layers are associated with weights, which we will learn using backpropagation. Instead of representing them as individual weights, we compose them into one matrix per layer, which makes the computation easier during forward and backward propagation.
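To make the idea of composing weights into matrices concrete, here is a sketch for the architecture above (two hidden layers of 3 neurons, one output neuron). The input dimension and the `(outputs, inputs)` shape convention are assumptions for illustration; the actual dimensions are discussed in the next chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input = 4                         # assumed input dimension (not fixed in this chapter)
layer_sizes = [n_input, 3, 3, 1]    # I -> H1 -> H2 -> O

# One weight matrix and one bias vector per connection between layers,
# instead of keeping every weight as an individual scalar.
weights = [rng.standard_normal((layer_sizes[i + 1], layer_sizes[i])) * 0.01
           for i in range(len(layer_sizes) - 1)]
biases = [np.zeros((layer_sizes[i + 1], 1)) for i in range(len(layer_sizes) - 1)]

for W, b in zip(weights, biases):
    print(W.shape, b.shape)
```

With this layout, each layer's forward step is a single matrix-vector product rather than a loop over individual connections.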

In the next chapter, we will make sense of the dimensions of the input matrix, the weight matrices, and the output. Once you understand how these are represented, we can easily finish the forward propagation.

