In this article, we discuss a neural network with a single neuron: how does a single-neuron neural network work, what is its structure, and how does it compute its output?

A single-neuron neural network is often drawn as a circle, and this circle consists of two basic parts: the first part receives the inputs and computes a linear function of them, and the second part computes the output by applying a non-linear function.

**Anatomy of a Single Neuron**

Imagine a neuron as a circle, representing its basic structure. This circle can be divided into two fundamental components: the input processing section and the output computation section.

**Input Processing Section**

The first segment of the neuron is responsible for accepting inputs. Each input is multiplied by a corresponding weight value, which signifies the strength of that connection. Mathematically, this can be written as:

z = X1W1 + X2W2 + X3W3 + … + XnWn

Here, Xi represents the i-th input, and Wi signifies the weight associated with that input. This summation forms the linear function: the weighted sum of the inputs.

**Output Computation Section**

After computing the linear function, the output of this section is obtained by applying a non-linear function to z. In many neural networks, the sigmoid function is used as this non-linear activation function. The sigmoid squashes its input into the range between 0 and 1, enabling the neuron to mimic biological neuron behavior.

The sigmoid function is defined as:

σ(z) = 1 / (1 + e^(-z))

This function takes the linearly computed value z as input and produces a value between 0 and 1.
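The sigmoid definition above translates directly into code. Here is a minimal sketch using only the standard library:

```python
import math

def sigmoid(z):
    """Squash any real value z into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))   # 0.5: the midpoint of the output range
print(sigmoid(5))   # close to 1 for large positive z
print(sigmoid(-5))  # close to 0 for large negative z
```

Note how the output saturates toward 0 or 1 as z moves away from zero; this is the non-linearity the neuron relies on.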

**Single Neuron Handling Multiple Inputs**

Expanding the scope from a single input to multiple inputs doesn’t alter the fundamental computation of a neuron. Whether dealing with one or many inputs, the neuron’s objective remains constant: to generate a single output.

The linear function’s computation remains linear in terms of weight values, and it’s extended to handle multiple inputs:

**z = X1W1 + X2W2 + X3W3 + … + XnWn**

This calculated z is then passed through the sigmoid activation function to produce the neuron’s final output.
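The two-step computation described above (weighted sum, then sigmoid) can be sketched as a single function. The input values and weights here are arbitrary, chosen only for illustration:

```python
import math

def neuron_output(inputs, weights):
    """Weighted sum z = sum(x_i * w_i), followed by sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-z))

# Three inputs with three illustrative weights:
# z = 1.0*0.5 + 2.0*(-0.25) + 3.0*0.1 = 0.3
out = neuron_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1])
print(out)  # sigmoid(0.3) ≈ 0.574
```

However many inputs the list contains, the function still produces a single output in (0, 1), matching the point made above.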

**Constructing a Neural Network**

When creating a neural network, multiple neurons are interconnected to establish a network capable of more sophisticated computations. In a simple neural network diagram, every input connects to each neuron in a layer. Hidden layers, situated between the input and output layers, enhance the network’s capacity to capture intricate patterns.

In such a network, the output layer takes inputs from the neurons in the hidden layers. The outputs from the hidden layers undergo the linear function and sigmoid activation just like in a single neuron, and the final results are combined to generate the ultimate output.
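A small forward pass illustrates how layers chain together. The layer sizes and weight values below are illustrative assumptions, not taken from the article:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weight_rows):
    """Compute one layer: each row of weights feeds one neuron."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
            for row in weight_rows]

# Illustrative network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
hidden_w = [[0.2, -0.4], [0.7, 0.1], [-0.5, 0.6]]
output_w = [[0.3, -0.2, 0.8]]

x = [1.0, 2.0]
hidden = layer_forward(x, hidden_w)       # three sigmoid activations
output = layer_forward(hidden, output_w)  # final result in (0, 1)
print(output)
```

The output layer treats the hidden activations exactly as a single neuron treats raw inputs, which is the point made in the paragraph above.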

The input part computes a **linear function** and the output part computes a **non-linear function**. Each edge that connects an input value to a particular neuron carries a weight value.

Consider the simplest case, a neuron with a single input:

X1 denotes the first input.

W1 denotes the first weight.

The linear function computes W1 × X1, and after computing the linear function the neuron applies the non-linear sigmoid function defined above. So, if the output of the linear function is W1 × X1 = z, then the neuron's output is the value of the sigmoid function, σ(z).
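A short numeric sketch of this single-input case, with an arbitrary illustrative input and weight:

```python
import math

x1, w1 = 2.0, 0.4                     # illustrative input and weight
z = w1 * x1                           # linear part: z = 0.8
output = 1.0 / (1.0 + math.exp(-z))   # sigmoid(0.8) ≈ 0.69
print(output)
```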

This is the whole computation performed by a single neuron with one input. Next, let's understand how a single neuron handles multiple input values and multiple weights.

The fundamental nature of the neuron's computation does not change as the number of inputs grows: it generates a single output even when multiple inputs are present.

An important thing to understand here is that the function computed by a neuron is linear in its weight values.

**Computation of the Linear Function:**

z = X1W1 + X2W2 + X3W3 + … + XnWn

Then the non-linear function is applied: the value of **z** computed by the linear function is passed into the sigmoid to produce the output. This non-linear function is also called an activation function.

**Now we are going to create a complete neural network from scratch:**

As we can see in the diagram above, every input is connected to every node in a layer, and a neural network may contain multiple hidden layers. The output layer receives input from the hidden layer's neurons, computes the value of z' by applying the linear function, and then applies the non-linear function to generate the final output.

z' = W × σ(z1) + W' × σ(z2) + W'' × σ(z3)

Here σ(z1), σ(z2), and σ(z3) are the outputs of the hidden-layer neurons, and W, W', and W'' are the weights connecting them to the output neuron; their weighted sum forms the input to the output layer.
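The output-layer equation above can be evaluated numerically. The hidden-layer pre-activations and weights below are illustrative stand-ins (W2 and W3 play the roles of W' and W''):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative hidden-layer pre-activations and output-layer weights.
z1, z2, z3 = 0.5, -1.0, 2.0
W, W2, W3 = 0.3, 0.6, -0.2

# z' = W*sigmoid(z1) + W'*sigmoid(z2) + W''*sigmoid(z3)
z_prime = W * sigmoid(z1) + W2 * sigmoid(z2) + W3 * sigmoid(z3)
print(sigmoid(z_prime))  # final network output, always in (0, 1)
```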