Starting with weight = 0 and bias = 0 on a single neuron:

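As a minimal sketch of that setup (plain Python, with a hypothetical `single_neuron` helper rather than any framework code), the neuron computes ReLU(weight * x + bias), so with weight = 0 and bias = 0 it outputs 0 for every input:

```python
def relu(z):
    # ReLU activation: keep positive values, clamp negatives to 0
    return max(0.0, z)

def single_neuron(x, weight, bias):
    # One input, one weight, one bias, ReLU applied to the result
    return relu(weight * x + bias)

inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([single_neuron(x, weight=0.0, bias=0.0) for x in inputs])
# [0.0, 0.0, 0.0, 0.0, 0.0] -- a flat line at y = 0
```
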
Setting the weight to 1.0 while keeping the bias at 0:

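Reusing the same sketch, with weight = 1.0 and bias = 0 the neuron simply reproduces ReLU: negative inputs are clamped to 0 and positive inputs pass through unchanged:

```python
def single_neuron(x, weight, bias):
    return max(0.0, weight * x + bias)  # ReLU(weight * x + bias)

inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([single_neuron(x, weight=1.0, bias=0.0) for x in inputs])
# [0.0, 0.0, 0.0, 0.5, 1.0] -- negatives clipped to 0, positives pass through
```
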
Setting the bias to 0.5:

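With the bias at 0.5 (weight still 1.0), the point where the neuron starts activating shifts: the function becomes ReLU(x + 0.5), which turns on at x = -0.5 instead of x = 0:

```python
def single_neuron(x, weight, bias):
    return max(0.0, weight * x + bias)  # ReLU(weight * x + bias)

inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([single_neuron(x, weight=1.0, bias=0.5) for x in inputs])
# [0.0, 0.0, 0.5, 1.0, 1.5] -- the kink moves from x = 0 to x = -0.5
```
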
To flip the line horizontally, negate the weight, taking it from 1.0 to -1.0:

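Continuing the sketch (assuming the bias stays at 0.5 from the previous step), negating the weight mirrors the function about the vertical axis:

```python
def single_neuron(x, weight, bias):
    return max(0.0, weight * x + bias)  # ReLU(weight * x + bias)

inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([single_neuron(x, weight=-1.0, bias=0.5) for x in inputs])
# [1.5, 1.0, 0.5, 0.0, 0.0] -- the mirror image of the previous output
```
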
Bias applied to a neuron with a single input and a ReLU activation function

Now let’s introduce a second neuron with weight = 1.0 and bias = 1.0:

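Here is a sketch of that step, assuming the second neuron takes the first neuron’s ReLU output as its single input and that the first neuron keeps weight = -1.0 and bias = 0.5 from the previous steps:

```python
def neuron(x, weight, bias):
    return max(0.0, weight * x + bias)  # ReLU(weight * x + bias)

inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]
# First neuron: assumed to keep weight = -1.0, bias = 0.5 from before
hidden = [neuron(x, weight=-1.0, bias=0.5) for x in inputs]
# Second neuron: weight = 1.0, bias = 1.0, fed by the first neuron's output
output = [neuron(h, weight=1.0, bias=1.0) for h in hidden]
print(output)
# [2.5, 2.0, 1.5, 1.0, 1.0] -- the whole line is lifted up by the second bias
```
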
This caused a vertical shift: the second neuron’s bias moves the whole line up and down.

Let’s multiply the second neuron’s weight by -2 (i.e. set it to -2.0):

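Under the same assumptions as above (two chained neurons, first neuron at weight = -1.0 and bias = 0.5, second bias still 1.0), the new weight flips and rescales the first neuron’s output before the second bias and ReLU are applied:

```python
def neuron(x, weight, bias):
    return max(0.0, weight * x + bias)  # ReLU(weight * x + bias)

inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]
hidden = [neuron(x, weight=-1.0, bias=0.5) for x in inputs]   # first neuron (assumed values)
output = [neuron(h, weight=-2.0, bias=1.0) for h in hidden]   # second neuron, weight now -2.0
print(output)
# [0.0, 0.0, 0.0, 1.0, 1.0] -- flipped and rescaled, then clipped by the second ReLU
```
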
And now to compute the output:
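As a worked sketch of that computation, again under the same assumed values, the chain can be evaluated step by step for one sample input so each intermediate value is visible:

```python
def relu(z):
    return max(0.0, z)

x = 0.25                    # sample input (illustrative value)
h = relu(-1.0 * x + 0.5)    # first neuron: weight = -1.0, bias = 0.5 (assumed)
y = relu(-2.0 * h + 1.0)    # second neuron: weight = -2.0, bias = 1.0
print(h, y)
# 0.25 0.5 -- the first neuron's ReLU output, then the final output
```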