Our neural network is not densely connected: it has 2 hidden layers with 8 neurons each, and the neurons correspond one-to-one across the input, layer 1, layer 2, and the output. We will use both hidden layers to fit line segments to the sine function.
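As a rough sketch of that wiring (the function and parameter names here are my own, and I'm assuming ReLU activations on both hidden layers with a plain sum at the output), each of the 8 neurons forms an independent path from input to output:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    """Forward pass through the sparse (one-to-one) network.

    Each of the 8 neurons is an independent path:
    x -> relu(w1*x + b1) -> relu(w2*h + b2) -> summed at the output.
    w1, b1, w2, b2 are arrays of shape (8,), one value per path.
    """
    h1 = relu(w1 * x + b1)   # layer 1: one weight/bias per path
    h2 = relu(w2 * h1 + b2)  # layer 2: one weight/bias per path
    return h2.sum()          # output neuron sums the 8 paths
```

Because the paths never mix, each one can be shaped independently, which is what the steps below exploit.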

We start by assigning baseline values, giving us this output.

Setting the hidden layers’ weight values to 1.0 produces this straight line (weights tend to control the slope of the line).

Increasing the weights increases the slope even more.
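In a single-path sketch (the `path` helper is my own, not from the illustration), the slope effect is easy to check: with all weights at 1.0 the path passes x through unchanged, and scaling a weight scales the slope.

```python
def path(x, w1=1.0, b1=0.0, w2=1.0, b2=0.0):
    """One of the 8 one-to-one paths: two ReLU neurons in series."""
    h = max(0.0, w1 * x + b1)     # layer 1 neuron
    return max(0.0, w2 * h + b2)  # layer 2 neuron

# With all weights at 1.0 the path is the identity for x > 0:
print(path(2.0))          # 2.0
# Doubling a weight doubles the slope:
print(path(2.0, w1=2.0))  # 4.0
```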

Increasing layer 2’s bias by a half nudges the entire line up by a half.
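Using the same single-path sketch as above (my own helper, assuming two ReLU neurons in series), adding 0.5 to the layer-2 bias lifts every point on the line by 0.5:

```python
def path(x, w1=1.0, b1=0.0, w2=1.0, b2=0.0):
    h = max(0.0, w1 * x + b1)     # layer 1 neuron
    return max(0.0, w2 * h + b2)  # layer 2 neuron

# Bias shifts the line vertically without changing its slope:
for x in (0.0, 1.0, 2.0):
    print(path(x), path(x, b2=0.5))  # right column is 0.5 higher
```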

Flipping the sign of layer 1’s weight flips the line (*explanation after the illustration).

ReLU note: y = 0 if x <= 0, else y = x. That is why the line does not descend below 0.
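A quick sketch of this clipping behavior: with layer 1's weight flipped to -1, the neuron computes relu(-x), so the line rises as x goes negative but is clipped to 0 on the positive side.

```python
def relu(x):
    return max(0.0, x)

# Flipped weight: the line now slopes up to the left of 0 ...
print(relu(-1 * -2.0))  # 2.0
# ... but ReLU clips it at 0 on the right, so it never goes below 0.
print(relu(-1 * 2.0))   # 0.0
```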



Moving on. Let’s flip layer 2’s weight to flip the line vertically.

To move that whole line upward by a half, we set the layer-2 bias of the bottommost neuron to 0.5.
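Putting the last two moves together in the single-path sketch (my own helper, assuming ReLU on both hidden layers): setting the layer-2 weight to -1 and its bias to 0.5 gives relu(0.5 - relu(x)), a segment that starts at 0.5 and slopes down to 0.

```python
def path(x, w1=1.0, b1=0.0, w2=1.0, b2=0.0):
    h = max(0.0, w1 * x + b1)     # layer 1 neuron
    return max(0.0, w2 * h + b2)  # layer 2 neuron

# w2 = -1 flips the line vertically; b2 = 0.5 lifts it by a half,
# producing a downward segment from 0.5 to 0:
for x in (0.0, 0.25, 0.5):
    print(path(x, w2=-1.0, b2=0.5))  # 0.5, 0.25, 0.0
```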

At this stage we have completed the first section of the process: the slope of the leftmost line roughly follows the contour of that section of the sine wave.

The aim is to reach the stage where every section follows the form of the sine wave.