pyml.neural_network.layer.activation.linear.Linear

class Linear

Bases: _Activation

Linear activation function

The linear activation function is \(f(x) = x\), so its derivative is \(f'(x) = 1\) everywhere. Because the derivative is constant one, the incoming gradients (dvalues) from the previous layer can be passed through unchanged during the backpropagation step.
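
For illustration, a minimal sketch of what such a pass-through activation can look like. This is a simplified stand-in, not the package's actual source; the attribute names inputs, output, and dinputs are assumptions:

    import numpy as np

    class LinearSketch:
        """Simplified stand-in for the documented Linear activation."""

        def forward(self, inputs: np.ndarray) -> None:
            # f(x) = x: the output is the input itself
            self.inputs = inputs
            self.output = inputs

        def backward(self, dvalues: np.ndarray) -> None:
            # f'(x) = 1: pass the incoming gradient through unchanged
            self.dinputs = dvalues.copy()

        def predictions(self, outputs: np.ndarray) -> np.ndarray:
            # Identity: a linear activation needs no output conversion
            return outputs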

Methods

__init__

backward
    Computes the backward step

forward
    Computes a forward pass

predictions
    Converts outputs to predictions

set_adjacent_layers
    Set adjacent layers which are needed for the model to iterate through the layers.

backward(dvalues)

Computes the backward step

Since the derivative of a linear function is one everywhere, the incoming gradient values are kept unchanged.

Parameters:

dvalues (numpy.ndarray) – Gradient received from the previous layer in the backward (reversed) pass.

Return type:

None
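
A short usage sketch. The dinputs attribute name is an assumption; the docs only specify that backward returns None:

    import numpy as np
    from pyml.neural_network.layer.activation.linear import Linear

    act = Linear()
    act.backward(np.array([[0.5, -1.0, 2.0]]))
    # Assuming gradients are stored in `dinputs`, they are unchanged:
    # act.dinputs -> [[0.5, -1.0, 2.0]]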

forward(inputs)

Computes a forward pass

For a linear activation, the output is simply the input.

Parameters:

inputs (numpy.ndarray) – Input values from the previous neural layer.

Return type:

None
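
A short usage sketch. The output attribute name is an assumption; forward itself returns None:

    import numpy as np
    from pyml.neural_network.layer.activation.linear import Linear

    act = Linear()
    act.forward(np.array([[1.0, -2.0, 3.0]]))
    # Assuming the result is stored in `output`, it equals the input:
    # act.output -> [[1.0, -2.0, 3.0]]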

predictions(outputs)

Converts outputs to predictions

Since this is a linear activation function, the outputs are already the predictions and no conversion is needed.

Parameters:

outputs (numpy.ndarray) – Output computed by the linear activation function.

Returns:

The same values as passed to this method.

Return type:

numpy.ndarray
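
Because predictions documents a concrete return value, a usage sketch needs no attribute assumptions:

    import numpy as np
    from pyml.neural_network.layer.activation.linear import Linear

    act = Linear()
    preds = act.predictions(np.array([[0.2, 1.7]]))
    # Identity mapping: preds holds the same values, [[0.2, 1.7]]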

set_adjacent_layers(previous_layer, next_layer)

Set the adjacent layers, which the model needs in order to iterate through its layers.

Parameters:
  • previous_layer (_Layer) – Layer that precedes this layer.

  • next_layer (_Layer) – Layer that follows this layer.
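
A hedged wiring sketch. Only the two-argument signature is taken from the docs; using Linear instances as the neighbouring layers and a no-argument constructor are assumptions made purely for illustration:

    from pyml.neural_network.layer.activation.linear import Linear

    # Two Linear activations stand in for real neighbouring layers here;
    # in practice these would typically be trainable layers.
    previous_act = Linear()
    next_act = Linear()

    act = Linear()
    act.set_adjacent_layers(previous_layer=previous_act, next_layer=next_act)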