pyml.neural_network.layer.activation.relu.ReLU

class ReLU[source]

Bases: _Activation

Rectified linear unit (ReLU) activation function

The ReLU function is defined as follows: \(f(x) = x^{+} = \max(0, x) = \frac{x + |x|}{2} = \begin{cases} x & \text{if } x > 0, \\ 0 & \text{otherwise.} \end{cases}\)
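The piecewise definition and the closed form \((x + |x|)/2\) are equivalent. A minimal NumPy sketch, independent of this class, illustrating both forms:

    import numpy as np

    x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])

    # Piecewise definition: keep positive values, zero out the rest.
    relu_piecewise = np.maximum(0, x)

    # Closed form (x + |x|) / 2 yields the same values.
    relu_closed_form = (x + np.abs(x)) / 2

    assert np.allclose(relu_piecewise, relu_closed_form)
    print(relu_piecewise)  # [0.  0.  0.  1.5 3. ]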

Methods

__init__

backward

Computes the backward step

forward

Computes a forward pass

predictions

Converts outputs to predictions

set_adjacent_layers

Set the adjacent layers, which the model needs in order to iterate through the layers.

backward(dvalues)[source]

Computes the backward step

The derivative of the ReLU function is calculated as follows: \(f'(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x < 0. \end{cases}\)

Return type:

None

Parameters:

dvalues (numpy.ndarray) – Gradient of the loss propagated from the previous layer in the backward (reversed) order.
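A minimal NumPy sketch of this rule, independent of the library's internal code; it assumes the inputs seen during the forward pass were stored so the gradient can be zeroed where they were not positive:

    import numpy as np

    def relu_backward(dvalues, inputs):
        # The gradient passes through unchanged where the input was positive
        # and is zeroed elsewhere (derivative taken as 0 for x <= 0).
        dinputs = dvalues.copy()
        dinputs[inputs <= 0] = 0
        return dinputs

    inputs = np.array([[-1.0, 2.0], [0.5, -3.0]])
    dvalues = np.array([[0.1, 0.2], [0.3, 0.4]])
    print(relu_backward(dvalues, inputs))
    # [[0.  0.2]
    #  [0.3 0. ]]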

forward(inputs)[source]

Computes a forward pass

Return type:

None

Parameters:

inputs (numpy.ndarray) – Input values from the previous neural layer.
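A hedged usage sketch of the forward pass: the method returns None, so the activated values are presumably stored on the instance. The no-argument constructor and the output attribute used below are assumptions, not documented on this page:

    import numpy as np
    from pyml.neural_network.layer.activation.relu import ReLU

    relu = ReLU()
    relu.forward(np.array([[-1.0, 2.0], [0.5, -3.0]]))

    # Assumption: the activated values are exposed as an `output` attribute.
    print(relu.output)  # expected: [[0., 2.], [0.5, 0.]]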

predictions(outputs)[source]

Converts outputs to predictions

Returns the outputs unchanged. In practice, however, this activation function is rarely used as a final output function, for either regression or classification.

Return type:

ndarray

Parameters:

outputs (np.ndarray) – Outputs computed by the final activation function

Returns:

The same values that were passed to this method

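Since the method is an identity mapping, a small hedged check (again assuming ReLU() takes no constructor arguments) illustrates the documented behaviour:

    import numpy as np
    from pyml.neural_network.layer.activation.relu import ReLU

    outputs = np.array([[0.0, 2.0], [0.5, 0.0]])
    # Documented behaviour: the outputs are returned without any changes.
    assert np.array_equal(ReLU().predictions(outputs), outputs)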

set_adjacent_layers(previous_layer, next_layer)

Set the adjacent layers, which the model needs in order to iterate through the layers.

Parameters:
  • previous_layer (_Layer) – Layer that is previous to this layer.

  • next_layer (_Layer) – Layer that is subsequent to this layer.
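A hedged sketch of how a model might use this method to wire an ordered list of layers into a chain; the helper below and the use of None at the chain ends are assumptions for illustration only:

    def link_layers(layers):
        # Hypothetical wiring loop over an ordered list of layer instances,
        # each exposing set_adjacent_layers(previous_layer, next_layer).
        for i, layer in enumerate(layers):
            previous_layer = layers[i - 1] if i > 0 else None
            next_layer = layers[i + 1] if i < len(layers) - 1 else None
            layer.set_adjacent_layers(previous_layer, next_layer)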