pyml.neural_network.layer.transformation.dense.Dense#

class Dense(input_size, output_size, weight_regularizer_l1=0, weight_regularizer_l2=0, bias_regularizer_l1=0, bias_regularizer_l2=0, alpha=0.01)[source]#

Bases: _Transformation, _TrainableTransformation

Dense (Fully Connected) Layer for Neural Networks.

This layer represents a fully connected (dense) layer in a neural network, where each input neuron is connected to each output neuron. The layer supports L1 and L2 weight and bias regularization.

The output for the j-th neuron is computed as \({\text{output}}_{j}=\sum \limits _{i=1}^{n}x_{i}w_{ij}+b_{j}\).

The dense layer supports applying regularization, in particular L1 and L2. Both are regularization techniques that aim to prevent overfitting. L1 regularization, also called Lasso Regression (in the context of regression), shrinks parameters towards zero, driving some of them to exactly zero and thus making the corresponding features obsolete. This can be interpreted as a form of feature selection. L2 regularization, also called Ridge Regression (in the context of regression), shrinks the size of the parameters without making them zero. In contrast to L1, L2 does not make features obsolete, but reduces their impact.
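As an illustrative sketch (not the library's own API), the combined L1/L2 penalty that such a layer adds to the loss could be computed like this; the function name and signature are hypothetical:

```python
import numpy as np

def regularization_loss(weights, biases, l1_w=0.0, l2_w=0.0, l1_b=0.0, l2_b=0.0):
    """Sum of L1 and L2 penalties for a layer's weights and biases."""
    loss = 0.0
    if l1_w > 0:
        loss += l1_w * np.sum(np.abs(weights))  # L1: sum of absolute values
    if l2_w > 0:
        loss += l2_w * np.sum(weights ** 2)     # L2: sum of squares
    if l1_b > 0:
        loss += l1_b * np.sum(np.abs(biases))
    if l2_b > 0:
        loss += l2_b * np.sum(biases ** 2)
    return loss
```

A strength of zero disables the corresponding penalty, mirroring the constructor defaults above.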

Parameters:
  • input_size (int) – Input size for this layer

  • output_size (int) – Output size for this layer, i.e. the number of neurons in this layer

  • weight_regularizer_l1 (float, optional) – L1-Regularizer strength for the weights. If set to zero, no L1 regularization for the weights will be applied, by default 0

  • weight_regularizer_l2 (float, optional) – L2-Regularizer strength for the weights. If set to zero, no L2 regularization for the weights will be applied, by default 0

  • bias_regularizer_l1 (float, optional) – L1-Regularizer strength for the bias. If set to zero, no L1 regularization for the bias will be applied, by default 0

  • bias_regularizer_l2 (float, optional) – L2-Regularizer strength for the bias. If set to zero, no L2 regularization for the bias will be applied, by default 0

  • alpha (float, optional) – Scaling factor that shrinks the weight values during initialization to improve training, by default 0.01
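The role of alpha can be sketched as follows, assuming a scaled standard-normal weight initialization and zero-initialized biases (the library's actual scheme may differ):

```python
import numpy as np

input_size, output_size, alpha = 4, 3, 0.01
rng = np.random.default_rng(seed=0)

# alpha scales down the randomly drawn initial weight values
weights = alpha * rng.standard_normal((input_size, output_size))
# biases commonly start at zero
biases = np.zeros((1, output_size))
```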

Variables:
  • weights (numpy.ndarray) – Weight matrix for the connections between input and output neurons.

  • biases (numpy.ndarray) – Bias vector for the output neurons.

  • inputs (numpy.ndarray) – The input values received during the forward pass.

  • output (numpy.ndarray) – The output computed during the forward pass.

  • dweights (numpy.ndarray) – Gradient of the loss with respect to weights.

  • dbiases (numpy.ndarray) – Gradient of the loss with respect to biases.

Methods

__init__

backward

Computes the backward step

forward

Computes a forward pass

get_parameters

Return parameters

set_adjacent_layers

Set adjacent layers which are needed for the model to iterate through the layers.

set_parameters

Sets the parameters for this layer

backward(dvalues)[source]#

Computes the backward step

Return type:

None

Parameters:

dvalues (numpy.ndarray) – Gradient passed back from the subsequent layer (the layers are traversed in reverse order during backpropagation).
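A minimal sketch of standard dense-layer backpropagation, which is presumably what this step computes (regularization gradients omitted for brevity; variable names follow the attributes listed above):

```python
import numpy as np

inputs  = np.array([[1.0, 2.0]])         # saved during the forward pass
weights = np.array([[0.5, -0.5],
                    [0.25, 0.75]])
dvalues = np.array([[1.0, 1.0]])         # gradient from the subsequent layer

dweights = inputs.T @ dvalues                   # dL/dW, shape (input_size, output_size)
dbiases  = dvalues.sum(axis=0, keepdims=True)   # dL/db, summed over the batch
dinputs  = dvalues @ weights.T                  # gradient passed on to the previous layer
```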

forward(inputs)[source]#

Computes a forward pass

The formula can be vectorized to improve performance: \(output = inputs \cdot weights + biases\).

Return type:

None

Parameters:

inputs (numpy.ndarray) – Input values from previous layer.
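The vectorized formula above can be sketched with plain numpy (example values only):

```python
import numpy as np

inputs  = np.array([[1.0, 2.0],
                    [3.0, 4.0]])   # batch of 2 samples, input_size = 2
weights = np.array([[0.5, -0.5],
                    [0.25, 0.75]]) # shape (input_size, output_size)
biases  = np.array([[0.1, 0.2]])

# output = inputs · weights + biases, computed for the whole batch at once
output = inputs @ weights + biases
```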

get_parameters()[source]#

Return parameters

Returns:

A tuple containing the weights at the first index and the biases at the second index

Return type:

tuple[numpy.ndarray, numpy.ndarray]

set_adjacent_layers(previous_layer, next_layer)#

Set adjacent layers which are needed for the model to iterate through the layers.

Parameters:
  • previous_layer (_Layer) – Layer that is previous to this layer.

  • next_layer (_Layer) – Layer that is subsequent to this layer.

set_parameters(weights, biases)[source]#

Sets the parameters for this layer

Return type:

None

Parameters:
  • weights (numpy.ndarray) – Weights of this layer.

  • biases (numpy.ndarray) – Biases of this layer.