pyml.neural_network.layer.transformation.dense.Dense#
- class Dense(input_size, output_size, weight_regularizer_l1=0, weight_regularizer_l2=0, bias_regularizer_l1=0, bias_regularizer_l2=0, alpha=0.01)[source]#
Bases: _Transformation, _TrainableTransformation
Dense (Fully Connected) Layer for Neural Networks.
This layer represents a fully connected (dense) layer in a neural network, where each input neuron is connected to each output neuron. The layer supports L1 and L2 weight and bias regularization.
The output for the j-th output neuron is computed as \({\text{output}}_{j}=\sum \limits _{i=1}^{n}x_{i}w_{ij}+b_{j}\), where \(b_{j}\) is the bias of the j-th output neuron.
The dense layer supports two regularization techniques, L1 and L2, both of which aim to prevent overfitting. L1 regularization, also called Lasso regression (in the context of regression), shrinks parameters towards zero and can drive some of them exactly to zero, making the corresponding features obsolete; this can be interpreted as a kind of feature selection. L2 regularization, also called Ridge regression (in the context of regression), shrinks the magnitude of the parameters without making them zero. In contrast to L1, L2 does not make features obsolete but reduces their impact.
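The two penalties can be sketched with NumPy. The `weights` matrix and regularizer strengths below are illustrative values, mirroring the `weight_regularizer_l1`/`weight_regularizer_l2` constructor arguments; the exact loss bookkeeping inside pyml may differ:

```python
import numpy as np

# Illustrative weight matrix and regularizer strengths (not pyml internals).
weights = np.array([[0.5, -1.0],
                    [2.0,  0.0]])
weight_regularizer_l1 = 0.01
weight_regularizer_l2 = 0.01

# L1 penalty: lambda * sum(|w|) -- pushes parameters exactly to zero.
l1_penalty = weight_regularizer_l1 * np.sum(np.abs(weights))

# L2 penalty: lambda * sum(w^2) -- shrinks parameters without zeroing them.
l2_penalty = weight_regularizer_l2 * np.sum(weights ** 2)

print(l1_penalty)  # 0.01 * 3.5  = 0.035
print(l2_penalty)  # 0.01 * 5.25 = 0.0525
```

Both penalties are added to the data loss, so larger regularizer strengths trade training fit for smaller parameters.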
- Parameters:
input_size (int) – Input size for this layer
output_size (int) – Output size equals the number of neurons per layer
weight_regularizer_l1 (float, optional) – L1-Regularizer strength for the weights. If set to zero, no L1 regularization for the weights will be applied, by default 0
weight_regularizer_l2 (float, optional) – L2-Regularizer strength for the weights. If set to zero, no L2 regularization for the weights will be applied, by default 0
bias_regularizer_l1 (float, optional) – L1-Regularizer strength for the bias. If set to zero, no L1 regularization for the bias will be applied, by default 0
bias_regularizer_l2 (float, optional) – L2-Regularizer strength for the bias. If set to zero, no L2 regularization for the bias will be applied, by default 0
alpha (float, optional) – Scaling factor that shrinks the magnitude of the initial weight values to improve training, by default 0.01
- Variables:
weights (numpy.ndarray) – Weight matrix for the connections between input and output neurons.
biases (numpy.ndarray) – Bias vector for the output neurons.
inputs (numpy.ndarray) – The input values received during the forward pass.
output (numpy.ndarray) – The output computed during the forward pass.
dweights (numpy.ndarray) – Gradient of the loss with respect to weights.
dbiases (numpy.ndarray) – Gradient of the loss with respect to biases.
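Putting the attributes together, a minimal sketch of how such a layer could hold its state; the scaled-random weight initialization via `alpha` follows the constructor description above, but pyml's actual initialization scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, output_size, alpha = 4, 3, 0.01

# alpha scales down the random initial weights, as described for the
# constructor; biases commonly start at zero.
weights = alpha * rng.standard_normal((input_size, output_size))
biases = np.zeros((1, output_size))

print(weights.shape)  # (4, 3): one column per output neuron
print(biases.shape)   # (1, 3): one bias per output neuron
```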
Methods

- __init__
- backward – Computes the backward step
- forward – Computes a forward pass
- Return parameters
- set_adjacent_layers – Set adjacent layers which are needed for the model to iterate through the layers.
- Sets the parameters for this layer
- backward(dvalues)[source]#
Computes the backward step
- Return type:
- Parameters:
dvalues (numpy.ndarray) – Derived gradient from the previous layer (reversed order).
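A common way to compute the gradients stored in `dweights` and `dbiases`, plus the gradient passed back to the previous layer, is the standard chain-rule derivation sketched below with toy numbers; this is not necessarily pyml's exact implementation:

```python
import numpy as np

# Toy batch: 2 samples, 3 input features, 2 output neurons.
inputs = np.array([[1.0,  2.0, 3.0],
                   [0.5, -1.0, 2.0]])
weights = np.full((3, 2), 0.1)
dvalues = np.ones((2, 2))  # gradient arriving from the next layer

# Gradients of the loss w.r.t. the parameters and the inputs.
dweights = inputs.T @ dvalues                 # (3, 2), same shape as weights
dbiases = dvalues.sum(axis=0, keepdims=True)  # (1, 2), summed over the batch
dinputs = dvalues @ weights.T                 # (2, 3), passed to previous layer

print(dweights.shape, dbiases.shape, dinputs.shape)
```

If regularization is enabled, the regularizer gradients (e.g. `2 * lambda * weights` for L2) would additionally be added to `dweights` and `dbiases`.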
- forward(inputs)[source]#
Computes a forward pass
The formula can be vectorized to improve performance: \(output = inputs \cdot weights + biases\).
- Return type:
- Parameters:
inputs (numpy.ndarray) – Input values from previous layer.
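The vectorized formula \(output = inputs \cdot weights + biases\) can be checked with a small NumPy example (toy numbers, not pyml internals):

```python
import numpy as np

inputs = np.array([[1.0, 2.0]])       # one sample with two features
weights = np.array([[0.5, -1.0],
                    [0.25, 0.5]])     # shape (input_size, output_size)
biases = np.array([[0.1, 0.2]])       # one bias per output neuron

# Matrix product covers the sum over i of x_i * w_ij for every j at once.
output = inputs @ weights + biases
print(output)  # [[1.1 0.2]]
```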
- set_adjacent_layers(previous_layer, next_layer)#
Set adjacent layers which are needed for the model to iterate through the layers.
- Parameters:
previous_layer (_Layer) – Layer that is previous to this layer.
next_layer (_Layer) – Layer that is subsequent to this layer.
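The adjacent-layer wiring amounts to a doubly linked list that the model can walk during forward and backward passes. A minimal sketch follows; the class and attribute names `previous_layer`/`next_layer` here are assumptions based on the parameter names above, not pyml's actual internals:

```python
class LayerSketch:
    """Minimal stand-in for a layer holding adjacency pointers."""

    def __init__(self, name):
        self.name = name
        self.previous_layer = None
        self.next_layer = None

    def set_adjacent_layers(self, previous_layer, next_layer):
        self.previous_layer = previous_layer
        self.next_layer = next_layer


# Wire up three layers into the chain a -> b -> c.
a, b, c = LayerSketch("a"), LayerSketch("b"), LayerSketch("c")
a.set_adjacent_layers(None, b)
b.set_adjacent_layers(a, c)
c.set_adjacent_layers(b, None)

# Walk forward through the chain, as a model would during a forward pass.
layer, order = a, []
while layer is not None:
    order.append(layer.name)
    layer = layer.next_layer
print(order)  # ['a', 'b', 'c']
```

A backward pass would walk the same chain from `c` via `previous_layer`.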