pyml.neural_network.optimizer.sgd.SGD
- class SGD(learning_rate=1, decay=0, momentum=0)[source]
Bases: _Optimizer
Stochastic Gradient Descent (SGD) Optimizer.
This optimizer performs stochastic gradient descent with optional momentum and learning rate decay.
- Parameters:
learning_rate (float) – The initial learning rate. Defaults to 1.
decay (float) – The learning rate decay factor. Defaults to 0 (no decay).
momentum (float) – The momentum factor, in the range [0, 1]. Defaults to 0 (no momentum).
- Raises:
OutsideSpecifiedRange – If the momentum value is outside the range [0, 1].
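A minimal usage sketch. The constructor call follows the signature above; the surrounding training-loop names (layers and the per-step call order) are assumptions for illustration, not confirmed by this page:

from pyml.neural_network.optimizer.sgd import SGD

# SGD with learning rate decay and momentum.
optimizer = SGD(learning_rate=1.0, decay=1e-3, momentum=0.9)

# Assumed per-step update cycle; `layers` is a hypothetical list of
# _Transformation objects whose gradients have already been computed.
optimizer.pre_update_parameters()       # apply learning rate decay
for layer in layers:
    optimizer.update_parameters(layer)  # SGD step on weights and biases
optimizer.post_update_parameters()      # name inferred from the methods
                                        # summary; advances the iteration counter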
Methods
- __init__(learning_rate, decay, momentum) – Initialize the optimizer.
- post_update_parameters() – Update the iteration counter after each layer update.
- pre_update_parameters() – Update the current learning rate based on decay.
- update_parameters(layer) – Update the weights and biases of the given layer using SGD.
- pre_update_parameters()
Update the current learning rate based on decay.
This method calculates and updates the current learning rate based on the decay factor and the number of iterations performed.
- Return type:
None
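A sketch of the decay computation this method performs. The inverse-decay schedule below is a common convention and an assumption here; this page does not specify the exact formula:

# Assumed inverse decay: the effective rate shrinks as iterations grow.
current_learning_rate = learning_rate * (1.0 / (1.0 + decay * iterations))

# Example: learning_rate=1.0, decay=1e-3 gives 0.5 after 1000 iterations.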
- update_parameters(layer)[source]
Update the weights and biases of the given layer using SGD.
This method updates the weights and biases of the specified layer using stochastic gradient descent with optional momentum.
- Parameters:
layer (_Transformation) – The layer to update.
- Return type:
None
Note
If momentum is enabled and the layer does not yet have momentum arrays for its weights and biases, this method initializes them before performing the momentum update. When momentum is zero, updates are performed without momentum.
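A sketch of the update rule described in the note above, following the standard SGD-with-momentum formulation; attribute names such as dweights and weight_momentums are assumptions for illustration, and the actual _Transformation API may differ:

import numpy as np

def sgd_momentum_step(layer, current_learning_rate, momentum):
    # Hypothetical attribute names (weights, dweights, weight_momentums);
    # gradients are assumed to be precomputed numpy arrays on the layer.
    if momentum:
        if not hasattr(layer, "weight_momentums"):
            # First update for this layer: start momentum buffers at zero.
            layer.weight_momentums = np.zeros_like(layer.weights)
            layer.bias_momentums = np.zeros_like(layer.biases)
        # Blend the previous update direction with the current gradient.
        weight_updates = (momentum * layer.weight_momentums
                          - current_learning_rate * layer.dweights)
        bias_updates = (momentum * layer.bias_momentums
                        - current_learning_rate * layer.dbiases)
        layer.weight_momentums = weight_updates
        layer.bias_momentums = bias_updates
    else:
        # Plain SGD: step directly against the gradient.
        weight_updates = -current_learning_rate * layer.dweights
        bias_updates = -current_learning_rate * layer.dbiases
    layer.weights += weight_updates
    layer.biases += bias_updates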