pyml.neural_network.optimizer.sgd.SGD

class SGD(learning_rate=1, decay=0, momentum=0)[source]

Bases: _Optimizer

Stochastic Gradient Descent (SGD) Optimizer.

This optimizer performs stochastic gradient descent with optional momentum and learning rate decay.

Parameters:
  • learning_rate (float, optional) – The initial learning rate, by default 1

  • decay (float, optional) – The learning rate decay factor, by default 0

  • momentum (float, optional) – The momentum factor for gradient updates, by default 0

Raises:

OutsideSpecifiedRange – If the momentum value is outside the range [0, 1].
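
Example

A minimal per-training-step usage sketch. The layer passed to update_parameters stands in for a _Transformation layer whose gradients have already been computed by backpropagation, and the hyperparameter values shown are illustrative:

>>> from pyml.neural_network.optimizer.sgd import SGD
>>> optimizer = SGD(learning_rate=0.1, decay=1e-3, momentum=0.9)
>>> optimizer.pre_update_parameters()    # apply learning rate decay
>>> optimizer.update_parameters(layer)   # update this layer's weights and biases
>>> optimizer.post_update_parameters()   # advance the iteration counter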

Methods

__init__

post_update_parameters – Update the iteration counter after each layer update.

pre_update_parameters – Update the current learning rate based on decay.

update_parameters – Update the weights and biases of the given layer using SGD.

post_update_parameters()

Update the iteration counter after each layer update.

Return type:

None

pre_update_parameters()

Update the current learning rate based on decay.

This method calculates and updates the current learning rate based on the decay factor and the number of iterations performed.

Return type:

None
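
The exact decay schedule is not documented here; a common inverse-time decay consistent with these parameters is sketched below (an assumption about the formula, not confirmed from the source):

>>> learning_rate, decay, iterations = 1.0, 0.5, 2
>>> learning_rate * (1.0 / (1.0 + decay * iterations))  # decayed learning rate
0.5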

update_parameters(layer)[source]

Update the weights and biases of the given layer using SGD.

This method updates the weights and biases of the specified layer using stochastic gradient descent with optional momentum.

Parameters:

layer (_Transformation) – The layer to update.

Return type:

None

Note

If momentum is used and the layer does not yet have momentum arrays for its weights and biases, this method initializes them; updates are then performed using momentum. If momentum is not used, updates are performed without it.
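
A hedged sketch of the update rule the note describes; the attribute names (dweights, weight_momentums, and so on) are illustrative assumptions rather than confirmed names from the library:

import numpy as np

if momentum:
    # First use: create momentum buffers shaped like the layer's parameters (assumed attribute names).
    if not hasattr(layer, "weight_momentums"):
        layer.weight_momentums = np.zeros_like(layer.weights)
        layer.bias_momentums = np.zeros_like(layer.biases)
    # Blend the previous update (scaled by momentum) with the current gradient step.
    weight_updates = momentum * layer.weight_momentums - current_learning_rate * layer.dweights
    bias_updates = momentum * layer.bias_momentums - current_learning_rate * layer.dbiases
    layer.weight_momentums = weight_updates
    layer.bias_momentums = bias_updates
else:
    # Plain SGD step without momentum.
    weight_updates = -current_learning_rate * layer.dweights
    bias_updates = -current_learning_rate * layer.dbiases
layer.weights += weight_updates
layer.biases += bias_updates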