pyml.neural_network.optimizer.rmsprop.RMSProp#

class RMSProp(learning_rate=0.001, decay=0.0, epsilon=1e-07, rho=0.9)[source]#

Bases: _Optimizer

RMSProp Optimizer for Neural Networks.

This optimizer uses the Root Mean Square Propagation (RMSProp) algorithm to adapt the learning rate of each parameter based on the historical squared gradients.

Parameters:
  • learning_rate (float, optional) – The initial learning rate, by default 0.001

  • decay (float, optional) – Learning rate decay factor, by default 0.0

  • epsilon (float, optional) – Small value added to the denominator to prevent division by zero, by default 1e-7

  • rho (float, optional) – Exponential moving average factor for squared gradients, by default 0.9
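
A minimal usage sketch; the trainable_layers list and the surrounding training loop are assumed here and are not part of this class:

    from pyml.neural_network.optimizer.rmsprop import RMSProp

    optimizer = RMSProp(learning_rate=0.001, decay=1e-4, rho=0.9)

    # One optimization step: decay the learning rate, update every layer,
    # then advance the iteration counter.
    optimizer.pre_update_parameters()
    for layer in trainable_layers:          # assumed list of trainable layers
        optimizer.update_parameters(layer)
    optimizer.post_update_parameters()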

Methods

  • __init__

  • post_update_parameters – Updates the iteration counter after each layer update.

  • pre_update_parameters – Update the current learning rate based on decay.

  • update_parameters – Update the parameters of the given layer using the RMSProp optimization algorithm.

post_update_parameters()#

Updates the iteration counter after each layer update.

Return type:

None
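
Conceptually this amounts to a single counter increment; the attribute name below is only an assumption about the internal state:

    self.iterations += 1  # assumed bookkeeping attribute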

pre_update_parameters()#

Update the current learning rate based on decay.

This method calculates and updates the current learning rate based on the decay factor and the number of iterations performed.

Return type:

None
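
The exact schedule is internal to the optimizer; a common inverse-time decay formulation, shown here purely as an assumption, looks like this:

    # Assumed decay schedule; attribute names are illustrative.
    self.current_learning_rate = self.learning_rate / (1.0 + self.decay * self.iterations)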

update_parameters(layer)[source]#

Update the parameters of the given layer using the RMSProp optimization algorithm.

Parameters:

layer (_Transformation) – The layer to update.

Return type:

None

Note

If the layer does not yet have cache arrays for its weights and biases, this method initializes them with zeros. It then updates the caches with an exponential moving average of the squared gradients, as prescribed by the RMSProp algorithm, and performs the parameter updates accordingly.
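
A sketch of that per-layer update, assuming NumPy arrays and illustrative attribute names (weights, biases, dweights, dbiases, weight_cache, bias_cache) that may differ from the actual _Transformation interface:

    import numpy as np

    def rmsprop_update(layer, lr, rho, epsilon):
        # Lazily create the cache arrays on the first call for this layer.
        if not hasattr(layer, "weight_cache"):
            layer.weight_cache = np.zeros_like(layer.weights)
            layer.bias_cache = np.zeros_like(layer.biases)

        # Exponential moving average of the squared gradients.
        layer.weight_cache = rho * layer.weight_cache + (1 - rho) * layer.dweights ** 2
        layer.bias_cache = rho * layer.bias_cache + (1 - rho) * layer.dbiases ** 2

        # Scale each gradient by the root of its accumulated squared magnitude.
        layer.weights -= lr * layer.dweights / (np.sqrt(layer.weight_cache) + epsilon)
        layer.biases -= lr * layer.dbiases / (np.sqrt(layer.bias_cache) + epsilon)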