pyml.neural_network.optimizer.adagrad.Adagrad#
- class Adagrad(learning_rate=1.0, decay=0.0, epsilon=1e-07)[source]#
Bases: _Optimizer
Adagrad Optimizer for Neural Networks.
This optimizer adapts the learning rate of each parameter based on its gradient history: it accumulates squared gradients and uses them to scale the learning rate of each parameter separately.
- Parameters:
learning_rate (float) – The initial learning rate. Defaults to 1.0.
decay (float) – The factor by which the learning rate decays as the iteration counter grows. Defaults to 0.0 (no decay).
epsilon (float) – A small constant added to the denominator to avoid division by zero. Defaults to 1e-07.
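A minimal training-loop sketch, shown under assumptions: the trainable_layers list and the gradient bookkeeping are illustrative only, and the post_update_parameters name is inferred from the naming pattern in the Methods summary below rather than documented here.

```python
from pyml.neural_network.optimizer.adagrad import Adagrad

optimizer = Adagrad(learning_rate=1.0, decay=1e-4, epsilon=1e-7)

trainable_layers = []  # assumed: replace with the network's trainable _Transformation layers

for epoch in range(10):
    # Forward and backward passes are assumed to have populated each layer's
    # gradients before the optimizer runs.
    optimizer.pre_update_parameters()       # update the current learning rate based on decay
    for layer in trainable_layers:
        optimizer.update_parameters(layer)  # Adagrad update for this layer's weights and biases
    optimizer.post_update_parameters()      # assumed name; advances the iteration counter
```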
Methods
- __init__: Initialize the optimizer with the given learning rate, decay, and epsilon.
- post_update_parameters: Updates the iteration counter after each layer update.
- pre_update_parameters: Update the current learning rate based on decay.
- update_parameters: Update the parameters of the given layer using the Adagrad optimization algorithm.
- pre_update_parameters()#
Update the current learning rate based on decay.
This method calculates and updates the current learning rate based on the decay factor and the number of iterations performed.
- Return type:
None
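A sketch of a common inverse time decay schedule consistent with this description; the exact formula used by pyml is not shown on this page, so treat it as an assumption.

```python
def decayed_learning_rate(learning_rate, decay, iterations):
    # With decay == 0.0 (the default) the learning rate stays constant;
    # otherwise it shrinks as the iteration counter grows.
    return learning_rate * (1.0 / (1.0 + decay * iterations))

# learning_rate=1.0, decay=1e-3: iteration 0 -> 1.0, iteration 1000 -> 0.5
```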
- update_parameters(layer)[source]#
Update the parameters of the given layer using the Adagrad optimization algorithm.
- Parameters:
layer (_Transformation) – The layer to update.
- Return type:
None
Note
If the layer does not have cache arrays for weights and biases, this method initializes them with zeros. It then updates the cache with squared gradients and performs parameter updates using Adagrad.
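A self-contained NumPy sketch of the per-array update described in the Note; the names weights, dweights, and weight_cache are assumptions used only for illustration, not the layer's documented attributes.

```python
import numpy as np

def adagrad_step(param, grad, cache, current_learning_rate, epsilon=1e-7):
    """One Adagrad step for a single parameter array.

    `cache` is the running sum of squared gradients, playing the role of the
    per-layer cache arrays mentioned in the Note above.
    """
    cache = cache + grad ** 2                                              # accumulate squared gradients
    param = param - current_learning_rate * grad / (np.sqrt(cache) + epsilon)
    return param, cache

# Toy example: the cache starts as zeros, as the Note describes.
weights = np.array([[0.5, -0.3], [0.1, 0.8]])
dweights = np.array([[0.2, -0.1], [0.05, 0.4]])
weight_cache = np.zeros_like(weights)
weights, weight_cache = adagrad_step(weights, dweights, weight_cache, current_learning_rate=1.0)
```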