mxnet.optimizer.Adamax
class mxnet.optimizer.Adamax(learning_rate=0.002, beta1=0.9, beta2=0.999, **kwargs)[source]

The AdaMax optimizer.

It is a variant of Adam based on the infinity norm, described in Section 7 of http://arxiv.org/abs/1412.6980.
The optimizer updates the weight by:
    grad = clip(grad * rescale_grad + wd * weight, clip_gradient)
    m = beta1 * m_t + (1 - beta1) * grad
    u = maximum(beta2 * u, abs(grad))
    weight -= lr / (1 - beta1**t) * m / u
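As a non-authoritative illustration, the same rule can be written out in NumPy; the helper adamax_step and all values below are hypothetical and not part of the MXNet API:

    import numpy as np

    def adamax_step(weight, grad, m, u, t, lr=0.002, beta1=0.9, beta2=0.999,
                    wd=0.0, rescale_grad=1.0, clip_gradient=None):
        # One AdaMax step following the rule above (sketch, not MXNet's implementation).
        grad = grad * rescale_grad + wd * weight
        if clip_gradient is not None:
            grad = np.clip(grad, -clip_gradient, clip_gradient)
        m = beta1 * m + (1 - beta1) * grad               # first moment estimate
        u = np.maximum(beta2 * u, np.abs(grad))          # exponentially weighted infinity norm
        weight = weight - lr / (1 - beta1 ** t) * m / u  # bias-corrected step
        return weight, m, u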
This optimizer accepts the following parameters in addition to those accepted by Optimizer.

Parameters
beta1 (float, optional) – Exponential decay rate for the first moment estimates.
beta2 (float, optional) – Exponential decay rate for the second moment estimates.
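A minimal usage sketch, assuming the optimizer is selected through the Gluon API under its registered name 'adamax' (the one-layer network here is purely illustrative):

    import mxnet as mx
    from mxnet import gluon

    net = gluon.nn.Dense(1)
    net.initialize()
    # beta1/beta2 are forwarded to Adamax through the optimizer_params dict
    trainer = gluon.Trainer(net.collect_params(), 'adamax',
                            {'learning_rate': 0.002, 'beta1': 0.9, 'beta2': 0.999})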
__init__(learning_rate=0.002, beta1=0.9, beta2=0.999, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.
Methods
__init__([learning_rate, beta1, beta2])
    Initialize self.

create_optimizer(name, **kwargs)
    Instantiates an optimizer with a given name and kwargs.

create_state(index, weight)
    Creates auxiliary state for a given weight.

create_state_multi_precision(index, weight)
    Creates auxiliary state for a given weight, including FP32 high precision copy if original weight is FP16.

register(klass)
    Registers a new optimizer.

set_learning_rate(lr)
    Sets a new learning rate of the optimizer.

set_lr_mult(args_lr_mult)
    Sets an individual learning rate multiplier for each parameter.

set_lr_scale(args_lrscale)
    [DEPRECATED] Sets lr scale.

set_wd_mult(args_wd_mult)
    Sets an individual weight decay multiplier for each parameter.

update(index, weight, grad, state)
    Updates the given parameter using the corresponding gradient and state.

update_multi_precision(index, weight, grad, …)
    Updates the given parameter using the corresponding gradient and state.
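A minimal sketch of driving the low-level Optimizer API listed above directly (most training code goes through gluon.Trainer instead); the parameter index, shapes, and values are illustrative only:

    import mxnet as mx

    opt = mx.optimizer.Adamax(learning_rate=0.002)
    weight = mx.nd.ones((3,))
    grad = mx.nd.full((3,), 0.5)           # illustrative gradient
    state = opt.create_state(0, weight)    # zero-initialized moment buffers for parameter index 0
    opt.update(0, weight, grad, state)     # applies one AdaMax step to weight in place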
Attributes
learning_rate
opt_registry