rai_toolbox.optim.ClampedGradientOptimizer#

class rai_toolbox.optim.ClampedGradientOptimizer(params=None, InnerOpt=<class 'torch.optim.sgd.SGD'>, *, clamp_min=None, clamp_max=None, defaults=None, param_ndim=None, **inner_opt_kwargs)[source]#

A gradient-transforming optimizer that clamps each element of a parameter's gradient to fall within user-specified bounds prior to using InnerOpt.step to update the corresponding parameter.

Examples

Let’s clamp each element of the parameter’s gradient to [-1, 3] prior to performing a step with SGD using a learning rate of 1.0.

>>> import torch as tr
>>> from rai_toolbox.optim import ClampedGradientOptimizer
>>> x = tr.ones(2, requires_grad=True)
>>> optim = ClampedGradientOptimizer(params=[x], lr=1.0, clamp_min=-1.0, clamp_max=3.0)
>>> x.backward(gradient=tr.tensor([-0.5, 10]))
>>> optim.step()
>>> x.grad
tensor([-0.5000,  3.0000])
>>> x
tensor([ 1.5000, -2.0000], requires_grad=True)
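The arithmetic behind the example can be sketched without the toolbox: clamp each gradient element to [clamp_min, clamp_max], then apply a plain SGD update (p -= lr * g). This is a minimal pure-Python illustration of the documented behavior, not the toolbox's implementation; the function name is hypothetical.

```python
def clamped_sgd_step(params, grads, lr=1.0, clamp_min=None, clamp_max=None):
    """Clamp each gradient element, then take one SGD step: p - lr * g."""
    new_params = []
    for p, g in zip(params, grads):
        if clamp_min is not None:
            g = max(g, clamp_min)
        if clamp_max is not None:
            g = min(g, clamp_max)
        new_params.append(p - lr * g)
    return new_params

# Mirrors the example above: x = [1, 1], grad = [-0.5, 10], bounds [-1, 3]
print(clamped_sgd_step([1.0, 1.0], [-0.5, 10.0], lr=1.0, clamp_min=-1.0, clamp_max=3.0))
# → [1.5, -2.0]
```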
__init__(params=None, InnerOpt=<class 'torch.optim.sgd.SGD'>, *, clamp_min=None, clamp_max=None, defaults=None, param_ndim=None, **inner_opt_kwargs)#
Parameters:
params : Sequence[Tensor] | Iterable[Mapping[str, Any]]

Iterable of parameters or dicts defining parameter groups.

InnerOpt : Type[Optimizer] | Partial[Optimizer], optional (default=`torch.optim.SGD`)

The optimizer that updates the parameters after their gradients have been transformed.

clamp_min : Optional[float]

Lower bound of the clamping range. Must be specified if clamp_max is None.

clamp_max : Optional[float]

Upper bound of the clamping range. Must be specified if clamp_min is None.

grad_scale : float, optional (default=1.0)

Multiplies each gradient in-place after the in-place transformation is performed. This can be specified per param-group.

grad_bias : float, optional (default=0.0)

Added to each gradient in-place after the in-place transformation is performed. This can be specified per param-group.
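Assuming grad_scale and grad_bias are applied after the clamping transform, as the descriptions above state, the per-element gradient processing can be sketched as follows. This is an illustrative sketch with a hypothetical function name, not the toolbox's implementation.

```python
def transform_grad(g, clamp_min=None, clamp_max=None, grad_scale=1.0, grad_bias=0.0):
    """Sketch of the per-element gradient pipeline: clamp, then scale, then bias."""
    # 1. clamp (the in-place transformation performed by this optimizer)
    if clamp_min is not None:
        g = max(g, clamp_min)
    if clamp_max is not None:
        g = min(g, clamp_max)
    # 2. multiply by grad_scale, 3. add grad_bias — both applied after the clamp
    return g * grad_scale + grad_bias

# grad 10 clamps to 3, then 3 * 2.0 + 1.0 = 7.0
print(transform_grad(10.0, clamp_max=3.0, grad_scale=2.0, grad_bias=1.0))
# → 7.0
```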

defaults : Optional[Dict[str, Any]]

Specifies default parameters for all parameter groups.

param_ndim : Optional[int]

Controls how _pre_step_transform_ and _post_step_transform_ are broadcast onto a given parameter. This has no effect for ClampedGradientOptimizer and ClampedParameterOptimizer.

**inner_opt_kwargs : Any

Named arguments used to initialize InnerOpt.
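To illustrate how **inner_opt_kwargs reach the inner optimizer, here is a hypothetical toy wrapper that simply forwards them to InnerOpt's constructor. The class names are invented for this sketch; the toolbox's actual class does more work, but the forwarding pattern is the same idea (e.g. lr=1.0 in the example above is consumed by SGD, not by the wrapper).

```python
class ToyInnerOpt:
    """Stand-in for an inner optimizer such as SGD (hypothetical)."""
    def __init__(self, params, lr=0.1, momentum=0.0):
        self.params, self.lr, self.momentum = params, lr, momentum

class ToyWrapper:
    """Stand-in for a gradient-transforming wrapper (hypothetical)."""
    def __init__(self, params, InnerOpt=ToyInnerOpt, **inner_opt_kwargs):
        # Named arguments like lr/momentum pass straight through to InnerOpt
        self.inner = InnerOpt(params, **inner_opt_kwargs)

opt = ToyWrapper([1.0, 2.0], lr=1.0, momentum=0.9)
```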

Methods

__init__([params, InnerOpt, clamp_min, ...])

Initialize the optimizer; see the Parameters section above.