rai_toolbox.optim.ClampedParameterOptimizer#

class rai_toolbox.optim.ClampedParameterOptimizer(params=None, InnerOpt=<class 'torch.optim.sgd.SGD'>, *, clamp_min=None, clamp_max=None, defaults=None, param_ndim=None, **inner_opt_kwargs)[source]#

A parameter optimizer that clamps the elements of each parameter so that they fall within user-specified bounds after InnerOpt.step() has updated the parameters.

Examples

Let’s perform a step with SGD, using a learning rate of 1.0, on our parameter and then clamp its elements to [-1.0, 3.0].

>>> import torch as tr
>>> from rai_toolbox.optim import ClampedParameterOptimizer
>>> x = tr.ones(2, requires_grad=True)
>>> optim = ClampedParameterOptimizer(params=[x], lr=1.0, clamp_min=-1.0, clamp_max=3.0)
>>> x.backward(gradient=tr.tensor([0.5, -10.0]))
>>> optim.step()
>>> x
tensor([0.5000, 3.0000], requires_grad=True)
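
Only one bound needs to be supplied (see the clamp_min / clamp_max descriptions below). The following is a minimal follow-up sketch, again assuming the default SGD inner optimizer; the expected values follow the same arithmetic as above, with only the upper bound enforced.

>>> y = tr.ones(2, requires_grad=True)
>>> optim = ClampedParameterOptimizer(params=[y], lr=1.0, clamp_max=1.0)
>>> y.backward(gradient=tr.tensor([0.25, -0.25]))
>>> optim.step()
>>> y
tensor([0.7500, 1.0000], requires_grad=True)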
__init__(params=None, InnerOpt=<class 'torch.optim.sgd.SGD'>, *, clamp_min=None, clamp_max=None, defaults=None, param_ndim=None, **inner_opt_kwargs)#
Parameters:
params : Sequence[Tensor] | Iterable[Mapping[str, Any]]

Iterable of parameters or dicts defining parameter groups.

InnerOpt : Type[Optimizer] | Partial[Optimizer], optional (default=`torch.optim.SGD`)

The optimizer that updates the parameters after their gradients have been transformed (a usage sketch follows this parameter list).

clamp_min : Optional[float]

Lower-bound of the range to be clamped to. Must be specified if clamp_max is None.

clamp_max : Optional[float]

Upper-bound of the range to be clamped to. Must be specified if clamp_min is None.

grad_scale : float, optional (default=1.0)

Multiplies each gradient in-place after the in-place transformation is performed. This can be specified per param-group.

grad_bias : float, optional (default=0.0)

Added to each gradient in-place after the in-place transformation is performed. This can be specified per param-group.

defaults : Optional[Dict[str, Any]]

Specifies default parameters for all parameter groups.

param_ndim : Optional[int]

Controls how _pre_step_transform_ and _post_step_transform_ are broadcast onto a given parameter. This has no effect for ClampedGradientOptimizer and ClampedParameterOptimizer.

**inner_opt_kwargs : Any

Named arguments used to initialize InnerOpt.
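
To illustrate InnerOpt together with **inner_opt_kwargs, here is a minimal sketch that assumes any torch Optimizer type (or partial) may be supplied; torch.optim.Adam is used, and its lr is forwarded through **inner_opt_kwargs.

>>> import torch as tr
>>> from rai_toolbox.optim import ClampedParameterOptimizer
>>> w = tr.zeros(3, requires_grad=True)
>>> optim = ClampedParameterOptimizer(
...     params=[w],
...     InnerOpt=tr.optim.Adam,
...     lr=0.1,
...     clamp_min=0.0,
...     clamp_max=1.0,
... )
>>> w.backward(gradient=tr.ones_like(w))
>>> optim.step()  # Adam updates w, then each element is clamped to [0.0, 1.0]

Note that lr is not an argument of ClampedParameterOptimizer itself: it is collected by **inner_opt_kwargs and used to initialize the Adam instance.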

Methods

__init__([params, InnerOpt, clamp_min, ...])