rai_toolbox.optim.LinfProjectedOptim#

class rai_toolbox.optim.LinfProjectedOptim(params, InnerOpt=<class 'torch.optim.sgd.SGD'>, *, epsilon=<required parameter>, param_ndim=None, grad_scale=1.0, grad_bias=0.0, defaults=None, **inner_opt_kwargs)[source]#

A gradient-transforming optimizer that constrains the updated parameter values to fall within \([-\epsilon, \epsilon]\).

A step with this optimizer takes the elementwise sign of a parameter’s gradient prior to using InnerOpt.step to update the corresponding parameter. The updated parameter is then clamped elementwise to \([-\epsilon, \epsilon]\).
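
Concretely, a single step is equivalent to the following sequence of elementwise operations. This is a minimal sketch in plain PyTorch, assuming a bare SGD inner update; x, lr, and epsilon are illustrative placeholders rather than part of this class’s API:

>>> import torch as tr
>>> x = tr.tensor([-1.0, 0.5], requires_grad=True)
>>> lr, epsilon = 1.0, 1.8
>>> (tr.tensor([2.0, -2.0]) * x).sum().backward()
>>> with tr.no_grad():
...     _ = x.grad.sign_()               # elementwise sign of the gradient
...     x -= lr * x.grad                 # the inner SGD update
...     _ = x.clamp_(-epsilon, epsilon)  # clamp into [-epsilon, epsilon]
>>> x
tensor([-1.8000,  1.5000], requires_grad=True)

LinfProjectedOptim packages these steps behind the standard Optimizer.step() interface, as the worked example below demonstrates.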

__init__(params, InnerOpt=<class 'torch.optim.sgd.SGD'>, *, epsilon=<required parameter>, param_ndim=None, grad_scale=1.0, grad_bias=0.0, defaults=None, **inner_opt_kwargs)[source]#
Parameters:
params : Sequence[Tensor] | Iterable[Mapping[str, Any]]

Iterable of parameters or dicts defining parameter groups.

InnerOpt : Type[Optimizer] | Partial[Optimizer], optional (default=`torch.optim.SGD`)

The optimizer that updates the parameters after their gradients have been transformed; see the sketch following this parameter list for passing a pre-configured (partial) optimizer.

epsilon : float

Specifies the size of the \(L^\infty\)-space ball that all parameters will be projected into, post optimization step.

param_ndim : Optional[int]

The clamp is performed elementwise, and thus param_ndim need not be adjusted.

grad_scale : float, optional (default=1.0)

Multiplies each gradient in-place after the in-place transformation is performed. This can be specified per param-group; see the Examples below.

grad_bias : float, optional (default=0.0)

Added to each gradient in-place after the in-place transformation is performed. This can be specified per param-group.

defaults : Optional[Dict[str, Any]]

Specifies default parameters for all parameter groups.

**inner_opt_kwargs : Any

Named arguments used to initialize InnerOpt.
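
Note that InnerOpt can also be supplied as a partial optimizer, in which case its arguments are bound up front instead of being passed through **inner_opt_kwargs. A minimal sketch, assuming Partial[Optimizer] refers to a functools.partial-wrapped optimizer class:

>>> from functools import partial
>>> import torch as tr
>>> from rai_toolbox.optim import LinfProjectedOptim
>>> x = tr.zeros(2, requires_grad=True)
>>> optim = LinfProjectedOptim([x], epsilon=1.0, InnerOpt=partial(tr.optim.SGD, lr=0.5))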

Examples

Let’s use LinfProjectedOptim along with a standard SGD step with a learning rate of 1.0. After the step, each parameter will have its values clamped to \([-1.8, 1.8]\).

>>> import torch as tr
>>> from rai_toolbox.optim import LinfProjectedOptim

We create a parameter for our optimizer to update, along with the optimizer itself. We specify epsilon=1.8 so that the parameters are projected into the desired domain.

>>> x = tr.tensor([-1.0, 0.5], requires_grad=True)
>>> optim = LinfProjectedOptim([x], epsilon=1.8, InnerOpt=tr.optim.SGD, lr=1.0)

We perform a simple calculation with x and backpropagate to create a gradient.

>>> (tr.tensor([2.0, -2.0]) * x).sum().backward()
>>> x.grad # the un-normed gradient
tensor([2., -2.])

Performing a step with our optimizer transforms the gradient in-place, updates the parameter using SGD([x], lr=1.0).step(), and then projects the parameter into the constraint set.

>>> optim.step()
>>> x.grad # the sign of the gradient
tensor([1., -1.])
>>> x  # the updated parameter
tensor([-1.8000,  1.5000], requires_grad=True)
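
As noted above, grad_scale and grad_bias can also be set per param-group by passing dicts to params. A minimal sketch, assuming the standard torch.optim param-group convention (the particular values here are illustrative):

>>> y = tr.zeros(2, requires_grad=True)
>>> z = tr.zeros(2, requires_grad=True)
>>> optim = LinfProjectedOptim(
...     [{"params": [y], "grad_scale": 2.0}, {"params": [z], "grad_bias": 0.1}],
...     epsilon=1.8,
...     lr=1.0,
... )

With this configuration, a step would scale y’s sign-transformed gradient by 2.0 and shift z’s by 0.1 before the inner SGD update and the final clamp.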

Methods

__init__(params[, InnerOpt, epsilon, ...])
