BFGS#

class pyrost.BFGS(loss, x0, grad=None, epsilon=0.0001, c1=0.0001, c2=0.9, xtol=1e-14, beta=0.9, line_search='minpack')#

Minimize a function of one or more variables using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm.

Parameters
  • loss (Callable[[ndarray], float]) – Objective function to be minimized.

  • x0 (ndarray) – Initial guess.

  • grad (Optional[Callable[[ndarray], ndarray]]) – Gradient of the objective function. If None, the gradient is approximated numerically with step size epsilon.

  • epsilon (float) – Step size used when grad is approximated numerically.

  • c1 (float) – Parameter for the Armijo condition rule.

  • c2 (float) – Parameter for the curvature condition rule (both conditions are spelled out after this list).

  • xtol (float) – Relative tolerance for an acceptable step in the line search algorithm.

  • beta (float) – Exponential decay coefficient for the inverse Hessian matrix.

  • line_search (str) –

    Choose the implementation of the line search algorithm used to enforce the strong Wolfe conditions. The following keyword values are allowed:

    • minpack : MINPACK line search algorithm.

    • scipy : SciPy line search algorithm.
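For reference, the strong Wolfe conditions controlled by c1 and c2 are f(xk + a*pk) <= f(xk) + c1*a*gk^T pk (Armijo) and |g(xk + a*pk)^T pk| <= c2*|gk^T pk| (curvature), with 0 < c1 < c2 < 1. Below is a minimal usage sketch against the signature above; the quadratic objective is illustrative only and assumes pyrost exposes BFGS at the top level as shown:

    import numpy as np
    import pyrost

    # Toy objective: f(x) = ||x - 1||^2 with its analytic gradient.
    def loss(x):
        return float(np.sum((x - 1.0) ** 2))

    def grad(x):
        return 2.0 * (x - 1.0)

    # Construct the optimizer with the line search from the signature above.
    opt = pyrost.BFGS(loss, x0=np.zeros(3), grad=grad, line_search='minpack')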

grad(x)#

Return the gradient value of the objective function for a given argument.

Parameters

x (ndarray) – Argument value.

Return type

ndarray

Returns

The gradient value of the objective function.
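When no grad callable is supplied, the gradient must be approximated numerically with step epsilon. A sketch of a forward-difference approximation of this kind (an illustration, not necessarily pyrost's exact scheme):

    import numpy as np

    def approx_grad(loss, x, epsilon=1e-4):
        # Forward differences: one extra loss evaluation per coordinate.
        f0 = loss(x)
        g = np.empty(x.size, dtype=float)
        for i in range(x.size):
            xp = x.astype(float)  # astype makes a copy
            xp[i] += epsilon
            g[i] = (loss(xp) - f0) / epsilon
        return g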

loss(x)#

Return the objective value for a given argument.

Parameters

x (ndarray) – Argument value.

Return type

float

Returns

The objective value.
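Both accessors evaluate the wrapped callables; the fcount and gcount entries of state_dict() suggest that these evaluations are counted. A brief illustration, reusing the opt instance from the sketch above:

    x = np.zeros(3)
    val = opt.loss(x)  # objective value, a float
    g = opt.grad(x)    # gradient value, analytic or approximated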

state_dict()#

Returns the state of the optimizer as a dict.

Returns

A dictionary with all the parameters of the optimizer:

  • fcount : Number of function evaluations made.

  • gcount : Number of gradient evaluations made.

  • c1 : Parameter for Armijo condition rule.

  • c2 : Parameter for curvature condition rule.

  • xk : The current point.

  • fval : Objective value of the current point.

  • old_fval : Objective value of the point prior to xk.

  • gfk : Gradient value of the current point.

  • gnorm : Gradient norm value of the current point.

  • Hk : The current guess of the Hessian matrix.

  • epsilon : Step size used when grad is approximated numerically.

  • xtol : Relative tolerance for an acceptable step in the line search algorithm.

Return type

dict
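For example, the state can be used to monitor convergence between steps (a hypothetical snippet reusing opt from above):

    state = opt.state_dict()
    print(state['fval'], state['gnorm'])  # objective and gradient norm at xk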

step(maxiter=10, amin=1e-100, amax=1e+100)#

Performs a single optimization step.

Parameters
  • maxiter (int) – Maximum number of iterations of the line search algorithm to perform.

  • amin (float) – Minimum step size.

  • amax (float) – Maximum step size.
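step performs one BFGS update, so the outer loop is left to the caller. A hypothetical driver loop, stopping on the gradient norm reported by state_dict():

    for _ in range(100):
        opt.step(maxiter=10)
        if opt.state_dict()['gnorm'] < 1e-6:
            break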

update_loss(loss, grad=None)#

Update the objective function to minimize.

Parameters
  • loss (Callable[[ndarray], float]) – Objective function to be minimized.

  • grad (Optional[Callable[[ndarray], ndarray]]) – Gradient of the objective function. If None, the gradient is approximated numerically with step size epsilon.

Return type

None
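A hypothetical use: swap in a regularized objective while keeping the optimizer state. With grad omitted, the gradient is approximated with step epsilon as described above:

    new_loss = lambda x: float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x ** 2))
    opt.update_loss(new_loss)
    opt.step()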