ffsim.optimize

Optimization algorithms.

ffsim.optimize.minimize_linear_method(params_to_vec, hamiltonian, x0, *, maxiter=1000, lindep=1e-08, epsilon=1e-08, ftol=1e-08, gtol=1e-05, regularization=0.0001, variation=0.5, optimize_regularization=True, optimize_variation=True, optimize_kwargs=None, callback=None)

Minimize the energy of a variational ansatz using the linear method. In each iteration, the method expands the wavefunction to first order in the parameters and obtains the parameter update by solving a generalized eigenvalue problem in the subspace spanned by the current state and its parameter derivatives.

Parameters:
  • params_to_vec (Callable[[ndarray], ndarray]) – Function representing the wavefunction ansatz. It takes as input a vector of real-valued parameters and outputs the state vector represented by those parameters.

  • hamiltonian (LinearOperator) – The Hamiltonian representing the energy to be minimized.

  • x0 (ndarray) – Initial guess for the parameters.

  • maxiter (int) – Maximum number of optimization iterations to perform.

  • lindep (float) – Linear dependency threshold to use when solving the generalized eigenvalue problem.

  • epsilon (float) – Increment to use for approximating the gradient using finite difference.

  • ftol (float) – Convergence threshold for the objective function value. The optimization stops when (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= ftol.

  • gtol (float) – Convergence threshold for the gradient. The optimization stops when max{|g_i| : i = 1, …, n} <= gtol.

  • regularization (float) – Hyperparameter controlling regularization of the energy matrix. Its value must be positive. A larger value results in greater regularization.

  • variation (float) – Hyperparameter controlling the size of parameter variations used in the linear expansion of the wavefunction. Its value must be strictly between 0 and 1. A larger value results in larger variations.

  • optimize_regularization (bool) – Whether to optimize the regularization hyperparameter in each iteration. Optimizing hyperparameters incurs more function and energy evaluations in each iteration, but may improve convergence. The optimization is performed using scipy.optimize.minimize.

  • optimize_variation (bool) – Whether to optimize the variation hyperparameter in each iteration. Optimizing hyperparameters incurs more function and energy evaluations in each iteration, but may improve convergence. The optimization is performed using scipy.optimize.minimize.

  • optimize_kwargs (dict | None) –

    Arguments to use when calling scipy.optimize.minimize to optimize hyperparameters. The call is constructed as

    scipy.optimize.minimize(f, x0, **optimize_kwargs)
    

    If not specified, the default value of dict(method="L-BFGS-B") will be used.

  • callback (Optional[Callable[[OptimizeResult], Any]]) –

    A callable called after each iteration. It is called with the signature

    callback(intermediate_result: OptimizeResult)
    

    where intermediate_result is a scipy.optimize.OptimizeResult with attributes x and fun, the present values of the parameter vector and objective function. For all iterations except the last, it also contains the jac attribute holding the present value of the gradient, as well as regularization and variation attributes holding the present values of the regularization and variation hyperparameters. A minimal callback is sketched just after this parameter list.
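For example, a callback can record the energy at each iteration. This is a minimal sketch; the names history and record_energy are illustrative and not part of ffsim:

    from scipy.optimize import OptimizeResult

    history = []

    def record_energy(intermediate_result: OptimizeResult):
        # intermediate_result.fun holds the energy at the current iteration.
        history.append(intermediate_result.fun)

The function would then be passed as callback=record_energy.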

Return type:

OptimizeResult

Returns:

The optimization result represented as a scipy.optimize.OptimizeResult object. Note the following definitions of selected attributes:

  • x: The final parameters of the optimization.

  • fun: The energy associated with the final parameters. That is, the expectation value of the Hamiltonian with respect to the state vector corresponding to the parameters.

  • nfev: The number of times the params_to_vec function was called.

  • nlinop: The number of times the hamiltonian linear operator was applied to a vector.
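
Example:

The following is a minimal end-to-end sketch based only on the signature documented above. The Hamiltonian is a random Hermitian matrix wrapped as a scipy LinearOperator, and the ansatz maps real parameters to a state vector through a parameterized unitary; both are illustrative stand-ins rather than physically meaningful inputs:

    import numpy as np
    import scipy.linalg
    from scipy.sparse.linalg import aslinearoperator

    import ffsim

    # Toy Hamiltonian: a random 8x8 Hermitian matrix as a LinearOperator.
    rng = np.random.default_rng(1234)
    mat = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
    mat = 0.5 * (mat + mat.conj().T)
    hamiltonian = aslinearoperator(mat)

    def params_to_vec(params: np.ndarray) -> np.ndarray:
        # Toy ansatz: apply exp(G) to a fixed reference vector, where G is an
        # anti-Hermitian generator with the parameters in its upper triangle.
        generator = np.zeros((8, 8), dtype=complex)
        rows, cols = np.triu_indices(8, k=1)
        generator[rows[: len(params)], cols[: len(params)]] = params
        generator -= generator.conj().T
        reference = np.zeros(8, dtype=complex)
        reference[0] = 1.0
        return scipy.linalg.expm(generator) @ reference

    x0 = 0.1 * rng.standard_normal(5)
    result = ffsim.optimize.minimize_linear_method(
        params_to_vec, hamiltonian, x0=x0, maxiter=100
    )
    print("Energy:", result.fun)
    print("Ansatz evaluations:", result.nfev)
    print("Hamiltonian applications:", result.nlinop)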