qubiter.adv_applications.MeanHamilMinimizer module
class qubiter.adv_applications.MeanHamilMinimizer.MeanHamilMinimizer(emp_mhamil, targ_mhamil, all_var_nums, init_var_num_to_rads, print_hiatus=1, verbose=False)

    Bases: qubiter.adv_applications.CostMinimizer.CostMinimizer
This class is a child of class CostMinimizer. Its purpose is to minimize a cost function which equals the mean value of a Hamiltonian. We refer to this task as the Mean Hamiltonian Minimization Problem.
The qc history of this problem started with quantum chemists planning to use on a qc the phase estimation algo invented by Kitaev (an algo that is also implemented in Qubiter) to estimate the energy levels (eigenvalues) of simple molecules, initially H2. Then a bunch of people realized, heck, rather than trying to estimate the eigenvalues of a Hamiltonian by estimating the phase changes it causes, we can estimate those eigenvalues more efficiently by estimating the mean value of that Hamiltonian as measured empirically on a qc. Basically, just the Rayleigh-Ritz method, one of the oldest tricks in the book. One of the first papers to propose this mean idea is https://arxiv.org/abs/1304.3061. Their algo is commonly referred to by the ungainly name VQE (Variational Quantum Eigensolver). VQE was originally applied to do quantum chemistry with a qc. But now Rigetti and others have renamed it hybrid quantum-classical quantum computing and pointed out that it’s an algo that has wide applicability, not just to quantum chemistry.
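To make the cost being minimized explicit (this formula is implied by the discussion above rather than quoted from the Qubiter docstring): writing |psi(x)> for the state prepared by the parametrized circuit with gate parameters x, the Rayleigh-Ritz principle says

    E(x) = \langle \psi(x) \,|\, H \,|\, \psi(x) \rangle \;\ge\; E_{\min}

where E_min is the smallest eigenvalue (ground-state energy) of the Hamiltonian H, with equality when |psi(x)> is a ground state. Minimizing E(x) over x therefore yields an estimate of the lowest eigenvalue.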
The idea behind hybrid quantum-classical is very simple. One has a classical box CBox and a quantum box QBox. The gates of QBox depend on N gate parameters. QBox sends info to CBox. CBox sends back to QBox N new gate parameters that will lower some cost function. This feedback process between CBox and QBox continues until the cost is minimized. The cost function is the mean value of a Hamiltonian, estimated empirically from data obtained from the qc residing inside the QBox.
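As a rough sketch of that loop (this is an illustration only, not Qubiter’s actual implementation; hybrid_loop, quantum_box_mean_energy and classical_box_update are hypothetical stand-ins for the QBox/CBox exchange):

    import numpy as np

    def hybrid_loop(init_params, quantum_box_mean_energy, classical_box_update,
                    num_iter=100):
        """Schematic CBox/QBox feedback loop (illustration only).

        quantum_box_mean_energy(params) -> float : the QBox runs the
            parametrized circuit and returns the empirical mean value of
            the Hamiltonian for those gate parameters.
        classical_box_update(params, cost) -> np.ndarray : the CBox proposes
            new gate parameters intended to lower the cost.
        """
        params = np.asarray(init_params, dtype=float)
        for _ in range(num_iter):
            cost = quantum_box_mean_energy(params)       # QBox -> CBox
            params = classical_box_update(params, cost)  # CBox -> QBox
        return params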
To minimize a function of N continuous parameters, one can use methods like simulated annealing and Powell that do not require calculating derivatives, or one can use methods that do use derivatives. Another possible separation is between methods that don’t care which local minimum they find, as long as they find one of them, and those methods that try to find the best local minimum of them all, the so-called global minimum. Yet another separation is between methods that allow constraints and those that don’t.
Among the methods that do use derivatives, the so-called gradient based methods only use the 1st derivative, whereas other methods use both first (Jacobian) and second (Hessian) derivatives. The performance of those that use both 1st and 2nd derivatives degrades quickly as N grows. Besides, calculating 2nd derivatives is very expensive. Hence, methods that use the 2nd derivatives are practically useless in the neural network field, where N is usually very large. In that field, gradient based methods rule.
A method that uses no derivatives is Powell. A gradient based method that is designed to have a fast convergence rate is the Conjugate Gradient (CG) method. Another gradient based method is back-propagation (BP). BP can be implemented as distributed computing much more easily than other gradient based methods, so it is favored by the most popular computer programs for doing distributed AI, such as PyTorch and TensorFlow.
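For concreteness, here is a minimal, non-Qubiter illustration of the Powell/CG distinction using scipy.optimize.minimize on a toy cost function of N = 2 parameters (Powell needs no derivatives; CG is gradient based, and scipy approximates the gradient numerically when none is supplied):

    import numpy as np
    from scipy.optimize import minimize

    def toy_cost(x):
        # simple bowl with its minimum at x = [1.0, -0.5]
        return (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 0.5) ** 2

    x0 = np.zeros(2)
    res_powell = minimize(toy_cost, x0, method='Powell')  # derivative-free
    res_cg = minimize(toy_cost, x0, method='CG')          # gradient based
    print(res_powell.x, res_cg.x)  # both approach [1.0, -0.5]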
Qubiter can perform minimization using various minlibs (minimization software libraries) such as ‘scipy’, ‘autograd’, ‘tflow’, ‘pytorch’. It can also use various devices (aka simulators or backends), either virtual or real, to do the minimization. For example, tensorflow is a minlib and SEO_simulator_tf is a backend that is native to qubiter and uses tensorflow. By a native device, we mean one that uses Qubiter native simulators like SEO_simulator and SEO_simulator_tf.
Non-scipy minlibs implement backprop.
The ‘scipy’ minlib is a wrapper for the scipy function scipy.optimize.minimize. This scipy umbrella method implements many minimization methods, including Powell and CG.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
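A minimal usage sketch, assuming emp_mhamil and targ_mhamil are MeanHamil objects that have already been built elsewhere for the same parametrized circuit and Hamiltonian (their construction is not shown), and assuming two continuous parameters numbered 0 and 1:

    import numpy as np
    from qubiter.adv_applications.MeanHamilMinimizer import MeanHamilMinimizer

    # emp_mhamil, targ_mhamil : MeanHamil objects, assumed built elsewhere
    all_var_nums = [0, 1]                    # ints naming the continuous gate parameters
    init_var_num_to_rads = {0: 0.1, 1: 0.3}  # initial angle, in radians, for each parameter

    minimizer = MeanHamilMinimizer(emp_mhamil, targ_mhamil,
                                   all_var_nums, init_var_num_to_rads,
                                   print_hiatus=1, verbose=False)

    # with the 'scipy' minlib, extra kwargs are forwarded to scipy.optimize.minimize
    opt_result = minimizer.find_min('scipy', method='Powell')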
Variables:

- all_var_nums (list[int]) – a list of distinct ints that identify each continuous variable (i.e., parameter, non-functional placeholder variable) on which the cost function depends. The ordering corresponds to the ordering of self.init_x_val.
- emp_mhamil (MeanHamil) – empirical mean Hamiltonian, used to evaluate cost.
- init_x_val (np.ndarray) – array of the initial values, in radians, of the continuous variables of the cost function being minimized. The ordering corresponds to the ordering of self.all_var_nums.
- targ_mhamil (MeanHamil) – target mean Hamiltonian, used to evaluate targ_cost.
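A hypothetical illustration of the ordering convention documented above (the numbers here are made up):

    import numpy as np

    all_var_nums = [5, 2]                    # variable numbers, in a chosen order
    init_var_num_to_rads = {5: 0.1, 2: 0.3}  # initial angle (rads) for each variable

    # init_x_val follows the ordering of all_var_nums:
    init_x_val = np.array([init_var_num_to_rads[k] for k in all_var_nums])
    # -> array([0.1, 0.3]); presumably the x_val passed to cost_fun() uses
    # this same ordering.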
__init__(emp_mhamil, targ_mhamil, all_var_nums, init_var_num_to_rads, print_hiatus=1, verbose=False)

    Constructor

    Parameters:

    - emp_mhamil (MeanHamil) –
    - targ_mhamil (MeanHamil) –
    - all_var_nums (list[int]) –
    - init_var_num_to_rads (dict[int, float]) –
    - print_hiatus (int) –
    - verbose (bool) –
cost_fun(x_val)

    This method wraps self.emp_mhamil.get_mean_val(). Whenever it is called, it also prints a report of the current values of x and cost (and of targ_cost, if that is available).

    Parameters: x_val (np.ndarray) –

    Returns:

    Return type: float
find_min(minlib, **kwargs)

    This method finds the minimum of the cost function. It allows the user to choose among several possible minlibs, namely 'scipy', 'autograd', 'tflow', 'pytorch'. minlib parameters can be passed in via kwargs.

    kwargs (keyword arguments):

    minlib = scipy
        the keyword args of scipy.optimize.minimize

    minlib = autograd, tflow
        num_inter : float
            number of iterations (an iteration occurs every time cost_fun is called)
        descent_rate : float
            positive float, constant that multiplies the gradient of the cost function being minimized. Often denoted as eta (see the sketch below).
    Parameters:

    - minlib (str) –
    - kwargs (dict) –

    Returns: OptimizeResult is a class (essentially a dict subclass with attribute access) defined in scipy.optimize to hold the output results of scipy.optimize.minimize.

    Return type: OptimizeResult | None
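As referenced above, here is a schematic (non-Qubiter) gradient-descent loop showing what num_inter and descent_rate (eta) control for the backprop minlibs; cost_grad stands in for whatever supplies the gradient of the cost function:

    import numpy as np

    def gradient_descent(cost_grad, x0, num_inter=50, descent_rate=0.1):
        """Schematic gradient descent illustrating num_inter and descent_rate."""
        x = np.asarray(x0, dtype=float)
        for _ in range(num_inter):
            x = x - descent_rate * cost_grad(x)  # step downhill by eta * gradient
        return x

    # example: f(x) = x[0]**2 + x[1]**2 has gradient 2*x and minimum at the origin
    x_min = gradient_descent(lambda x: 2.0 * x, x0=[1.0, -2.0])
    print(x_min)  # approaches [0., 0.]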
targ_cost_fun(x_val)

    Returns the cost predicted from theory, rather than estimated from data as in cost_fun(). This method mimics cost_fun(), but whereas that one wraps self.emp_mhamil.get_mean_val(), this one wraps self.targ_mhamil.get_mean_val().

    Parameters: x_val (np.ndarray) –

    Returns:

    Return type: float