Eric Fellheimer and Mark Rudner
This case is perhaps too simple to use as a test case, though, as the Hamiltonian is independent of time. The second test was to have PSiQCoPATH simulate the behavior of a spin-1/2 magnetic moment in a time-varying magnetic field. The magnetic field varied in time according to the equation $\vec{B}(t) = B_0\left(\cos(\omega t)\,\hat{x} + \sin(\omega t)\,\hat{y}\right)$, where $\hat{x}$ and $\hat{y}$ are unit vectors in the $x$ and $y$ directions, respectively. The Hamiltonian for this system is
$$H(t) = -\vec{\mu}\cdot\vec{B}(t) \tag{1}$$

$$H(t) = -\frac{\gamma\hbar B_0}{2}\left(\cos(\omega t)\,\sigma_x + \sin(\omega t)\,\sigma_y\right) \tag{2}$$
In the case where $\omega \ll \omega_0$ (with $\omega_0 = \gamma B_0$ the precession frequency and $\gamma$ the gyromagnetic ratio), the motion is very nearly adiabatic. That is, the spin direction very closely follows the field direction. As $\omega/\omega_0$ increases, the trajectory acquires increasingly large cycloid-like wiggles. Although this behavior was not expected beforehand, in retrospect it is easy to understand.
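This near-adiabatic following is easy to reproduce in a few lines of NumPy (a standalone sketch, not PSiQCoPATH itself; the function name, frequencies, and step count are illustrative). The sketch evolves the spin under the Hamiltonian of Eq. (2), with $\hbar = 1$, using the exact $2\times 2$ exponential of each piecewise-constant step:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def evolve_spin(w0, w, T, steps):
    """Evolve a spin-1/2 under H(t) = -(w0/2)(cos(wt) sx + sin(wt) sy), hbar = 1.

    Each step uses the exact exponential of the midpoint Hamiltonian:
    exp(-i dt a (n.sigma)) = cos(a dt) I - i sin(a dt) (n.sigma), since (n.sigma)^2 = I.
    """
    dt = T / steps
    # Start in the ground state of H(0) = -(w0/2) sx, i.e. |+x>.
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    for n in range(steps):
        t = (n + 0.5) * dt                    # midpoint rule
        nx, ny = np.cos(w * t), np.sin(w * t) # instantaneous field direction
        a = -w0 / 2
        ns = nx * sx + ny * sy
        U = np.cos(a * dt) * np.eye(2) - 1j * np.sin(a * dt) * ns
        psi = U @ psi
    return psi
```

For $\omega \ll \omega_0$ the final state overlaps the instantaneous ground state almost perfectly; recording $\langle\vec{\sigma}\rangle$ at each step for $\omega/\omega_0$ of order one traces out the cycloid-like wiggles described above.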
[Figure: cone.jpg --- a point on the rim of a cone rolling on a flat surface.]
The key to this understanding is a mapping that we discovered between this problem and the trajectory of a point on the rim of a cone rolling on a flat surface (see figure ). This mapping comes from the fact that a magnetic field generates rotations about its direction with frequency $\omega_0 = \gamma B_0$. In our case, this rotation axis is itself rotating with frequency $\omega$ in the $x$-$y$ plane.
The rolling motion of a cone on a flat surface consists of two combined rotations: the cone rotates about its own symmetry axis with frequency $\omega_s$ and about the vertical axis through its tip with frequency $\omega$. Under the condition of rolling without slippage, the combined effect of these two angular velocities is a net angular velocity along the line of contact between the cone and the surface. As the cone rolls, the direction of this instantaneous axis of rotation rotates about the vertical direction with frequency $\omega$.
Thus the instantaneous axis of rotation in the cone problem has exactly the same behavior as the instantaneous axis of rotation (the magnetic field) in our spin problem. At a given instant, all points on the cone are rotating about the instantaneous axis of rotation, just as at any given instant the spin is precessing about the instantaneous direction of the magnetic field.
The half-angle $\alpha$ of the cone corresponding to a particular choice of $\omega$ and $\omega_0$ in the quantum spin problem can be found by simple trigonometry, and is given by the relation

$$\tan\alpha = \frac{\omega}{\omega_0} \tag{3}$$
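The "simple trigonometry" behind Eq. (3) can be spelled out from the rolling constraint (taking $\hat{z}$ vertical and $\hat{x}$ along the instantaneous contact line):

```latex
% The cone's net angular velocity is precession about the vertical plus spin
% about the symmetry axis e, which is tilted up from the contact line by alpha:
\vec{\Omega} = \omega\,\hat{z} + \omega_s\,\hat{e},
\qquad
\hat{e} = \cos\alpha\,\hat{x} + \sin\alpha\,\hat{z}
% Rolling without slippage: the contact line is momentarily at rest, so the
% vertical component of Omega must vanish:
\omega + \omega_s \sin\alpha = 0
\;\Rightarrow\;
\omega_s = -\frac{\omega}{\sin\alpha}
% The remaining component lies along the contact line; identifying its
% magnitude with the precession frequency omega_0 of the spin problem gives
|\vec{\Omega}| = |\omega_s|\cos\alpha = \frac{\omega}{\tan\alpha} = \omega_0
\;\Rightarrow\;
\tan\alpha = \frac{\omega}{\omega_0}
```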
Aside from its intellectual interest, this result also turns out to be one of the rare cases of a Hamiltonian with non-trivial time dependence for which we have an exact answer to compare against the simulation. The agreement with our results appears to be quite good, though we have not explored it in quantitative detail.
Once this method validation was complete, we used PSiQCoPATH to simulate the solution of a few instances of NP-complete problems using the method of quantum computation by adiabatic evolution described in the introduction and the reference given there. The particular problem for which we had easy access to the proper Hamiltonians was the so-called Exact Cover problem. Exact Cover is a version of satisfiability involving 3-bit clauses of the form

$$z_i + z_j + z_k = 1, \qquad z_i, z_j, z_k \in \{0, 1\} \tag{4}$$
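For concreteness, the clause condition and a brute-force solution check can be written in a few lines (a toy sketch with an invented 4-bit instance; the instances actually simulated are encoded in the Hamiltonians discussed below):

```python
from itertools import product

def satisfies(bits, clauses):
    """Exact Cover: each clause (i, j, k) must have exactly one of the bits set."""
    return all(bits[i] + bits[j] + bits[k] == 1 for (i, j, k) in clauses)

def solve(n_bits, clauses):
    """Brute-force all 2^n assignments (fine for the small instances tested here)."""
    return [bits for bits in product((0, 1), repeat=n_bits)
            if satisfies(bits, clauses)]
```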
This problem is described in detail in the paper by Farhi et al. As in that paper, we use a linear interpolation between the initial and final Hamiltonians of the form

$$H(t) = \left(1 - \frac{t}{T}\right) H_B + \frac{t}{T}\, H_P \tag{5}$$

$$\tilde{H}(s) = (1 - s)\, H_B + s\, H_P, \qquad s = t/T, \tag{6}$$

where $H_B$ is the beginning Hamiltonian, $H_P$ is the problem Hamiltonian whose ground state encodes the solution, and $T$ is the total run time.
We would like to thank Daniel Nagaj for supplying us with these Hamiltonians. An example of our results for a 6-qubit instance of Exact Cover is shown in figure .
[Figure: EC6.jpg --- results for a 6-qubit instance of Exact Cover.]
These plots were generated by a Matlab script written by us to parse the PSiQCoPATH output files and perform the desired analysis. The script diagonalizes the system's Hamiltonian at each output time step and transforms the evolved state at the corresponding time step into this eigenbasis. The population of the $n$th eigenstate is equal to the squared magnitude of the $n$th component of the evolved state in the instantaneous eigenbasis. For the fast run (small total run time $T$), we see that the probability of finishing in the ground state, i.e. of obtaining the correct solution to the problem, is approximately 60%. When $T$ is made sufficiently large, this probability is very nearly 1.
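The script itself is in Matlab; the same analysis can be sketched in Python/NumPy (the function name and data layout are ours, not PSiQCoPATH's):

```python
import numpy as np

def eigenbasis_populations(H_list, psi_list):
    """For each output step, diagonalize the instantaneous Hamiltonian and return
    the populations |<n|psi>|^2 of the evolved state in that eigenbasis.

    H_list[t] is the Hamiltonian at output step t; psi_list[t] is the evolved state.
    Row t of the result holds the populations sorted by ascending eigenvalue,
    so column 0 is the instantaneous ground-state population.
    """
    pops = []
    for H, psi in zip(H_list, psi_list):
        evals, evecs = np.linalg.eigh(H)   # columns are eigenvectors, ascending order
        amps = evecs.conj().T @ psi        # components in the instantaneous eigenbasis
        pops.append(np.abs(amps) ** 2)
    return np.array(pops)
```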
In figure , the energy levels (eigenvalues) of the instantaneous Hamiltonian are plotted over the course of the evolution. Notice that the position of the minimum energy gap is precisely where the ground state population gets ``lost'' in the fast run. This is what is expected from the considerations of the adiabatic theorem, and makes for a nice confirmation of the theory.
In the end we were only able to test up to 8 qubits. This is not enough to make progress over the current state of the art in research on this topic, but with the improvements described in the section on future work we should be able to scale up to much higher dimension. All results were obtained from full time evolution operator calculations. Although this calculation is in a sense overkill for the analysis we performed, the full time evolution matrix could be used to find the success probability in the case where the initial state is actually a ``mixed state'' due to thermal noise and/or uncertain preparation. This is an interesting situation to examine from a practical point of view, as it is a more realistic picture of the situation in real physical implementations.
With the distributed-data matrix-vector multiplication approach to single-state evolution, we should be able to reach even larger systems. Memory usage is significantly lower in that case, and the distribution across processors should allow us to handle much larger matrices without having to reach beyond the cache/fast memory. This code did not become operational until after the tests described above, so we only have detailed results for the full time evolution operator calculations.
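The contrast between the two modes can be illustrated in a few lines (a serial NumPy sketch; the real code distributes the matrix and state across processors, and the function names are ours):

```python
import numpy as np

def evolve_state(psi0, step_ops):
    """Single-state evolution: apply each incremental operator to the state vector.

    Memory is O(dim) for the state, instead of O(dim^2) for accumulating a full
    time evolution operator.
    """
    psi = psi0.copy()
    for U in step_ops:
        psi = U @ psi
    return psi

def full_operator(step_ops, dim):
    """Accumulate the full time evolution operator (the mode used in our tests)."""
    U_tot = np.eye(dim, dtype=complex)
    for U in step_ops:
        U_tot = U @ U_tot   # later steps act later, so multiply on the left
    return U_tot
```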
Throughout these runs we also kept track of PSiQCoPATH's performance in terms of running time. Running time as a function of simulation length and number of processors is plotted in figure . The trends are very nearly linear in both the simulation length and the number of processors, confirming our projected scaling rules.
We recently realized that it is not necessary to ever have all of the incremental time evolution operators stored at one time. In general, the number of output timesteps requested by the user is much less than the number of actual timesteps performed in the evolution (by a factor of perhaps 1000). Rather than storing all of the incremental operators, all we really need is that fraction of them corresponding to the much coarser output time step.
A much more efficient procedure would be to partially combine the first two steps in the following way. Let $k$ be the ratio of the total number of time steps to the number of output steps requested. Instead of storing every incremental operator $U_j$, we really only need to store each combined block $V_m = U_{mk}\,U_{mk-1}\cdots U_{(m-1)k+1}$. We can build up $V_m$ by multiplying by each successive incremental time evolution operator as it is generated. Once $k$ time steps have been calculated and combined, that $V_m$ can be stored in memory and the next one started. In this way, the memory requirements of the program will be cut roughly by a factor of $k$.
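The blocking scheme described above can be sketched as follows (the names $U_j$, $V_m$, and the helper function are illustrative):

```python
import numpy as np

def coarse_operators(step_ops, k):
    """Combine every k incremental operators U_j into one stored block
    V_m = U_{mk} ... U_{(m-1)k+1}, cutting operator storage by roughly k."""
    blocks = []
    V = None
    for j, U in enumerate(step_ops):
        V = U if V is None else U @ V   # left-multiply: later steps act later
        if (j + 1) % k == 0:
            blocks.append(V)            # block complete: store it, start the next
            V = None
    if V is not None:                   # leftover partial block, if any
        blocks.append(V)
    return blocks
```

Applying the stored blocks in order reproduces the product of all the incremental operators exactly, so the subsequent local update step sees one operator per output step instead of one per time step.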
As an additional benefit, the final local update step will also be shortened by a factor of $k$. Overall this leads to a corresponding speedup in the projected running time.
Throughout the course, most of the skeletal code we were given made little use of object-oriented design. We made it a point to incorporate much of the ideology of object-oriented programming into our system. Throughout the implementation, there were several aspects of this programming style that worked well in a parallel computing setting, and several that did not.
At a high level, the big advantage of object-oriented programming is the power of abstraction.