A) `Basis diagonalization'

Class A has the advantage that the $E$-independent basis preserves the linearity of the eigenproblem, giving a matrix eigenequation which returns many solutions $(E_\mu,{\mathbf x}_\mu)$ at once. This is the conventional `basis diagonalization' approach. Generally all the eigenstates from the ground state up to some maximum usable state are returned, so $N$ cannot be less than this number of states. For instance, in a $d$-dimensional billiard the number of states below wavenumber $k$ grows like $k^d$ (Weyl's law), so $N$ must also scale like $k^d$ if states at wavenumber $k$ are sought. At high energies this is a severe limitation.
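
As a minimal illustration of this `all solutions at once' property (a sketch of my own, with a random symmetric matrix standing in for a Hamiltonian matrix $H_{nm}$ in some $E$-independent basis), a single dense diagonalization call returns all $N$ eigenpairs:

    import numpy as np

    # Stand-in for an N x N Hamiltonian matrix H_{nm} in a fixed basis
    # (a random symmetric matrix here, purely for illustration).
    N = 300
    rng = np.random.default_rng(1)
    H = rng.standard_normal((N, N))
    H = (H + H.T) / 2

    # One diagonalization returns all N pairs (E_mu, x_mu) simultaneously; in a
    # real calculation only states well below the basis cutoff are trusted.
    E, X = np.linalg.eigh(H)   # E: sorted eigenvalues; X[:, mu] = x_mu
    print(E[:5])               # the lowest few E_mu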

The basis can be chosen as the analytically-known eigenstates of a simpler Hamiltonian $\hat{H}_0$; this I call Class A1. The resulting basis is orthonormal and complete (in the $N\rightarrow\infty$ limit), and if $\hat{H}_0$ is `close' to $\hat H$ then $N$ need not be much larger than $n_E$, the typical quantum number of the desired states at energy $E$.
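
To make Class A1 concrete, here is a sketch (my own example, not taken from the text): $\hat H_0$ is a particle in a 1D box of unit length with $\hbar = m = 1$, whose eigenbasis $\sqrt{2}\sin(n\pi x)$ and energies $E_n^{(0)} = n^2\pi^2/2$ are analytic; the perturbing potential $V(x) = 20x$ is an arbitrary choice:

    import numpy as np

    # Class A1 sketch: basis = analytic eigenstates of a 1D box (hbar = m = 1,
    # unit length), perturbed by an example potential V(x) = 20 x.
    N = 40                                  # basis truncation
    M = 2000                                # quadrature points
    x = (np.arange(M) + 0.5) / M            # midpoint grid on (0, 1)
    dx = 1.0 / M
    n = np.arange(1, N + 1)
    phi = np.sqrt(2.0) * np.sin(np.outer(n, np.pi * x))  # basis fns on the grid
    E0 = (n * np.pi) ** 2 / 2.0             # analytic eigenvalues of H_0
    V = 20.0 * x                            # perturbing potential on the grid

    # H_{mn} = E_n^(0) delta_{mn} + <m|V|n>, overlaps by midpoint quadrature.
    H = np.diag(E0) + (phi * V) @ phi.T * dx

    E, X = np.linalg.eigh(H)                # all eigenpairs (E_mu, x_mu) at once
    print(E[:4])                            # lowest few perturbed energies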

Alternatively, the basis is chosen to be convenient in position space (or momentum space, or a mixture of both); this I call Class A2. Such a basis is effectively complete (up to the energies of interest) because it entirely covers the domain $\mathcal{D}$. The advantage of these localized basis functions is that the resulting $H$ matrix is sparse, allowing much faster extraction of eigensolutions. Basis functions of various degrees of smoothness are possible, ranging from lattices (corresponding to piecewise-linear `pyramidal' functions) through other Finite Element [87,19,182,56] basis functions and higher-order spline functions [70], to gaussian packets (coherent states, or the Distributed Gaussian Basis (DGB) [57]). The smoother the basis, the faster the convergence with $N$ can be at a given energy of interest. However, smoother basis functions are more complicated to construct (especially if definite BCs are required) and to evaluate, and they result in less sparse matrices. Lattice methods (often known as `finite differencing') [161,18] generate sparse matrices which can be diagonalized much faster than dense ones, but their errors converge only like a power law $\sim N^{-1/d}$. One smooth basis with useful sparsity properties is the Discrete Variable Representation (DVR) [17,86,180]. Most methods involve a compromise. The optimal basis-set choice for smooth-potential problems appears to be a covering of phase space by gaussian packets, in which case $N$ need only be a couple of times larger than $n_E$ (for small dimensions $d$) [57,138]. Because the Wigner function for such problems dies exponentially outside the classically-allowed region of phase space, these phase-space covering methods achieve exponential convergence with $N$ once $N$ exceeds the semiclassical basis size.
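
As a sketch of the lattice end of Class A2 (again my own illustration): a 2D square billiard with Dirichlet BCs and the standard 5-point Laplacian gives a very sparse $H$, so an iterative solver can extract just the low-lying states far more cheaply than dense diagonalization would:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as sla

    # Class A2 lattice sketch: unit square billiard (hbar = m = 1, Dirichlet
    # BCs), n x n interior lattice, 5-point finite-difference Laplacian.
    n = 80
    h = 1.0 / (n + 1)                                    # lattice spacing
    D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = sp.identity(n)
    H = -0.5 * (sp.kron(D2, I, format='csc') + sp.kron(I, D2, format='csc'))

    # Shift-invert extracts only the 6 lowest states of the 6400 x 6400 matrix.
    E, X = sla.eigsh(H, k=6, sigma=0)
    exact = sorted(0.5 * np.pi**2 * (p**2 + q**2)
                   for p in (1, 2) for q in (1, 2))
    print(np.sort(E)[:4])    # errors decay only algebraically in h
    print(exact)             # exact E = (pi^2/2)(p^2 + q^2) for comparison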

Note that if the basis is not orthogonal, as is frequently the case for Class A2 basis sets, the eigenequation $H {\mathbf x} = E {\mathbf x}$ becomes a generalized eigenequation $H {\mathbf x} = E B {\mathbf x}$, where the matrix $B$ of basis-function overlaps $B_{nm} = \langle n \vert m \rangle$ replaces the identity.
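
A sketch of such a generalized eigenproblem (my own illustration, using the standard analytic matrix elements between normalized gaussians, stated in the comments): a small distributed gaussian basis for the 1D harmonic oscillator, with the non-orthogonality carried by the overlap matrix $B$:

    import numpy as np
    from scipy.linalg import eigh

    # DGB sketch: normalized gaussians g_i(x) ~ exp(-a (x - c_i)^2) at centers
    # c_i, applied to the 1D harmonic oscillator H = p^2/2 + x^2/2 (hbar=m=w=1).
    # Analytic matrix elements, with d = c_i - c_j and xbar = (c_i + c_j)/2:
    #   overlap    B_ij = exp(-a d^2 / 2)
    #   kinetic    T_ij = (a/2) (1 - a d^2) B_ij
    #   potential  V_ij = (1/2) (xbar^2 + 1/(4a)) B_ij
    a = 1.0
    c = np.linspace(-4.0, 4.0, 13)           # gaussian centers
    d = c[:, None] - c[None, :]
    xbar = (c[:, None] + c[None, :]) / 2
    B = np.exp(-a * d**2 / 2)                # overlap: the basis is not orthogonal
    H = (a / 2) * (1 - a * d**2) * B + 0.5 * (xbar**2 + 1 / (4 * a)) * B

    E, X = eigh(H, B)                        # generalized problem H x = E B x
    print(E[:5])                             # approaches the exact 0.5, 1.5, 2.5, ...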

