Full Configuration Interaction theory

Morten Hjorth-Jensen [1, 2]

 

[1] National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
[2] Department of Physics, University of Oslo, N-0316 Oslo, Norway

 

28th Indian-Summer School on Ab Initio Methods in Nuclear Physics


© 2013-2016, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license

Slater determinants as basis states, short reminder

The simplest possible choice for many-body wavefunctions is a product wavefunction. That is,

 
$$ \Psi(x_1, x_2, x_3, \ldots, x_A) \approx \phi_1(x_1) \phi_2(x_2) \phi_3(x_3) \ldots $$

 
because we are really only good at thinking about one particle at a time. Such product wavefunctions, without correlations, are easy to work with; for example, if the single-particle states \( \phi_i(x) \) are orthonormal, then the product wavefunctions are easy to orthonormalize.

Similarly, computing matrix elements of operators is relatively easy, because the integrals factorize.

The price we pay is the lack of correlations, which we must build up by using many, many product wavefunctions. (Thus we have a trade-off: compact representation of correlations but difficult integrals versus easy integrals but many states required.)
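
To make the statement about factorizing integrals concrete, here is a minimal numerical sketch in Python, with randomly generated orthonormal single-particle vectors standing in for the \( \phi_i \) (the vectors are hypothetical stand-ins): the overlap of two two-particle product states reduces to a product of single-particle overlaps.

```python
import numpy as np

# Discretize x on a grid and build four orthonormal single-particle vectors
# (columns of a QR factor); these stand in for phi_1, phi_2, ...
rng = np.random.default_rng(1)
phi, _ = np.linalg.qr(rng.normal(size=(200, 4)))

# Two-particle product wavefunctions Psi(x1, x2) = phi_a(x1) phi_b(x2)
psi_01 = np.outer(phi[:, 0], phi[:, 1])
psi_23 = np.outer(phi[:, 2], phi[:, 3])

# The two-body overlap factorizes into products of single-particle overlaps
overlap = np.sum(psi_01 * psi_23)   # <phi_0|phi_2><phi_1|phi_3> = 0
norm = np.sum(psi_01 * psi_01)      # <phi_0|phi_0><phi_1|phi_1> = 1
print(np.isclose(overlap, 0.0), np.isclose(norm, 1.0))
```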

Slater determinants as basis states, repetition

Because we have fermions, we are required to have antisymmetric wavefunctions, e.g.

 
$$ \Psi(x_1, x_2, x_3, \ldots, x_A) = - \Psi(x_2, x_1, x_3, \ldots, x_A) $$

 
etc. This is accomplished formally by using the determinantal formalism

 
$$ \Psi(x_1, x_2, \ldots, x_A) = \frac{1}{\sqrt{A!}} \det \left | \begin{array}{cccc} \phi_1(x_1) & \phi_1(x_2) & \ldots & \phi_1(x_A) \\ \phi_2(x_1) & \phi_2(x_2) & \ldots & \phi_2(x_A) \\ \vdots & & & \\ \phi_A(x_1) & \phi_A(x_2) & \ldots & \phi_A(x_A) \end{array} \right | $$

 
Product wavefunction + antisymmetry = Slater determinant.

Slater determinants as basis states

 
$$ \Psi(x_1, x_2, \ldots, x_A) = \frac{1}{\sqrt{A!}} \det \left | \begin{array}{cccc} \phi_1(x_1) & \phi_1(x_2) & \ldots & \phi_1(x_A) \\ \phi_2(x_1) & \phi_2(x_2) & \ldots & \phi_2(x_A) \\ \vdots & & & \\ \phi_A(x_1) & \phi_A(x_2) & \ldots & \phi_A(x_A) \end{array} \right | $$

 
Properties of the determinant (interchange of any two rows or any two columns yields a change in sign; thus no two rows and no two columns can be the same) lead to the Pauli principle, as verified numerically in the sketch after this list:

  • No two particles can be at the same place (two columns the same); and
  • No two particles can be in the same state (two rows the same).
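
The sketch below, assuming an arbitrary (hypothetical) set of three single-particle functions, evaluates the Slater determinant explicitly, checks the sign change under interchange of two coordinates, and checks that the determinant vanishes when two particles occupy the same state.

```python
import numpy as np
from math import factorial

# Three hypothetical single-particle functions phi_1, phi_2, phi_3
phis = [lambda x: np.exp(-x**2),
        lambda x: x * np.exp(-x**2),
        lambda x: (2 * x**2 - 1) * np.exp(-x**2)]

def slater(states, xs):
    """Psi(x_1,...,x_A) = det[ phi_i(x_j) ] / sqrt(A!)."""
    mat = np.array([[phi(x) for x in xs] for phi in states])
    return np.linalg.det(mat) / np.sqrt(factorial(len(states)))

x = [0.3, -0.7, 1.1]
# Antisymmetry: interchanging two coordinates (columns) changes the sign
print(np.isclose(slater(phis, [x[1], x[0], x[2]]), -slater(phis, x)))
# Pauli principle: two particles in the same state (two equal rows) gives zero
print(np.isclose(slater([phis[0], phis[0], phis[2]], x), 0.0))
```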

Slater determinants as basis states

As a practical matter, however, Slater determinants beyond \( N=4 \) quickly become unwieldy. Thus we turn to the occupation representation or second quantization to simplify calculations.

The occupation representation, using fermion creation and annihilation operators, is compact and efficient. It is also abstract and, at first encounter, not easy to internalize. It is inspired by other operator formalisms, such as the ladder operators for the harmonic oscillator or for angular momentum, but unlike those cases, the operators do not have coordinate-space representations.

Instead, one can think of fermion creation/annihilation operators as a game of symbols that compactly reproduces what one would do, albeit clumsily, with full coordinate-space Slater determinants.

Quick repetition of the occupation representation

We start with a set of orthonormal single-particle states \( \{ \phi_i(x) \} \). (Note: this requirement, and others, can be relaxed, but leads to a more involved formalism.) Any orthonormal set will do.

To each single-particle state \( \phi_i(x) \) we associate a creation operator \( \hat{a}^\dagger_i \) and an annihilation operator \( \hat{a}_i \).

When acting on the vacuum state \( | 0 \rangle \), the creation operator \( \hat{a}^\dagger_i \) causes a particle to occupy the single-particle state \( \phi_i(x) \):

 
$$ \phi_i(x) \rightarrow \hat{a}^\dagger_i |0 \rangle $$

 

Quick repetition of the occupation representation

But with multiple creation operators we can occupy multiple states:

 
$$ \phi_i(x) \phi_j(x^\prime) \phi_k(x^{\prime \prime}) \rightarrow \hat{a}^\dagger_i \hat{a}^\dagger_j \hat{a}^\dagger_k |0 \rangle. $$

 

Now we impose antisymmetry, by having the fermion operators satisfy anticommutation relations:

 
$$ \hat{a}^\dagger_i \hat{a}^\dagger_j + \hat{a}^\dagger_j \hat{a}^\dagger_i = [ \hat{a}^\dagger_i ,\hat{a}^\dagger_j ]_+ = \{ \hat{a}^\dagger_i ,\hat{a}^\dagger_j \} = 0 $$

 
so that

 
$$ \hat{a}^\dagger_i \hat{a}^\dagger_j = - \hat{a}^\dagger_j \hat{a}^\dagger_i $$

 

Quick repetition of the occupation representation

Because of this property, automatically \( \hat{a}^\dagger_i \hat{a}^\dagger_i = 0 \), enforcing the Pauli exclusion principle. Thus when writing a Slater determinant using creation operators,

 
$$ \hat{a}^\dagger_i \hat{a}^\dagger_j \hat{a}^\dagger_k \ldots |0 \rangle $$

 
each index \( i,j,k, \ldots \) must be unique.
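
These relations can be checked with an explicit matrix representation of the operators. The sketch below uses the standard Jordan-Wigner construction (introduced here purely for illustration; it is not part of these notes) for a handful of single-particle states and verifies that \( \{\hat{a}^\dagger_i,\hat{a}^\dagger_j\}=0 \) and \( \hat{a}^\dagger_i\hat{a}^\dagger_i=0 \).

```python
import numpy as np

def creation_operators(n_states):
    """Matrix representation (Jordan-Wigner) of a^dagger_i for n_states modes."""
    I2 = np.eye(2)
    Z = np.diag([1.0, -1.0])              # phase string carrying the antisymmetry
    adag = np.array([[0.0, 0.0],
                     [1.0, 0.0]])         # |0> -> |1> on a single mode
    ops = []
    for i in range(n_states):
        factors = [Z] * i + [adag] + [I2] * (n_states - i - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

a_dag = creation_operators(4)
i, j = 1, 3
anticommutator = a_dag[i] @ a_dag[j] + a_dag[j] @ a_dag[i]
print(np.allclose(anticommutator, 0.0))          # {a^dag_i, a^dag_j} = 0
print(np.allclose(a_dag[i] @ a_dag[i], 0.0))     # (a^dag_i)^2 = 0, Pauli principle
```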

Full Configuration Interaction Theory

We have defined the ansatz for the ground state as

 
$$ |\Phi_0\rangle = \left(\prod_{i\le F}\hat{a}_{i}^{\dagger}\right)|0\rangle, $$

 
where the index \( i \) defines different single-particle states up to the Fermi level. We have assumed that we have \( N \) fermions. A given one-particle-one-hole (\( 1p1h \)) state can be written as

 
$$ |\Phi_i^a\rangle = \hat{a}_{a}^{\dagger}\hat{a}_i|\Phi_0\rangle, $$

 
while a \( 2p2h \) state can be written as

 
$$ |\Phi_{ij}^{ab}\rangle = \hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_j\hat{a}_i|\Phi_0\rangle, $$

 
and a general \( NpNh \) state as

 
$$ |\Phi_{ijk\dots}^{abc\dots}\rangle = \hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{c}^{\dagger}\dots\hat{a}_k\hat{a}_j\hat{a}_i|\Phi_0\rangle. $$
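
In an actual implementation one rarely manipulates coordinate-space determinants; a Slater determinant is instead stored as an occupation bitstring, and the creation and annihilation operators in the definitions above set or clear bits while accumulating a fermionic phase. A minimal sketch, with hypothetical single-particle labels:

```python
def create(bits, p):
    """Apply a^dagger_p to a determinant stored as an occupation bitstring.
    Returns (phase, new_bits), or None if state p is already occupied."""
    if bits & (1 << p):
        return None                                   # Pauli principle
    phase = (-1) ** bin(bits & ((1 << p) - 1)).count("1")
    return phase, bits | (1 << p)

def annihilate(bits, p):
    """Apply a_p; returns None if state p is empty."""
    if not bits & (1 << p):
        return None
    phase = (-1) ** bin(bits & ((1 << p) - 1)).count("1")
    return phase, bits & ~(1 << p)

# Reference state |Phi_0>: single-particle states 0..3 (below the Fermi level) occupied
phi0 = 0b1111

# 1p1h excitation |Phi_i^a> = a^dagger_a a_i |Phi_0>, here with i = 2 and a = 5
phase_i, tmp = annihilate(phi0, 2)
phase_a, phi_ia = create(tmp, 5)
print(bin(phi_ia), phase_i * phase_a)   # 0b101011 and the accumulated phase

# Creating a particle in an already occupied state returns None (Pauli principle)
print(create(phi0, 1))
```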

 

Full Configuration Interaction Theory

We can then expand our exact state function for the ground state as

 
$$ |\Psi_0\rangle=C_0|\Phi_0\rangle+\sum_{ai}C_i^a|\Phi_i^a\rangle+\sum_{abij}C_{ij}^{ab}|\Phi_{ij}^{ab}\rangle+\dots =(C_0+\hat{C})|\Phi_0\rangle, $$

 
where we have introduced the so-called correlation operator

 
$$ \hat{C}=\sum_{ai}C_i^a\hat{a}_{a}^{\dagger}\hat{a}_i +\sum_{abij}C_{ij}^{ab}\hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_j\hat{a}_i+\dots $$

 
Since the normalization of \( \Psi_0 \) is at our disposal and since \( C_0 \) is by hypothesis non-zero, we may arbitrarily set \( C_0=1 \) with corresponding proportional changes in all other coefficients. Using this so-called intermediate normalization we have

 
$$ \langle \Psi_0 | \Phi_0 \rangle = \langle \Phi_0 | \Phi_0 \rangle = 1, $$

 
resulting in

 
$$ |\Psi_0\rangle=(1+\hat{C})|\Phi_0\rangle. $$

 

Full Configuration Interaction Theory

We rewrite

 
$$ |\Psi_0\rangle=C_0|\Phi_0\rangle+\sum_{ai}C_i^a|\Phi_i^a\rangle+\sum_{abij}C_{ij}^{ab}|\Phi_{ij}^{ab}\rangle+\dots, $$

 
in a more compact form as

 
$$ |\Psi_0\rangle=\sum_{PH}C_H^P|\Phi_H^P\rangle=\left(\sum_{PH}C_H^P\hat{A}_H^P\right)|\Phi_0\rangle, $$

 
where \( H \) stands for \( 0,1,\dots,n \) hole states and \( P \) for \( 0,1,\dots,n \) particle states. Our requirement of unit normalization gives

 
$$ \langle \Psi_0 | \Psi_0 \rangle = \sum_{PH}|C_H^P|^2= 1, $$

 
and the energy can be written as

 
$$ E= \langle \Psi_0 | \hat{H} |\Psi_0 \rangle= \sum_{PP'HH'}C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}. $$

 

Full Configuration Interaction Theory

Normally

 
$$ E= \langle \Psi_0 | \hat{H} |\Psi_0 \rangle= \sum_{PP'HH'}C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}, $$

 
is solved by diagonalization, setting up the Hamiltonian matrix in the basis of all possible Slater determinants. A diagonalization is equivalent to finding the variational minimum of

 
$$ \langle \Psi_0 | \hat{H} |\Psi_0 \rangle-\lambda \langle \Psi_0 |\Psi_0 \rangle, $$

 
where \( \lambda \) is a variational multiplier to be identified with the energy of the system. The minimization process results in

 
$$ \delta\left[ \langle \Psi_0 | \hat{H} |\Psi_0 \rangle-\lambda \langle \Psi_0 |\Psi_0 \rangle\right]= $$

 

 
$$ \sum_{PP'HH'}\left\{\delta[C_H^{*P}]\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}+ C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle \delta[C_{H'}^{P'}]\right\}- \lambda\sum_{PH}\left\{ \delta[C_H^{*P}]C_{H}^{P}+C_H^{*P}\delta[C_{H}^{P}]\right\} = 0. $$

 
Since the variations \( \delta[C_H^{*P}] \) and \( \delta[C_{H}^{P}] \) are complex conjugates of each other, it is necessary and sufficient to require the quantities that multiply \( \delta[C_H^{*P}] \) to vanish.

Full Configuration Interaction Theory

This leads to

 
$$ \sum_{P'H'}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}-\lambda C_H^{P}=0, $$

 
for all sets of \( P \) and \( H \).

If we then multiply by the corresponding \( C_H^{*P} \) and sum over \( PH \) we obtain

 
$$ \sum_{PP'HH'}C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}-\lambda\sum_{PH}|C_H^P|^2=0, $$

 
leading to the identification \( \lambda = E \). This means that we have for all \( PH \) sets

 
$$ \begin{equation} \sum_{P'H'}\langle \Phi_H^P | \hat{H} -E|\Phi_{H'}^{P'} \rangle C_{H'}^{P'} = 0. \tag{1} \end{equation} $$

 

Full Configuration Interaction Theory

An alternative way to derive the last equation is to start from

 
$$ (\hat{H} -E)|\Psi_0\rangle = (\hat{H} -E)\sum_{P'H'}C_{H'}^{P'}|\Phi_{H'}^{P'} \rangle=0, $$

 
and if this equation is successively projected against all \( \Phi_H^P \) in the expansion of \( \Psi_0 \), then the last equation on the previous slide results. As stated previously, one normally solves this equation by diagonalization. If we are able to solve this equation exactly (that is, numerically exactly) in a large Hilbert space (which will be truncated in terms of the number of single-particle states included in the definition of the Slater determinants), it can then serve as a benchmark for other many-body methods which approximate the correlation operator \( \hat{C} \).
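
As a concrete illustration of "solving by diagonalization", the sketch below diagonalizes a small random symmetric matrix standing in for \( \langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle \) (the matrix elements themselves are hypothetical): the lowest eigenvalue plays the role of \( E \) and the corresponding eigenvector contains the coefficients \( C_H^P \).

```python
import numpy as np

# Hypothetical Hamiltonian matrix in a small basis of Slater determinants,
# with index 0 playing the role of |Phi_0>
rng = np.random.default_rng(42)
dim = 6
h = rng.normal(size=(dim, dim))
h = 0.5 * (h + h.T)                 # make it real symmetric (Hermitian)

# eigh returns the eigenvalues of a symmetric matrix in ascending order
energies, coefficients = np.linalg.eigh(h)
e_ground = energies[0]
c = coefficients[:, 0]              # the coefficients C_H^P of the ground state

# Check the eigenvalue equation sum_{P'H'} <PH|H|P'H'> C_{H'}^{P'} = E C_H^P
print(np.allclose(h @ c, e_ground * c))
print("ground-state energy:", e_ground)
```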

Example of a Hamiltonian matrix

Suppose, as an example, that we have six fermions below the Fermi level. This means that we can make at most \( 6p-6h \) excitations. If we have an infinite number of single-particle states above the Fermi level, we will obviously have an infinite number of, say, \( 2p-2h \) excitations. Each such way to configure the particles is called a configuration. We will always have to truncate the basis of single-particle states. This gives us a finite number of possible Slater determinants. Our Hamiltonian matrix would then look like (where each block can have a large dimensionality):

\( 0p-0h \) \( 1p-1h \) \( 2p-2h \) \( 3p-3h \) \( 4p-4h \) \( 5p-5h \) \( 6p-6h \)
\( 0p-0h \) x x x 0 0 0 0
\( 1p-1h \) x x x x 0 0 0
\( 2p-2h \) x x x x x 0 0
\( 3p-3h \) 0 x x x x x 0
\( 4p-4h \) 0 0 x x x x x
\( 5p-5h \) 0 0 0 x x x x
\( 6p-6h \) 0 0 0 0 x x x

with a two-body force. Why are there non-zero blocks of elements?
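
The pattern follows from the fact that a two-body operator can change the number of particle-hole pairs by at most two, so blocks whose particle-hole ranks differ by more than two vanish. The short sketch below uses nothing beyond this counting rule and reproduces the block pattern of the table above.

```python
# Block pattern of <np-nh|H|mp-mh> for a Hamiltonian with at most two-body terms:
# a block is non-zero only when the particle-hole ranks differ by at most two.
for n in range(7):
    row = ["x" if abs(n - m) <= 2 else "0" for m in range(7)]
    print(f"{n}p-{n}h  " + " ".join(row))
```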

Example of a Hamiltonian matrix with a Hartree-Fock basis

If we use a Hartree-Fock basis, this corresponds to a particular unitary transformation where matrix elements of the type \( \langle 0p-0h \vert \hat{H} \vert 1p-1h\rangle =\langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle=0 \) and our Hamiltonian matrix becomes

\( 0p-0h \) \( 1p-1h \) \( 2p-2h \) \( 3p-3h \) \( 4p-4h \) \( 5p-5h \) \( 6p-6h \)
\( 0p-0h \) \( \tilde{x} \) 0 \( \tilde{x} \) 0 0 0 0
\( 1p-1h \) 0 \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) 0 0 0
\( 2p-2h \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) 0 0
\( 3p-3h \) 0 \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) 0
\( 4p-4h \) 0 0 \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \)
\( 5p-5h \) 0 0 0 \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \)
\( 6p-6h \) 0 0 0 0 \( \tilde{x} \) \( \tilde{x} \) \( \tilde{x} \)

Shell-model jargon

If we do not make any truncations in the possible sets of Slater determinants (many-body states) we can make by distributing \( A \) nucleons among \( n \) single-particle states, we call such a calculation Full Configuration Interaction theory.

If we make truncations, we have different possibilities

  • The standard nuclear shell-model. Here we define an effective Hilbert space with respect to a given core. The calculations are normally then performed for all many-body states that can be constructed from the effective Hilbert spaces. This approach requires a properly defined effective Hamiltonian
  • We can truncate in the number of excitations. For example, we can limit the possible Slater determinants to only \( 1p-1h \) and \( 2p-2h \) excitations. This is called a configuration interaction calculation at the level of singles and doubles excitations, or just CISD.
  • We can limit the number of excitations in terms of the excitation energies. If we do not define a core, this defines normally what is called the no-core shell-model approach.

What happens if we have a three-body interaction and a Hartree-Fock basis?

FCI and the exponential growth

Full configuration interaction theory calculations provide, in principle, if we can diagonalize numerically, all states of interest. However, the dimensionality of the problem quickly explodes.

The total number of Slater determinants which can be built with say \( N \) neutrons distributed among \( n \) single particle states is

 
$$ \left (\begin{array}{c} n \\ N\end{array} \right) =\frac{n!}{(n-N)!N!}. $$

 

As an example, for a model space which comprises only the first four major harmonic oscillator shells, that is the \( 0s \), \( 0p \), \( 1s0d \) and \( 1p0f \) shells, we have \( 40 \) single-particle states each for neutrons and protons. For the eight neutrons of oxygen-16 we would then have

 
$$ \left (\begin{array}{c} 40 \\ 8\end{array} \right) =\frac{40!}{(32)!8!}\sim 8\times 10^{7}, $$

 
possible Slater determinants. Multiplying this by the number of proton Slater determinants we end up with approximately \( d\sim 10^{15} \) possible Slater determinants and a Hamiltonian matrix of dimension \( 10^{15}\times 10^{15} \), an intractable problem if we wish to diagonalize the full Hamiltonian matrix.
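
These counting estimates are easy to reproduce, for instance with Python's math.comb:

```python
from math import comb

n_sp = 40         # single-particle states in the 0s, 0p, 1s0d and 1p0f shells
n_neutrons = 8    # the eight neutrons of oxygen-16
n_protons = 8

d_neutrons = comb(n_sp, n_neutrons)     # about 7.7e7 neutron Slater determinants
d_protons = comb(n_sp, n_protons)
d_total = d_neutrons * d_protons        # about 6e15 Slater determinants in total

print(f"{d_neutrons:.2e}  {d_total:.2e}")
```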

Exponential wall

This number can be reduced if we look at specific symmetries only. However, the dimensionality explodes quickly!

  • For Hamiltonian matrices of dimensionalities smaller than \( d\sim 10^5 \), we would use so-called direct methods for diagonalizing the Hamiltonian matrix.
  • For larger dimensionalities, iterative eigenvalue solvers like the Lanczos method are used (see the sketch below). The most efficient codes at present can handle matrices of \( d\sim 10^{10} \).
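
As an illustration of the second point, a Lanczos-type iterative solver is available through SciPy's ARPACK interface; the sketch below applies it to a random sparse symmetric matrix (a stand-in for the Hamiltonian, not a realistic one) and extracts only the few lowest eigenvalues without ever forming a dense matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh   # ARPACK: implicitly restarted Lanczos

# Hypothetical sparse, symmetric matrix standing in for the Hamiltonian
dim = 2000
h = sp.random(dim, dim, density=1e-3, format="csr", random_state=0)
h = 0.5 * (h + h.T) + sp.diags(np.arange(dim, dtype=float))

# Only a few of the lowest eigenpairs are computed; the matrix enters
# exclusively through matrix-vector products, as in a Lanczos iteration.
energies, vectors = eigsh(h, k=3, which="SA")
print(np.sort(energies))
```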

A non-practical way of solving the eigenvalue problem

To see this, we look at the contributions arising from

 
$$ \langle \Phi_H^P | = \langle \Phi_0| $$

 
in Eq. (1), that is, we multiply from the left with \( \langle \Phi_0 | \) in

 
$$ (\hat{H} -E)\sum_{P'H'}C_{H'}^{P'}|\Phi_{H'}^{P'} \rangle=0. $$

 
If we assume that we have at most a two-body operator, the Slater-Condon rules then give an equation for the correlation energy in terms of \( C_i^a \) and \( C_{ij}^{ab} \) only. We then get

 
$$ \langle \Phi_0 | \hat{H} -E| \Phi_0\rangle + \sum_{ai}\langle \Phi_0 | \hat{H} -E|\Phi_{i}^{a} \rangle C_{i}^{a}+ \sum_{abij}\langle \Phi_0 | \hat{H} -E|\Phi_{ij}^{ab} \rangle C_{ij}^{ab}=0, $$

 
or

 
$$ E-E_0 =\Delta E=\sum_{ai}\langle \Phi_0 | \hat{H}|\Phi_{i}^{a} \rangle C_{i}^{a}+ \sum_{abij}\langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab} \rangle C_{ij}^{ab}, $$

 
where the energy \( E_0 \) is the reference energy and \( \Delta E \) defines the so-called correlation energy. The single-particle basis functions could be the results of a Hartree-Fock calculation or just the eigenstates of the non-interacting part of the Hamiltonian.
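
The relation above is nothing but the first row of the eigenvalue problem. A quick numerical check, using a random symmetric matrix as a stand-in for the Hamiltonian, its first basis state playing the role of \( \Phi_0 \), and intermediate normalization \( C_0=1 \):

```python
import numpy as np

# Hypothetical Hamiltonian matrix; index 0 plays the role of |Phi_0>
rng = np.random.default_rng(7)
h = rng.normal(size=(8, 8))
h = 0.5 * (h + h.T)

energies, vectors = np.linalg.eigh(h)
e = energies[0]
c = vectors[:, 0] / vectors[0, 0]     # intermediate normalization C_0 = 1 (assumes C_0 != 0)

e_ref = h[0, 0]                       # <Phi_0|H|Phi_0>, the reference energy E_0
delta_e = h[0, 1:] @ c[1:]            # sum of <Phi_0|H|excited> times the coefficients

print(np.isclose(e, e_ref + delta_e))  # E = E_0 + Delta E
```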


Rewriting the FCI equation

In our notes on Hartree-Fock calculations, we have already computed the matrix elements \( \langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle \) and \( \langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab}\rangle \). If we are using a Hartree-Fock basis, then the matrix elements \( \langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle=0 \) and we are left with a correlation energy given by

 
$$ E-E_0 =\Delta E^{HF}=\sum_{abij}\langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab} \rangle C_{ij}^{ab}. $$

 

Rewriting the FCI equation

Inserting the various matrix elements we can rewrite the previous equation as

 
$$ \Delta E=\sum_{ai}\langle i| \hat{f}|a \rangle C_{i}^{a}+ \sum_{abij}\langle ij | \hat{v}| ab \rangle C_{ij}^{ab}. $$

 
This equation determines the correlation energy but not the coefficients \( C \).

Rewriting the FCI equation, does not stop here

We need more equations. Our next step is to set up

 
$$ \langle \Phi_i^a | \hat{H} -E| \Phi_0\rangle + \sum_{bj}\langle \Phi_i^a | \hat{H} -E|\Phi_{j}^{b} \rangle C_{j}^{b}+ \sum_{bcjk}\langle \Phi_i^a | \hat{H} -E|\Phi_{jk}^{bc} \rangle C_{jk}^{bc}+ \sum_{bcdjkl}\langle \Phi_i^a | \hat{H} -E|\Phi_{jkl}^{bcd} \rangle C_{jkl}^{bcd}=0, $$

 
as this equation will allow us to find an expression for the coefficients \( C_i^a \), since we can rewrite it as

 
$$ \langle i | \hat{f}| a\rangle +\langle \Phi_i^a | \hat{H}|\Phi_{i}^{a} \rangle C_{i}^{a}+ \sum_{bj\ne ai}\langle \Phi_i^a | \hat{H}|\Phi_{j}^{b} \rangle C_{j}^{b}+ \sum_{bcjk}\langle \Phi_i^a | \hat{H}|\Phi_{jk}^{bc} \rangle C_{jk}^{bc}+ \sum_{bcdjkl}\langle \Phi_i^a | \hat{H}|\Phi_{jkl}^{bcd} \rangle C_{jkl}^{bcd}=EC_i^a. $$

 

Rewriting the FCI equation, please stop here

We see that on the right-hand side we have the energy \( E \). This leads to a non-linear equation in the unknown coefficients. These equations are normally solved iteratively (that is, we start with a guess for the coefficients \( C_i^a \)). A common choice is to use perturbation theory for the first guess, setting

 
$$ C_{i}^{a}=\frac{\langle i | \hat{f}| a\rangle}{\epsilon_i-\epsilon_a}. $$

 

Rewriting the FCI equation, more to add

The observant reader will, however, see that we need an equation for \( C_{jk}^{bc} \) and \( C_{jkl}^{bcd} \) as well. To find equations for these coefficients we then need to continue our multiplications from the left with the various \( \Phi_{H}^P \) terms.

For \( C_{jk}^{bc} \) we then need

 
$$ \langle \Phi_{ij}^{ab} | \hat{H} -E| \Phi_0\rangle + \sum_{kc}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{k}^{c} \rangle C_{k}^{c}+ $$

 

 
$$ \sum_{cdkl}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{kl}^{cd} \rangle C_{kl}^{cd}+\sum_{cdeklm}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{klm}^{cde} \rangle C_{klm}^{cde}+\sum_{cdefklmn}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{klmn}^{cdef} \rangle C_{klmn}^{cdef}=0, $$

 
and we can isolate the coefficients \( C_{kl}^{cd} \) in a similar way as we did for the coefficients \( C_{i}^{a} \).

Rewriting the FCI equation, more to add

A standard choice for the first iteration is to set

 
$$ C_{ij}^{ab} =\frac{\langle ij \vert \hat{v} \vert ab \rangle}{\epsilon_i+\epsilon_j-\epsilon_a-\epsilon_b}. $$

 
At the end we can rewrite our solution of the Schrödinger equation in terms of \( n \) coupled equations for the coefficients \( C_H^P \). This is a very cumbersome way of solving the equation. However, by using this iterative scheme we can illustrate how to compute the various terms in the wave operator or correlation operator \( \hat{C} \). We will later identify the calculation of the various terms \( C_H^P \) as parts of different many-body approximations to full CI. In particular, we can relate this non-linear scheme to Coupled Cluster theory and many-body perturbation theory.
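
To illustrate the first iteration only, the sketch below builds the perturbative starting guesses for \( C_i^a \) and \( C_{ij}^{ab} \) from hypothetical one-body (\( \hat{f} \)) and two-body (\( \hat{v} \)) matrix elements and single-particle energies, and evaluates the corresponding correlation energy \( \Delta E \). All arrays are random stand-ins, not real interaction matrix elements.

```python
import numpy as np

rng = np.random.default_rng(3)
n_hole, n_part = 4, 6      # hole states i,j,... and particle states a,b,...

# Hypothetical single-particle energies and matrix elements (random stand-ins)
eps_h = np.sort(rng.uniform(-10.0, -1.0, n_hole))                     # epsilon_i
eps_p = np.sort(rng.uniform(1.0, 10.0, n_part))                       # epsilon_a
f_ia = rng.normal(scale=0.1, size=(n_hole, n_part))                   # <i|f|a>
v_ijab = rng.normal(scale=0.1, size=(n_hole, n_hole, n_part, n_part)) # <ij|v|ab>

# Perturbative first guesses for the coefficients
c_ia = f_ia / (eps_h[:, None] - eps_p[None, :])
denom = (eps_h[:, None, None, None] + eps_h[None, :, None, None]
         - eps_p[None, None, :, None] - eps_p[None, None, None, :])
c_ijab = v_ijab / denom

# Correlation energy evaluated with these first-iteration coefficients
delta_e = np.sum(f_ia * c_ia) + np.sum(v_ijab * c_ijab)
print(delta_e)
```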

Summarizing FCI and bringing in approximative methods

If we can diagonalize large matrices, FCI is the method of choice since:

  • It gives all eigenvalues, ground state and excited states
  • The eigenvectors are obtained directly from the coefficients \( C_H^P \) which result from the diagonalization
  • We can easily compute expectation values of other operators, as well as transition probabilities
  • Correlations are easy to understand in terms of contributions to a given operator beyond the Hartree-Fock contribution. This is the standard approach in many-body theory.

Definition of the correlation energy

The correlation energy is defined as, with a two-body Hamiltonian,

 
$$ \Delta E=\sum_{ai}\langle i| \hat{f}|a \rangle C_{i}^{a}+ \sum_{abij}\langle ij | \hat{v}| ab \rangle C_{ij}^{ab}. $$

 
The coefficients \( C \) result from the solution of the eigenvalue problem. The energy of say the ground state is then

 
$$ E=E_{ref}+\Delta E, $$

 
where the so-called reference energy is the energy we obtain from a Hartree-Fock calculation, that is

 
$$ E_{ref}=\langle \Phi_0 \vert \hat{H} \vert \Phi_0 \rangle. $$

 

FCI equation and the coefficients

However, as we have seen, even for a small case like the first four major shells and a nucleus like oxygen-16, the dimensionality quickly becomes intractable. If we wish to include single-particle states that reflect weakly bound systems, we need a much larger single-particle basis. We thus need approximative methods that sum specific correlations to infinite order.

Popular methods include many-body perturbation theory and Coupled Cluster theory, which we will relate to the iterative scheme sketched above.

All these methods normally start with a Hartree-Fock basis as the calculational basis.

Building a many-body basis

Here we will discuss how to set up a single-particle basis which we can use in the various parts of our projects, from the simple pairing model to infinite nuclear matter. We will use the simple pairing model in particular to illustrate how to set up a single-particle basis. We will also use this to discuss standard FCI approaches like:

  1. Standard shell-model basis in one or two major shells
  2. Full CI in a given basis and no truncations
  3. CISD and CISDT approximations
  4. No-core shell model and truncation in excitation energy

Building a many-body basis

An important step in an FCI code is to construct the many-body basis.

While the formalism is independent of the choice of basis, the effectiveness of a calculation will certainly be basis dependent.

Furthermore, there are common conventions that are useful to know.

First, the single-particle basis has angular momentum as a good quantum number. You can imagine the single-particle wavefunctions being generated by a one-body Hamiltonian, for example a harmonic oscillator. Modifications include the harmonic oscillator plus spin-orbit splitting, self-consistent mean-field potentials, or the Woods-Saxon potential, which mocks up the self-consistent mean field. For nuclei, the harmonic oscillator, modified by spin-orbit splitting, provides a useful language for describing single-particle states.

Building a many-body basis

Each single-particle state is labeled by the following quantum numbers:

  • Orbital angular momentum \( l \)
  • Intrinsic spin \( s \) = 1/2 for protons and neutrons
  • Angular momentum \( j = l \pm 1/2 \)
  • \( z \)-component \( j_z \) (or \( m \))
  • Some labeling of the radial wavefunction, typically \( n \), the number of nodes in the radial wavefunction; in the case of the harmonic oscillator one can also use the principal quantum number \( N \), where the harmonic oscillator energy is \( (N+3/2)\hbar \omega \).

In this format one labels states by \( n(l)_j \), with \( (l) \) replaced by a letter: \( s \) for \( l=0 \), \( p \) for \( l=1 \), \( d \) for \( l=2 \), \( f \) for \( l=3 \), and thenceforth alphabetical.

Building a many-body basis

In practice the single-particle space has to be severely truncated. This truncation is typically based upon the single-particle energies, which are the effective energies from a mean-field potential.

Sometimes we freeze the core and only consider a valence space. For example, one may assume a frozen \( {}^{4}\mbox{He} \) core, with two protons and two neutrons in the \( 0s_{1/2} \) shell, and then only allow active particles in the \( 0p_{1/2} \) and \( 0p_{3/2} \) orbits.

Another example is a frozen \( {}^{16}\mbox{O} \) core, with eight protons and eight neutrons filling the \( 0s_{1/2} \), \( 0p_{1/2} \) and \( 0p_{3/2} \) orbits, with valence particles in the \( 0d_{5/2}, 1s_{1/2} \) and \( 0d_{3/2} \) orbits.

Sometimes we refer to nuclei by the valence space where their last nucleons go. So, for example, we call \( {}^{12}\mbox{C} \) a \( p \)-shell nucleus, while \( {}^{26}\mbox{Al} \) is an \( sd \)-shell nucleus and \( {}^{56}\mbox{Fe} \) is a \( pf \)-shell nucleus.

Building a many-body basis

There are different kinds of truncations.

  • For example, one can start with `filled' orbits (almost always the lowest), and then allow one, two, three... particles excited out of those filled orbits. These are called 1p-1h, 2p-2h, 3p-3h excitations.
  • Alternately, one can state a maximal orbit and allow all possible configurations with particles occupying states up to that maximum. This is called full configuration.
  • Finally, for particular use in nuclear physics, there is the energy truncation, also called the \( N\hbar\Omega \) or \( N_{max} \) truncation.

Building a many-body basis

Here one works in a harmonic oscillator basis, with each major oscillator shell assigned a principal quantum number \( N=0,1,2,3,... \). In the \( N\hbar\Omega \) or \( N_{max} \) truncation, any configuration is assigned a noninteracting energy, which is the sum of the single-particle harmonic oscillator energies. (Thus this ignores spin-orbit splitting.)

Excited states are labeled relative to the lowest configuration by the number of harmonic oscillator quanta.

This truncation is useful because if one includes all configurations up to some \( N_{max} \), and has a translationally invariant interaction, then the intrinsic motion and the center-of-mass motion factorize. In other words, we can know exactly the center-of-mass wavefunction.
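
Counting the oscillator quanta of a configuration is straightforward once each single-particle orbit carries its principal quantum number \( N=2n+l \). A small sketch with hypothetical configurations:

```python
# Each occupied harmonic oscillator orbit contributes N = 2n + l quanta
def total_quanta(configuration):
    """configuration is a list of (n, l) pairs for the occupied states."""
    return sum(2 * n + l for n, l in configuration)

# Hypothetical example: a reference configuration and an excited one
reference = [(0, 0), (0, 0), (0, 1), (0, 1)]   # filled 0s plus two particles in 0p
excited   = [(0, 0), (0, 0), (0, 1), (1, 0)]   # one particle lifted from 0p to 1s

n_hw = total_quanta(excited) - total_quanta(reference)
print(n_hw)   # number of hbar*Omega excitation quanta; keep only those <= N_max
```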

In almost all cases, the many-body Hamiltonian is rotationally invariant. This means it commutes with the operators \( \hat{J}^2, \hat{J}_z \) and so eigenstates will have good \( J,M \). Furthermore, the eigenenergies do not depend upon the orientation \( M \).

Therefore we can choose to construct a many-body basis which has fixed \( M \); this is called an \( M \)-scheme basis.

Alternately, one can construct a many-body basis which has fixed \( J \), or a \( J \)-scheme basis.

Building a many-body basis

The Hamiltonian matrix will have smaller dimensions (a factor of 10 or more) in the \( J \)-scheme than in the \( M \)-scheme. On the other hand, as we'll show in the next slide, the \( M \)-scheme is very easy to construct with Slater determinants, while the \( J \)-scheme basis states, and thus the matrix elements, are more complicated, almost always being linear combinations of \( M \)-scheme states. \( J \)-scheme bases are important and useful, but we'll focus on the simpler \( M \)-scheme.

The quantum number \( m \) is additive (because the underlying group is Abelian): if a Slater determinant \( \hat{a}_i^\dagger \hat{a}^\dagger_j \hat{a}^\dagger_k \ldots | 0 \rangle \) is built from single-particle states all with good \( m \), then the total

 
$$ M = m_i + m_j + m_k + \ldots $$

 
This is not true of \( J \), because the angular momentum group SU(2) is not Abelian.

Building a many-body basis

The upshot is that

  • It is easy to construct a Slater determinant with good total \( M \);
  • It is trivial to calculate \( M \) for each Slater determinant;
  • So it is easy to construct an \( M \)-scheme basis with fixed total \( M \).

Note that the individual \( M \)-scheme basis states will not, in general, have good total \( J \). Because the Hamiltonian is rotationally invariant, however, the eigenstates will have good \( J \). (The situation is muddied when one has states of different \( J \) that are nonetheless degenerate.)

Building a many-body basis

Example: two \( j=1/2 \) orbits

Index \( n \) \( l \) \( j \) \( m_j \)
1 0 0 1/2 -1/2
2 0 0 1/2 1/2
3 1 0 1/2 -1/2
4 1 0 1/2 1/2

Note that the order is arbitrary.

Building a many-body basis

There are \( \left ( \begin{array}{c} 4 \\ 2 \end{array} \right) = 6 \) two-particle states, which we list with the total \( M \):

Occupied \( M \)
1,2 0
1,3 -1
1,4 0
2,3 0
2,4 1
3,4 0
There are 4 states with \( M= 0 \), and 1 each with \( M = \pm 1 \).

Building a many-body basis

As another example, consider using only single particle states from the \( 0d_{5/2} \) space. They have the following quantum numbers

Index \( n \) \( l \) \( j \) \( m_j \)
1 0 2 5/2 -5/2
2 0 2 5/2 -3/2
3 0 2 5/2 -1/2
4 0 2 5/2 1/2
5 0 2 5/2 3/2
6 0 2 5/2 5/2

Building a many-body basis

There are \( \left ( \begin{array}{c} 6 \\ 2 \end{array} \right) = 15 \) two-particle states, which we list with the total \( M \):

Occupied \( M \) Occupied \( M \) Occupied \( M \)
1,2 -4 2,3 -2 3,5 1
1,3 -3 2,4 -1 3,6 2
1,4 -2 2,5 0 4,5 2
1,5 -1 2,6 1 4,6 3
1,6 0 3,4 0 5,6 4

There are 3 states with \( M= 0 \), 2 with \( M = 1 \), and so on.
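
Both tables are easy to generate by brute force: list the \( m_j \) values of the single-particle states, enumerate all pairs with itertools.combinations, and tally the total \( M \). The sketch below reproduces the counts quoted above for the two \( j=1/2 \) orbits and for the \( 0d_{5/2} \) space.

```python
from collections import Counter
from fractions import Fraction
from itertools import combinations

def m_scheme_count(m_values, n_particles):
    """Count the many-body states of each total M built from the given m_j values."""
    return Counter(sum(occ) for occ in combinations(m_values, n_particles))

# Two j=1/2 orbits: four single-particle states with m_j = -1/2, +1/2 twice
two_j_half = [Fraction(-1, 2), Fraction(1, 2)] * 2
print(m_scheme_count(two_j_half, 2))
# -> four states with M = 0 and one each with M = +1 and M = -1

# The 0d5/2 space: m_j = -5/2, -3/2, ..., +5/2
d5_half = [Fraction(m, 2) for m in range(-5, 6, 2)]
print(m_scheme_count(d5_half, 2))
# -> 15 states in total: 3 with M = 0, 2 each with M = +-1 and +-2, 1 each with M = +-3 and +-4
```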