Week 39: Full configuration interaction theory

Morten Hjorth-Jensen [1, 2]
[1] Department of Physics and Center for Computing in Science Education, University of Oslo, Norway
[2] Department of Physics and Astronomy and Facility for Rare Isotope Beams, Michigan State University, USA

Week 39, September 23-27, 2024

Full Configuration Interaction Theory

We have defined the ansatz for the ground state as

$$ |\Phi_0\rangle = \left(\prod_{i\le F}\hat{a}_{i}^{\dagger}\right)|0\rangle, $$

where the index \( i \) defines different single-particle states up to the Fermi level. We have assumed that we have \( N \) fermions.

One-particle-one-hole state

A given one-particle-one-hole (\( 1p1h \)) state can be written as

$$ |\Phi_i^a\rangle = \hat{a}_{a}^{\dagger}\hat{a}_i|\Phi_0\rangle, $$

while a \( 2p2h \) state can be written as

$$ |\Phi_{ij}^{ab}\rangle = \hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_j\hat{a}_i|\Phi_0\rangle, $$

and a general \( NpNh \) state as

$$ |\Phi_{ijk\dots}^{abc\dots}\rangle = \hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{c}^{\dagger}\dots\hat{a}_k\hat{a}_j\hat{a}_i|\Phi_0\rangle. $$

Full Configuration Interaction Theory

We can then expand our exact state function for the ground state as

$$ |\Psi_0\rangle=C_0|\Phi_0\rangle+\sum_{ai}C_i^a|\Phi_i^a\rangle+\sum_{abij}C_{ij}^{ab}|\Phi_{ij}^{ab}\rangle+\dots =(C_0+\hat{C})|\Phi_0\rangle, $$

where we have introduced the so-called correlation operator

$$ \hat{C}=\sum_{ai}C_i^a\hat{a}_{a}^{\dagger}\hat{a}_i +\sum_{abij}C_{ij}^{ab}\hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_j\hat{a}_i+\dots $$

Intermediate normalization

Since the normalization of \( \Psi_0 \) is at our disposal and since \( C_0 \) is by hypothesis non-zero, we may arbitrarily set \( C_0=1 \) with corresponding proportional changes in all other coefficients. Using this so-called intermediate normalization we have

$$ \langle \Psi_0 | \Phi_0 \rangle = \langle \Phi_0 | \Phi_0 \rangle = 1, $$

resulting in

$$ |\Psi_0\rangle=(1+\hat{C})|\Phi_0\rangle. $$

Full Configuration Interaction Theory

We rewrite

$$ |\Psi_0\rangle=C_0|\Phi_0\rangle+\sum_{ai}C_i^a|\Phi_i^a\rangle+\sum_{abij}C_{ij}^{ab}|\Phi_{ij}^{ab}\rangle+\dots, $$

in a more compact form as

$$ |\Psi_0\rangle=\sum_{PH}C_H^P|\Phi_H^P\rangle=\left(\sum_{PH}C_H^P\hat{A}_H^P\right)|\Phi_0\rangle, $$

where \( H \) stands for \( 0,1,\dots,n \) hole states and \( P \) for \( 0,1,\dots,n \) particle states.

Compact expression of correlated part

We have introduced the operator \( \hat{A}_H^P \) which contains an equal number of creation and annihilation operators.

Our requirement of unit normalization gives

$$ \langle \Psi_0 | \Psi_0 \rangle = \sum_{PH}|C_H^P|^2= 1, $$

and the energy can be written as

$$ E= \langle \Psi_0 | \hat{H} |\Psi_0 \rangle= \sum_{PP'HH'}C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}. $$

Full Configuration Interaction Theory

Normally

$$ E= \langle \Psi_0 | \hat{H} |\Psi_0 \rangle= \sum_{PP'HH'}C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}, $$

is solved by diagonalization setting up the Hamiltonian matrix defined by the basis of all possible Slater determinants. A diagonalization

is equivalent to finding the variational minimum of

$$ \langle \Psi_0 | \hat{H} |\Psi_0 \rangle-\lambda \langle \Psi_0 |\Psi_0 \rangle, $$

where \( \lambda \) is a variational multiplier to be identified with the energy of the system.

Minimization

The minimization process results in

$$ \delta\left[ \langle \Psi_0 | \hat{H} |\Psi_0 \rangle-\lambda \langle \Psi_0 |\Psi_0 \rangle\right]=0, $$

and since the variations \( \delta[C_H^{*P}] \) and \( \delta[C_{H}^{P}] \) are complex conjugates of each other, it is necessary and sufficient to require that the quantities multiplying \( \delta[C_H^{*P}] \) vanish. Varying with respect to the coefficients \( C_H^{*P} \) we then have

$$ \sum_{PH}\delta[C_H^{*P}]\left\{\sum_{P'H'}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}- \lambda C_{H}^{P}\right\} = 0. $$

Full Configuration Interaction Theory

This leads to

$$ \sum_{P'H'}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}-\lambda C_H^{P}=0, $$

for all sets of \( P \) and \( H \).

If we then multiply by the corresponding \( C_H^{*P} \) and sum over \( PH \) we obtain

$$ \sum_{PP'HH'}C_H^{*P}\langle \Phi_H^P | \hat{H} |\Phi_{H'}^{P'} \rangle C_{H'}^{P'}-\lambda\sum_{PH}|C_H^P|^2=0, $$

leading to the identification \( \lambda = E \).

Full Configuration Interaction Theory

An alternative way to derive the last equation is to start from

$$ (\hat{H} -E)|\Psi_0\rangle = (\hat{H} -E)\sum_{P'H'}C_{H'}^{P'}|\Phi_{H'}^{P'} \rangle=0, $$

and if this equation is successively projected against all \( \Phi_H^P \) in the expansion of \( \Psi \), then the last equation on the previous slide results. As stated previously, one solves this equation normally by diagonalization. If we are able to solve this equation exactly (that is numerically exactly) in a large Hilbert space (it will be truncated in terms of the number of single-particle states included in the definition of Slater determinants), it can then serve as a benchmark for other many-body methods which approximate the correlation operator \( \hat{C} \).
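
To make the diagonalization step concrete, the following minimal sketch (with a small random symmetric matrix as a placeholder for actual Slater-determinant matrix elements) diagonalizes a toy Hamiltonian matrix; the lowest eigenvalue plays the role of the ground-state energy and the corresponding eigenvector contains the coefficients \( C_H^P \).

import numpy as np

np.random.seed(2024)
dim = 6  # placeholder: number of Slater determinants in a toy basis
# Placeholder Hamiltonian matrix; a real calculation fills this with matrix
# elements between Slater determinants
A = np.random.rand(dim, dim)
H = 0.5 * (A + A.T)  # symmetrize (the Hamiltonian matrix is Hermitian)
eigenvalues, eigenvectors = np.linalg.eigh(H)
print("Ground state energy:", eigenvalues[0])
print("Coefficients C_H^P:", eigenvectors[:, 0])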

FCI and the exponential growth

Full configuration interaction theory calculations provide, in principle, all states of interest, provided that we are able to diagonalize the Hamiltonian matrix numerically. The dimensionality of the problem, however, explodes quickly.

The total number of Slater determinants which can be built with say \( N \) neutrons distributed among \( n \) single particle states is

$$ \left (\begin{array}{c} n \\ N\end{array} \right) =\frac{n!}{(n-N)!N!}. $$

For a model space which comprises only the first four major shells, \( 0s \), \( 0p \), \( 1s0d \) and \( 1p0f \), we have \( 40 \) single-particle states for neutrons and \( 40 \) for protons. For the eight neutrons of oxygen-16 we would then have

$$ \left (\begin{array}{c} 40 \\ 8\end{array} \right) =\frac{40!}{(32)!8!}\approx 7.7\times 10^{7}, $$

and multiplying this with the number of proton Slater determinants we end up with a total dimensionality \( d \) of approximately \( d\sim 6\times 10^{15} \).
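
As a quick numerical check of these estimates (assuming the \( 40 \) single-particle states quoted above), the binomial coefficients can be evaluated directly with a few lines of Python.

import math

n_sp = 40       # single-particle states in the 0s, 0p, 1s0d and 1p0f shells
n_neutrons = 8  # neutrons in oxygen-16
n_protons = 8   # protons in oxygen-16

dim_neutrons = math.comb(n_sp, n_neutrons)  # number of neutron Slater determinants
dim_protons = math.comb(n_sp, n_protons)    # number of proton Slater determinants
print(f"Neutron Slater determinants: {dim_neutrons:.2e}")
print(f"Total dimensionality d: {dim_neutrons * dim_protons:.2e}")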

Exponential wall

This number can be reduced if we look at specific symmetries only. However, the dimensionality explodes quickly!

  • For Hamiltonian matrices of dimensionalities which are smaller than \( d\sim 10^5 \), we would use so-called direct methods for diagonalizing the Hamiltonian matrix
  • For larger dimensionalities iterative eigenvalue solvers like Lanczos' method are used. The most efficient codes at present can handle matrices of \( d\sim 10^{10} \); a minimal sketch of such an iterative solver call follows below.
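
The sketch below illustrates the iterative (Lanczos-type) strategy with the sparse eigensolver available in SciPy, using a random sparse symmetric matrix as a stand-in for a Hamiltonian matrix in a Slater-determinant basis; the dimensionality and matrix content are placeholders only.

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

dim = 10000  # placeholder dimensionality; realistic FCI matrices are much larger
# Random sparse matrix as a stand-in for the Hamiltonian, made symmetric
A = sparse_random(dim, dim, density=1e-3, format='csr', random_state=42)
H = 0.5 * (A + A.T)
# Lanczos-type iterative solver for the lowest (smallest algebraic) eigenvalue
eigenvalues, eigenvectors = eigsh(H, k=1, which='SA')
print("Lowest eigenvalue:", eigenvalues[0])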

A non-practical way of solving the eigenvalue problem

To see this, we look at the contributions arising from

$$ \langle \Phi_H^P | = \langle \Phi_0|, $$

that is we multiply with \( \langle \Phi_0 | \) from the left in

$$ (\hat{H} -E)\sum_{P'H'}C_{H'}^{P'}|\Phi_{H'}^{P'} \rangle=0. $$

Using the Condon-Slater rule

If we assume that the Hamiltonian contains at most a two-body operator, the Condon-Slater rules then give an equation for the correlation energy in terms of \( C_i^a \) and \( C_{ij}^{ab} \) only. We then get

$$ \langle \Phi_0 | \hat{H} -E| \Phi_0\rangle + \sum_{ai}\langle \Phi_0 | \hat{H} -E|\Phi_{i}^{a} \rangle C_{i}^{a}+ \sum_{abij}\langle \Phi_0 | \hat{H} -E|\Phi_{ij}^{ab} \rangle C_{ij}^{ab}=0, $$

or

$$ E-E_0 =\Delta E=\sum_{ai}\langle \Phi_0 | \hat{H}|\Phi_{i}^{a} \rangle C_{i}^{a}+ \sum_{abij}\langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab} \rangle C_{ij}^{ab}, $$

where the energy \( E_0 \) is the reference energy and \( \Delta E \) defines the so-called correlation energy. The single-particle basis functions could be the results of a Hartree-Fock calculation or just the eigenstates of the non-interacting part of the Hamiltonian.

Rewriting the FCI equation

In our discussions of the Hartree-Fock method planned for week 39, we are going to compute the elements \( \langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle \) and \( \langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab}\rangle \). If we are using a Hartree-Fock basis, then these quantities result in \( \langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle=0 \) and we are left with a correlation energy given by

$$ E-E_0 =\Delta E^{HF}=\sum_{abij}\langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab} \rangle C_{ij}^{ab}. $$

Rewriting the FCI equation

Inserting the various matrix elements we can rewrite the previous equation as

$$ \Delta E=\sum_{ai}\langle i| \hat{f}|a \rangle C_{i}^{a}+ \sum_{abij}\langle ij | \hat{v}| ab \rangle C_{ij}^{ab}. $$

This equation determines the correlation energy but not the coefficients \( C \).
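
Assuming the matrix elements \( \langle i|\hat{f}|a\rangle \) and \( \langle ij|\hat{v}|ab\rangle \) and the coefficients are stored as NumPy arrays with hole indices first and particle indices last (an index convention adopted only for this sketch), the correlation energy above reduces to two tensor contractions.

import numpy as np

n_hole, n_particle = 4, 6  # placeholder dimensions
rng = np.random.default_rng(1)
# Placeholder arrays; a real calculation fills these with actual matrix elements
f_ia = rng.standard_normal((n_hole, n_particle))                         # <i|f|a>
v_ijab = rng.standard_normal((n_hole, n_hole, n_particle, n_particle))   # <ij|v|ab>
C_ia = rng.standard_normal((n_hole, n_particle))                         # C_i^a
C_ijab = rng.standard_normal((n_hole, n_hole, n_particle, n_particle))   # C_ij^ab

# Delta E = sum_{ai} <i|f|a> C_i^a + sum_{abij} <ij|v|ab> C_ij^ab
delta_E = np.einsum('ia,ia->', f_ia, C_ia) + np.einsum('ijab,ijab->', v_ijab, C_ijab)
print("Correlation energy (placeholder numbers):", delta_E)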

Rewriting the FCI equation, does not stop here

We need more equations. Our next step is to set up

$$ \langle \Phi_i^a | \hat{H} -E| \Phi_0\rangle + \sum_{bj}\langle \Phi_i^a | \hat{H} -E|\Phi_{j}^{b} \rangle C_{j}^{b}+ \sum_{bcjk}\langle \Phi_i^a | \hat{H} -E|\Phi_{jk}^{bc} \rangle C_{jk}^{bc}+ \sum_{bcdjkl}\langle \Phi_i^a | \hat{H} -E|\Phi_{jkl}^{bcd} \rangle C_{jkl}^{bcd}=0. $$

Finding the coefficients

This equation will allow us to find an expression for the coefficients \( C_i^a \), since we can rewrite it as

$$ \langle i | \hat{f}| a\rangle +\langle \Phi_i^a | \hat{H}|\Phi_{i}^{a} \rangle C_{i}^{a}+ \sum_{bj\ne ai}\langle \Phi_i^a | \hat{H}|\Phi_{j}^{b} \rangle C_{j}^{b}+ \sum_{bcjk}\langle \Phi_i^a | \hat{H}|\Phi_{jk}^{bc} \rangle C_{jk}^{bc}+ \sum_{bcdjkl}\langle \Phi_i^a | \hat{H}|\Phi_{jkl}^{bcd} \rangle C_{jkl}^{bcd}=EC_i^a. $$

Rewriting the FCI equation

We see that on the right-hand side we have the energy \( E \). This leads to a non-linear equation in the unknown coefficients. These equations are normally solved iteratively (that is, we can start with a guess for the coefficients \( C_i^a \)). A common choice is to use perturbation theory for the first guess, setting thereby

$$ C_{i}^{a}=\frac{\langle i | \hat{f}| a\rangle}{\epsilon_i-\epsilon_a}. $$

Rewriting the FCI equation, more to add

The observant reader will however see that we need an equation for \( C_{jk}^{bc} \) and \( C_{jkl}^{bcd} \) as well. To find equations for these coefficients we need then to continue our multiplications from the left with the various \( \Phi_{H}^P \) terms.

For \( C_{jk}^{bc} \) we need then

$$ \langle \Phi_{ij}^{ab} | \hat{H} -E| \Phi_0\rangle + \sum_{kc}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{k}^{c} \rangle C_{k}^{c}+ $$ $$ \sum_{cdkl}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{kl}^{cd} \rangle C_{kl}^{cd}+\sum_{cdeklm}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{klm}^{cde} \rangle C_{klm}^{cde}+\sum_{cdefklmn}\langle \Phi_{ij}^{ab} | \hat{H} -E|\Phi_{klmn}^{cdef} \rangle C_{klmn}^{cdef}=0, $$

and we can isolate the coefficients \( C_{kl}^{cd} \) in a similar way as we did for the coefficients \( C_{i}^{a} \).

Rewriting the FCI equation, more to add

A standard choice for the first iteration is to set

$$ C_{ij}^{ab} =\frac{\langle ij \vert \hat{v} \vert ab \rangle}{\epsilon_i+\epsilon_j-\epsilon_a-\epsilon_b}. $$

At the end we can rewrite our solution of the Schroedinger equation in terms of \( n \) coupled equations for the coefficients \( C_H^P \). This is a very cumbersome way of solving the equation. However, by using this iterative scheme we can illustrate how we can compute the various terms in the wave operator or correlation operator \( \hat{C} \). We will later identify the calculation of the various terms \( C_H^P \) as parts of different many-body approximations to full CI. In particular, we can relate this non-linear scheme with Coupled Cluster theory and many-body perturbation theory.
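
A minimal sketch of the perturbative first guesses discussed above, assuming single-particle energies and matrix elements are available as NumPy arrays (array names and shapes are assumptions made only for this illustration), could look as follows.

import numpy as np

n_hole, n_particle = 4, 6  # placeholder dimensions
rng = np.random.default_rng(2)
eps_hole = np.sort(rng.uniform(-5.0, -0.5, n_hole))         # hole single-particle energies
eps_particle = np.sort(rng.uniform(0.5, 5.0, n_particle))   # particle single-particle energies
f_ia = rng.standard_normal((n_hole, n_particle))                        # <i|f|a>
v_ijab = rng.standard_normal((n_hole, n_hole, n_particle, n_particle))  # <ij|v|ab>

# Energy denominators eps_i - eps_a and eps_i + eps_j - eps_a - eps_b via broadcasting
D_ia = eps_hole[:, None] - eps_particle[None, :]
D_ijab = (eps_hole[:, None, None, None] + eps_hole[None, :, None, None]
          - eps_particle[None, None, :, None] - eps_particle[None, None, None, :])

# First (perturbative) guesses for the coefficients
C_ia = f_ia / D_ia
C_ijab = v_ijab / D_ijab
print(C_ia.shape, C_ijab.shape)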

Summarizing FCI and bringing in approximative methods

If we can diagonalize large matrices, FCI is the method of choice since:

  • It gives all eigenvalues, ground state and excited states
  • The eigenvectors are obtained directly from the coefficients \( C_H^P \) which result from the diagonalization
  • We can compute easily expectation values of other operators, as well as transition probabilities
  • Correlations are easy to understand in terms of contributions to a given operator beyond the Hartree-Fock contribution.

Definition of the correlation energy

The correlation energy is defined as, with a two-body Hamiltonian,

$$ \Delta E=\sum_{ai}\langle i| \hat{f}|a \rangle C_{i}^{a}+ \sum_{abij}\langle ij | \hat{v}| ab \rangle C_{ij}^{ab}. $$

The coefficients \( C \) result from the solution of the eigenvalue problem.

Ground state energy

The energy of say the ground state is then

$$ E=E_{ref}+\Delta E, $$

where the so-called reference energy is the energy we obtain from a Hartree-Fock calculation, that is

$$ E_{ref}=\langle \Phi_0 \vert \hat{H} \vert \Phi_0 \rangle. $$

Why Hartree-Fock?

Hartree-Fock (HF) theory is an algorithm for finding an approximative expression for the ground state of a given Hamiltonian. The basic ingredients are

$$ \hat{h}^{\mathrm{HF}}\psi_{\alpha} = \varepsilon_{\alpha}\psi_{\alpha} $$

with the Hartree-Fock Hamiltonian defined as

$$ \hat{h}^{\mathrm{HF}}=\hat{t}+\hat{u}_{\mathrm{ext}}+\hat{u}^{\mathrm{HF}} $$

Why Hartree-Fock?

  1. The term \( \hat{u}^{\mathrm{HF}} \) is a single-particle potential to be determined by the HF algorithm.
  2. The HF algorithm means to choose \( \hat{u}^{\mathrm{HF}} \) in order to have
$$ \langle \hat{H} \rangle = E^{\mathrm{HF}}= \langle \Phi_0 | \hat{H}|\Phi_0 \rangle $$

that is to find a local minimum with a Slater determinant \( \Phi_0 \) being the ansatz for the ground state.

  3. The variational principle ensures that \( E^{\mathrm{HF}} \ge E_0 \), with \( E_0 \) the exact ground state energy.

Why Hartree-Fock theory

We will show that the Hartree-Fock Hamiltonian \( \hat{h}^{\mathrm{HF}} \) equals our definition of the operator \( \hat{f} \) discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures), that is we have, for a specific matrix element

$$ \langle p |\hat{h}^{\mathrm{HF}}| q \rangle =\langle p |\hat{f}| q \rangle=\langle p|\hat{t}+\hat{u}_{\mathrm{ext}}|q \rangle +\sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}, $$

meaning that

$$ \langle p|\hat{u}^{\mathrm{HF}}|q\rangle = \sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}. $$
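
A small sketch of how the Hartree-Fock potential above could be assembled numerically, assuming the antisymmetrized two-body elements \( \langle pr|\hat{V}|qs\rangle_{AS} \) are stored as a four-index NumPy array with the occupied states placed first along each single-particle axis (an ordering assumed only for this example):

import numpy as np

n_states, n_occupied = 8, 3  # placeholder basis size and Fermi level
rng = np.random.default_rng(3)
V_AS = rng.standard_normal((n_states,) * 4)  # placeholder for <pr|V|qs>_AS

# u^HF_{pq} = sum_{i <= F} <pi|V|qi>_AS, summing only over occupied states i
u_HF = np.einsum('piqi->pq', V_AS[:, :n_occupied, :, :n_occupied])
print(u_HF.shape)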

Why Hartree-Fock theory

The so-called Hartree-Fock potential \( \hat{u}^{HF} \) brings an explicit medium dependence due to the summation over all single-particle states below the Fermi level \( F \). It also brings in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). The two-body interaction, with its contribution from the other bystanding fermions, creates an effective mean field in which a given fermion moves, in addition to the external potential \( \hat{u}_{\mathrm{ext}} \) which confines the motion of the fermion. For systems like nuclei, there is no external confining potential. Nuclei are examples of self-bound systems, where the binding arises due to the intrinsic nature of the strong force. For nuclear systems there is thus no external one-body potential in the Hartree-Fock Hamiltonian.

Variational Calculus and Lagrangian Multipliers

The calculus of variations involves problems where the quantity to be minimized or maximized is an integral.

In the general case we have an integral of the type

$$ E[\Phi]= \int_a^b f(\Phi(x),\frac{\partial \Phi}{\partial x},x)dx, $$

where \( E \) is the quantity which is sought minimized or maximized.

Variational Calculus and Lagrangian Multipliers

The problem is that although \( f \) is a function of the variables \( \Phi \), \( \partial \Phi/\partial x \) and \( x \), the exact dependence of \( \Phi \) on \( x \) is not known. This means again that even though the integral has fixed limits \( a \) and \( b \), the path of integration is not known. In our case the unknown quantities are the single-particle wave functions and we wish to choose an integration path which makes the functional \( E[\Phi] \) stationary. This means that we want to find minima, maxima or saddle points. In physics we search normally for minima. Our task is therefore to find the minimum of \( E[\Phi] \) so that its variation \( \delta E \) is zero subject to specific constraints. In our case the constraints appear as the integral which expresses the orthogonality of the single-particle wave functions. The constraints can be treated via the technique of Lagrangian multipliers.

Variational Calculus and Lagrangian Multipliers

Let us specialize to the expectation value of the energy for one particle in three-dimensions. This expectation value reads

$$ E=\int dxdydz \psi^*(x,y,z) \hat{H} \psi(x,y,z), $$

with the constraint

$$ \int dxdydz \psi^*(x,y,z) \psi(x,y,z)=1, $$

and a Hamiltonian

$$ \hat{H}=-\frac{1}{2}\nabla^2+V(x,y,z). $$

Variational Calculus and Lagrangian Multipliers

We will, for the sake of notational convenience, skip the variables \( x,y,z \) below, and write for example \( V(x,y,z)=V \).

The integral involving the kinetic energy can be written as, with the function \( \psi \) vanishing strongly for large values of \( x,y,z \) (given here by the limits \( a \) and \( b \)),

$$ \int_a^b dxdydz\, \psi^* \left(-\frac{1}{2}\nabla^2\right) \psi = -\frac{1}{2}\psi^*\nabla\psi\Big|_a^b+\int_a^b dxdydz\,\frac{1}{2}\nabla\psi^*\nabla\psi. $$

We will drop the limits \( a \) and \( b \) in the remaining discussion. Inserting this expression into the expectation value for the energy and taking the variational minimum we obtain

$$ \delta E = \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi\right)\right\} = 0. $$

Variational Calculus and Lagrangian Multipliers

The constraint appears in integral form as

$$ \int dxdydz \psi^* \psi=\mathrm{constant}, $$

and multiplying with a Lagrangian multiplier \( \lambda \) and taking the variational minimum we obtain the final variational equation

$$ \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi\right)\right\} = 0. $$

Variational Calculus and Lagrangian Multipliers

We introduce the function \( f \)

$$ f = \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi= \frac{1}{2}(\psi^*_x\psi_x+\psi^*_y\psi_y+\psi^*_z\psi_z)+V\psi^*\psi-\lambda\psi^*\psi, $$

where we have skipped the dependence on \( x,y,z \) and introduced the shorthand \( \psi_x \), \( \psi_y \) and \( \psi_z \) for the various derivatives.

For \( \psi^* \) the Euler-Lagrange equations yield

$$ \frac{\partial f}{\partial \psi^*}- \frac{\partial }{\partial x}\frac{\partial f}{\partial \psi^*_x}-\frac{\partial }{\partial y}\frac{\partial f}{\partial \psi^*_y}-\frac{\partial }{\partial z}\frac{\partial f}{\partial \psi^*_z}=0, $$

which results in

$$ -\frac{1}{2}(\psi_{xx}+\psi_{yy}+\psi_{zz})+V\psi=\lambda \psi. $$

Variational Calculus and Lagrangian Multipliers

We can then identify the Lagrangian multiplier as the energy of the system. The last equation is nothing but the standard Schroedinger equation and the variational approach discussed here provides a powerful method for obtaining approximate solutions of the wave function.
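
To make the variational principle concrete, the sketch below (a simple numerical illustration, not part of the derivation above) minimizes the energy functional for a one-dimensional harmonic oscillator over a family of Gaussian trial functions on a grid; in units where \( \hbar=m=\omega=1 \) the minimum reproduces the exact ground-state energy \( E_0=1/2 \) at the optimal width parameter.

import numpy as np
from scipy.optimize import minimize_scalar

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def energy(alpha):
    # Expectation value of H = -0.5 d^2/dx^2 + 0.5 x^2 for the trial function exp(-alpha x^2/2)
    psi = np.exp(-0.5 * alpha * x**2)
    norm = np.sum(psi**2) * dx
    kinetic = 0.5 * np.sum(np.gradient(psi, dx)**2) * dx   # 0.5 * integral of |dpsi/dx|^2
    potential = 0.5 * np.sum(x**2 * psi**2) * dx
    return (kinetic + potential) / norm

result = minimize_scalar(energy, bounds=(0.1, 5.0), method='bounded')
print("Optimal alpha:", result.x)          # exact minimum at alpha = 1
print("Variational energy:", result.fun)   # approaches the exact value 0.5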

Derivation of Hartree-Fock equations in coordinate space

Let us denote the ground state energy by \( E_0 \). According to the variational principle we have

$$ E_0 \le E[\Phi] = \int \Phi^*\hat{H}\Phi d\mathbf{\tau} $$

where \( \Phi \) is a trial function which we assume to be normalized

$$ \int \Phi^*\Phi d\mathbf{\tau} = 1, $$

where we have used the shorthand \( d\mathbf{\tau}=dx_1dx_2\dots dx_N \).

Derivation of Hartree-Fock equations in coordinate space

In the Hartree-Fock method the trial function is a Slater determinant which can be rewritten as

$$ \Psi(x_1,x_2,\dots,x_N,\alpha,\beta,\dots,\nu) = \frac{1}{\sqrt{N!}}\sum_{P} (-)^P\hat{P}\psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_N)=\sqrt{N!}\hat{A}\Phi_H, $$

where we have introduced the antisymmetrization operator \( \hat{A} \), defined by the summation over all possible permutations \( p \) of the fermions.

Derivation of Hartree-Fock equations in coordinate space

It is defined as

$$ \hat{A} = \frac{1}{N!}\sum_{p} (-)^p\hat{P}, $$

with the Hartree function given by the simple product of the single-particle functions

$$ \Phi_H(x_1,x_2,\dots,x_N,\alpha,\beta,\dots,\nu) = \psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_N). $$

Derivation of Hartree-Fock equations in coordinate space

Our functional is written as

$$ E[\Phi] = \sum_{\mu=1}^N \int \psi_{\mu}^*(x_i)\hat{h}_0(x_i)\psi_{\mu}(x_i) dx_i + \frac{1}{2}\sum_{\mu=1}^N\sum_{\nu=1}^N \left[ \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j)dx_idx_j- \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) \hat{v}(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j)dx_idx_j\right] $$

The more compact version reads

$$ E[\Phi] = \sum_{\mu}^N \langle \mu | \hat{h}_0 | \mu\rangle+ \frac{1}{2}\sum_{\mu\nu}^N\left[\langle \mu\nu |\hat{v}|\mu\nu\rangle-\langle \nu\mu |\hat{v}|\mu\nu\rangle\right]. $$

Derivation of Hartree-Fock equations in coordinate space

Since the interaction is invariant under the interchange of two particles it means for example that we have

$$ \langle \mu\nu|\hat{v}|\mu\nu\rangle = \langle \nu\mu|\hat{v}|\nu\mu\rangle, $$

or in the more general case

$$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle = \langle \nu\mu|\hat{v}|\tau\sigma\rangle. $$

Derivation of Hartree-Fock equations in coordinate space

The direct and exchange matrix elements can be brought together if we define the antisymmetrized matrix element

$$ \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}= \langle \mu\nu|\hat{v}|\mu\nu\rangle-\langle \mu\nu|\hat{v}|\nu\mu\rangle, $$

or for a general matrix element

$$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= \langle \mu\nu|\hat{v}|\sigma\tau\rangle-\langle \mu\nu|\hat{v}|\tau\sigma\rangle. $$

Derivation of Hartree-Fock equations in coordinate space

It has the symmetry property

$$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= -\langle \mu\nu|\hat{v}|\tau\sigma\rangle_{AS}=-\langle \nu\mu|\hat{v}|\sigma\tau\rangle_{AS}. $$

The antisymmetric matrix element is also hermitian, implying

$$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= \langle \sigma\tau|\hat{v}|\mu\nu\rangle_{AS}. $$
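
Numerically, if the bare two-body elements \( \langle\mu\nu|\hat{v}|\sigma\tau\rangle \) are stored as a four-index NumPy array, the antisymmetrized elements and the symmetry property above can be constructed and checked as in the sketch below (the array names are assumptions made for this illustration).

import numpy as np

n = 6  # placeholder number of single-particle states
rng = np.random.default_rng(4)
v = rng.standard_normal((n, n, n, n))
# Impose the symmetry <mu nu|v|sigma tau> = <nu mu|v|tau sigma> of the bare interaction
v = 0.5 * (v + v.transpose(1, 0, 3, 2))

# Antisymmetrized elements: <mu nu|v|sigma tau>_AS = <mu nu|v|sigma tau> - <mu nu|v|tau sigma>
v_AS = v - v.transpose(0, 1, 3, 2)

# Antisymmetry under interchange of the two ket (or the two bra) indices
print(np.allclose(v_AS, -v_AS.transpose(0, 1, 3, 2)))  # True
print(np.allclose(v_AS, -v_AS.transpose(1, 0, 2, 3)))  # True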

Derivation of Hartree-Fock equations in coordinate space

With these notations we rewrite the Hartree-Fock functional as

$$ \begin{equation} \int \Phi^*\hat{H_I}\Phi d\mathbf{\tau} = \frac{1}{2}\sum_{\mu=1}^N\sum_{\nu=1}^N \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}. \label{H2Expectation2} \end{equation} $$

Adding the contribution from the one-body operator \( \hat{H}_0 \) to \eqref{H2Expectation2} we obtain the energy functional

$$ \begin{equation} E[\Phi] = \sum_{\mu=1}^N \langle \mu | h | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^N\sum_{{\nu}=1}^N \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}. \label{FunctionalEPhi} \end{equation} $$

Derivation of Hartree-Fock equations in coordinate space

In our coordinate space derivations below we will spell out the Hartree-Fock equations in terms of their integrals.

If we generalize the Euler-Lagrange equations to more variables and introduce \( N^2 \) Lagrange multipliers which we denote by \( \epsilon_{\mu\nu} \), we can write the variational equation for the functional of \( E \)

$$ \delta E - \sum_{\mu\nu}^N \epsilon_{\mu\nu} \delta \int \psi_{\mu}^* \psi_{\nu} = 0. $$

For the orthogonal wave functions \( \psi_{i} \) this reduces to

$$ \delta E - \sum_{\mu=1}^N \epsilon_{\mu} \delta \int \psi_{\mu}^* \psi_{\mu} = 0. $$

Derivation of Hartree-Fock equations in coordinate space

Variation with respect to the single-particle wave functions \( \psi_{\mu} \) yields then

$$ \sum_{\mu=1}^N \int \delta\psi_{\mu}^*\hat{h_0}(x_i)\psi_{\mu} dx_i + \frac{1}{2}\sum_{{\mu}=1}^N\sum_{{\nu}=1}^N \left[ \int \delta\psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\psi_{\mu}\psi_{\nu} dx_idx_j- \int \delta\psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\psi_{\nu}\psi_{\mu} dx_idx_j \right]+ $$ $$ \sum_{\mu=1}^N \int \psi_{\mu}^*\hat{h_0}(x_i)\delta\psi_{\mu} dx_i + \frac{1}{2}\sum_{{\mu}=1}^N\sum_{{\nu}=1}^N \left[ \int \psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\delta\psi_{\mu}\psi_{\nu} dx_idx_j- \int \psi_{\mu}^*\psi_{\nu}^*\hat{v}(r_{ij})\psi_{\nu}\delta\psi_{\mu} dx_idx_j \right]- \sum_{{\mu}=1}^N \epsilon_{\mu} \int \delta\psi_{\mu}^* \psi_{\mu}dx_i - \sum_{{\mu}=1}^N \epsilon_{\mu} \int \psi_{\mu}^* \delta\psi_{\mu}dx_i = 0. $$

Derivation of Hartree-Fock equations in coordinate space

Although the variations \( \delta\psi \) and \( \delta\psi^* \) are not independent, they may in fact be treated as such, so that the terms dependent on either \( \delta\psi \) and \( \delta\psi^* \) individually may be set equal to zero. To see this, simply replace the arbitrary variation \( \delta\psi \) by \( i\delta\psi \), so that \( \delta\psi^* \) is replaced by \( -i\delta\psi^* \), and combine the two equations. We thus arrive at the Hartree-Fock equations

$$ \begin{equation} \left[ -\frac{1}{2}\nabla_i^2+ \sum_{\nu=1}^N\int \psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\nu}(x_j)dx_j \right]\psi_{\mu}(x_i) - \left[ \sum_{{\nu}=1}^N \int\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_j) dx_j\right] \psi_{\nu}(x_i) = \epsilon_{\mu} \psi_{\mu}(x_i). \label{eq:hartreefockcoordinatespace} \end{equation} $$

Derivation of Hartree-Fock equations in coordinate space

Notice that the integration \( \int dx_j \) implies an integration over the spatial coordinates \( \mathbf{r_j} \) and a summation over the spin-coordinate of fermion \( j \). We note that the factor of \( 1/2 \) in front of the sum involving the two-body interaction, has been removed. This is due to the fact that we need to vary both \( \delta\psi_{\mu}^* \) and \( \delta\psi_{\nu}^* \). Using the symmetry properties of the two-body interaction and interchanging \( \mu \) and \( \nu \) as summation indices, we obtain two identical terms.

Derivation of Hartree-Fock equations in coordinate space

The first two terms in the last equation are the one-body kinetic energy and the electron-nucleus potential. The third or direct term is the averaged electronic repulsion of the other electrons. As written, the term includes the self-interaction of electrons when \( \mu=\nu \). The self-interaction is cancelled in the fourth term, or exchange term. The exchange term results from our inclusion of the Pauli principle and the assumed determinantal form of the wave function. Equation \eqref{eq:hartreefockcoordinatespace}, in addition to the kinetic energy and the attraction from the atomic nucleus that confines the motion of a single electron, represents now the motion of a single particle modified by the two-body interaction. The additional contribution to the Schroedinger equation due to the two-body interaction represents a mean field set up by all the other bystanding electrons, the latter given by the sum over all single-particle states occupied by \( N \) electrons.

Derivation of Hartree-Fock equations in coordinate space

The Hartree-Fock equation is an example of an integro-differential equation. These equations involve repeated calculations of integrals, in addition to the solution of a set of coupled differential equations. The Hartree-Fock equations can also be rewritten in terms of an eigenvalue problem. The solution of an eigenvalue problem often represents a more practical algorithm than the solution of coupled integro-differential equations. This alternative derivation of the Hartree-Fock equations is given below.

Analysis of Hartree-Fock equations in coordinate space

A theoretically convenient form of the Hartree-Fock equation is to regard the direct and exchange operator defined through

$$ \begin{equation*} V_{\mu}^{d}(x_i) = \int \psi_{\mu}^*(x_j) \hat{v}(r_{ij})\psi_{\mu}(x_j) dx_j \end{equation*} $$

and

$$ \begin{equation*} V_{\mu}^{ex}(x_i) g(x_i) = \left(\int \psi_{\mu}^*(x_j) \hat{v}(r_{ij})g(x_j) dx_j \right)\psi_{\mu}(x_i), \end{equation*} $$

respectively.

Analysis of Hartree-Fock equations in coordinate space

The function \( g(x_i) \) is an arbitrary function, and by the substitution \( g(x_i) = \psi_{\nu}(x_i) \) we get

$$ \begin{equation*} V_{\mu}^{ex}(x_i) \psi_{\nu}(x_i) = \left(\int \psi_{\mu}^*(x_j) \hat{v}(r_{ij})\psi_{\nu}(x_j) dx_j\right)\psi_{\mu}(x_i). \end{equation*} $$

Analysis of Hartree-Fock equations in coordinate space

We may then rewrite the Hartree-Fock equations as

$$ \hat{h}^{HF}(x_i) \psi_{\nu}(x_i) = \epsilon_{\nu}\psi_{\nu}(x_i), $$

with

$$ \hat{h}^{HF}(x_i)= \hat{h}_0(x_i) + \sum_{\mu=1}^NV_{\mu}^{d}(x_i) - \sum_{\mu=1}^NV_{\mu}^{ex}(x_i), $$

and where \( \hat{h}_0(i) \) is the one-body part. The latter is normally chosen as a part which yields solutions in closed form. The harmonic oscillator is a classical example thereof. We normally rewrite the last equation as

$$ \hat{h}^{HF}(x_i)= \hat{h}_0(x_i) + \hat{u}^{HF}(x_i). $$

Hartree-Fock by varying the coefficients of a wave function expansion

Another possibility is to expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc). We define our new Hartree-Fock single-particle basis by performing a unitary transformation on our previous basis (labelled with greek indices) as

$$ \begin{equation} \psi_p^{HF} = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}. \label{eq:newbasis} \end{equation} $$

In this case we vary the coefficients \( C_{p\lambda} \). If the basis has infinitely many solutions, we need to truncate the above sum. We assume that the basis \( \phi_{\lambda} \) is orthogonal.

Hartree-Fock by varying the coefficients of a wave function expansion

It is normal to choose a single-particle basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have

$$ \hat{h}_0\phi_{\lambda}=\epsilon_{\lambda}\phi_{\lambda}. $$

The single-particle wave functions \( \phi_{\lambda}(\mathbf{r}) \), labeled by the quantum numbers \( \lambda \), are defined as the overlap

$$ \phi_{\lambda}(\mathbf{r}) = \langle \mathbf{r} | \lambda \rangle . $$

Hartree-Fock by varying the coefficients of a wave function expansion

In deriving the Hartree-Fock equations, we will expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc).

We stated that a unitary transformation keeps the orthogonality. To see this consider first a basis of vectors \( \mathbf{v}_i \),

$$ \mathbf{v}_i = \begin{bmatrix} v_{i1} \\ \dots \\ \dots \\v_{in} \end{bmatrix} $$

Hartree-Fock by varying the coefficients of a wave function expansion

We assume that the basis is orthogonal, that is

$$ \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$

An orthogonal or unitary transformation

$$ \mathbf{w}_i=\mathbf{U}\mathbf{v}_i, $$

preserves the dot product and orthogonality since

$$ \mathbf{w}_j^T\mathbf{w}_i=(\mathbf{U}\mathbf{v}_j)^T\mathbf{U}\mathbf{v}_i=\mathbf{v}_j^T\mathbf{U}^T\mathbf{U}\mathbf{v}_i= \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$

Hartree-Fock by varying the coefficients of a wave function expansion

This means that if the coefficients \( C_{p\lambda} \) belong to a unitary or orthogonal transformation (using the Dirac bra-ket notation)

$$ \vert p\rangle = \sum_{\lambda} C_{p\lambda}\vert\lambda\rangle, $$

orthogonality is preserved, that is \( \langle \alpha \vert \beta\rangle = \delta_{\alpha\beta} \) and \( \langle p \vert q\rangle = \delta_{pq} \).

This property is extremely useful when we build up a basis of many-body Slater-determinant-based states.

Note also that although a basis \( \vert \alpha\rangle \) contains an infinity of states, for practical calculations we have always to make some truncations.
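
A quick numerical illustration of this property (a toy check, not part of the derivation): build an orthogonal transformation from a QR factorization, apply it to an orthonormal basis, and verify that orthonormality is preserved.

import numpy as np

n = 5
rng = np.random.default_rng(5)
# Random orthogonal matrix U from the QR factorization of a random matrix
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

V = np.eye(n)  # orthonormal starting basis (columns of the identity)
W = U @ V      # transformed basis vectors as columns

# The transformed basis is still orthonormal: W^T W = I
print(np.allclose(W.T @ W, np.eye(n)))  # True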

Hartree-Fock by varying the coefficients of a wave function expansion

Before we develop the Hartree-Fock equations, there is another very useful property of determinants that we will use in connection with Hartree-Fock calculations. This applies also to our previous discussion on full configuration interaction theory.

Consider the following determinant

$$ \left| \begin{array}{cc} \alpha_1b_{11}+\alpha_2b_{12}& a_{12}\\ \alpha_1b_{21}+\alpha_2b_{22}&a_{22}\end{array} \right|=\alpha_1\left|\begin{array}{cc} b_{11}& a_{12}\\ b_{21}&a_{22}\end{array} \right|+\alpha_2\left| \begin{array}{cc} b_{12}& a_{12}\\b_{22}&a_{22}\end{array} \right| $$

Hartree-Fock by varying the coefficients of a wave function expansion

We can generalize this to an \( n\times n \) matrix and have

$$ \left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & \sum_{k=1}^n c_k b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & \sum_{k=1}^n c_k b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2} & \dots & \sum_{k=1}^n c_k b_{nk} &\dots & a_{nn}\end{array} \right|= \sum_{k=1}^n c_k\left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots\\ \dots & \dots & \dots & \dots & \dots & \dots\\ a_{n1}& a_{n2} & \dots & b_{nk} &\dots & a_{nn}\end{array} \right| . $$

This is a property we will use in our Hartree-Fock discussions.

Hartree-Fock by varying the coefficients of a wave function expansion

We can generalize the previous results, now with all elements \( a_{ij} \) being given as functions of linear combinations of various coefficients \( c \) and elements \( b_{ij} \),

$$ \left| \begin{array}{cccccc} \sum_{k=1}^n b_{1k}c_{k1}& \sum_{k=1}^n b_{1k}c_{k2} & \dots & \sum_{k=1}^n b_{1k}c_{kj} &\dots & \sum_{k=1}^n b_{1k}c_{kn}\\ \sum_{k=1}^n b_{2k}c_{k1}& \sum_{k=1}^n b_{2k}c_{k2} & \dots & \sum_{k=1}^n b_{2k}c_{kj} &\dots & \sum_{k=1}^n b_{2k}c_{kn}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots &\dots \\ \sum_{k=1}^n b_{nk}c_{k1}& \sum_{k=1}^n b_{nk}c_{k2} & \dots & \sum_{k=1}^n b_{nk}c_{kj} &\dots & \sum_{k=1}^n b_{nk}c_{kn}\end{array} \right|=det(\mathbf{C})det(\mathbf{B}), $$

where \( det(\mathbf{C}) \) and \( det(\mathbf{B}) \) are the determinants of \( n\times n \) matrices with elements \( c_{ij} \) and \( b_{ij} \) respectively. This is a property we will use in our Hartree-Fock discussions. Convince yourself about the correctness of the above expression by setting \( n=2 \).
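
The suggested check for \( n=2 \) can also be done numerically for any small \( n \): the matrix with elements \( \sum_k b_{ik}c_{kj} \) is simply the product \( \mathbf{B}\mathbf{C} \), so its determinant should equal \( det(\mathbf{C})det(\mathbf{B}) \). The snippet below is a toy verification with random matrices.

import numpy as np

n = 2  # try also larger values of n
rng = np.random.default_rng(6)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# The matrix with elements sum_k b_{ik} c_{kj} is the matrix product B @ C
lhs = np.linalg.det(B @ C)
rhs = np.linalg.det(C) * np.linalg.det(B)
print(np.allclose(lhs, rhs))  # True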

Hartree-Fock by varying the coefficients of a wave function expansion

With our definition of the new basis in terms of an orthogonal basis we have

$$ \psi_p(x) = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x). $$

If the coefficients \( C_{p\lambda} \) belong to an orthogonal or unitary matrix, the new basis is also orthogonal. Our Slater determinant in the new basis \( \psi_p(x) \) is written as

$$ \frac{1}{\sqrt{N!}} \left| \begin{array}{ccccc} \psi_{p}(x_1)& \psi_{p}(x_2)& \dots & \dots & \psi_{p}(x_N)\\ \psi_{q}(x_1)&\psi_{q}(x_2)& \dots & \dots & \psi_{q}(x_N)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{t}(x_1)&\psi_{t}(x_2)& \dots & \dots & \psi_{t}(x_N)\end{array} \right|=\frac{1}{\sqrt{N!}} \left| \begin{array}{ccccc} \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_1)& \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_N)\\ \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_N)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_N)\end{array} \right|, $$

which is nothing but \( det(\mathbf{C})det(\Phi) \), with \( det(\Phi) \) being the determinant given by the basis functions \( \phi_{\lambda}(x) \).

Hartree-Fock by varying the coefficients of a wave function expansion

In our discussions hereafter we will use our definitions of single-particle states above and below the Fermi (\( F \)) level given by the labels \( ijkl\dots \le F \) for so-called single-hole states and \( abcd\dots > F \) for so-called particle states. For general single-particle states we employ the labels \( pqrs\dots \).

In Eq. \eqref{FunctionalEPhi}, restated here

$$ E[\Phi] = \sum_{\mu=1}^N \langle \mu | h | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^N\sum_{{\nu}=1}^N \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}, $$

we found the expression for the energy functional in terms of the basis function \( \phi_{\lambda}(\mathbf{r}) \). We then varied the above energy functional with respect to the basis functions \( |\mu \rangle \).

Hartree-Fock by varying the coefficients of a wave function expansion

Now we are interested in defining a new basis defined in terms of a chosen basis as defined in Eq. \eqref{eq:newbasis}. We can then rewrite the energy functional as

$$ \begin{equation} E[\Phi^{HF}] = \sum_{i=1}^N \langle i | h | i \rangle + \frac{1}{2}\sum_{ij=1}^N\langle ij|\hat{v}|ij\rangle_{AS}, \label{FunctionalEPhi2} \end{equation} $$

where \( \Phi^{HF} \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis}.

Hartree-Fock by varying the coefficients of a wave function expansion

Using Eq. \eqref{eq:newbasis} we can rewrite Eq. \eqref{FunctionalEPhi2} as

$$ \begin{equation} E[\Psi] = \sum_{i=1}^N \sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | h | \beta \rangle + \frac{1}{2}\sum_{ij=1}^N\sum_{{\alpha\beta\gamma\delta}} C^*_{i\alpha}C^*_{j\beta}C_{i\gamma}C_{j\delta}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}. \label{FunctionalEPhi3} \end{equation} $$

Hartree-Fock by varying the coefficients of a wave function expansion

We wish now to minimize the above functional. We introduce again a set of Lagrange multipliers, noting that since \( \langle i | j \rangle = \delta_{i,j} \) and \( \langle \alpha | \beta \rangle = \delta_{\alpha,\beta} \), the coefficients \( C_{i\gamma} \) obey the relation

$$ \langle i | j \rangle=\delta_{i,j}=\sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | \beta \rangle= \sum_{\alpha} C^*_{i\alpha}C_{i\alpha}, $$

which allows us to define a functional to be minimized that reads

$$ \begin{equation} F[\Phi^{HF}]=E[\Phi^{HF}] - \sum_{i=1}^N\epsilon_i\sum_{\alpha} C^*_{i\alpha}C_{i\alpha}. \label{_auto1} \end{equation} $$

Hartree-Fock by varying the coefficients of a wave function expansion

Minimizing with respect to \( C^*_{i\alpha} \), remembering that the equations for \( C^*_{i\alpha} \) and \( C_{i\alpha} \) can be written as two independent equations, we obtain

$$ \frac{d}{dC^*_{i\alpha}}\left[ E[\Phi^{HF}] - \sum_{j}\epsilon_j\sum_{\alpha} C^*_{j\alpha}C_{j\alpha}\right]=0, $$

which yields for every single-particle state \( i \) and index \( \alpha \) (recalling that the coefficients \( C_{i\alpha} \) are matrix elements of a unitary (or orthogonal for a real symmetric matrix) matrix) the following Hartree-Fock equations

$$ \sum_{\beta} C_{i\beta}\langle \alpha | h | \beta \rangle+ \sum_{j=1}^N\sum_{\beta\gamma\delta} C^*_{j\beta}C_{j\delta}C_{i\gamma}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}=\epsilon_i^{HF}C_{i\alpha}. $$

Hartree-Fock by varying the coefficients of a wave function expansion

We can rewrite this equation as (changing dummy variables)

$$ \sum_{\beta} \left\{\langle \alpha | h | \beta \rangle+ \sum_{j}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}\right\}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. $$

Note that the sums over greek indices run over the number of basis set functions (in principle an infinite number).

Hartree-Fock by varying the coefficients of a wave function expansion

Defining

$$ h_{\alpha\beta}^{HF}=\langle \alpha | h | \beta \rangle+ \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}, $$

we can rewrite the new equations as

$$ \begin{equation} \sum_{\beta}h_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. \label{eq:newhf} \end{equation} $$

The latter is nothing but a standard eigenvalue problem. Compared with Eq. \eqref{eq:hartreefockcoordinatespace}, we see that we do not need to compute any integrals in an iterative procedure for solving the equations. It suffices to tabulate the matrix elements \( \langle \alpha | h | \beta \rangle \) and \( \langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS} \) once and for all. Successive iterations require thus only a look-up in tables over one-body and two-body matrix elements. These details will be discussed below when we solve the Hartree-Fock equations numerically.

Hartree-Fock algorithm

Our Hartree-Fock matrix is thus

$$ \hat{h}_{\alpha\beta}^{HF}=\langle \alpha | \hat{h}_0 | \beta \rangle+ \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. $$

The Hartree-Fock equations are solved in an iterative way, starting with a guess for the coefficients \( C_{j\gamma}=\delta_{j,\gamma} \) and solving the equations by diagonalization until the new single-particle energies \( \epsilon_i^{\mathrm{HF}} \) do not change anymore by more than a prefixed quantity.

Hartree-Fock by varying the coefficients of a wave function expansion

Normally we assume that the single-particle basis \( |\beta\rangle \) forms an eigenbasis for the operator \( \hat{h}_0 \), meaning that the Hartree-Fock matrix becomes

$$ \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. $$

The Hartree-Fock eigenvalue problem

$$ \sum_{\beta}\hat{h}_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{\mathrm{HF}}C_{i\alpha}, $$

can be written out in a more compact form as

$$ \hat{h}^{HF}\hat{C}=\epsilon^{\mathrm{HF}}\hat{C}. $$

Hartree-Fock by varying the coefficients of a wave function expansion

The Hartree-Fock equations are, in their simplest form, solved in an iterative way, starting with a guess for the coefficients \( C_{i\alpha} \). We label the coefficients as \( C_{i\alpha}^{(n)} \), where the subscript \( n \) stands for iteration \( n \). To set up the algorithm we can proceed as follows:

$$ \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{j = 1}^N\sum_{\gamma\delta} C_{j\gamma}^{(0)}C_{j\delta}^{(0)}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. $$

Hartree-Fock by varying the coefficients of a wave function expansion

Solving the Hartree-Fock eigenvalue problem yields then new eigenvectors \( C_{i\alpha}^{(1)} \) and eigenvalues \( \epsilon_i^{HF(1)} \). With the new eigenvectors we can in turn set up a new Hartree-Fock potential

$$ \sum_{j = 1}^N\sum_{\gamma\delta} C_{j\gamma}^{(1)}C_{j\delta}^{(1)}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. $$

The diagonalization with the new Hartree-Fock potential yields new eigenvectors and eigenvalues. This process is continued till for example

$$ \frac{\sum_{i} |\epsilon_i^{(n)}-\epsilon_i^{(n-1)}|}{m} \le \lambda, $$

where \( \lambda \) is a user prefixed quantity (\( \lambda \sim 10^{-8} \) or smaller), \( i \) runs over all calculated single-particle energies and \( m \) is the number of single-particle states.

Using the density matrix

The equations are often rewritten in terms of a so-called density matrix, which is defined as

$$ \begin{equation} \rho_{\gamma\delta}=\sum_{i=1}^{N}\langle\gamma|i\rangle\langle i|\delta\rangle = \sum_{i=1}^{N}C_{i\gamma}C^*_{i\delta}. \label{_auto2} \end{equation} $$

It means that we can rewrite the Hartree-Fock Hamiltonian as

$$ \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{\gamma\delta} \rho_{\gamma\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}. $$

It is convenient to use the density matrix since we can precalculate in every iteration the product of two eigenvector components \( C \).

Code example

import numpy as np

class HartreeFock:
    def __init__(self, num_electrons, num_orbitals):
        self.num_electrons = num_electrons
        self.num_orbitals = num_orbitals
        # Placeholder integrals: in a real calculation these random numbers are
        # replaced by proper one-body and antisymmetrized two-body matrix elements.
        h = np.random.rand(num_orbitals, num_orbitals)
        self.h = 0.5 * (h + h.T)  # symmetrize so that the Fock matrix stays symmetric
        # Two-electron integrals stored as <alpha gamma|v|beta delta> with index order [alpha, beta, gamma, delta]
        self.coulomb = np.random.rand(num_orbitals, num_orbitals, num_orbitals, num_orbitals)
    def build_fock_matrix(self, density_matrix):
        # h^HF_{alpha beta} = <alpha|h|beta> + sum_{gamma delta} rho_{gamma delta} <alpha gamma|v|beta delta>
        fock_matrix = self.h.copy()
        for i in range(self.num_orbitals):
            for j in range(self.num_orbitals):
                fock_matrix[i, j] += np.sum(density_matrix * self.coulomb[i, j])
        return fock_matrix
    def build_density_matrix(self, coefficients):
        # rho_{gamma delta} = sum_{i occupied} C_{i gamma} C^*_{i delta}
        density_matrix = np.zeros((self.num_orbitals, self.num_orbitals))
        for i in range(self.num_electrons):
            density_matrix += np.outer(coefficients[:, i], coefficients[:, i])
        return density_matrix
    def diagonalize(self, fock_matrix):
        # Standard symmetric eigenvalue problem for the Hartree-Fock matrix
        energies, coefficients = np.linalg.eigh(fock_matrix)
        return energies, coefficients
    def run(self, max_iter=100, tol=1e-6):
        # Initial guess: empty density matrix, i.e. the first Fock matrix is just h
        density_matrix = np.zeros((self.num_orbitals, self.num_orbitals))
        energies = coeffs = None
        for iteration in range(max_iter):
            fock_matrix = self.build_fock_matrix(density_matrix)
            energies, coeffs = self.diagonalize(fock_matrix)
            new_density_matrix = self.build_density_matrix(coeffs)
            # Convergence test on the change of the density matrix between iterations
            if np.linalg.norm(new_density_matrix - density_matrix) < tol:
                print(f"Converged in {iteration} iterations")
                break
            density_matrix = new_density_matrix
        return energies, coeffs

# Example usage
if __name__ == "__main__":
    num_electrons = 2
    num_orbitals = 4
    hf = HartreeFock(num_electrons, num_orbitals)
    energies, coefficients = hf.run()
    print("Final energies:", energies)
    print("Final coefficients:", coefficients)

Analysis of Hartree-Fock equations and Koopman's theorem

We can rewrite the ground state energy by adding and subtracting \( \hat{u}^{HF}(x_i) \)

$$ E_0^{HF} =\langle \Phi_0 | \hat{H} | \Phi_0\rangle = \sum_{i\le F} \langle i | \hat{h}_0 +\hat{u}^{HF}| i\rangle+ \frac{1}{2}\sum_{i\le F}\sum_{j \le F}\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F} \langle i |\hat{u}^{HF}| i\rangle, $$

which results in

$$ E_0^{HF} = \sum_{i\le F} \varepsilon_i^{HF} + \frac{1}{2}\sum_{i\le F}\sum_{j \le F}\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F} \langle i |\hat{u}^{HF}| i\rangle. $$

Our single-particle states \( ijk\dots \) are now single-particle states obtained from the solution of the Hartree-Fock equations.

Analysis of Hartree-Fock equations and Koopman's theorem

Using our definition of the Hartree-Fock single-particle energies we obtain then the following expression for the total ground-state energy

$$ E_0^{HF} = \sum_{i\le F} \varepsilon_i - \frac{1}{2}\sum_{i\le F}\sum_{j \le F}\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]. $$

This form will be used in our discussion of Koopman's theorem.
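
A short numerical sketch of this energy expression, assuming the Hartree-Fock single-particle energies and the antisymmetrized two-body matrix elements in the Hartree-Fock basis are available as NumPy arrays (names and index conventions are assumptions of this illustration):

import numpy as np

n_states, n_occ = 8, 3  # placeholder basis size and number of occupied states
rng = np.random.default_rng(7)
eps = np.sort(rng.uniform(-10.0, 10.0, n_states))  # Hartree-Fock single-particle energies
v_AS = rng.standard_normal((n_states,) * 4)        # placeholder for <pq|v|rs>_AS

# E_0^HF = sum_{i<=F} eps_i - 0.5 * sum_{ij<=F} <ij|v|ij>_AS
occ = slice(0, n_occ)
E0_HF = np.sum(eps[occ]) - 0.5 * np.einsum('ijij->', v_AS[occ, occ, occ, occ])
print("Hartree-Fock ground-state energy (placeholder numbers):", E0_HF)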

Analysis of Hartree-Fock equations and Koopman's theorem

We have

$$ E[\Phi^{\mathrm{HF}}(N)] = \sum_{i=1}^N \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1}^N\langle ij|\hat{v}|ij\rangle_{AS}, $$

where \( \Phi^{\mathrm{HF}}(N) \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis} for \( N \) electrons (same \( Z \)). If we assume that the single-particle wave functions in the new basis do not change when we remove one electron or add one electron, we can then define the corresponding energy for the \( N-1 \) systems as

$$ E[\Phi^{\mathrm{HF}}(N-1)] = \sum_{i=1; i\ne k}^N \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1;i,j\ne k}^N\langle ij|\hat{v}|ij\rangle_{AS}, $$

where we have removed a single-particle state \( k\le F \), that is a state below the Fermi level.

Analysis of Hartree-Fock equations and Koopman's theorem

Calculating the difference

$$ E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle + \frac{1}{2}\sum_{i=1;i\ne k}^N\langle ik|\hat{v}|ik\rangle_{AS} + \frac{1}{2}\sum_{j=1;j\ne k}^N\langle kj|\hat{v}|kj\rangle_{AS}, $$

we obtain

$$ E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle + \sum_{j=1}^N\langle kj|\hat{v}|kj\rangle_{AS}, $$

which is just our definition of the Hartree-Fock single-particle energy

$$ E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \epsilon_k^{\mathrm{HF}} $$

Analysis of Hartree-Fock equations and Koopman's theorem

Similarly, we can now compute the difference (we label the single-particle states above the Fermi level as \( abcd > F \))

$$ E[\Phi^{\mathrm{HF}}(N+1)]- E[\Phi^{\mathrm{HF}}(N)]= \epsilon_a^{\mathrm{HF}}. $$

These two equations can thus be used to define the electron affinity or the ionization energy, respectively. Koopman's theorem states that for example the ionization energy of a closed-shell system is given by the energy of the highest occupied single-particle state. If we assume that changing the number of electrons from \( N \) to \( N+1 \) does not change the Hartree-Fock single-particle energies and eigenfunctions, then Koopman's theorem simply states that the ionization energy of an atom is given by the single-particle energy of the last bound state. In a similar way, we can also define the electron affinities.

Analysis of Hartree-Fock equations and Koopman's theorem

As an example, consider a simple model for atomic sodium, Na. Neutral sodium has eleven electrons, with the weakest bound one occupying the \( 3s \) single-particle state. The energy needed to remove an electron from neutral sodium is rather small, 5.1391 eV, a feature which pertains to all alkali metals. Having performed a Hartree-Fock calculation for neutral sodium would then allow us to compute the ionization energy by using the single-particle energy for the \( 3s \) states, namely \( \epsilon_{3s}^{\mathrm{HF}} \).

From these considerations, we see that Hartree-Fock theory allows us to make a connection between experimental observables (here ionization and affinity energies) and the underlying interactions between particles. In this sense, we are now linking the dynamics and structure of a many-body system with the laws of motion which govern the system. Our approach is a reductionistic one, meaning that we want to understand the laws of motion in terms of the particles or degrees of freedom which we believe are the fundamental ones. Our Slater determinant, being constructed as the product of various single-particle functions, follows this philosophy.

Analysis of Hartree-Fock equations, Koopman's theorem

With similar arguments as in atomic physics, we can now use Hartree-Fock theory to make a link between nuclear forces and separation energies. Changing to the nuclear case, we define

$$ E[\Phi^{\mathrm{HF}}(A)] = \sum_{i=1} \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1}\langle ij|\hat{v}|ij\rangle_{AS}, $$

where \( \Phi^{\mathrm{HF}}(A) \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis} for \( A \) nucleons, where \( A=N+Z \), with \( N \) now being the number of neutrons and \( Z \) the number of protons. If we assume again that the single-particle wave functions in the new basis do not change from a nucleus with \( A \) nucleons to a nucleus with \( A-1 \) nucleons, we can then define the corresponding energy for the \( A-1 \) systems as

$$ E[\Phi^{\mathrm{HF}}(A-1)] = \sum_{i=1; i\ne k} \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1;i,j\ne k}\langle ij|\hat{v}|ij\rangle_{AS}, $$

where we have removed a single-particle state \( k\le F \), that is a state below the Fermi level.

Analysis of Hartree-Fock equations and Koopman's theorem

Calculating the difference

$$ E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \langle k | \hat{h}_0 | k \rangle + \frac{1}{2}\sum_{i=1;i\ne k}\langle ik|\hat{v}|ik\rangle_{AS} + \frac{1}{2}\sum_{j=1;j\ne k}\langle kj|\hat{v}|kj\rangle_{AS}, $$

which becomes

$$ E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \langle k | \hat{h}_0 | k \rangle + \sum_{j=1}\langle kj|\hat{v}|kj\rangle_{AS}, $$

which is just our definition of the Hartree-Fock single-particle energy

$$ E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \epsilon_k^{\mathrm{HF}} $$

Analysis of Hartree-Fock equations and Koopman's theorem

Similarly, we can now compute the difference (recall that the single-particle states \( abcd > F \))

$$ E[\Phi^{\mathrm{HF}}(A+1)]- E[\Phi^{\mathrm{HF}}(A)]= \epsilon_a^{\mathrm{HF}}. $$

If we then recall that the binding energy differences

$$ BE(A)-BE(A-1) \hspace{0.5cm} \mathrm{and} \hspace{0.5cm} BE(A+1)-BE(A), $$

define the separation energies, we see that the Hartree-Fock single-particle energies can be used to define separation energies. We have thus our first link between nuclear forces (included in the potential energy term) and an observable quantity defined by differences in binding energies.

Analysis of Hartree-Fock equations and Koopman's theorem

We have thus the following interpretations (if the single-particle field does not change)

$$ BE(A)-BE(A-1)\approx E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \epsilon_k^{\mathrm{HF}}, $$

and

$$ BE(A+1)-BE(A)\approx E[\Phi^{\mathrm{HF}}(A+1)]- E[\Phi^{\mathrm{HF}}(A)] = \epsilon_a^{\mathrm{HF}}. $$

If we use \( {}^{16}\mbox{O} \) as our closed-shell nucleus, we could then interpret the separation energy

$$ BE(^{16}\mathrm{O})-BE(^{15}\mathrm{O})\approx \epsilon_{0p^{\nu}_{1/2}}^{\mathrm{HF}}, $$

and

$$ BE(^{16}\mathrm{O})-BE(^{15}\mathrm{N})\approx \epsilon_{0p^{\pi}_{1/2}}^{\mathrm{HF}}. $$

Analysis of Hartree-Fock equations and Koopman's theorem

Similarly, we could interpret

$$ BE(^{17}\mathrm{O})-BE(^{16}\mathrm{O})\approx \epsilon_{0d^{\nu}_{5/2}}^{\mathrm{HF}}, $$

and

$$ BE(^{17}\mathrm{F})-BE(^{16}\mathrm{O})\approx\epsilon_{0d^{\pi}_{5/2}}^{\mathrm{HF}}. $$

We can continue like this for all \( A\pm 1 \) nuclei where \( A \) is a good closed-shell (or subshell closure) nucleus. Examples are \( {}^{22}\mbox{O} \), \( {}^{24}\mbox{O} \), \( {}^{40}\mbox{Ca} \), \( {}^{48}\mbox{Ca} \), \( {}^{52}\mbox{Ca} \), \( {}^{54}\mbox{Ca} \), \( {}^{56}\mbox{Ni} \), \( {}^{68}\mbox{Ni} \), \( {}^{78}\mbox{Ni} \), \( {}^{90}\mbox{Zr} \), \( {}^{88}\mbox{Sr} \), \( {}^{100}\mbox{Sn} \), \( {}^{132}\mbox{Sn} \) and \( {}^{208}\mbox{Pb} \), to mention some possible cases.

Analysis of Hartree-Fock equations and Koopman's theorem

We can thus make our first interpretation of the separation energies in terms of the simplest possible many-body theory. If we also recall that the so-called energy gap for neutrons (or protons) is defined as

$$ \Delta S_n= 2BE(N,Z)-BE(N-1,Z)-BE(N+1,Z), $$

for neutrons and the corresponding gap for protons

$$ \Delta S_p= 2BE(N,Z)-BE(N,Z-1)-BE(N,Z+1), $$

we can define the neutron and proton energy gaps for \( {}^{16}\mbox{O} \) as

$$ \Delta S_{\nu}=\epsilon_{0d^{\nu}_{5/2}}^{\mathrm{HF}}-\epsilon_{0p^{\nu}_{1/2}}^{\mathrm{HF}}, $$

and

$$ \Delta S_{\pi}=\epsilon_{0d^{\pi}_{5/2}}^{\mathrm{HF}}-\epsilon_{0p^{\pi}_{1/2}}^{\mathrm{HF}}. $$
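Using the same approximate binding energies (again only illustrative and to be checked against a mass evaluation), a minimal sketch of the gap formulas for \( {}^{16}\mbox{O} \) reads:

```python
# Shell gaps for 16O from Delta S = 2 BE(A) - BE(A-1) - BE(A+1),
# with approximate binding energies in MeV (illustrative values).
BE = {"15N": 115.49, "15O": 111.96, "16O": 127.62,
      "17O": 131.76, "17F": 128.22}

gap_neutrons = 2 * BE["16O"] - BE["15O"] - BE["17O"]   # ~ 11.5 MeV
gap_protons  = 2 * BE["16O"] - BE["15N"] - BE["17F"]   # ~ 11.5 MeV
print(gap_neutrons, gap_protons)
```

Both gaps come out at roughly 11.5 MeV, in line with \( {}^{16}\mbox{O} \) being a good closed-shell nucleus.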

Hartree-Fock in second quantization and stability of HF solution

We wish now to derive the Hartree-Fock equations using our second-quantized formalism and study the stability of the equations. Our ansatz for the ground state of the system is approximated as (this is our representation of a Slater determinant in second quantization)

$$ |\Phi_0\rangle = |c\rangle = a^{\dagger}_i a^{\dagger}_j \dots a^{\dagger}_l|0\rangle. $$

We wish to determine \( \hat{u}^{HF} \) so that \( E_0^{HF}= \langle c|\hat{H}| c\rangle \) becomes a local minimum.

In our analysis here we will need Thouless' theorem, which states that an arbitrary Slater determinant \( |c'\rangle \) which is not orthogonal to a determinant \( | c\rangle ={\displaystyle\prod_{i=1}^{n}} a_{\alpha_{i}}^{\dagger}|0\rangle \), can be written as

$$ |c'\rangle=\exp\left\{\sum_{a>F}\sum_{i\le F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle. $$

Thouless' theorem

Let us give a simple proof of Thouless' theorem. The theorem states that we can make a linear combination of particle-hole excitations with respect to a given reference state \( \vert c\rangle \). With this linear combination, we can make a new Slater determinant \( \vert c'\rangle \) which is not orthogonal to \( \vert c\rangle \), that is

$$ \langle c|c'\rangle \ne 0. $$

To show this we need some intermediate steps. The exponential product of two operators \( \exp{\hat{A}}\times\exp{\hat{B}} \) is equal to \( \exp{(\hat{A}+\hat{B})} \) only if the two operators commute, that is

$$ [\hat{A},\hat{B}] = 0. $$

Thouless' theorem

If the operators do not commute, we need to resort to the Baker-Campbell-Hausdorff relation. It states that

$$ \exp{\hat{C}}=\exp{\hat{A}}\exp{\hat{B}}, $$

with

$$ \hat{C}=\hat{A}+\hat{B}+\frac{1}{2}[\hat{A},\hat{B}]+\frac{1}{12}[[\hat{A},\hat{B}],\hat{B}]-\frac{1}{12}[[\hat{A},\hat{B}],\hat{A}]+\dots $$
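The snippet below is a small numerical sanity check of this truncated expansion, using random matrices as stand-ins for the operators \( \hat{A} \) and \( \hat{B} \) (the dimension, the random seed and the scale factor are arbitrary illustrative choices):

```python
# Numerical check of the truncated Baker-Campbell-Hausdorff expansion above.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)
eps = 1.0e-2                              # small norm so the truncation is accurate
A = eps * rng.standard_normal((4, 4))
B = eps * rng.standard_normal((4, 4))

def comm(X, Y):
    """Commutator [X, Y]."""
    return X @ Y - Y @ X

# C = A + B + 1/2 [A,B] + 1/12 [[A,B],B] - 1/12 [[A,B],A]
C = A + B + 0.5 * comm(A, B) + comm(comm(A, B), B) / 12 - comm(comm(A, B), A) / 12

print(np.linalg.norm(expm(A) @ expm(B) - expm(A + B)))  # nonzero: A and B do not commute
print(np.linalg.norm(expm(A) @ expm(B) - expm(C)))      # much smaller: BCH to third order
```

The first difference is of second order in the operator norm, while the truncated \( \hat{C} \) reproduces the product \( \exp{\hat{A}}\exp{\hat{B}} \) to fourth order.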

Thouless' theorem

From these relations, we note that in our expression for \( |c'\rangle \) we have commutators of the type

$$ [a_{a}^{\dagger}a_{i},a_{b}^{\dagger}a_{j}], $$

and it is easy to convince oneself that these commutators, or higher powers thereof, are all zero. This means that we can write out our new representation of a Slater determinant as

$$ |c'\rangle=\exp\left\{\sum_{a>F}\sum_{i\le F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle=\prod_{i}\exp\left\{\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}+\frac{1}{2!}\left(\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right)^2+\dots\right\}| c\rangle $$

Thouless' theorem

We note that

$$ \prod_{i}\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\sum_{b>F}C_{bi}a_{b}^{\dagger}a_{i}| c\rangle =0, $$

and all higher-order powers of these combinations of creation and annihilation operators disappear due to the fact that \( (a_i)^n| c\rangle =0 \) when \( n > 1 \). This allows us to rewrite the expression for \( |c'\rangle \) as

$$ |c'\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\}| c\rangle, $$

which we can rewrite as

$$ |c'\rangle=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\} a^{\dagger}_{i_1} a^{\dagger}_{i_2} \dots a^{\dagger}_{i_n}|0\rangle. $$
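The vanishing of the higher powers is simply nilpotency: each particle-hole excitation operator squares to zero, so the exponential series terminates after the linear term. The short sketch below illustrates the mechanism with a \( 2\times 2 \) matrix playing the role of a single excitation \( C_{ai}a_{a}^{\dagger}a_{i} \) (the matrix and its entries are arbitrary illustrative choices):

```python
# If X is nilpotent with X^2 = 0, then exp(X) = 1 + X exactly.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 0.3],
              [0.0, 0.0]])               # moves a particle from one slot to the other

print(np.allclose(X @ X, 0))                # True: X is nilpotent
print(np.allclose(expm(X), np.eye(2) + X))  # True: the exponential truncates
```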

Thouless' theorem

The last equation can be written as

$$ \begin{align} |c'\rangle&=\prod_{i}\left\{1+\sum_{a>F}C_{ai}a_{a}^{\dagger}a_{i}\right\} a^{\dagger}_{i_1} a^{\dagger}_{i_2} \dots a^{\dagger}_{i_n}|0\rangle=\left(1+\sum_{a>F}C_{ai_1}a_{a}^{\dagger}a_{i_1}\right)a^{\dagger}_{i_1} \label{_auto3}\\ & \times\left(1+\sum_{a>F}C_{ai_2}a_{a}^{\dagger}a_{i_2}\right)a^{\dagger}_{i_2} \dots |0\rangle=\prod_{i}\left(a^{\dagger}_{i}+\sum_{a>F}C_{ai}a_{a}^{\dagger}\right)|0\rangle. \label{_auto4} \end{align} $$

New operators

If we define a new creation operator

$$ \begin{equation} b^{\dagger}_{i}=a^{\dagger}_{i}+\sum_{a>F}C_{ai}a_{a}^{\dagger}, \label{eq:newb} \end{equation} $$

we have

$$ |c'\rangle=\prod_{i}b^{\dagger}_{i}|0\rangle=\prod_{i}\left(a^{\dagger}_{i}+\sum_{a>F}C_{ai}a_{a}^{\dagger}\right)|0\rangle, $$

meaning that the new representation of the Slater determinant in second quantization, \( |c'\rangle \), looks like our previous ones. However, this representation is not general enough, since the sum over single-particle states in Eq. \eqref{eq:newb} is restricted to states above the Fermi level.
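As a quick numerical aside (a sketch only, with arbitrary dimensions and a random coefficient matrix \( C_{ai} \)), we can build the orbital matrices of \( |c\rangle \) and \( |c'\rangle \) in a common orthonormal basis and verify that \( \langle c|c'\rangle=1 \), that is, the Thouless state is intermediately normalized and in particular not orthogonal to \( |c\rangle \):

```python
# Overlap of two Slater determinants as the determinant of the product
# of their orbital coefficient matrices (columns = occupied orbitals).
import numpy as np

rng = np.random.default_rng(0)
n_occ, n_basis = 3, 6
C = rng.standard_normal((n_basis - n_occ, n_occ))   # C_{ai}, a > F, i <= F

U_c  = np.vstack([np.eye(n_occ), np.zeros((n_basis - n_occ, n_occ))])  # reference |c>
U_cp = np.vstack([np.eye(n_occ), C])                                   # Thouless state |c'>

print(np.linalg.det(U_c.T @ U_cp))   # 1.0, i.e. <c|c'> = 1
```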

Thouless' theorem

The question then is whether we can construct a general representation of a Slater determinant with a creation operator

$$ \tilde{b}^{\dagger}_{i}=\sum_{p}f_{ip}a_{p}^{\dagger}, $$

where \( f_{ip} \) is a matrix element of a unitary matrix which transforms our creation and annihilation operators \( a^{\dagger} \) and \( a \) to \( \tilde{b}^{\dagger} \) and \( \tilde{b} \). These new operators define a new representation of a Slater determinant as

$$ |\tilde{c}\rangle=\prod_{i}\tilde{b}^{\dagger}_{i}|0\rangle. $$

Showing that \( |\tilde{c}\rangle= |c'\rangle \)

We need to show that \( |\tilde{c}\rangle= |c'\rangle \). We also need to assume that the new state is not orthogonal to \( |c\rangle \), that is \( \langle c| \tilde{c}\rangle \ne 0 \). From this it follows that

$$ \langle c| \tilde{c}\rangle=\langle 0| a_{i_n}\dots a_{i_1}\left(\sum_{p=i_1}^{i_n}f_{i_1p}a_{p}^{\dagger} \right)\left(\sum_{q=i_1}^{i_n}f_{i_2q}a_{q}^{\dagger} \right)\dots \left(\sum_{t=i_1}^{i_n}f_{i_nt}a_{t}^{\dagger} \right)|0\rangle, $$

which is nothing but the determinant \( \mathrm{det}(f_{ip}) \) of the coefficients with \( p \) restricted to the hole states. Using the intermediate normalization condition we can normalize this overlap to one, that is

$$ \mathrm{det}(f_{ip})=1, $$

meaning that \( f \) has an inverse (since we are dealing with orthogonal, and in our case also unitary, transformations), defined through

$$ \sum_{k} f_{ik}f^{-1}_{kj} = \delta_{ij}, $$

and

$$ \sum_{j} f^{-1}_{ij}f_{jk} = \delta_{ik}. $$

Thouless' theorem

Using these relations we can then define the linear combination of creation (and annihilation as well) operators as

$$ \sum_{i}f^{-1}_{ki}\tilde{b}^{\dagger}_{i}=\sum_{i}f^{-1}_{ki}\sum_{p=i_1}^{\infty}f_{ip}a_{p}^{\dagger}=a_{k}^{\dagger}+\sum_{i}\sum_{p=i_{n+1}}^{\infty}f^{-1}_{ki}f_{ip}a_{p}^{\dagger}. $$

Defining

$$ c_{kp}=\sum_{i \le F}f^{-1}_{ki}f_{ip}, $$

we can redefine

$$ a_{k}^{\dagger}+\sum_{i}\sum_{p=i_{n+1}}^{\infty}f^{-1}_{ki}f_{ip}a_{p}^{\dagger}=a_{k}^{\dagger}+\sum_{p=i_{n+1}}^{\infty}c_{kp}a_{p}^{\dagger}=b_k^{\dagger}, $$

our starting point.
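A small numpy check of this construction may be instructive. In the sketch below (all dimensions and the random orbital matrix are arbitrary illustrative choices), the rows of \( f \) hold the coefficients \( f_{ip} \) of the new orbitals \( \tilde{b}^{\dagger}_{i} \); forming \( c_{kp} \) as above and prepending the identity reproduces \( f \) up to the invertible hole-hole block, which shows that the two sets of creation operators define the same Slater determinant up to the factor \( \mathrm{det}(f_{ip}) \) fixed by the intermediate normalization:

```python
# Check that c_{kp} = sum_i f^{-1}_{ki} f_{ip} brings a general orbital matrix
# to the Thouless form [1 | C] without changing the determinantal state.
import numpy as np

rng = np.random.default_rng(1)
n_occ, n_basis = 3, 6
f = rng.standard_normal((n_occ, n_basis))   # row i: coefficients f_{ip} of orbital i

f_hole = f[:, :n_occ]                       # columns p <= F (assumed invertible)
f_part = f[:, n_occ:]                       # columns p  > F
C = np.linalg.inv(f_hole) @ f_part          # c_{kp}

thouless = np.hstack([np.eye(n_occ), C])    # orbital matrix of b_k^dagger = a_k^dagger + sum_p c_{kp} a_p^dagger
print(np.allclose(f_hole @ thouless, f))    # True: same determinant up to det(f_hole)
```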

Thouless' theorem

We have shown that our general representation of a Slater determinant

$$ |\tilde{c}\rangle=\prod_{i}\tilde{b}^{\dagger}_{i}|0\rangle=|c'\rangle=\prod_{i}b^{\dagger}_{i}|0\rangle, $$

with

$$ b_k^{\dagger}=a_{k}^{\dagger}+\sum_{p=i_{n+1}}^{\infty}c_{kp}a_{p}^{\dagger}. $$

Thouless' theorem

This means that we can actually write an ansatz for the ground state of the system as a linear combination of terms which contain the ansatz itself \( |c\rangle \) with an admixture from an infinity of one-particle-one-hole states. The latter has important consequences when we wish to interpret the Hartree-Fock equations and their stability. We can rewrite the new representation as

$$ |c'\rangle = |c\rangle+|\delta c\rangle, $$

where \( |\delta c\rangle \) can now be interpreted as a small variation. If we approximate this term with contributions from one-particle-one-hole (1p-1h) states only, we arrive at

$$ |c'\rangle = \left(1+\sum_{ai}\delta C_{ai}a_{a}^{\dagger}a_i\right)|c\rangle. $$

Thouless' theorem

In our derivation of the Hartree-Fock equations we have shown that

$$ \langle \delta c| \hat{H} | c\rangle =0, $$

which means that we have to satisfy

$$ \langle c|\sum_{ai}\delta C_{ai}^{*}\left\{a_{i}^{\dagger}a_a\right\} \hat{H} | c\rangle =0. $$
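Since the coefficients \( \delta C_{ai} \) are arbitrary, every particle-hole matrix element of the Hamiltonian with respect to \( |c\rangle \) has to vanish separately. Evaluating such a matrix element with the Slater-Condon rules (a step we only quote here) gives, in the notation used for the Hartree-Fock energies above,

$$ \langle \Phi_i^a|\hat{H}|c\rangle = \langle a|\hat{h}_0|i\rangle + \sum_{j\le F}\langle aj|\hat{v}|ij\rangle_{AS} = 0, $$

that is, the particle-hole block of the Hartree-Fock single-particle Hamiltonian vanishes. This is Brillouin's theorem.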

With this as a background, we are now ready to study the stability of the Hartree-Fock equations. This is the topic for week 40.

© 1999-2024, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license