Many-body Physics; Introduction to the course, notations and definitions

Morten Hjorth-Jensen
Department of Physics and Center for Computing in Science Education, University of Oslo, Norway

Week 34, August 19-23, 2024












Week 34











Introduction to the course

These lectures aim at giving an introduction to the quantum mechanics of many-body systems and the methods relevant for many-body problems in such diverse areas as atomic, molecular, solid-state and nuclear physics, chemistry and materials science. A theoretical understanding of the behavior of quantum-mechanical many-body systems, that is, systems containing many interacting particles, is a considerable challenge in that, normally, no exact solution can be found. Instead, reliable methods are needed for approximate but accurate simulations of such systems.











Content for many-body lecture notes, intro











Content for many-body lecture notes, 2nd quantization











Content for many-body lecture notes, FCI











Content for many-body lecture notes, Mean-field theories











Content for many-body lecture notes, perturbation theory











Content for many-body lecture notes, Coupled Cluster theory











Content for many-body lecture notes, Green's function theory and other methods

  1. Green's function theory and parquet theory
  2. Monte Carlo methods
  3. Quantum computing
  4. Time-dependent many-body theory
  5. Applications to different systems like the electron gas, Lipkin model, Pairing model, infinite nuclear matter, and more










In the folder https://github.com/ManyBodyPhysics/FYS4480/tree/master/doc/Literature you will find the textbooks we will be following. Weekly reading assignments based on these texts will be sent before each week. In particular we recommend the texts by

  1. Szabo and Ostlund
  2. Shavitt and Bartlett










Teaching mode

This course will be delivered in a hybrid mode, with online and on-site lectures and on-site exercise sessions. Only the lectures are recorded.

  1. Four lectures per week, fall semester, 10 ECTS. The lectures will be recorded and linked to this site and the official University of Oslo website for the course;
  2. Two hours of exercise sessions for work on projects and exercises;
  3. Two projects which are graded and count 30% each of the final grade;
  4. A final oral exam which counts 40% of the final grade
  5. The course is offered as FYS4480 (Master of Science level) and as FYS9480 (PhD level);
    1. Videos of teaching material and weekly emails with a summary of activities will be sent to all participants;










Notations and definitions

Vectors, matrices and higher-order tensors are always boldfaced, with vectors given by lowercase letters and matrices and higher-order tensors given by uppercase letters.

Unless otherwise stated, the elements \( x_i \) of a vector \( \boldsymbol{x} \) are assumed to be real. That is, a real vector of length \( n \) is defined as \( \boldsymbol{x}\in \mathbb{R}^{n} \), while a complex vector satisfies \( \boldsymbol{x}\in \mathbb{C}^{n} \).

For a matrix of dimension \( n\times n \) we have \( \boldsymbol{A}\in \mathbb{R}^{n\times n} \), and the first matrix element has row index zero and column index zero (row-wise ordering).











Some mathematical notations

  1. For all/any \( \forall \)
  2. Implies \( \implies \)
  3. Equivalent \( \equiv \)
  4. Real variable \( \mathbb{R} \)
  5. Integer variable \( \mathbb{I} \)
  6. Complex variable \( \mathbb{C} \)










Vectors

We start by defining a vector \( \boldsymbol{x} \) with \( n \) components, with \( x_0 \) as our first element, as

$$ \boldsymbol{x} = \begin{bmatrix} x_0\\ x_1 \\ x_2 \\ \dots \\ \dots \\ x_{n-1} \end{bmatrix}. $$

and its transpose

$$ \boldsymbol{x}^{T} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & \dots & x_{n-1} \end{bmatrix}, $$

In case we have a complex vector we define the hermitian conjugate

$$ \boldsymbol{x}^{\dagger} = \begin{bmatrix} x_0^* & x_1^* & x_2^* & \dots & \dots & x_{n-1}^* \end{bmatrix}, $$

With a given vector \( \boldsymbol{x} \), we define the inner product as

$$ \boldsymbol{x}^T \boldsymbol{x} = \sum_{i=0}^{n-1} x_ix_i=x_0^2+x_1^2+\dots + x_{n-1}^2. $$
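As a small numerical aside (not part of the formal text), these definitions map directly onto NumPy arrays; the vector values below are arbitrary examples.

```python
import numpy as np

# an example vector with n = 3 components, indexed from zero
x = np.array([1.0, 2.0, 3.0])

# inner product x^T x = x_0^2 + x_1^2 + x_2^2
inner = x @ x
print(inner)   # 14.0
```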









Hermitian conjugate

The hermitian conjugate of a matrix is obtained by taking the complex conjugate of each element and then taking the transpose of the resulting matrix. Often we will just say the transpose or just the conjugate; it should be clear from the context, since we will mainly deal with hermitian quantities and our matrices will in most cases be square.

Unitarity, as we will see below, also plays a central role in this course.











Outer products

In addition to inner products between vectors/states, the outer product plays a central role in many applications. It is defined as

$$ \boldsymbol{x}\boldsymbol{y}^T = \begin{bmatrix} x_0y_0 & x_0y_1 & x_0y_2 & \dots & \dots & x_0y_{n-2} & x_0y_{n-1} \\ x_1y_0 & x_1y_1 & x_1y_2 & \dots & \dots & x_1y_{n-2} & x_1y_{n-1} \\ x_2y_0 & x_2y_1 & x_2y_2 & \dots & \dots & x_2y_{n-2} & x_2y_{n-1} \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ x_{n-2}y_0 & x_{n-2}y_1 & x_{n-2}y_2 & \dots & \dots & x_{n-2}y_{n-2} & x_{n-2}y_{n-1} \\ x_{n-1}y_0 & x_{n-1}y_1 & x_{n-1}y_2 & \dots & \dots & x_{n-1}y_{n-2} & x_{n-1}y_{n-1} \end{bmatrix} $$

The latter defines also our basic matrix layout.











Basic Matrix Features

A general \( n\times n \) matrix is given by

$$ \boldsymbol{A} = \begin{bmatrix} a_{00} & a_{01} & a_{02} & \dots & \dots & a_{0n-2} & a_{0n-1} \\ a_{10} & a_{11} & a_{12} & \dots & \dots & a_{1n-2} & a_{1n-1} \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ a_{n-20} & a_{n-21} & a_{n-22} & \dots & \dots & a_{n-2n-2} & a_{n-2n-1} \\ a_{n-10} & a_{n-11} & a_{n-12} & \dots & \dots & a_{n-1n-2} & a_{n-1n-1} \end{bmatrix}, $$

or in terms of its column vectors \( \boldsymbol{a}_i \) as

$$ \boldsymbol{A} = \begin{bmatrix}\boldsymbol{a}_{0} & \boldsymbol{a}_{1} & \boldsymbol{a}_{2} & \dots & \dots & \boldsymbol{a}_{n-2} & \boldsymbol{a}_{n-1}\end{bmatrix}. $$

We can think of a matrix as a table with, in general, \( n \) rows and \( m \) columns. In the example here we have a square matrix.











The inverse of a matrix

The inverse of a square matrix (if it exists) is defined by

$$ \boldsymbol{A}^{-1} \cdot \boldsymbol{A} = \boldsymbol{I}, $$

where \( \boldsymbol{I} \) is the unit matrix.











Selected Matrix Features

| Relation | Name | Matrix elements |
| --- | --- | --- |
| \( \boldsymbol{A} = \boldsymbol{A}^{T} \) | symmetric | \( a_{ij} = a_{ji} \) |
| \( \boldsymbol{A} = \left (\boldsymbol{A}^{T} \right )^{-1} \) | real orthogonal | \( \sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij} \) |
| \( \boldsymbol{A} = \boldsymbol{A}^{ * } \) | real matrix | \( a_{ij} = a_{ij}^{ * } \) |
| \( \boldsymbol{A} = \boldsymbol{A}^{\dagger} \) | hermitian | \( a_{ij} = a_{ji}^{ * } \) |
| \( \boldsymbol{A} = \left (\boldsymbol{A}^{\dagger} \right )^{-1} \) | unitary | \( \sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij} \) |
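As a quick numerical check (a sketch with NumPy and arbitrarily chosen example matrices), the relations in the table can be verified directly:

```python
import numpy as np

S = np.array([[1.0, 2.0], [2.0, 3.0]])                  # symmetric: A = A^T
H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])            # hermitian: A = A^dagger
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)    # real orthogonal

print(np.allclose(S, S.T))                  # True
print(np.allclose(H, H.conj().T))           # True
print(np.allclose(U @ U.T, np.eye(2)))      # True: A^T = A^{-1}
```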










Some famous Matrices











Matrix Features

Some equivalent statements for square matrices

For an \( n\times n \) matrix \( \boldsymbol{A} \) the following properties are all equivalent











Important Mathematical Operations

The basic matrix operations that we will deal with are addition and subtraction

$$ \boldsymbol{A}= \boldsymbol{B}\pm\boldsymbol{C} \Longrightarrow a_{ij} = b_{ij}\pm c_{ij}, $$

and scalar-matrix multiplication

$$ \boldsymbol{A}= \gamma\boldsymbol{B} \Longrightarrow a_{ij} = \gamma b_{ij}. $$









Vector-matrix and Matrix-matrix multiplication

We also have matrix-vector multiplications

$$ \boldsymbol{y}=\boldsymbol{Ax} \Longrightarrow y_{i} = \sum_{j=0}^{n-1} a_{ij}x_j, $$

and matrix-matrix multiplications

$$ \boldsymbol{A}=\boldsymbol{BC} \Longrightarrow a_{ij} = \sum_{k=0}^{n-1} b_{ik}c_{kj}, $$

and transpositions of a matrix

$$ \boldsymbol{A}=\boldsymbol{B}^T \Longrightarrow a_{ij} = b_{ji}. $$
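These matrix operations read almost exactly like the formulas above in NumPy; the matrices and vector below are arbitrary illustrations.

```python
import numpy as np

B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([1.0, -1.0])

A_sum  = B + C        # a_ij = b_ij + c_ij
A_scal = 2.0 * B      # a_ij = gamma * b_ij
y      = B @ x        # y_i = sum_j a_ij x_j
A_prod = B @ C        # a_ij = sum_k b_ik c_kj
A_T    = B.T          # a_ij = b_ji
```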









Important Mathematical Operations

Similarly, important vector operations that we will deal with are addition and subtraction

$$ \boldsymbol{x}= \boldsymbol{y}\pm\boldsymbol{z} \Longrightarrow x_{i} = y_{i}\pm z_{i}, $$

scalar-vector multiplication

$$ \boldsymbol{x}= \gamma\boldsymbol{y} \Longrightarrow x_{i} = \gamma y_{i}, $$









Other important mathematical operations

and vector-vector multiplication (called Hadamard multiplication)

$$ \boldsymbol{x}=\boldsymbol{yz} \Longrightarrow x_{i} = y_{i}z_i. $$

Finally, as already mentioned, we have the inner or so-called dot product, which results in a number (a scalar)

$$ x=\boldsymbol{y}^T\boldsymbol{z} \Longrightarrow x = \sum_{j=0}^{n-1} y_{j}z_{j}, $$

and the outer product, which yields a matrix,

$$ \boldsymbol{A}= \boldsymbol{y}\boldsymbol{z}^T \Longrightarrow a_{ij} = y_{i}z_{j}. $$
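These vector operations can be sketched with NumPy as well (again with arbitrary example vectors):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
z = np.array([4.0, 5.0, 6.0])

hadamard = y * z            # x_i = y_i z_i, elementwise (Hadamard) product
dot      = y @ z            # inner product, a single number (32.0 here)
outer    = np.outer(y, z)   # outer product, a matrix with a_ij = y_i z_j
```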









Defining basis states and quantum mechanical operators

We now extend our definitions of vectors, matrices and more to quantum mechanics.

We start by defining a state vector \( \boldsymbol{x} \) (meant to represent various quantum mechanical degrees of freedom) with \( n \) components as

$$ \boldsymbol{x} = \begin{bmatrix} x_0\\ x_1 \\ x_2 \\ \dots \\ \dots \\ x_{n-1} \end{bmatrix}. $$









Dirac notation

Throughout these notes we will use the so-called Dirac bra-ket formalism and we will replace the above standard boldfaced notation for a vector with

$$ \boldsymbol{x} = \vert x \rangle = \begin{bmatrix} x_0\\ x_1 \\ x_2 \\ \dots \\ \dots \\ x_{n-1} \end{bmatrix}, $$

and

$$ \boldsymbol{x}^{\dagger} = \langle x \vert = \begin{bmatrix} x_0^* & x_1^* & x_2^* & \dots & \dots & x_{n-1}^* \end{bmatrix}, $$









Inner product in Dirac notation

With a given vector \( \vert x \rangle \), we define the inner product as

$$ \langle x \vert x\rangle = \sum_{i=0}^{n-1} x_i^*x_i=\vert x_0\vert^2+\vert x_1\vert^2+\dots + \vert x_{n-1}\vert^2. $$

For two arbitrary vectors \( \vert x\rangle \) and \( \vert y\rangle \) with the same length, we have the general expression

$$ \langle y \vert x\rangle = \sum_{i=0}^{n-1} y_i^*x_i=y_0^*x_0+y_1^*x_1+\dots + y_{n-1}^*x_{n-1}. $$









The inner product is a real number

Note well that the inner product \( \langle x \vert x\rangle \) is always a real number, while for two different vectors \( \langle y \vert x\rangle \) is in general not equal to \( \langle x \vert y\rangle \), as can be seen from the example on the next slide.

We note in bypassing that \( \vert x\rangle^{\dagger}=\langle x \vert \), \( \langle x\vert^{\dagger}=\vert x\rangle \) and \( (\vert x\rangle^{\dagger})^{\dagger}=\vert x \rangle \).











Examples

Let us assume that \( \vert x \rangle \) is given by

$$ \vert x \rangle = \begin{bmatrix} 1-\imath \\ 2+\imath \end{bmatrix}. $$

The inner product gives us

$$ \langle x\vert x \rangle = (1+\imath)(1-\imath)+(2-\imath)(2+\imath)=7, $$

a real number.











Norm

We can use the norm/inner product to normalize the vector \( \vert x \rangle \) and obtain

$$ \vert x \rangle = \frac{1}{\sqrt{7}}\begin{bmatrix} 1-\imath \\ 2+\imath \end{bmatrix}. $$

As another example, consider the two vectors

$$ \vert x \rangle = \begin{bmatrix} -1 \\ 2\imath \\ 1\end{bmatrix}, $$

and

$$ \vert y \rangle = \begin{bmatrix} 1 \\ 0\imath \\ \imath\end{bmatrix}. $$

We see that the inner product \( \langle x\vert y \rangle = -1+\imath \) is not the same as \( \langle y\vert x \rangle = -1-\imath \). This leads to the important rule

$$ \langle x\vert y\rangle^* = \langle y \vert x\rangle. $$
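We can verify this rule numerically for the two vectors above; the sketch below uses NumPy's vdot, which conjugates its first argument and therefore matches our definition of the inner product.

```python
import numpy as np

x = np.array([-1.0, 2.0j, 1.0])
y = np.array([1.0, 0.0, 1.0j])

xy = np.vdot(x, y)    # <x|y> = -1 + 1j
yx = np.vdot(y, x)    # <y|x> = -1 - 1j
print(np.isclose(np.conj(xy), yx))   # True: <x|y>^* = <y|x>
```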









Outer products

In addition to inner products between vectors/states, the outer product plays a central role in all of quantum mechanics. It is defined as

$$ \vert x\rangle \langle y \vert = \begin{bmatrix} x_0y_0^* & x_0y_1^* & x_0y_2^* & \dots & \dots & x_0y_{n-2}^* & x_0y_{n-1}^* \\ x_1y_0^* & x_1y_1^* & x_1y_2^* & \dots & \dots & x_1y_{n-2}^* & x_1y_{n-1}^* \\ x_2y_0^* & x_2y_1^* & x_2y_2^* & \dots & \dots & x_2y_{n-2}^* & x_2y_{n-1}^* \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ x_{n-2}y_0^* & x_{n-2}y_1^* & x_{n-2}y_2^* & \dots & \dots & x_{n-2}y_{n-2}^* & x_{n-2}y_{n-1}^* \\ x_{n-1}y_0^* & x_{n-1}y_1^* & x_{n-1}y_2^* & \dots & \dots & x_{n-1}y_{n-2}^* & x_{n-1}y_{n-1}^* \end{bmatrix} $$









Other examples

Assume we have a two-level system where the two states are represented by the state vectors \( \vert \phi_0\rangle \) and \( \vert \phi_1\rangle \), respectively. These states could represent selected or effective degrees of freedom for either a single particle (fermion or boson), or they could represent effective many-body degrees of freedom.

In actual realizations of, for example, quantum computing, we often search for candidate systems where some low-lying states can be used as computational basis states. When doing many-body physics, due to the exploding number of degrees of freedom, we normally search for effective ways to reduce the dimensionality to a number of degrees of freedom that a given many-body method can handle.





















Projection operators

We will now relabel the above two states as two orthogonal and normalized basis (ONB) states

$$ \vert \phi_0 \rangle = \vert 0 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, $$

and

$$ \vert \phi_1 \rangle = \vert 1 \rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. $$









More on these operators

It is straightforward to see that \( \langle 1 \vert 0\rangle=0 \). With these two states we can define the identity operator \( \boldsymbol{I} \) as the sum of the outer products of these two states, namely

$$ \boldsymbol{I} = \sum_{i=0}^{i=1}\vert i\rangle \langle i\vert = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} +\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. $$

We can further define the projection operators

$$ \boldsymbol{P} = \vert 0\rangle \langle 0\vert = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, $$

and

$$ \boldsymbol{Q} = \vert 1\rangle \langle 1\vert = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}. $$









Properties of idempotent operators

We note that \( \boldsymbol{P}^2=\boldsymbol{P} \) and \( \boldsymbol{Q}^2=\boldsymbol{Q} \) (the operators are idempotent) and that their determinants are zero, meaning in turn that we cannot use these operators for unitary/orthogonal transformations. However, they play important roles in defining effective Hilbert spaces for many-body studies. Finally, before proceeding we also note that the two matrices commute, with \( \boldsymbol{P}\boldsymbol{Q}=0 \) and \( \left[ \boldsymbol{P},\boldsymbol{Q}\right]=0 \).
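These properties are easy to confirm numerically, as in the short sketch below.

```python
import numpy as np

P = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
Q = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|

print(np.array_equal(P @ P, P))                   # idempotent: P^2 = P
print(np.array_equal(Q @ Q, Q))                   # idempotent: Q^2 = Q
print(np.array_equal(P @ Q, np.zeros((2, 2))))    # PQ = 0, hence [P, Q] = 0
print(np.linalg.det(P), np.linalg.det(Q))         # both determinants are zero
```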











Different operators

The so-called Pauli matrices, and other simple \( 2\times 2 \) matrices, play an important role, ranging from the setup of quantum gates in quantum computing to a rewrite of creation and annihilation operators and other quantum mechanical operators. Let us start with the familiar Pauli matrices and remind ourselves of some of their basic properties.

The Pauli matrices are defined as

$$ \sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, $$ $$ \sigma_y = \begin{bmatrix} 0 & -\imath \\ \imath & 0 \end{bmatrix}, $$

and

$$ \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. $$









Properties of Pauli matrices

It is easy to show that the matrices obey the properties (being involutory)

$$ \sigma_x\sigma_x = \sigma_y\sigma_y=\sigma_z\sigma_z = I=\begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}, $$

that is, their products with themselves give the identity matrix \( \boldsymbol{I} \). Furthermore, the Pauli matrices are unitary matrices, meaning that their inverses are equal to their hermitian conjugates. The determinants of the Pauli matrices are all equal to \( -1 \), as can easily be verified.
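A short NumPy sketch confirming these three properties:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I  = np.eye(2)

for s in (sx, sy, sz):
    print(np.allclose(s @ s, I),              # involutory: sigma^2 = I
          np.allclose(s.conj().T @ s, I),     # unitary: sigma^dagger sigma = I
          np.isclose(np.linalg.det(s), -1))   # determinant equals -1
```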











Commutation relations

The Pauli matrices obey also the following commutation rules

$$ \left[\sigma_x,\sigma_y\right] = 2\imath \sigma_z. $$

Before we proceed with other matrices and how they can be used to operate on various quantum mechanical states, let us try to define various basis sets and their pertinent notations. We will often refer to these basis states as our computational basis.











Superposition and more

Using the properties of ONBs we can expand a new state in terms of the above states. These states could also form a basis which is an eigenbasis of a selected Hamiltonian (more on this below).

We define now a new state which is a linear expansion in terms of these computational basis states

$$ \vert \psi \rangle = \alpha \vert 0 \rangle + \beta\vert 1 \rangle, $$

where the coefficients \( \alpha = \langle 0 \vert \psi \rangle \) and \( \beta =\langle 1 \vert \psi\rangle \) represent the overlaps between the computational basis states and the state \( \vert \psi\rangle \). In quantum speech, we say that the state is in a superposition of the states \( \vert 0\rangle \) and \( \vert 1\rangle \).











Inner products

Computing the inner product of \( \vert \psi \rangle \) we obtain

$$ \langle \psi \vert \psi \rangle = \vert \alpha \vert ^2\langle 0\vert 0\rangle + \vert \beta \vert ^2\langle 1\vert 1\rangle = \vert \alpha \vert ^2 + \vert \beta \vert ^2 = 1, $$

since the new basis, which is defined in terms of a unitary/orthogonal transformation, preserves the orthogonality and norm of the original computational basis \( \vert 0\rangle \) and \( \vert 1\rangle \). To see this, consider the discussion of unitary transformations below, where we show that orthogonality is preserved.











Acting with projection operators

If we now act with the projection operators \( \boldsymbol{P} \) and \( \boldsymbol{Q} \) on the state \( \vert \psi\rangle \) we get

$$ \boldsymbol{P}\vert \psi \rangle = \vert 0 \rangle\langle 0\vert (\alpha \vert 0 \rangle + \beta\vert 1 \rangle)=\alpha \vert 0\rangle, $$

that is we project out the \( \vert 0\rangle \) component of the state \( \vert \psi\rangle \) with the coefficient \( \alpha \) while \( \boldsymbol{Q} \) projects out the \( \vert 1\rangle \) component with coefficient \( \beta \) as seen from

$$ \boldsymbol{Q}\vert \psi \rangle = \vert 1 \rangle\langle 1\vert (\alpha \vert 0 \rangle + \beta\vert 1 \rangle)=\beta \vert 1\rangle. $$

The above results can easily be derived by multiplying the pertinent matrices with the vectors \( \vert 0\rangle \) and \( \vert 1\rangle \), respectively.











Density matrix

Using the above linear expansion we can now define the density matrix of the state \( \vert \psi\rangle \) as the outer product

$$ \boldsymbol{\rho}=\vert \psi \rangle\langle \psi \vert = \alpha\alpha^* \vert 0 \rangle\langle 0\vert+\alpha\beta^* \vert 0 \rangle\langle 1\vert+\beta\alpha^* \vert 1 \rangle\langle 0\vert+\beta\beta^* \vert 1 \rangle\langle 1\vert, $$

which leads to

$$ \boldsymbol{\rho}=\begin{bmatrix} \alpha\alpha^* & \alpha\beta^*\\ \beta\alpha^* & \beta\beta^*\end{bmatrix}. $$

Finally, we note that the trace of the density matrix is simply given by unity

$$ \mathrm{tr}\boldsymbol{\rho}=\alpha\alpha^* +\beta\beta^*=1. $$
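A small sketch of these relations, with hypothetical amplitudes chosen so that \( \vert\alpha\vert^2+\vert\beta\vert^2=1 \):

```python
import numpy as np

alpha, beta = 0.6, 0.8j              # example amplitudes, |alpha|^2 + |beta|^2 = 1
psi = np.array([alpha, beta])        # |psi> = alpha|0> + beta|1>

rho = np.outer(psi, psi.conj())      # density matrix |psi><psi|
print(np.isclose(np.trace(rho), 1.0))   # the trace equals 1
```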









Tensor products

Consider now two vectors with length \( n=2 \), with elements

$$ \vert x \rangle = \begin{bmatrix} x_0 \\ x_1 \end{bmatrix}, $$

and

$$ \vert y \rangle = \begin{bmatrix} y_0 \\ y_1 \end{bmatrix}. $$

The tensor product of these two vectors is defined as

$$ \vert x \rangle \otimes \vert y \rangle = \vert xy \rangle = \begin{bmatrix} x_0y_0 \\ x_0y_1 \\ x_1y_0 \\ x_1y_1 \end{bmatrix}, $$

which is now a vector of length \( 4 \).











Examples of tensor products

If we now go back to our original two basis states, we can form the following tensor products

$$ \vert 0 \rangle \otimes \vert 0 \rangle = \begin{bmatrix} 1 \\ 0\end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0\end{bmatrix} =\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}=\vert 00 \rangle, $$ $$ \vert 0 \rangle \otimes \vert 1 \rangle = \begin{bmatrix} 1 \\ 0\end{bmatrix} \otimes \begin{bmatrix} 0 \\ 1\end{bmatrix} =\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}=\vert 01 \rangle. $$









More states

$$ \vert 1 \rangle \otimes \vert 0 \rangle = \begin{bmatrix} 0 \\ 1\end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0\end{bmatrix} =\begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}=\vert 10 \rangle, $$

and finally

$$ \vert 1 \rangle \otimes \vert 1 \rangle = \begin{bmatrix} 0 \\ 1\end{bmatrix} \otimes \begin{bmatrix} 0 \\ 1\end{bmatrix} =\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}=\vert 11 \rangle. $$
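In NumPy the tensor product is given by the Kronecker product, and the four states above can be generated as follows.

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

print(np.kron(ket0, ket0))   # |00> = [1, 0, 0, 0]
print(np.kron(ket0, ket1))   # |01> = [0, 1, 0, 0]
print(np.kron(ket1, ket0))   # |10> = [0, 0, 1, 0]
print(np.kron(ket1, ket1))   # |11> = [0, 0, 0, 1]
```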









Measurements

The probability of a measurement on a quantum system giving a certain result is determined by the weight of the relevant basis state in the state vector. After the measurement, the system is in a state that corresponds to the result of the measurement. The operators and gates discussed below are examples of operations we can perform on specific states.

We consider the state

$$ \vert \psi\rangle = \alpha \vert 0 \rangle +\beta \vert 1 \rangle $$









Definitions of measurements

  1. A measurement can yield only one of the above states, either \( \vert 0\rangle \) or \( \vert 1\rangle \).
  2. The probability of a measurement resulting in \( \vert 0\rangle \) is \( \alpha^*\alpha = \vert \alpha \vert^2 \).
  3. The probability of a measurement resulting in \( \vert 1\rangle \) is \( \beta^*\beta = \vert \beta \vert^2 \).
  4. And we note that the sum of the outcomes gives \( \alpha^*\alpha+\beta^*\beta=1 \) since the two states are normalized.

After the measurement, the state of the system is the state associated with the result of the measurement.

We have already encountered the projection operators \( P \) and \( Q \).











Unitarity

Essentially all matrices we introduced here are so-called unitary matrices. This is an important element in quantum mechanics since the evolution of a closed quantum system is described by operations involving unitary operations only.

We have defined a new state \( \vert \psi_i\rangle \) as a linear expansion in terms of an orthogonal and normalized basis (our computational basis) \( \vert \phi_{j}\rangle \)

$$ \begin{equation} \vert \psi_i\rangle = \sum_{j} u_{ij}\vert \phi_{j}\rangle. \label{_auto1} \end{equation} $$









Hamiltonians and basis functions

It is normal to choose a basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have

$$ \hat{h}_0\vert \phi_{i}\rangle=\epsilon_{i}\vert \phi_{i}\rangle. $$

This is normally referred to as a single-particle basis \( \vert\phi_{i}(\mathbf{r})\rangle \), labeled by the quantum numbers \( i \) and the coordinates \( \mathbf{r} \).











Unitary transformations

A unitary transformation is important since it keeps the orthogonality. To see this consider first a basis of vectors \( \mathbf{v}_i \),

$$ \mathbf{v}_i = \begin{bmatrix} v_{i1} \\ \dots \\ \dots \\v_{in} \end{bmatrix} $$

We assume that the basis is orthogonal, that is

$$ \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$

An orthogonal or unitary transformation

$$ \mathbf{w}_i=\mathbf{U}\mathbf{v}_i, $$

preserves the dot product and orthogonality since

$$ \mathbf{w}_j^T\mathbf{w}_i=(\mathbf{U}\mathbf{v}_j)^T\mathbf{U}\mathbf{v}_i=\mathbf{v}_j^T\mathbf{U}^T\mathbf{U}\mathbf{v}_i= \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$
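A quick numerical illustration (a sketch only; the unitary matrix is generated at random via a QR decomposition, and we use the hermitian conjugate since the matrix is complex):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)               # columns of U form a random unitary matrix

V = np.eye(4)                        # orthonormal basis vectors v_i as columns
W = U @ V                            # transformed vectors w_i = U v_i

overlaps = W.conj().T @ W            # all inner products <w_j|w_i>
print(np.allclose(overlaps, np.eye(4)))   # True: orthonormality is preserved
```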









Orthogonality preserved

This means that if the coefficients \( u_{ij} \) belong to a unitary or orthogonal transformation (using the Dirac bra-ket notation)

$$ \vert \psi_i\rangle = \sum_{j} u_{ij}\vert \phi_{j}\rangle. $$

orthogonality is preserved.

This property is extremely useful when we build up a basis of many-body determinant based states.

Note also that although a basis \( \left\{\vert \phi_i \rangle\right\} \) contains an infinity of states, for practical calculations we always have to make some truncations.











Example

Assume we have two states represented by

$$ \vert \psi \rangle = \alpha \vert 0 \rangle + \beta \vert 1\rangle=\begin{bmatrix}\alpha \\ \beta \end{bmatrix}, $$

and

$$ \vert \phi \rangle = \gamma \vert 0 \rangle + \delta \vert 1\rangle=\begin{bmatrix}\gamma \\ \delta \end{bmatrix}. $$

We assume that the state \( \vert \phi \rangle \) is obtained through a unitary transformation of \( \vert \psi \rangle \) through a matrix \( \boldsymbol{U} \) with its hermitian conjugate \( \boldsymbol{U}^{\dagger} \) with matrix elements \( u_{ij}^{\dagger}=u_{ji}^* \) and \( \boldsymbol{I}=\boldsymbol{U}\boldsymbol{U}^{\dagger}=\boldsymbol{U}^{\dagger}\boldsymbol{U} \).











Inverse of unitary matrices

Note that this means that the hermitian conjugate of a unitary matrix is equal to its inverse. This has important consequences for what is called reversibility. We say that quantum mechanics is a theory which is reversible, with a probabilistic determinism. Classical mechanics, on the other hand, is reversible in a deterministic way, that is, knowing all initial conditions we can in principle determine the future motion of an object which obeys the laws of motion of classical mechanics.

We have then

$$ \begin{bmatrix}\gamma \\ \delta \end{bmatrix}=\begin{bmatrix}u_{00} & u_{01} \\ u_{10} & u_{11} \end{bmatrix}\begin{bmatrix}\alpha \\ \beta \end{bmatrix}. $$









New basis is also orthogonal

Since our original basis \( \vert \psi\rangle \) is orthogonal and normalized with \( \vert\alpha\vert^2+\vert\beta\vert^2=1 \), the new basis is also orthogonal and normalized, as we can see below here.

Since the inverse of a unitary matrix is equal to its hermitian conjugate (adjoint), unitary transformations are always reversible.

Why are only unitary transformations allowed? The key lies in the way the inner product transforms.

To see this we rewrite the new basis from the previous example in its two components as

$$ \vert \phi\rangle_i=\sum_{j}u_{ij}\vert \psi\rangle_j, $$

or, in terms of matrix-vector notation, we have

$$ \vert \phi\rangle=\boldsymbol{U}\vert \psi\rangle, $$









More on orthogonality

We have already assumed that \( \langle \psi \vert \psi \rangle = \vert\alpha\vert^2+\vert\beta\vert^2=1 \).

We have that

$$ \langle \phi\vert_i=\sum_{j}u_{ij}^*\langle \psi\vert_j, $$

or in terms of a matrix-vector notation we have

$$ \langle \phi\vert=\langle \psi\vert\boldsymbol{U}^{\dagger}. $$

Note that the two vectors are row vectors now.

If we stay with this notation we have

$$ \langle \phi\vert\phi\rangle = \langle \psi\vert \boldsymbol{U}^{\dagger}\boldsymbol{U}\vert \psi\rangle = \langle \psi\vert \psi\rangle=1! $$

Unitary transformations are rotations in state space which preserve the length (the square root of the inner product) of the state vector.











Hamiltonian and more definition

Before we proceed we need several definitions. Throughout these lectures we will assume that the interacting part of the Hamiltonian can be approximated by a two-body interaction. This means that our Hamiltonian can be written as the sum of a one-body part, which includes the kinetic energy and a possible external field, and a two-body interaction











Hamiltonian

This means that our Hamiltonian is written as the sum of a one-body part and a two-body part

$$ \begin{equation} \hat{H} = \hat{H}_0 + \hat{H}_I = \sum_{i=1}^N \hat{h}_0(x_i) + \sum_{i < j}^N \hat{v}(r_{ij}), \label{Hnuclei} \end{equation} $$

with

$$ \begin{equation} H_0=\sum_{i=1}^N \hat{h}_0(x_i). \label{hinuclei} \end{equation} $$

The external one-body potential \( \hat{u}_{\mathrm{ext}}(x_i) \) is normally approximated by a harmonic oscillator potential or the Coulomb interaction an electron feels from the nucleus. However, other potentials are fully possible, such as one derived from the self-consistent solution of the Hartree-Fock equations to be discussed here.











Hamiltonian is invariant

Our Hamiltonian is invariant under the permutation (interchange) of two particles. Since we will mainly deal with fermions, however, the total wave function is antisymmetric. Let \( \hat{P} \) be an operator which interchanges two particles. Due to the symmetries we have ascribed to our Hamiltonian, this operator commutes with the total Hamiltonian,

$$ [\hat{H},\hat{P}] = 0, $$

meaning that \( \Psi_{\lambda}(x_1, x_2, \dots , x_N) \) is an eigenfunction of \( \hat{P} \) as well, that is

$$ \hat{P}_{ij}\Psi_{\lambda}(x_1, x_2, \dots,x_i,\dots,x_j,\dots,x_N)= \beta\Psi_{\lambda}(x_1, x_2, \dots,x_i,\dots,x_j,\dots,x_N), $$

where \( \beta \) is the eigenvalue of \( \hat{P} \). We have introduced the suffix \( ij \) in order to indicate that we permute particles \( i \) and \( j \).











Pauli principle

The Pauli principle tells us that the total wave function for a system of fermions has to be antisymmetric, resulting in the eigenvalue \( \beta = -1 \).

In our case we assume that we can approximate the exact eigenfunction with a so-called Slater determinant.











Slater determinant

The state function \( \Phi(x_1, x_2,\dots ,x_N,\alpha,\beta,\dots, \sigma) \) is given by

$$ \begin{equation} \frac{1}{\sqrt{N!}}\left| \begin{array}{ccccc} \psi_{\alpha}(x_1)& \psi_{\alpha}(x_2)& \dots & \dots & \psi_{\alpha}(x_N)\\ \psi_{\beta}(x_1)&\psi_{\beta}(x_2)& \dots & \dots & \psi_{\beta}(x_N)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{\sigma}(x_1)&\psi_{\sigma}(x_2)& \dots & \dots & \psi_{\sigma}(x_N)\end{array} \right|, \label{eq:HartreeFockDet} \end{equation} $$

where \( x_i \) stands for the coordinates and spin values of particle \( i \), and \( \alpha,\beta,\dots, \sigma \) are the quantum numbers needed to specify the single-particle states.











Determinant algebra

Since we will mainly deal with Fermions (identical and indistinguishable particles) we will form an ansatz for a given state in terms of so-called Slater determinants determined by a chosen basis of single-particle functions.

For a given \( n\times n \) matrix \( \mathbf{A} \) we can write its determinant

$$ det(\mathbf{A})=|\mathbf{A}|= \left| \begin{array}{ccccc} a_{11}& a_{12}& \dots & \dots & a_{1n}\\ a_{21}&a_{22}& \dots & \dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2}& \dots & \dots & a_{nn}\end{array} \right|, $$

in a more compact form as

$$ |\mathbf{A}|= \sum_{i=1}^{n!}(-1)^{p_i}\hat{P}_i a_{11}a_{22}\dots a_{nn}, $$

where \( \hat{P}_i \) is a permutation operator which permutes the column indices \( 1,2,3,\dots,n \) and the sum runs over all \( n! \) permutations.











Transposition of column indices

The quantity \( p_i \) represents the number of transpositions of column indices that are needed in order to bring a given permutation back to its initial ordering, in our case given by \( a_{11}a_{22}\dots a_{nn} \) here.











Simple determinant example

A simple \( 2\times 2 \) determinant illustrates this. We have

$$ det(\mathbf{A})= \left| \begin{array}{cc} a_{11}& a_{12}\\ a_{21}&a_{22}\end{array} \right|= (-1)^0a_{11}a_{22}+(-1)^1a_{12}a_{21}, $$

where in the last term we have interchanged the column indices \( 1 \) and \( 2 \). The natural ordering we have chosen is \( a_{11}a_{22} \).
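For completeness, the permutation formula can be turned into a (highly inefficient) little function and checked against NumPy's determinant; this is only a sketch to illustrate the sum over permutations.

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Determinant as a signed sum over all n! column permutations."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # the sign is (-1) raised to the number of inversions (transpositions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        term = (-1.0) ** inversions
        for row, col in enumerate(perm):
            term *= A[row, col]
        total += term
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(det_by_permutations(A), np.linalg.det(A))   # both give -2.0
```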











Back to the derivation of the energy

The single-particle functions \( \psi_{\alpha}(x_i) \) are eigenfunctions of the one-body Hamiltonian, that is

$$ \hat{h}_0(x_i)=\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i), $$

with eigenvalues

$$ \hat{h}_0(x_i) \psi_{\alpha}(x_i)=\left(\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i)\right)\psi_{\alpha}(x_i)=\varepsilon_{\alpha}\psi_{\alpha}(x_i). $$









Non-interacting system

The energies \( \varepsilon_{\alpha} \) are the so-called non-interacting single-particle energies, or unperturbed energies. The total energy is in this case the sum over all single-particle energies, if no two-body or more complicated many-body interactions are present.











Ground state energy

Let us denote the ground state energy by \( E_0 \). According to the variational principle we have

$$ E_0 \le E[\Phi] = \int \Phi^*\hat{H}\Phi d\mathbf{\tau}, $$

where \( \Phi \) is a trial function which we assume to be normalized

$$ \int \Phi^*\Phi d\mathbf{\tau} = 1, $$

where we have used the shorthand \( d\mathbf{\tau}=dx_1dx_2\dots dx_N \).











In the Hartree-Fock method the trial function is the Slater determinant which can be rewritten as

$$ \Phi(x_1,x_2,\dots,x_N,\alpha,\beta,\dots,\nu) = \frac{1}{\sqrt{N!}}\sum_{P} (-)^P\hat{P}\psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_N), $$

which equals

$$ \sqrt{N!}\hat{A}\Phi_H, $$

where we have introduced the antisymmetrization operator \( \hat{A} \) defined by the summation over all possible permutations of two particles.











Using expansions

It is defined as

$$ \hat{A} = \frac{1}{N!}\sum_{p} (-)^p\hat{P}, $$

with \( p \) standing for the number of transpositions in the permutation \( \hat{P} \). We have introduced for later use the so-called Hartree-function, defined by the simple product of all possible single-particle functions

$$ \Phi_H(x_1,x_2,\dots,x_N,\alpha,\beta,\dots,\nu) = \psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_N). $$









Using invariance

Both \( \hat{H}_0 \) and \( \hat{H}_I \) are invariant under all possible permutations of any two particles and hence commute with \( \hat{A} \)

$$ [H_0,\hat{A}] = [H_I,\hat{A}] = 0. $$

Furthermore, \( \hat{A} \) satisfies

$$ \begin{equation} \hat{A}^2 = \hat{A}, \label{AntiSymSquared} \end{equation} $$

since every permutation of the Slater determinant reproduces it.











Expectation value

The expectation value of \( \hat{H}_0 \)

$$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = N! \int \Phi_H^*\hat{A}\hat{H}_0\hat{A}\Phi_H d\mathbf{\tau} $$

is readily reduced to

$$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = N! \int \Phi_H^*\hat{H}_0\hat{A}\Phi_H d\mathbf{\tau}. $$

The next step is to replace the antisymmetrization operator by its definition and to replace \( \hat{H}_0 \) with the sum of one-body operators

$$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{i=1}^N \sum_{p} (-)^p\int \Phi_H^*\hat{h}_0\hat{P}\Phi_H d\mathbf{\tau}. $$









Vanishing terms

The integral vanishes if two or more particles are permuted in only one of the Hartree-functions \( \Phi_H \) because the individual single-particle wave functions are orthogonal. We obtain then

$$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau}= \sum_{i=1}^N \int \Phi_H^*\hat{h}_0\Phi_H d\mathbf{\tau}. $$

Orthogonality of the single-particle functions allows us to further simplify the integral, and we arrive at the following expression for the expectation values of the sum of one-body Hamiltonians

$$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{\mu=1}^N \int \psi_{\mu}^*(x)\hat{h}_0\psi_{\mu}(x)dx. $$









Shorthand notation

We introduce the following shorthand for the above integral

$$ \langle \mu | \hat{h}_0 | \mu \rangle = \int \psi_{\mu}^*(x)\hat{h}_0\psi_{\mu}(x)dx, $$

which allows us to rewrite the expectation values as

$$ \int \Phi^*\hat{H}_0\Phi d\tau = \sum_{\mu=1}^N \langle \mu | \hat{h}_0 | \mu \rangle. $$









Expectation value for two-body operator

The expectation value of the two-body part of the Hamiltonian is obtained in a similar manner. We have

$$ \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = N! \int \Phi_H^*\hat{A}\hat{H}_I\hat{A}\Phi_H d\mathbf{\tau}, $$

which reduces to

$$ \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \sum_{i < j}^N \sum_{p} (-)^p\int \Phi_H^*\hat{v}(r_{ij})\hat{P}\Phi_H d\mathbf{\tau}, $$

by following the same arguments as for the one-body Hamiltonian.











Final expressions

Because of the dependence on the inter-particle distance \( r_{ij} \), the integrals where two particles are permuted no longer vanish, and we get

$$ \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \sum_{i < j=1}^N \int \Phi_H^*\hat{v}(r_{ij})(1-P_{ij})\Phi_H d\mathbf{\tau}, $$

where \( P_{ij} \) is the permutation operator that interchanges particle \( i \) and particle \( j \). Again we use the assumption that the single-particle wave functions are orthogonal.











Final expressions

We obtain

$$ \begin{align*} \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \frac{1}{2}\sum_{\mu=1}^N\sum_{\nu=1}^N &\left[ \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j) dx_idx_j \right.\\ &\left. - \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) \hat{v}(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j) dx_idx_j \right]. \label{H2Expectation} \end{align*} $$

The first term is the so-called direct term. It is frequently also called the Hartree term, while the second is due to the Pauli principle and is called the exchange term or just the Fock term. The factor \( 1/2 \) is introduced because we now run over all pairs twice.











Some additional definitions

The last equation allows us to introduce some further definitions. The single-particle wave functions \( \psi_{\mu}(x) \), labeled by the quantum numbers \( \mu \) and the coordinates \( x \), are given by the overlap

$$ \psi_{\alpha}(x) = \langle x | \alpha \rangle . $$









Additional expressions

We introduce the following shorthands for the above two integrals

$$ \langle \mu\nu|\hat{v}|\mu\nu\rangle = \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j) dx_idx_j, $$

and

$$ \langle \mu\nu|\hat{v}|\nu\mu\rangle = \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) \hat{v}(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j) dx_idx_j. $$
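If the two-body matrix elements are available as a four-index array (a hypothetical layout v[mu, nu, gamma, delta], filled with random numbers below purely for illustration), the direct, exchange and antisymmetrized elements are simple array lookups:

```python
import numpy as np

n = 4                                  # hypothetical basis size
rng = np.random.default_rng(seed=2)
v = rng.normal(size=(n, n, n, n))      # placeholder for <mu nu|v|gamma delta>

mu, nu = 0, 1
direct   = v[mu, nu, mu, nu]           # <mu nu|v|mu nu>, the Hartree term
exchange = v[mu, nu, nu, mu]           # <mu nu|v|nu mu>, the Fock/exchange term
antisym  = direct - exchange           # <mu nu|v|mu nu>_AS
```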









Preparing for later studies: varying the coefficients of a wave function expansion and orthogonal transformations

It is common to expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions). We define our new single-particle basis (this is the normal approach in Hartree-Fock theory) by performing a unitary transformation on our previous basis (labelled with Greek indices) as

$$ \begin{equation} \psi_p^{new} = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}. \label{eq:newbasis} \end{equation} $$

In this case we vary the coefficients \( C_{p\lambda} \). If the basis has infinitely many solutions, we need to truncate the above sum. We assume that the basis \( \phi_{\lambda} \) is orthogonal.











Choice of basis states

It is normal to choose a single-particle basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have

$$ \hat{h}_0\phi_{\lambda}=\epsilon_{\lambda}\phi_{\lambda}. $$

The single-particle wave functions \( \phi_{\lambda}(\mathbf{r}) \), labeled by the quantum numbers \( \lambda \) and the coordinates \( \mathbf{r} \), are given by the overlap

$$ \phi_{\lambda}(\mathbf{r}) = \langle \mathbf{r} | \lambda \rangle . $$









In deriving the Hartree-Fock equations, we will expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc).











ONB again

We stated that a unitary transformation keeps the orthogonality. To see this consider first a basis of vectors \( \mathbf{v}_i \),

$$ \mathbf{v}_i = \begin{bmatrix} v_{i1} \\ \dots \\ \dots \\v_{in} \end{bmatrix} $$

We assume that the basis is orthogonal, that is

$$ \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$









Preserving orthogonality

An orthogonal or unitary transformation

$$ \mathbf{w}_i=\mathbf{U}\mathbf{v}_i, $$

preserves the dot product and orthogonality since

$$ \mathbf{w}_j^T\mathbf{w}_i=(\mathbf{U}\mathbf{v}_j)^T\mathbf{U}\mathbf{v}_i=\mathbf{v}_j^T\mathbf{U}^T\mathbf{U}\mathbf{v}_i= \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$

This means that if the coefficients \( C_{p\lambda} \) belong to a unitary or orthogonal transformation (using the Dirac bra-ket notation)

$$ \vert p\rangle = \sum_{\lambda} C_{p\lambda}\vert\lambda\rangle, $$

orthogonality is preserved, that is \( \langle \alpha \vert \beta\rangle = \delta_{\alpha\beta} \) and \( \langle p \vert q\rangle = \delta_{pq} \).











Useful property

This property is extremely useful when we build up a basis of many-body Slater-determinant-based states.

Note also that although a basis \( \vert \alpha\rangle \) contains an infinity of states, for practical calculations we have always to make some truncations.









Another useful property

Before we develop for example the Hartree-Fock equations, there is another very useful property of determinants that we will use both in connection with Hartree-Fock calculations and later shell-model calculations.

Consider the following determinant

$$ \left| \begin{array}{cc} \alpha_1b_{11}+\alpha_2b_{12}& a_{12}\\ \alpha_1b_{21}+\alpha_2b_{22}&a_{22}\end{array} \right|=\alpha_1\left|\begin{array}{cc} b_{11}& a_{12}\\ b_{21}&a_{22}\end{array} \right|+\alpha_2\left| \begin{array}{cc} b_{12}& a_{12}\\b_{22}&a_{22}\end{array} \right| $$









Generalizing

We can generalize this to an \( n\times n \) matrix and have

$$ \left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & \sum_{k=1}^n c_k b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & \sum_{k=1}^n c_k b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2} & \dots & \sum_{k=1}^n c_k b_{nk} &\dots & a_{nn}\end{array} \right|= \sum_{k=1}^n c_k\left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots\\ \dots & \dots & \dots & \dots & \dots & \dots\\ a_{n1}& a_{n2} & \dots & b_{nk} &\dots & a_{nn}\end{array} \right| . $$

This is a property we will use in our Hartree-Fock discussions.











We can generalize the previous results, now with all elements \( a_{ij} \) being given as functions of linear combinations of various coefficients \( c \) and elements \( b_{ij} \),

$$ \left| \begin{array}{cccccc} \sum_{k=1}^n b_{1k}c_{k1}& \sum_{k=1}^n b_{1k}c_{k2} & \dots & \sum_{k=1}^n b_{1k}c_{kj} &\dots & \sum_{k=1}^n b_{1k}c_{kn}\\ \sum_{k=1}^n b_{2k}c_{k1}& \sum_{k=1}^n b_{2k}c_{k2} & \dots & \sum_{k=1}^n b_{2k}c_{kj} &\dots & \sum_{k=1}^n b_{2k}c_{kn}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots &\dots \\ \sum_{k=1}^n b_{nk}c_{k1}& \sum_{k=1}^n b_{nk}c_{k2} & \dots & \sum_{k=1}^n b_{nk}c_{kj} &\dots & \sum_{k=1}^n b_{nk}c_{kn}\end{array} \right|=det(\mathbf{C})det(\mathbf{B}), $$

where \( det(\mathbf{C}) \) and \( det(\mathbf{B}) \) are the determinants of \( n\times n \) matrices with elements \( c_{ij} \) and \( b_{ij} \) respectively. This is a property we will use in our Hartree-Fock discussions. Convince yourself about the correctness of the above expression by setting \( n=2 \).
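Following the suggestion above, here is a quick numerical check of the relation for \( n=2 \) with arbitrarily chosen matrices:

```python
import numpy as np

B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.array([[0.0, 1.0], [2.0, 5.0]])

lhs = np.linalg.det(B @ C)                 # matrix with elements sum_k b_ik c_kj
rhs = np.linalg.det(C) * np.linalg.det(B)
print(np.isclose(lhs, rhs))                # True
```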











New Slater determinant

With our definition of the new basis in terms of an orthogonal basis we have

$$ \psi_p(x) = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x). $$

If the coefficients \( C_{p\lambda} \) belong to an orthogonal or unitary matrix, the new basis is also orthogonal. Our Slater determinant in the new basis \( \psi_p(x) \) is written as

$$ \frac{1}{\sqrt{N!}} \left| \begin{array}{ccccc} \psi_{p}(x_1)& \psi_{p}(x_2)& \dots & \dots & \psi_{p}(x_N)\\ \psi_{q}(x_1)&\psi_{q}(x_2)& \dots & \dots & \psi_{q}(x_N)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{t}(x_1)&\psi_{t}(x_2)& \dots & \dots & \psi_{t}(x_N)\end{array} \right|=\frac{1}{\sqrt{N!}} \left| \begin{array}{ccccc} \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_1)& \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_N)\\ \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_N)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_N)\end{array} \right|, $$

which is nothing but \( det(\mathbf{C})det(\Phi) \), with \( det(\Phi) \) being the determinant given by the basis functions \( \phi_{\lambda}(x) \).











Energy functional

The energy functional is

$$ E[\Phi] = \sum_{\mu=1}^N \langle \mu | h | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^N\sum_{{\nu}=1}^N \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}, $$

which is the expression for the energy functional in terms of the basis functions \( \phi_{\lambda}(\mathbf{r}) \). We then varied the above energy functional with respect to the basis functions \( \vert\mu \rangle \). Now we are interested in a new basis, defined in terms of a chosen basis as in Eq. \eqref{eq:newbasis}. We can then rewrite the energy functional as

$$ \begin{equation} E[\Phi^{New}] = \sum_{i=1}^N \langle i | h | i \rangle + \frac{1}{2}\sum_{ij=1}^N\langle ij|\hat{v}|ij\rangle_{AS}, \label{FunctionalEPhi2} \end{equation} $$

where \( \Phi^{New} \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis}.











New expression

Using Eq. \eqref{eq:newbasis} we can rewrite Eq. \eqref{FunctionalEPhi2} as

$$ \begin{equation} E[\Psi] = \sum_{i=1}^N \sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | h | \beta \rangle + \frac{1}{2}\sum_{ij=1}^N\sum_{{\alpha\beta\gamma\delta}} C^*_{i\alpha}C^*_{j\beta}C_{i\gamma}C_{j\delta}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}. \label{FunctionalEPhi3} \end{equation} $$
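As a final sketch, the transformed energy functional can be evaluated with a couple of einsum contractions, assuming hypothetical arrays h[alpha, beta] for the one-body elements, v_as[alpha, beta, gamma, delta] for the antisymmetrized two-body elements, and C[i, alpha] for the coefficients of the N occupied states (random numbers below, for illustration only):

```python
import numpy as np

n_basis, N = 6, 2
rng = np.random.default_rng(seed=3)
h    = rng.normal(size=(n_basis, n_basis))
v_as = rng.normal(size=(n_basis, n_basis, n_basis, n_basis))
C    = rng.normal(size=(N, n_basis))           # rows are the occupied states

one_body = np.einsum('ia,ib,ab->', C.conj(), C, h)
two_body = 0.5 * np.einsum('ia,jb,ig,jd,abgd->', C.conj(), C.conj(), C, C, v_as)
energy = one_body + two_body
```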