Gamma Matrices

In the last chapter, we introduced the momentum-space Dirac equation, which we rewrite in the form 1) \begin{equation} ( \gamma_\mu \, p^\mu - m ) \Psi = 0 \label{diracnew} \end{equation} where we have set $c=1$. The gamma matrices $\gamma_\mu$ satisfy \begin{equation} \{ \gamma_\mu,\gamma_\nu \} = \gamma_\mu\gamma_\nu + \gamma_\nu\gamma_\mu = 2 g_{\mu\nu} \label{gammaid} \end{equation} How do we find such matrices?
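
One standard way to motivate ($\ref{gammaid}$) is to square the operator in ($\ref{diracnew}$). Using the symmetry of $p^\mu p^\nu$, \begin{equation} (\gamma_\mu \, p^\mu)^2 = \gamma_\mu\gamma_\nu \, p^\mu p^\nu = \frac12 \{\gamma_\mu,\gamma_\nu\} \, p^\mu p^\nu = g_{\mu\nu} \, p^\mu p^\nu \end{equation} so ($\ref{gammaid}$) is precisely the condition needed for solutions of the Dirac equation to satisfy the mass-shell condition $p_\mu p^\mu = m^2$.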

It turns out that, in all the cases we will be interested in, the gamma matrices can be constructed in blocks from the Pauli matrices $\SIGMA_a$. We begin by discussing how to construct big matrices from small ones in this way.

Imagine a chessboard. It's an $8\times8$ grid. Now suppose it's a 3-dimensional chess game. Easy; just stack 8 boards on top of each other. But there's no need to stack them vertically! Simply put 8 ordinary chess boards next to each other, and you can still play 3-dimensional chess. What about 4 dimensions? Take 8 rows of 8 chessboards each. You can keep going to get a chess game in any (finite!)\ dimension, but 4 is enough for our needs.

How do you label a square in our 4-dimensional chess game? You need to specify both the chessboard, and the square on that chessboard. But each of these specifications corresponds to an element of an $8\times8$ matrix! Thus, a square in our 4-dimensional chess game is labeled by specifying one element in each of two $8\times8$ matrices.

Of course, we can also imagine this setup as a single $64\times64$ board, whose squares are specified by giving an element of a $64\times64$ matrix. We have thus built up a $64\times64$ matrix using two $8\times8$ matrices, one to describe the arrangement of the blocks, the other to describe the location in each block.

Consider doing this with $2\times2$ matrices, rather than $8\times8$. Suppose the first is \begin{equation} \SIGMA_1 = \SIGMA_x = \begin{pmatrix}0& 1\cr \noalign{\smallskip} 1& 0\cr\end{pmatrix} \end{equation} and the second is the identity matrix. What is the result?

The first matrix, $\SIGMA_1$, gives the block structure: The upper-left and lower-right blocks are (multiplied by) $0$, while the remaining blocks are (multiplied by) $1$. What's in each block? The identity matrix! We write \begin{equation} \SIGMA_1 \otimes \II = \begin{pmatrix} 0& 0& 1& 0\cr 0& 0& 0& 1\cr 1& 0& 0& 0\cr 0& 1& 0& 0\cr \end{pmatrix} \end{equation} where the symbol $\otimes$ is read “tensor”; this is a tensor product. Tensors are generalizations of matrices; the tensor product above is really a 4-index array of numbers, of size $2\times2\times2\times2$. Just as with the chessboards, we reinterpret this array as an ordinary matrix, which in this case is $4\times4$.
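
For readers who want to experiment, the tensor product of matrices is available in NumPy as the Kronecker product. The following minimal sketch (the variable names are ours) reproduces the $4\times4$ matrix above:

\begin{verbatim}
import numpy as np

sigma1 = np.array([[0, 1],
                   [1, 0]])   # Pauli matrix sigma_x
I2 = np.eye(2, dtype=int)     # 2x2 identity matrix

# np.kron computes the tensor (Kronecker) product: sigma1 gives
# the block structure, and I2 fills each nonzero block.
print(np.kron(sigma1, I2))
\end{verbatim}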

The power of this description comes from the fact that matrix multiplication is compatible with the tensor product, in the sense that \begin{equation} (\AA_1\otimes\BB_1) (\AA_2\otimes\BB_2) = (\AA_1\AA_2) \otimes (\BB_1\BB_2) \label{tensorid} \end{equation} This doesn't quite work for anticommutators, except in special cases. One such case occurs when either $[\AA_1,\AA_2] = 0$ or $[\BB_1,\BB_2] = 0$, in which case \begin{equation} \{ (\AA_1\otimes\BB_1) , (\AA_2\otimes\BB_2) \} = \frac12 \{\AA_1,\AA_2\} \otimes \{\BB_1,\BB_2\} \label{antiid} \end{equation} By constructing the gamma matrices as tensor products of Pauli matrices, we can use ($\ref{tensorid}$) and ($\ref{antiid}$) to work out products of gamma matrices in terms of products of Pauli matrices, which are much easier. Remembering that \begin{equation} \SIGMA_2 = \SIGMA_y = \begin{pmatrix}0& -i\cr \noalign{\smallskip} i& 0\cr\end{pmatrix} \qquad \SIGMA_3 = \SIGMA_z = \begin{pmatrix}1& 0\cr \noalign{\smallskip} 0& -1\cr\end{pmatrix} \end{equation} it is easy to check that the Pauli matrices anticommute with one another and square to the identity, that is, \begin{equation} \{ \SIGMA_a,\SIGMA_b \} = 2 \delta_{ab} \end{equation} where $\delta_{ab}$ is the Kronecker delta, which is $1$ if $a=b$ and $0$ otherwise.
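
Both facts are easy to verify numerically. Here is a minimal check (function and variable names are ours) of the Pauli anticommutation relations, together with ($\ref{antiid}$) in a case where the first factors commute:

\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def anti(A, B):
    """Anticommutator {A, B}."""
    return A @ B + B @ A

# {sigma_a, sigma_b} = 2 delta_ab:
for a, A in enumerate((sx, sy, sz)):
    for b, B in enumerate((sx, sy, sz)):
        expected = 2 * I2 if a == b else np.zeros((2, 2))
        assert np.allclose(anti(A, B), expected)

# The tensor-product anticommutator identity, in a case where the
# first factors commute ([sx, sx] = 0):
lhs = anti(np.kron(sx, sy), np.kron(sx, sz))
rhs = 0.5 * np.kron(anti(sx, sx), anti(sy, sz))
assert np.allclose(lhs, rhs)   # both vanish, since {sy, sz} = 0
\end{verbatim}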

There are many possible choices of matrices $\gamma_\mu$ which satisfy ($\ref{gammaid}$); each such choice is called a representation. We seek a minimal representation, that is, we are looking for the smallest matrices possible, so that the solutions $\Psi$ of the Dirac equation have as few physical degrees of freedom as possible. In $4$ dimensions, the smallest matrices which can be used are $4\times4$, but there are still many representations.

We choose a representation which will generalize nicely to the other division algebras, namely \begin{align} \gamma_0 &= \SIGMA_1 \otimes \II = \begin{pmatrix}0& \II\cr \noalign{\smallskip} \II& 0\cr\end{pmatrix} \label{rep1}\\ \noalign{\smallskip} \gamma_a &= i\SIGMA_2 \otimes \SIGMA_a = \begin{pmatrix} 0& \SIGMA_a\cr \noalign{\smallskip} -\SIGMA_a& 0\cr \end{pmatrix} \label{rep2} \end{align} where $a=1,2,3$. These expressions are carefully chosen so that, given any two distinct gamma matrices, exactly one of the factors commutes. Using ($\ref{antiid}$), it is then obvious that distinct gamma matrices anticommute to $0$; all that remains to be checked is that they square to the correct multiples of the identity matrix. They do.
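
As a sanity check, one can also verify ($\ref{gammaid}$) for this representation numerically; a short sketch (again with our own variable names, and with the metric taken to have signature $(+,-,-,-)$, consistent with $\gamma_0^2 = 1$ and $\gamma_a^2 = -1$ here):

\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

# gamma_0 = sigma_1 (x) I,  gamma_a = i sigma_2 (x) sigma_a:
gamma = [np.kron(sx, I2)] + [np.kron(1j * sy, s) for s in (sx, sy, sz)]
g = np.diag([1, -1, -1, -1])   # metric with signature (+,-,-,-)

# Check {gamma_mu, gamma_nu} = 2 g_{mu nu}:
for mu in range(4):
    for nu in range(4):
        lhs = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(lhs, 2 * g[mu, nu] * np.eye(4))
\end{verbatim}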

This is not the representation found in most introductory textbooks. It has the advantage that all of the gamma matrices have the same off-diagonal block structure. This emphasizes the fundamental role played by the mass term in the Dirac equation, which multiplies the identity matrix. Such a representation is called a chiral or Weyl representation (because the eigenspinors of the chiral projection operator take on a particularly nice form). Multiplying the Dirac equation ($\ref{diracnew}$) on the left by $\gamma_0$ brings it to the form \begin{equation} ( \gamma_0\gamma_\mu \, p^\mu - m\gamma_0 ) \Psi = 0 \label{DiracI} \end{equation} which is the form Dirac originally used. This form emphasizes the chiral nature of the representation: The first term is block diagonal, and the mass term “couples” the otherwise unrelated blocks. Furthermore, both matrix coefficients are now Hermitian — and both square to the identity matrix.
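
These last claims follow quickly from ($\ref{rep1}$), ($\ref{rep2}$), and ($\ref{gammaid}$): since $\gamma_0$ is Hermitian, each $\gamma_a$ is anti-Hermitian, and distinct gamma matrices anticommute, we have \begin{equation} (\gamma_0\gamma_a)^\dagger = \gamma_a^\dagger \gamma_0^\dagger = -\gamma_a\gamma_0 = \gamma_0\gamma_a \qquad (\gamma_0\gamma_a)^2 = -\gamma_0^2 \, \gamma_a^2 = \II \end{equation}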

We have not yet said what $\Psi$ is. Since the gamma matrices are matrices, $\Psi$ is clearly a column vector of the appropriate size. However, it turns out that $\Psi$ transforms in a particular way under the Lorentz group: $\Psi$ is a (Dirac) spinor. The description “column vector” is therefore misleading, and will be avoided.

The set of all products of gamma matrices is the basic example of a Clifford algebra. Using the anticommutation relations ($\ref{gammaid}$), any such product can be simplified so that it contains each gamma matrix at most once. Each element of the Clifford algebra can therefore be classified as even or odd, depending on the number of gamma matrices it contains. The product of two even elements is again even, so the even part of the Clifford algebra is a subalgebra. In the chiral representation above, the even elements are block diagonal, and the odd elements are block off-diagonal.
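
For example, combining ($\ref{rep1}$) and ($\ref{rep2}$) with ($\ref{tensorid}$) and $\SIGMA_1\SIGMA_2 = i\SIGMA_3$ gives \begin{equation} \gamma_0\gamma_1 = (\SIGMA_1 \otimes \II)(i\SIGMA_2 \otimes \SIGMA_1) = i\,\SIGMA_1\SIGMA_2 \otimes \SIGMA_1 = -\SIGMA_3 \otimes \SIGMA_1 = \begin{pmatrix} -\SIGMA_1& 0\cr \noalign{\smallskip} 0& \SIGMA_1\cr \end{pmatrix} \end{equation} which is indeed block diagonal.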

Our gamma matrices have the further advantage that they generalize immediately to the other division algebras. First of all, the apparent factor of $i$ in (the first factor of) $\gamma_a$ isn't really there; $i\SIGMA_2$ is real. All that is necessary is to include the generalized Pauli matrices obtained from $\SIGMA_2$ by replacing $i$ by $j$, $k$, etc. 2) There are as many of these matrices as there are imaginary units in the division algebra; they still anticommute, by virtue of the division algebra multiplication table, so that ($\ref{antiid}$) still holds. Remembering to include $\SIGMA_x$ and $\SIGMA_z$, we get 3, 4, 6, or 10 gamma matrices over $\RR$, $\CC$, $\HH$, and $\OO$, respectively: the division algebra contributes as many generalized Pauli matrices as it has imaginary units ($0$, $1$, $3$, or $7$), to which we add $\SIGMA_x$, $\SIGMA_z$, and $\gamma_0$ itself. We will use this in the next chapter to discuss the Dirac equation in higher dimensions.
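
To see why the generalized Pauli matrices still anticommute, note that distinct imaginary basis units $p \ne q$ (such as $i$, $j$, $k$ in $\HH$) satisfy $pq = -qp$, so that \begin{equation} \left\{ \begin{pmatrix} 0& -p\cr \noalign{\smallskip} p& 0\cr \end{pmatrix} , \begin{pmatrix} 0& -q\cr \noalign{\smallskip} q& 0\cr \end{pmatrix} \right\} = -(pq+qp) \, \II = 0 \end{equation}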


We digress briefly to mention an important application of this formalism. An essential ingredient in the construction of any supersymmetric theory, such as the Green-Schwarz superstring [9], is the spinor identity \begin{equation} \epsilon^{klm} \gamma^\mu \Psi_k \bar\Psi_l \gamma_\mu \Psi_m = 0 \end{equation} for anticommuting spinors $\Psi_k$, $\Psi_l$, $\Psi_m$, where $\epsilon^{klm}$ indicates total antisymmetrization and where \begin{equation} \bar\Psi = \Psi^\dagger \gamma^0 \end{equation} is the adjoint of $\Psi$. An analogous identity, \begin{equation} \gamma^\mu \Psi \bar\Psi \gamma_\mu \Psi = 0 \end{equation} holds for commuting spinors. It is well known that this 3-$\Psi$'s rule holds only for Majorana spinors in 3 dimensions, Majorana or Weyl spinors in 4 dimensions, Weyl spinors in 6 dimensions, and Majorana-Weyl spinors in 10 dimensions. 3) Thus, the Green-Schwarz superstring exists only in those dimensions [9]. The 3-$\Psi$'s rule in all these cases is equivalent to an identity on the gamma matrices, which holds automatically when the gamma matrices are expressed in terms of the 4 division algebras, corresponding precisely to the above 4 types of spinors. These spinor expressions can then be rewritten in terms of $2\times2$ matrices over the appropriate division algebra, eliminating the gamma matrices completely.

1) Indices can be raised and lowered using the metric; a time component (index 0) is unchanged, while a space component (indices 1–3) picks up a minus sign. A pair of indices being summed over should always have one up and one down, in which case it doesn't matter which is which.
2) It is customary to number these sequentially from $2$, renumbering $\SIGMA_3$ to put it last. To avoid confusion, we will often use the indices $x$, $y$, $z$ for the original Pauli matrices, and $j$, $k$, etc. for the generalized Pauli matrices.
3) A Majorana representation is one in which the gamma matrices are real (or pure imaginary). In such a representation, a Majorana spinor is one whose components are real. Such spinors can also be described as eigenspinors of the charge conjugation operator.
