We prove the following:
(1) Let $A$ be an $n \times n$ matrix and $p(\lambda) = \det(\lambda I - A)$ be its characteristic polynomial. Then $p(A) = 0$.

An equivalent way of stating this is (the determinant of a linear transformation is just the determinant of any matrix representation; it does not depend on the choice of basis):

(2) Let $T : V \to V$ be a linear transformation and $p(\lambda) = \det(\lambda I - T)$ be its characteristic polynomial. Then $p(T) = 0$.
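As a quick sanity check of (1) before the proof (a sketch; the helper functions such as `charpoly` are our own, not a library API), the snippet below computes $p(\lambda) = \det(\lambda I - A)$ exactly, representing polynomials as coefficient lists, and verifies that $p(A)$ is the zero matrix for a sample integer matrix.

```python
# A sanity check of (1): compute p(lambda) = det(lambda*I - A) exactly, then
# verify p(A) = 0. Polynomials are lists of coefficients, lowest degree first.
# (Helper names like `charpoly` are our own, not a library API.)

def p_add(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0) for i in range(n)]

def p_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def p_det(m):
    """Determinant of a matrix of polynomials, expanded along the first row."""
    if len(m) == 1:
        return m[0][0]
    det = [0]
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        term = p_mul(entry, p_det(minor))
        det = p_add(det, term if j % 2 == 0 else [-c for c in term])
    return det

def charpoly(A):
    """Coefficients of det(lambda*I - A): entry (i,j) is lambda - a_ii or -a_ij."""
    n = len(A)
    return p_det([[[-A[i][j]] + ([1] if i == j else []) for j in range(n)]
                  for i in range(n)])

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def evaluate(coeffs, A):
    """p(A) = sum_k c_k A^k, with A^0 = I."""
    n = len(A)
    result = [[0] * n for _ in range(n)]
    power = [[int(i == j) for j in range(n)] for i in range(n)]
    for c in coeffs:
        result = [[result[i][j] + c * power[i][j] for j in range(n)] for i in range(n)]
        power = mat_mul(power, A)
    return result

A = [[2, 0, 1], [1, -3, 2], [0, 4, 5]]
print(evaluate(charpoly(A), A))   # the zero matrix, as (1) asserts
```

Everything here is exact integer arithmetic, so there is no floating-point fuzz to worry about.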
Suppose $A = (a_{ij})$ is an $n \times n$ matrix. Let $F$ be the underlying field.
To prove (1), we could consider just plugging $A$ into the characteristic polynomial, like this:

(3) $p(A) = \det(AI - A) = \det(A - A) = \det(0) = 0$?

Wait, but this doesn't make sense… What do we want here? Well, when we expand the characteristic polynomial, we take the matrix $\lambda I - A$, which has $\lambda$'s along the diagonal, and then take the determinant. So we want to take a matrix which has $A$'s along its diagonal, and then take the determinant. In other words, we want to work with matrices with matrix entries!
But wait, matrices don't form a field (since some of them don't have inverses). They do, however, form a ring $M_n(F)$, and we know that many linear algebra facts (where matrices have entries in fields) carry over to commutative rings. ($M_n(F)$ itself is not commutative, but every matrix entry we use below will be a polynomial in $A$, and such matrices do commute with one another.) We consider $n \times n$ matrices over the ring of $n \times n$ matrices with entries in $F$. Equivalently, since matrices correspond to linear transformations, we could talk about matrices over the ring of linear transformations instead. This is frequently called the endomorphism ring $\operatorname{End}(V)$, where $V$ is our vector space $F^n$. To prevent confusion, we will denote matrices with matrix entries in bold.
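To make "matrices with matrix entries" concrete, here is a small sketch (helper names ours) checking that multiplying $2 \times 2$ matrices whose entries are themselves $2 \times 2$ matrices, block by block, agrees with forgetting the block structure and multiplying the corresponding ordinary $4 \times 4$ matrices.

```python
# Matrices with matrix entries multiply by the usual row-by-column rule,
# with entry-level * and + replaced by matrix product and matrix sum.

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def block_mul(P, Q):
    """Row-by-column product where the entries are themselves 2x2 matrices."""
    n = len(P)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            acc = [[0, 0], [0, 0]]
            for k in range(n):
                acc = mat_add(acc, mat_mul(P[i][k], Q[k][j]))
            row.append(acc)
        out.append(row)
    return out

def flatten(P):
    """Forget the block structure: a 2x2 matrix of 2x2 blocks -> a 4x4 matrix."""
    return [[P[I][J][i][j] for J in range(2) for j in range(2)]
            for I in range(2) for i in range(2)]

# two block matrices with integer 2x2 blocks
P = [[[[1, 2], [3, 4]], [[0, 1], [1, 0]]],
     [[[2, 0], [0, 2]], [[1, 1], [0, 1]]]]
Q = [[[[1, 0], [0, 1]], [[2, 3], [4, 5]]],
     [[[0, 2], [2, 0]], [[1, 0], [1, 1]]]]

assert flatten(block_mul(P, Q)) == mat_mul(flatten(P), flatten(Q))
```

The agreement is exactly the familiar fact that conformal block matrices multiply blockwise.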
Let $\mathbf{I}$ be the identity matrix in the ring of $n \times n$ matrices with entries in $M_n(F)$: it has identity matrices (or identity transformations) along its diagonal, and zero matrices elsewhere. Let $\mathbf{A}$ be the matrix whose $(i,j)$ entry is $a_{ji} I$, each entry of $A^T$ times the identity (thinking of entries as linear transformations, these correspond to multiplication by the scalar $a_{ji}$). Using $A^T$ rather than $A$ is harmless, since $A$ and $A^T$ have the same characteristic polynomial, and it will make (5) below come out right. So the correct way to write (3), what we actually want to prove, is:

(4) $p(A) = \det(A\mathbf{I} - \mathbf{A}) = 0,$

where the determinant is computed in the commutative subring $F[A] \subseteq M_n(F)$; the first equality holds because substituting $A$ for $\lambda$ is a ring homomorphism $F[\lambda] \to F[A]$.

Note that $A$ functions as a scalar here, multiplying every entry of $\mathbf{I}$. Let $\mathbf{B}$ be the matrix $A\mathbf{I} - \mathbf{A}$, so its $(i,j)$ entry is $\delta_{ij} A - a_{ji} I$.

Let $\mathbf{v}$ be the column vector whose $i$th entry is the $i$th standard basis vector $e_i$ of $F^n$, so it is a column vector with entries that are themselves column vectors. We can multiply $\mathbf{v}$ on the left by $\mathbf{B}$. Then the $i$th entry of $\mathbf{B}\mathbf{v}$ is

(5) $\displaystyle\sum_{j=1}^n (\delta_{ij} A - a_{ji} I)\, e_j = A e_i - \sum_{j=1}^n a_{ji} e_j = A e_i - A e_i = 0.$

(The entries here are column vectors, or equivalently elements of the vector space $F^n$; we used that $A e_i = \sum_j a_{ji} e_j$, the $i$th column of $A$.)
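Here is a small numerical check of this computation (a sketch; the convention that the $(i,j)$ block of $\mathbf{B}$ is $\delta_{ij}A - a_{ji}I$, and all names, are ours): applying each block row of $\mathbf{B}$ to the stacked basis vectors gives the zero vector.

```python
# Check (5) concretely: with the (i,j) block of B equal to
# (A if i == j else 0) - a_ji * I, each block entry of B v is the zero vector.
# (A small sketch; the names and the sample matrix are ours.)

n = 3
A = [[2, 0, 1], [1, -3, 2], [0, 4, 5]]

def apply(M, x):
    """Matrix times column vector, over the integers."""
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

# v: the column vector whose i-th entry is the i-th standard basis vector e_i
v = [[int(i == j) for j in range(n)] for i in range(n)]

for i in range(n):
    # i-th entry of B v: sum_j (delta_ij A - a_ji I) e_j = A e_i - sum_j a_ji e_j
    entry = apply(A, v[i])
    for j in range(n):
        entry = [entry[k] - A[j][i] * v[j][k] for k in range(n)]
    assert entry == [0, 0, 0]
print("each entry of Bv is the zero vector")
```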
Now we recall this fact from linear algebra:
(6) Let $M$ be an $n \times n$ matrix, and $C$ its cofactor matrix; that is, the $(i,j)$ entry of $C$ is $(-1)^{i+j}$ times the determinant of $M$ with the $i$th row and $j$th column removed. Then

$C^T M = M C^T = \det(M)\, I.$

Proof (the same for matrices over a commutative ring as for matrices over a field): The $(i,j)$ entry of $C^T M$ is the dot product of the $i$th column of $C$ with the $j$th column of $M$. When $i = j$ this simply gives the cofactor expansion of $\det(M)$ along the $i$th column. When $i \neq j$ this gives the determinant of $M$ as if its $i$th column were replaced by its $j$th column, which is 0 since two columns are the same.
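A quick concrete check of (6) over the integers (a sketch; the helper names are ours):

```python
# Check (6) for a concrete integer matrix: C^T M = det(M) I,
# where C is the cofactor matrix. (Helper names are ours.)

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cofactor(M):
    """C[i][j] = (-1)^(i+j) * det of M with row i and column j removed."""
    n = len(M)
    return [[(-1) ** (i + j) * det([row[:j] + row[j + 1:]
                                    for r, row in enumerate(M) if r != i])
             for j in range(n)] for i in range(n)]

M = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
C = cofactor(M)
d = det(M)
n = len(M)
# (C^T M)[i][j] = sum_k C[k][i] * M[k][j]
CtM = [[sum(C[k][i] * M[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert CtM == [[d if i == j else 0 for j in range(n)] for i in range(n)]
print("C^T M = det(M) I, with det(M) =", d)
```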
Let $\mathbf{C}$ be the cofactor matrix of $\mathbf{B}$; its entries lie in the commutative ring $F[A]$, so (6) applies. We apply (6) to $\mathbf{B}$ to get:

(7) $\mathbf{C}^T \mathbf{B} = \det(\mathbf{B})\, \mathbf{I} = p(A)\, \mathbf{I}.$

Multiplying both sides of (5) by $\mathbf{C}^T$ and using (7),

$\mathbf{0} = \mathbf{C}^T (\mathbf{B} \mathbf{v}) = (\mathbf{C}^T \mathbf{B})\, \mathbf{v} = p(A)\, \mathbf{v}.$

$p(A)$ is an $n \times n$ matrix, and $\mathbf{v}$ contains all the standard basis vectors. Hence $p(A)\, e_i$ is 0 for every $i$, and $p(A) = 0$, as desired.
The theorem can actually be made more general: in (2) we can replace the vector space $V$ with a finitely generated module $M$ over a commutative ring, and $T$ with an endomorphism of $M$. Take the matrix of the endomorphism with respect to any finite generating set (if this is not a basis the matrix is not unique, but that's okay!), and use that matrix to define $p$. We get $p(T) = 0$. The proof is the same.
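As a small illustration of this remark (the example and names are our own): take the swap map $T(x, y) = (y, x)$ on $F^2$ and the redundant generating set $g_1 = e_1$, $g_2 = e_2$, $g_3 = e_1 + e_2$. Since $T g_1 = g_2$, $T g_2 = g_1$, $T g_3 = g_3$, one valid matrix for $T$ is $3 \times 3$; its characteristic polynomial is $(\lambda - 1)^2(\lambda + 1) = \lambda^3 - \lambda^2 - \lambda + 1$, and this polynomial still kills $T$.

```python
# Sketch of the generalization (example ours): the swap map T(x, y) = (y, x),
# written with respect to the redundant generating set g1 = e1, g2 = e2, g3 = e1 + e2.
# T g1 = g2, T g2 = g1, T g3 = g3, so one valid 3x3 "matrix of T" is:
M = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]
# Its characteristic polynomial is (lambda - 1)^2 (lambda + 1)
#                                = lambda^3 - lambda^2 - lambda + 1.
coeffs = [1, -1, -1, 1]   # lowest degree first

T = [[0, 1],
     [1, 0]]              # the swap map as an ordinary 2x2 matrix

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# p(T) = I - T - T^2 + T^3
result = [[0, 0], [0, 0]]
power = [[1, 0], [0, 1]]  # T^0
for c in coeffs:
    result = [[result[i][j] + c * power[i][j] for j in range(2)] for i in range(2)]
    power = mat_mul(power, T)

assert result == [[0, 0], [0, 0]]
print("p(T) = 0 even though the 3x3 matrix came from a non-basis generating set")
```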