The characteristic polynomial of a random matrix is almost always irreducible

I just uploaded to the arxiv a paper which can be viewed as vindication by anybody who has ever struggled to compute an eigenvalue by hand: the paper gives mathematical proof that eigenvalues are almost always as complicated as they can be. Intuitive as this may be, to prove it I needed two mathematical bazookas: both the extended Riemann hypothesis and the classification of finite simple groups!

An example will illustrate what I mean. Here is a random {10\times 10} matrix with {\pm1} entries:

\displaystyle  M = \left(\begin{array}{rrrrrrrrrr} -1 & 1 & 1 & -1 & -1 & 1 & -1 & 1 & -1 & 1 \\ -1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 & -1 & -1 & 1 & 1 & 1 & -1 \\ -1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 & 1 \\ 1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & -1 & 1 & -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 1 & -1 & 1 & -1 & -1 \end{array}\right) .

The eigenvalues of {M} can be approximated numerically, for instance with scipy.linalg.eig, as in the sketch below.
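
For what it's worth, here is a minimal sketch of that computation in Python (assuming numpy and scipy are installed; np.poly recovers the coefficients of the characteristic polynomial, which we will need in a moment):

    import numpy as np
    from scipy.linalg import eig

    # The 10x10 matrix M of +/-1 entries from above.
    M = np.array([
        [-1,  1,  1, -1, -1,  1, -1,  1, -1,  1],
        [-1,  1,  1,  1,  1,  1,  1,  1, -1, -1],
        [-1, -1,  1,  1,  1, -1, -1, -1,  1,  1],
        [ 1, -1, -1,  1, -1, -1,  1,  1,  1, -1],
        [-1, -1, -1,  1,  1,  1,  1, -1, -1, -1],
        [-1, -1, -1, -1, -1,  1,  1,  1,  1,  1],
        [-1, -1, -1,  1, -1,  1,  1,  1, -1,  1],
        [ 1, -1, -1, -1,  1,  1,  1,  1, -1, -1],
        [-1, -1,  1, -1,  1, -1, -1, -1, -1, -1],
        [-1, -1, -1, -1, -1,  1, -1,  1, -1, -1],
    ])

    eigenvalues, _ = eig(M)           # numerical approximations to the eigenvalues
    print(np.sort_complex(eigenvalues))
    print(np.rint(np.poly(M)))        # coefficients of the characteristic polynomial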

But what if we want to know the eigenvalues exactly? Well, they are the roots of the characteristic polynomial of {M}, which is

\displaystyle  \varphi(x) = x^{10} - 4x^{9} + 6x^{8} - 80x^{6} + 144x^{5} - 544x^{4} + 1152x^{3} - 1024x^{2} + 1280x - 1024,

so we just have to solve {\varphi(x) = 0}. The Abel–Ruffini theorem states that, in general, this is not possible in the quadratic-formula-type sense involving radicals, etc., so there are certainly evil matrices that defy solution. But could there be something special about random matrices that makes {\varphi} typically more special?

It is not at all obvious that {\varphi} is “generic”. For one thing, if {M} is singular, then {\varphi} has an obvious root: zero! For another, if {M} happens to have a repeated eigenvalue, then {\varphi} and {\varphi'} share a factor, which is certainly a special algebraic property. It turns out that both of these events have low probability, but these are landmark results about random matrices: the singularity bound is due originally to Komlós, and the repeated-eigenvalue bound to Tao–Vu (in the symmetric case) and Ge (in the nonsymmetric case).

Algebraically, the complexity of solving {\varphi(x) = 0} is measured by the Galois group {G}. In particular, the polynomial is irreducible if and only if {G} is transitive, and the equation is solvable by a formula involving radicals as in the quadratic formula if and only if {G} is solvable in the sense of group theory. Galois groups are not easy to calculate in general, but “typically” they are, by the following well-known trick. If we factorize {\varphi} mod {p} for some prime {p}, then, as long as we do not get a repeated factor, the degree sequence of the factorization is the same as the cycle type of some element of {G}. If we do this for enough small primes {p} we can often infer that {G = S_n}.

Let’s do this for the above example. If we factorize {\varphi} mod {p} for {p < 20}, discarding the “ramified” primes for which there is a repeated factor, we get the following degree sequences:

{p} degree sequence
{5} {\left(9, 1\right)}
{7} {\left(4, 4, 2\right)}
{13} {\left(5, 3, 2\right)}
{17} {\left(5, 3, 2\right)}
{19} {\left(7, 2, 1\right)}
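
The table is easy to reproduce; here is a sketch using sympy, with {\varphi} the polynomial above:

    from sympy import symbols, Poly

    x = symbols('x')
    phi = (x**10 - 4*x**9 + 6*x**8 - 80*x**6 + 144*x**5 - 544*x**4
           + 1152*x**3 - 1024*x**2 + 1280*x - 1024)

    for p in [2, 3, 5, 7, 11, 13, 17, 19]:
        _, factors = Poly(phi, x, modulus=p).factor_list()
        if any(mult > 1 for _, mult in factors):
            continue  # ramified prime: repeated factor, so discard it
        degrees = sorted((f.degree() for f, _ in factors), reverse=True)
        print(p, degrees)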

Therefore {G} contains a {9}-cycle, an element of cycle type {(4, 4, 2)}, etc. Just from the {9}-cycle and the {(4,4,2)} element it is clear that {G} must be transitive: the {9}-cycle forces the orbits to be either {\{10\}} or {\{9, 1\}}, and the fixed-point-free {(4,4,2)} element rules out the latter. (In fact {G} is 2-transitive, since the {9}-cycle makes a point stabilizer transitive on the remaining nine points.) In particular {\varphi} is irreducible. There are various other rules that help eliminate other possibilities for {G}. A common one is a theorem of Jordan, which states that if {G} is primitive and contains a {p}-cycle for some prime {p \leq n - 3}, then {G \geq A_n}. Here {G} is primitive because it is 2-transitive, and the sixth power of the element of cycle type {(5, 3, 2)} is a {5}-cycle, so Jordan's theorem gives {G \geq A_{10}}; since the {(5,3,2)} element is odd, in fact {G = S_{10}}.

Conclusion: for the matrix {M} above, solving {\varphi(x) = 0} is a certifiable pain. You should just give up and accept {\varphi} itself as the prettiest answer you are going to get.

What my paper shows in general is that, if you choose the entries of an {n\times n} matrix independently from a fixed distribution on the integers, then, with probability tending to {1} as {n \to \infty}, the characteristic polynomial {\varphi} is irreducible, and moreover its Galois group is at least {A_n}. The result is conditional on the extended Riemann hypothesis, as mentioned, except in some nice special cases, such as when the entries are uniformly distributed in {\{1, \dots, 210\}}.

Note I only said “at least {A_n}”. Presumably the event {G = A_n} also has negligible probability, so {G = S_n} with probability {1 - o(1)}, but this remains open, and seems difficult. To prove it (naively), you have to show that the discriminant of the characteristic polynomial is, with high probability, not a perfect square. That should certainly be a low-probability event, but it’s not clear how to prove it!
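
For the {10\times 10} example above, at least, the check is routine. Here is a sketch using sympy, relying on the standard fact that {G \leq A_n} exactly when the discriminant of {\varphi} is a perfect square:

    from sympy import symbols, Poly
    from sympy.ntheory.primetest import is_square

    x = symbols('x')
    phi = Poly(x**10 - 4*x**9 + 6*x**8 - 80*x**6 + 144*x**5 - 544*x**4
               + 1152*x**3 - 1024*x**2 + 1280*x - 1024, x)

    d = phi.discriminant()
    # A square discriminant would force G inside A_10; since we showed
    # G = S_10 above, this should print False.
    print(d > 0 and is_square(int(d)))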

Why do I need the extended Riemann hypothesis (ERH) and the classification of finite simple groups (CFSG)?

  • ERH: The example above involving factorizing mod {p} is not just useful in practice but also in theory. The prime ideal theorem tells you the statistics of what happens with your polynomial mod {p} for large {p}. In particular, to show that {G} is transitive it suffices to show that {\varphi} has one root on average mod {p}, for large primes {p}. Exactly how large {p} must be is determined by the error term in the prime ideal theorem, and ERH really helps a lot.
  • CFSG: An elaboration of the above strategy is used to show that {G} is not just transitive but {m}-transitive for any constant {m}. By CFSG, {m = 4} and {n > 24} is enough to imply {G \geq A_n}.

Arguably, matrices are not playing a large role in the statement of the theorem: it is just a statement about a random polynomial {\varphi}, where the model happens to be “take a random matrix with independent entries and take its characteristic polynomial”. The theorem should be true for any good model for a random polynomial of high degree. For example, in the “independent (or restricted) coefficients model”, the coefficients of {\varphi} are independent with some fixed distribution. For this model the corresponding statements were proved recently by Bary-Soroker–Kozma (see also Bary-Soroker’s blog post) and Breuillard–Varjú, and I certainly borrow a lot from those papers. However, the common ground is isolated to the local–global principle that reduces the problem to a problem modulo {p}, and there the commonality ends. The independent coefficients case becomes a question about a certain kind of random walk mod {p} (see the Breuillard–Varjú paper), whereas the characteristic polynomial case becomes a problem about counting eigenvalues of a random matrix mod {p}.

There are many beautiful questions and mysteries about random polynomials. See for example the many high-definition pictures on John Baez’s website here showing the roots of random polynomials with independent coefficients. In many ways, taking the characteristic polynomial of a random matrix is actually a more natural model of a random polynomial, and some aspects become simpler. For example, I bet that the {A_n} vs {S_n} question will be settled sooner in the case of a random characteristic polynomial than in the case of independent coefficients.

Separating big sticks from little sticks, and applications

Start with a stick. Randomly break it into two pieces and throw away one half. Then randomly break what’s left into two pieces and throw away half, and so on forever. At the end of this process (until we get bored or run out of stick or whatever), collect up all the pieces of stick we threw aside and examine them. Here is a theorem.

Theorem 1 Fix {\epsilon > 0}. Almost surely, you can divide the stick pieces into two piles, the big sticks and the little sticks, in such a way that the little sticks, even put back together end-to-end, total less than {\epsilon} times the littlest big stick.

To formalize the process, suppose the original stick has length {1}, and we break it into {u_1} and {1-u_1}, keeping {u_1}; then we break {u_1} into {u_1u_2} and {u_1(1-u_2)}, and so on. The piece thrown aside at step {n} has length

\displaystyle L_n = u_1 \cdots u_{n-1} (1-u_n)

for each {n\geq 1}, where {u_1, u_2, \dots} are independent and uniform in {[0,1]}.
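
Here is a quick simulation sketch in plain Python, which records the discarded pieces and looks for the cut promised by Theorem 1:

    import random

    def stick_pieces(n_steps=50):
        """Break a unit stick repeatedly; return the discarded pieces L_1, L_2, ..."""
        pieces, remaining = [], 1.0
        for _ in range(n_steps):
            u = random.random()
            pieces.append(remaining * (1 - u))  # the piece thrown aside at this step
            remaining *= u                      # the piece we keep breaking
        return pieces

    pieces = stick_pieces()
    remaining, best = 1.0, 1.0
    for n, piece in enumerate(pieces, start=1):
        remaining -= piece        # remaining = u_1 ... u_n = total of all later pieces
        best = min(best, remaining / min(pieces[:n]))
    print(best)  # typically tiny: the tail is far smaller than the smallest piece so far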

Warning: {L_n} is almost surely not monotonic! The biggest stick piece is not necessarily the first piece we broke off, although it is with positive probability. The distribution of {L_1 = 1-u_1} is {U[0,1]}, but the distribution of {L_\text{max} = \max_{n\geq 1} L_n} is rather more complicated (more on this later).

Consider the record process {R_n = \min_{i \leq n} (1-u_i)}. At step {n}, there is probability {R_{n-1}} that {1-u_n} is a new record, and in that case {R_n = 1-u_n \sim U[0, R_{n-1}]}. Almost surely there are infinitely many records, and each new record independently has probability {\epsilon} of undercutting the previous record by a factor of {\epsilon}, so there will be infinitely many {n} such that {R_n < \epsilon R_{n-1}}. But {u_n} and {1-u_n} have the same distribution, so there will also be infinitely many {n} such that {u_n < \epsilon R_{n-1}}; since {R_{n-1} \to 0}, for all large such {n} we have {1 - u_n > R_{n-1}}, so {R_n = R_{n-1}} and hence {u_n < \epsilon R_n}. For any such {n},

\displaystyle u_1 \cdots u_n < \epsilon u_1 \cdots u_{n-1} \min_{i \leq n} (1-u_i) \leq \epsilon \min_{i \leq n} u_1 \cdots u_{i-1} (1-u_i) = \epsilon \min_{i \leq n} L_i.

But {u_1 \cdots u_n} is just the length of the stick still in hand after {n} breaks, which is the total of all the pieces yet to be thrown aside:

\displaystyle u_1 \cdots u_n = \sum_{i > n} L_i.

Therefore after step {n} the total of what is left is small compared to our smallest stick so far, and this proves the theorem.

As it turns out, sticks have applications!

Corollary 2 Fix {\epsilon > 0}, and let {\pi \in S_n} be a uniformly random permutation. With high probability (as {n\to\infty}), we can divide the cycles of {\pi} into two sets, the big cycles {\sigma_1, \dots, \sigma_k} and the little cycles {\sigma_{k+1}, \dots, \sigma_m}, in such a way that

\displaystyle \sum_{i = k+1}^m |\sigma_i| < \epsilon \min_{1 \leq i \leq k} |\sigma_i|.

Corollary 3 Fix {\epsilon > 0}, and let {n \in [1, X]} be a uniformly random integer. With high probability (as {X \to \infty}), we can factorize {n} as {n = n_1 n_2} in such a way that

\displaystyle n_2 < (\min_{p \mid n_1} p)^\epsilon.

Corollary 4 Fix {\epsilon > 0}, and let {f \in \mathbf{F}_q[X]} be a random polynomial of degree {d}. With high probability (as {d \to \infty}), we can factorize {f} as {f = f_1 f_2} in such a way that

\displaystyle \deg f_2 < \epsilon \min_{1 \neq \phi \mid f_1} \deg \phi.
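
These statements are easy to test empirically. Here is a sketch for Corollary 2 in plain Python: draw a random permutation of {10^6} points and look for the best cut into big and little cycles:

    import random

    def cycle_lengths(n):
        """Cycle lengths of a uniformly random permutation of n points, sorted descending."""
        perm = list(range(n))
        random.shuffle(perm)
        seen, lengths = [False] * n, []
        for i in range(n):
            if not seen[i]:
                j, length = i, 0
                while not seen[j]:
                    seen[j] = True
                    j = perm[j]
                    length += 1
                lengths.append(length)
        return sorted(lengths, reverse=True)

    lengths = cycle_lengths(10**6)
    # For each cut k, compare the little cycles' total against the smallest big cycle.
    k = min(range(1, len(lengths)),
            key=lambda k: sum(lengths[k:]) / lengths[k - 1])
    print(lengths[:k], sum(lengths[k:]))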

What about {L_\text{max}}? By the same connections, {L_\text{max}} models the longest cycle in a random permutation, the log of the largest prime factor of a random integer, and the degree of the largest irreducible factor of a random polynomial. The event that {L_\text{max} < x} is the intersection of two events: {1-u_1 < x} and {\max_{n\geq 2} L_n < x}. But it is easy to see that {\max_{n \geq 2} L_n} has the same distribution as {u_1 L'_\text{max}}, where {L'_\text{max}} is an independent copy of {L_\text{max}}. (The largest stick other than the first is like the largest stick if we had started from a stick of length {u_1} instead of {1}.) Therefore

\displaystyle \mathbf{P}(L_\text{max} < x) = \mathbf{P}(1-u < x,\, u L'_\text{max} < x) = \int_{1-x}^1 \mathbf{P}(L'_\text{max} < x/u) \, du.

Let {\phi(t) = \mathbf{P}(L_\text{max} < 1/t)}. Then {\phi(t) = 1} for {0 < t \leq 1}, while for {t > 1} we have

\displaystyle \phi(t) = \int_{1-1/t}^1 \phi(tu) \, du = \frac1t \int_{t-1}^t \phi(s) \, ds,

or the delay differential equation

\displaystyle t \phi'(t) + \phi(t-1) = 0.

There is no substantially more explicit expression for this function: it is the Dickman–de Bruijn function.

Bonus: What is the probability that {L_\text{max} = L_1}? Answer:

\displaystyle \mathbf{P}(u_1 L'_\text{max} < 1-u_1) = \int_0^1 \phi(u/(1-u)) \, du = \int_0^\infty \frac{\phi(t)}{(t+1)^2}\, dt \approx 0.62

(this is the Golomb–Dickman constant).
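
Both numbers are easy to approximate numerically. Here is a sketch with numpy, stepping the delay differential equation above with the Euler method ({\phi(2) = 1 - \log 2 \approx 0.3069} is a handy sanity check):

    import numpy as np

    h, T = 1e-4, 10.0              # step size; phi decays so fast that [0, 10] suffices
    n, m = int(T / h), int(1 / h)  # total grid points; grid points per unit length
    t = np.arange(n + 1) * h
    phi = np.ones(n + 1)           # phi(t) = 1 for 0 < t <= 1

    for i in range(m, n):
        # Euler step for t * phi'(t) = -phi(t - 1)
        phi[i + 1] = phi[i] - h * phi[i - m] / t[i]

    print(phi[2 * m])                      # ~ 0.3069 = 1 - log(2)
    print(np.sum(phi / (t + 1) ** 2) * h)  # ~ 0.624, the Golomb-Dickman constant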

This is some sort of universality phenomenon: we have several natural objects (permutations, integers, polynomials) that break up into irreducible pieces randomly, so it is plausible that some universal fundamental process should underlie each of them. On the other hand, the connections seem to run deeper than just the stick-breaking process. For example, if you look at the proportion of permutations having no cycles smaller than some bound, or the proportion of integers having no prime factors below an equivalent bound, in both cases you run into the Buchstab function, which is similar but different. This connection is not modelled by the stick-breaking process.

Some references:

  1. Tao on the stick-breaking process, or the Poisson–Dirichlet process (which is essentially the same);
  2. the distribution of large prime factors of a random integer was found by Billingsley (1972); the clean stick-breaking formulation is due to Donnelly and Grimmett (1993);
  3. the same result for permutations was established by Kingman (1977) and Vershik and Shmidt (1977); see also Arratia, Barbour, and Tavaré (2006) (and references therein) for convergence rate estimates;
  4. best of all, see the comic book by Andrew and Jennifer Granville (2019).