
# Separating big sticks from little sticks, and applications

Start with a stick. Randomly break it into two pieces and throw one half aside. Then randomly break what's left into two pieces and throw one half aside, and so on forever. At the end of this process (or once we get bored or run out of stick or whatever), collect up all the pieces of stick we threw aside and examine them. Here is a theorem.

**Theorem 1.** Almost surely you can divide the stick pieces into two piles, the big sticks and the little sticks, in such a way that the little sticks, even put back together end-to-end, are much smaller than the littlest big stick.

To formalize the process, suppose the original stick has length $1$, and we break it into $u_1$ and $1 - u_1$, and then we break $1 - u_1$ into $u_2(1 - u_1)$ and $(1 - u_2)(1 - u_1)$, and so on. In the end we have a stick of length

$$\ell_n = u_n \prod_{i=1}^{n-1} (1 - u_i)$$

for each $n \geq 1$, where $u_1, u_2, \dots$ are independent and uniformly distributed in $[0, 1]$. It is convenient to also write $r_n = \prod_{i=1}^{n} (1 - u_i)$ for the length of what remains after $n$ breaks.
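
The process is only a few lines of code, which makes it easy to play with. Here is a minimal sketch (pure Python; the function name `stick_lengths` is mine):

```python
import random

def stick_lengths(n_pieces, rng=random):
    """Break a unit stick n_pieces times; return the pieces set aside.

    Piece n has length u_n * (1 - u_1) * ... * (1 - u_{n-1}),
    where u_1, u_2, ... are independent uniforms in [0, 1]."""
    pieces = []
    remainder = 1.0
    for _ in range(n_pieces):
        u = rng.random()
        pieces.append(u * remainder)   # set this piece aside
        remainder *= 1.0 - u           # keep breaking the rest
    return pieces

random.seed(0)
pieces = stick_lengths(50)
print(sum(pieces))                 # close to 1: the leftover is ~2^-50 on average
print(pieces.index(max(pieces)))   # often 0, but by no means always
```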

Warning: $(\ell_n)_{n \geq 1}$ is almost surely not monotonic! The biggest stick piece is not necessarily the first piece we broke off, although it is with positive probability. The distribution of $\ell_1$ is uniform on $[0, 1]$, but the distribution of $\ell_{\max} = \max_n \ell_n$ is rather more complicated (more on this later).
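
A quick empirical look (my own sketch) at which break produces the biggest piece; the loop stops as soon as the remainder is too small for any later piece to win:

```python
import random
from collections import Counter

random.seed(1)
counts = Counter()
for _ in range(100_000):
    pieces, remainder = [], 1.0
    # Once the remainder is below the current best piece, no later
    # piece can be the biggest, so we can stop breaking.
    while remainder > max(pieces, default=0.0):
        u = random.random()
        pieces.append(u * remainder)
        remainder *= 1.0 - u
    counts[pieces.index(max(pieces))] += 1

for idx in range(5):
    # Index 0 is the most likely winner (probability ~0.62 -- the
    # constant that shows up at the end of this post), but not certain.
    print(idx, counts[idx] / 100_000)
```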

Consider the record process $m_n = \min(\ell_1, \dots, \ell_n)$. At step $n$, there is a positive probability that $\ell_n$ is a new record, and in this circumstance we have $m_n = \ell_n$. In fact, since $\ell_n \leq r_{n-1} \to 0$ almost surely, there will be infinitely many $n$ such that $m_n = \ell_n$. But $\ell_n = u_n r_{n-1}$ and $r_n = (1 - u_n) r_{n-1}$ have the same distribution (swap $u_n$ for $1 - u_n$), so there will also be infinitely many $n$ such that $r_n < m_n$. For any such $n$,

$$r_n < m_n = \min(\ell_1, \dots, \ell_n).$$

But

$$r_n = \ell_{n+1} + \ell_{n+2} + \cdots.$$

Therefore after step $n$ the total of what is left is small compared to our smallest stick so far, and this proves the theorem: the first $n$ pieces are the big sticks, and everything broken off later is a little stick. (The same argument, run with $r_n < \epsilon m_n$ for a fixed $\epsilon > 0$, makes "much smaller" precise.)
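
The proof is effective enough to watch in action: scan for the first $n$ with $r_n \leq \epsilon m_n$. A sketch (my code; $\epsilon = 0.01$ is an arbitrary choice):

```python
import random

def first_separation(eps, rng, max_steps=100_000):
    """First n with r_n <= eps * min(l_1, ..., l_n), i.e. the first
    moment the theorem's split works with quality eps."""
    remainder, smallest = 1.0, float("inf")
    for n in range(1, max_steps + 1):
        u = rng.random()
        smallest = min(smallest, u * remainder)
        remainder *= 1.0 - u
        if remainder <= eps * smallest:
            return n
    return None

rng = random.Random(2)
ns = [first_separation(0.01, rng) for _ in range(10_000)]
print(all(n is not None for n in ns))  # True: a split is always found
print(sum(ns) / len(ns))               # the average n; it grows as eps shrinks
```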

As it turns out, sticks have applications!

**Corollary 2.** Let $\pi$ be a random permutation of $\{1, \dots, N\}$. With high probability (as $N \to \infty$), we can divide the cycles of $\pi$ into two sets, the big cycles $B$ and the little cycles $L$, in such a way that

$$\sum_{\sigma \in L} \operatorname{length}(\sigma) \leq \epsilon \min_{\sigma \in B} \operatorname{length}(\sigma).$$

(Here and below $\epsilon > 0$ is arbitrary but fixed.)
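
Corollary 2 can be tested directly. The sketch below (my code; $\epsilon = 0.5$ chosen arbitrarily, and the little pile required to be nonempty, since for a finite collection the empty little pile is a trivial split) samples random permutations and scans their cycle types:

```python
import random

def cycle_lengths(perm):
    """Cycle lengths of a permutation of {0, ..., N-1} given as a list."""
    seen = [False] * len(perm)
    out = []
    for start in range(len(perm)):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            out.append(length)
    return out

def has_split(lengths, eps):
    """Is there a nonempty little pile with total <= eps * littlest big cycle?"""
    lengths = sorted(lengths, reverse=True)
    tail = sum(lengths)
    for k in range(len(lengths) - 1):   # keep the little pile nonempty
        tail -= lengths[k]              # big = lengths[:k+1], little = the rest
        if tail <= eps * lengths[k]:
            return True
    return False

rng = random.Random(3)
N, trials = 10**5, 200
perm = list(range(N))
hits = 0
for _ in range(trials):
    rng.shuffle(perm)
    hits += has_split(cycle_lengths(perm), eps=0.5)
print(hits / trials)   # approaches 1 as N grows, but slowly: a permutation
                       # of N points has only ~log N cycles to play with
```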

**Corollary 3.** Let $x$ be a random integer in $\{1, \dots, X\}$. With high probability (as $X \to \infty$), we can factorize $x$ as $x = yz$ in such a way that

$$\log z \leq \epsilon \log p \quad \text{for every prime factor } p \text{ of } y.$$
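
The same experiment for integers, by trial division (my code; here with the mildest reading, $\epsilon = 1$: the little part should be smaller than every prime factor of the big part):

```python
import random

def prime_factors(x):
    """Prime factors of x, with multiplicity, by trial division."""
    out, d = [], 2
    while d * d <= x:
        while x % d == 0:
            out.append(d)
            x //= d
        d += 1
    if x > 1:
        out.append(x)
    return out

def has_split(factors):
    """Is there a factorization x = y*z with z > 1 and z smaller than
    every prime factor of y?  (Take y = the k largest prime factors.)"""
    factors = sorted(factors, reverse=True)
    for k in range(1, len(factors)):
        z = 1
        for p in factors[k:]:
            z *= p
        if z < factors[k - 1]:
            return True
    return False

rng = random.Random(4)
X, trials = 10**7, 2000
hits = sum(has_split(prime_factors(rng.randrange(2, X))) for _ in range(trials))
# At least ln 2 ~ 0.69 of integers have a prime factor exceeding the
# cofactor, which already gives one valid split; the limit as X -> oo
# is 1, but the convergence is extremely slow (doubly logarithmic).
print(hits / trials)
```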

**Corollary 4.** Let $f \in \mathbb{F}_q[t]$ be a random polynomial of degree $n$. With high probability (as $n \to \infty$), we can factorize $f$ as $f = gh$ in such a way that

$$\deg h \leq \epsilon \deg p \quad \text{for every irreducible factor } p \text{ of } g.$$
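
And for polynomials, a sketch assuming sympy is available (factoring over $\mathbb{F}_2$ for concreteness, again with the $\epsilon = 1$ reading):

```python
import random
from sympy import symbols, factor_list, degree

t = symbols("t")

def random_poly(n, rng):
    """A random monic polynomial of degree n over F_2."""
    return t**n + sum(rng.randrange(2) * t**i for i in range(n))

def has_split(f):
    """Is there a factorization f = g*h over F_2 with deg h >= 1 and
    deg h < deg p for every irreducible factor p of g?"""
    _, factors = factor_list(f, modulus=2)
    degs = sorted((int(degree(p, t)) for p, mult in factors
                   for _ in range(mult)), reverse=True)
    for k in range(1, len(degs)):
        if sum(degs[k:]) < degs[k - 1]:
            return True
    return False

rng = random.Random(5)
trials = 200
hits = sum(has_split(random_poly(60, rng)) for _ in range(trials))
print(hits / trials)   # approaches 1 as the degree grows (slowly: there
                       # are only ~log n irreducible factors to play with)
```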

What about $\ell_{\max}$? By the same connections, $\ell_{\max}$ models the longest cycle in a random permutation (divided by $N$), the log of the largest prime factor of a random integer (divided by $\log x$), and the degree of the largest irreducible factor of a random polynomial (divided by $n$). The event that $\ell_{\max} \leq x$ is the intersection of two events: $\ell_1 \leq x$ and $\max_{n \geq 2} \ell_n \leq x$. But it is easy to see that $\max_{n \geq 2} \ell_n$ has the same distribution as $(1 - \ell_1)\, \ell_{\max}'$, where $\ell_{\max}'$ is an independent copy of $\ell_{\max}$. (The largest stick other than the first is like the largest stick if we had started from a stick of length $1 - \ell_1$ instead of $1$.) Therefore

$$\Pr(\ell_{\max} \leq x) = \int_0^x \Pr\left(\ell_{\max} \leq \frac{x}{1 - u}\right) du \qquad (0 < x \leq 1).$$
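
This identity can be checked against simulation. A sketch (my code): sample $\ell_{\max}$ exactly, using the fact that once the remainder drops below the best piece so far no later piece can win, then compare both sides at one value of $x$:

```python
import bisect
import random

def l_max(rng):
    """The largest piece over the whole (infinite) process."""
    best, remainder = 0.0, 1.0
    while remainder > best:   # a later piece could still beat the best
        u = rng.random()
        best = max(best, u * remainder)
        remainder *= 1.0 - u
    return best

rng = random.Random(6)
samples = sorted(l_max(rng) for _ in range(100_000))

def cdf(x):
    """Empirical Pr(l_max <= x)."""
    return bisect.bisect_right(samples, x) / len(samples)

# Right-hand side of the identity at x = 0.4, by the midpoint rule.
x, steps = 0.4, 1000
rhs = sum(cdf(x / (1.0 - (i + 0.5) * x / steps)) for i in range(steps)) * x / steps
print(cdf(x), rhs)   # the two numbers agree to Monte Carlo accuracy
```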

Let $\rho(t) = \Pr(\ell_{\max} \leq 1/t)$, so that $\rho(t) = 1$ for $0 \leq t \leq 1$. Then for $t \geq 1$ we have

$$\rho(t) = \frac{1}{t} \int_{t-1}^{t} \rho(s) \, ds,$$

or the delay differential equation

$$t \rho'(t) + \rho(t - 1) = 0.$$

There is no substantially more explicit expression for this function: this is the Dickman–de Bruijn function.
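
The delay differential equation is easy to integrate numerically by the method of steps. A sketch (my code, plain Euler with a step size of my choosing) that reproduces the known values $\rho(2) = 1 - \ln 2 \approx 0.307$ and $\rho(3) \approx 0.0486$:

```python
import math

# Integrate t*rho'(t) = -rho(t-1) forward from rho = 1 on [0, 1].
# rho[i] approximates rho(i*h).
h = 1e-4
per_unit = round(1 / h)
rho = [1.0] * (per_unit + 1)           # rho = 1 on [0, 1]
for i in range(per_unit, 4 * per_unit):
    t = i * h
    rho.append(rho[-1] - h * rho[i - per_unit] / t)

print(rho[2 * per_unit], 1 - math.log(2))  # rho(2) = 1 - ln 2 = 0.3069...
print(rho[3 * per_unit])                   # rho(3) = 0.0486...
```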

*Bonus*: What is the probability that $\ell_1 = \ell_{\max}$, i.e., that the very first piece is the biggest? Answer:

$$\Pr(\ell_1 = \ell_{\max}) = \int_0^\infty \frac{\rho(t)}{(1 + t)^2} \, dt = 0.6243\ldots$$

(this is the Golomb–Dickman constant, which is also the expected value of $\ell_{\max}$).
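
And a matching Monte Carlo estimate of that probability (my sketch, using the same early-stopping trick as above):

```python
import random

rng = random.Random(7)
trials, first_is_biggest = 200_000, 0
for _ in range(trials):
    u = rng.random()
    first, best, remainder = u, 0.0, 1.0 - u
    # Stop once no later piece can change the answer.
    while remainder > max(first, best):
        u = rng.random()
        best = max(best, u * remainder)
        remainder *= 1.0 - u
    first_is_biggest += first > best
print(first_is_biggest / trials)   # ~0.624, the Golomb-Dickman constant
```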

This is some sort of universality phenomenon: we have several natural objects (permutations, integers, polynomials) that break up into irreducible pieces randomly, so it is plausible that some universal fundamental process should underlie each of them. On the other hand, the connections seem to run deeper than just the stick-breaking process. For example, if you look at the proportion of permutations having *no* cycles *smaller* than some bound, or the proportion of integers having no prime factors below an equivalent bound, in both cases you run into the Buchstab function, which is similar but different. This connection is not modelled by the stick-breaking process.

Some references:

- Tao on the stick-breaking process, or the Poisson–Dirichlet process (which is essentially the same);
- the distribution of large prime factors of a random integer was found by Billingsley (1972); the clean stick-breaking formulation is due to Donnelly and Grimmett (1993);
- the same result for permutations was established by Kingman (1977) and by Vershik and Shmidt (1977); see also Arratia, Barbour, and Tavaré (2006) (and references therein) for convergence rate estimates;
- best of all, see the comic book *Prime Suspects* by Andrew and Jennifer Granville (2019).