# Names and history: an unavoidable paradox?

If we have a trial with non-zero probability of success $p>0$, and we repeat the trial $n$ times, the probability of at least one success is $p(n)=1-(1-p)^n$, which converges to $1$ as $n\to\infty$. An example would be tossing a fair coin (with $p=1/2$) and expecting to get heads eventually. Even if the coin is loaded toward tails, as long as $p>0$ we can still expect to get heads eventually. Of course, if the probability $p$ changes from trial to trial, then it is possible that $p(n)$ does not converge to $1$, but that would not be repeating "the same" trial, nor would it be paradoxical.
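As a quick sanity check, one can tabulate $p(n)=1-(1-p)^n$ for a heavily loaded coin and watch it approach $1$ (a minimal sketch; the value $p=0.01$ is just illustrative):

```python
# Probability of at least one success in n independent trials,
# each with success probability p: p(n) = 1 - (1 - p)^n.
def at_least_one_success(p, n):
    return 1 - (1 - p) ** n

# Even a coin heavily loaded toward tails (p = 0.01 for heads)
# yields heads almost surely if we keep tossing:
for n in (10, 100, 1000):
    print(n, at_least_one_success(0.01, n))
# n = 10   -> ~0.096
# n = 100  -> ~0.634
# n = 1000 -> ~0.99996
```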

The probability of eventual return in 3D random walks is discussed on Math Overflow, for example, and in this book chapter; neither describes its value as paradoxical. The paradoxes mentioned there are unrelated: the birthday paradox and the Levinthal paradox of protein folding. Similarly, the only paper I found that mentions a "paradox" in connection with the Galton-Watson process refers to something else.

These are not, however, cases of "repeating the same trial" for eventual success. In 3D walks, for example, the outcomes of the random "trials" are individual steps $X_i$ chosen at random, but "success" is defined in terms of the final position $S_n=X_1+\cdots+X_n$, not the individual $X_i$, which is clearly different from success in coin tosses. Moreover, each step is performed at a different location, at a different distance from the origin, so these are not "the same" trials by any stretch. The probability of eventual return is $1$ in 1D and 2D, but only about $0.34$ in 3D, which in principle is not very surprising since there is a lot more room to wander around in 3D (frankly, it might be more surprising that it is still $1$ in 2D). The MO thread gives a more detailed intuitive explanation for the difference, based on the convergence/divergence of the series $\sum_n 1/n^{d/2}$.
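A rough Monte Carlo sketch illustrates the contrast between 1D and 3D. The walk length and sample count below are arbitrary choices, and truncating walks can only underestimate the true return probability, so the 3D estimate is a lower bound on Pólya's constant $\approx 0.34$:

```python
import random

def returns_to_origin(dim, steps, rng):
    """Simulate a simple random walk; report whether it revisits the origin."""
    pos = [0] * dim
    for _ in range(steps):
        axis = rng.randrange(dim)           # pick a coordinate direction
        pos[axis] += rng.choice((-1, 1))    # step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim, steps=2000, walks=2000, seed=0):
    rng = random.Random(seed)
    hits = sum(returns_to_origin(dim, steps, rng) for _ in range(walks))
    return hits / walks

# In 1D the estimate creeps toward 1 as `steps` grows; in 3D it stays
# near 0.34 no matter how long the walks are.
print("1D:", estimate_return_probability(1))
print("3D:", estimate_return_probability(3))
```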

A more general phenomenon at play here is the difference between recurrent and transient states in Markov chains. A state is called recurrent (or persistent) if the probability of eventual return to it is $1$, and transient otherwise. Independent trials form a so-called Bernoulli scheme, a very special case of a Markov chain in which the next state does not depend even on the previous one (which is what makes the trials "the same"). By the first paragraph above, in a Bernoulli scheme all states are recurrent, but general Markov chains are free to have transient ones.