Name and history of an apparent paradox in probability?



A counterintuitive result in probability theory may deserve to be called a genuine paradox: infinitely repeating an experiment that has a non-zero chance of success does not always guarantee eventual success. This can happen when the probability of success shrinks with each iteration quickly enough to compensate for the repetition.
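The mechanism can be seen numerically. Here is a minimal sketch with a hypothetical choice of decaying probabilities (my own numbers, not from any source above): if the $k$-th trial succeeds with probability $p_k = 2^{-k}$, then $\sum_k p_k$ converges, so the product $\prod_k (1-p_k)$ stays bounded away from zero and the chance of never succeeding is positive.

```python
def prob_eventual_success(n_trials=60):
    """Probability of at least one success when trial k succeeds
    with probability 2**-k (a hypothetical decaying-probability scheme).
    The infinite product of (1 - 2**-k) converges to about 0.289,
    so eventual success has probability only about 0.711, not 1."""
    no_success = 1.0
    for k in range(1, n_trials + 1):
        no_success *= 1.0 - 2.0 ** -k
    return 1.0 - no_success
```

Sixty factors already give the limit to double precision, since the later factors differ from 1 only negligibly.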

Does this paradox have a name, and is it explicitly discussed anywhere? The phenomenon arises in branching processes used to model extinction probabilities, including Galton and Watson's 1874 work on the extinction of surnames. It also occurs in 3D random walks, where the probability of returning to the origin is less than 1, as Pólya proved in 1921.

Do these sources, or any others, discuss the apparent paradox?


If we have a trial with non-zero probability of success $p>0$, and we repeat the trial $n$ times, the probability of at least one success is $p(n)=1-(1-p)^n$, which converges to $1$ as $n\to\infty$. An example would be tossing a fair coin (with $p=1/2$) and expecting to get heads eventually. Even if the coin is loaded towards tails, as long as $p>0$ we can still expect to get heads eventually. Of course, if the probability $p$ changes from trial to trial, then it is possible that $p(n)$ does not converge to $1$, but that would not be repeating "the same" trial, nor would it be paradoxical.
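As a quick numerical check of this formula (my own sketch, not part of the original argument), the probability of at least one success climbs toward $1$ rapidly once $np$ is large, even for a heavily loaded coin:

```python
def at_least_one_success(p, n):
    """Probability of at least one success in n independent trials,
    each succeeding with fixed probability p: 1 - (1 - p)**n."""
    return 1.0 - (1.0 - p) ** n
```

For a fair coin, `at_least_one_success(0.5, 20)` already exceeds $0.999999$; even with $p = 0.01$, ten thousand trials make eventual success all but certain.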

The probability of eventual return in 3D random walks is discussed on Math Overflow, for example, or in this book chapter; neither describes its value as paradoxical. The paradoxes they do mention are unrelated: the birthday paradox and the Levinthal paradox of protein folding. Similarly, the only paper I found that mentions a "paradox" in connection with the Galton-Watson process refers to something else.

These are not, however, cases of "repeating the same trial" for eventual success. In 3D walks, for example, the random "trials" are the individual steps $X_i$, but "success" is defined in terms of the position $S_n=X_1+\cdots+X_n$, not the individual $X_i$, which is clearly different from success in coin tosses. Moreover, each step is taken at a different location, at a different distance from the origin, so the steps are not "the same" trial by any stretch. The probability of eventual return is $1$ in 1D and 2D, but only about $0.34$ in 3D, which in principle is not very surprising, since there is much more room to wander around in 3D (frankly, it may be more surprising that the probability is still $1$ in 2D). The MO thread gives a more detailed intuitive explanation of the difference, based on the convergence/divergence of the series $\sum_n 1/n^{d/2}$.
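The 3D figure can be checked by simulation. A Monte Carlo sketch (my own, with hypothetical parameter choices; it truncates walks at a finite number of steps, so it slightly underestimates the true eventual-return probability of roughly $0.34$):

```python
import random

def estimate_return_prob(trials=2000, max_steps=1000, seed=0):
    """Estimate the probability that a simple 3D random walk,
    started at the origin, returns to the origin within max_steps.
    The eventual-return probability (Polya's constant) is ~0.34."""
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    returned = 0
    for _ in range(trials):
        x = y = z = 0
        for _ in range(max_steps):
            dx, dy, dz = rng.choice(moves)
            x += dx; y += dy; z += dz
            if x == y == z == 0:
                returned += 1
                break
    return returned / trials
```

Because the first-return distribution has a heavy tail (the missed mass decays only like $1/\sqrt{n}$), the truncated estimate converges slowly; the chosen `max_steps` is a pragmatic compromise, not a principled cutoff.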

A more general phenomenon at play here is the difference between recurrent and transient states in Markov chains. A state is called recurrent (or persistent) if the probability of eventual return to it is $1$, and transient otherwise. Independent trials form a so-called Bernoulli scheme, a very special case of a Markov chain in which the next state does not depend even on the previous one (which is what makes the trials "the same"). By the first paragraph above, all states of a Bernoulli scheme are recurrent, but general Markov chains are free to have transient ones.
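A small illustration of transience (my own toy example, not from the sources above): the biased walk on the integers with $P(\text{step}=+1)=p$ is transient for $p\neq 1/2$, and a standard gambler's-ruin computation gives return probability $1-|2p-1|$, i.e. $0.6$ for $p=0.7$. A simulation sketch under those hypothetical parameters:

```python
import random

def estimate_biased_return(p=0.7, trials=5000, max_steps=500, seed=1):
    """Estimate the probability that a biased walk on the integers
    (step +1 with probability p, else -1) ever returns to 0.
    For p = 0.7 the exact value is 1 - |2p - 1| = 0.6."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = 0
        for _ in range(max_steps):
            pos += 1 if rng.random() < p else -1
            if pos == 0:
                returned += 1
                break
    return returned / trials
```

Unlike the 3D case, the truncation here is harmless: walks that will ever return do so quickly (the downward excursions have exponentially decaying hitting times), while walks that escape drift away at linear speed.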