In probability theory, Kolmogorov's zero–one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, namely a tail event of independent σ-algebras, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one.
Tail events are defined in terms of countably infinite families of σ-algebras. For illustrative purposes, we present here the special case in which each σ-algebra is generated by a random variable $X_k$ for $k \in \mathbb{N}$. Let $\mathcal{F}$ be the σ-algebra generated jointly by all of the $X_k$. Then, a tail event $F \in \mathcal{F}$ is an event whose occurrence cannot depend on the outcome of any finite subfamily of these random variables. (Note: $F \in \mathcal{F}$ implies that membership in $F$ is uniquely determined by the values of the $X_k$, but the latter condition is strictly weaker and does not suffice to prove the zero–one law.) For example, the event that the sequence of the $X_k$ converges and the event that its sum converges are both tail events. If the $X_k$ are, for example, all Bernoulli-distributed, then the event that there are infinitely many $k \in \mathbb{N}$ such that $X_k = X_{k+1} = \dots = X_{k+100} = 1$ is a tail event. If each $X_k$ models the outcome of the $k$-th toss in an infinite sequence of coin tosses, this means that a sequence of 100 consecutive heads occurring infinitely many times is a tail event in this model.
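The coin-toss example lends itself to a small numerical check, at least in scaled-down form. The following Python sketch (an illustration, with the run length, sample size, and seed chosen purely for feasibility) replaces runs of 100 heads with runs of 5, since a given window of 100 fair tosses is all heads only with probability $2^{-100}$; the shorter runs keep recurring as the sample grows, consistent with the zero–one law's verdict for this tail event, which is probability 1 by the second Borel–Cantelli lemma.

```python
# A minimal Monte Carlo sketch: the tail event "runs of r consecutive heads
# occur infinitely often" has probability 1 for fair coin tosses. A run of
# 100 heads is far too rare to observe directly, so we scale down to r = 5
# and watch runs keep recurring as the sequence grows.
import random

random.seed(0)
r = 5                      # scaled-down run length (the article uses 100)
run_positions = []
current_run = 0
for k in range(1, 200_001):
    if random.random() < 0.5:      # heads
        current_run += 1
        if current_run >= r:
            run_positions.append(k)
    else:
        current_run = 0

# Runs of length r appear again and again rather than drying up, as the
# zero-one law (via Borel-Cantelli) predicts for this event.
print("windows completing a run of", r, "heads:", len(run_positions))
print("latest such position:", run_positions[-1])
```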
Tail events are precisely those events whose occurrence can still be determined if an arbitrarily large but finite initial segment of the $X_k$ is removed.
In many situations, it can be easy to apply Kolmogorov's zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one.
A more general statement of Kolmogorov's zero–one law holds for sequences of independent σ-algebras. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $F_n$ be a sequence of σ-algebras contained in $\mathcal{F}$. Let

$$G_n = \sigma\left(\bigcup_{k=n}^{\infty} F_k\right)$$

be the smallest σ-algebra containing $F_n, F_{n+1}, \dots$. The terminal σ-algebra of the $F_n$ is defined as $\mathcal{T}((F_n)_{n \in \mathbb{N}}) = \bigcap_{n=1}^{\infty} G_n$.
Kolmogorov's zero–one law asserts that, if the $F_n$ are stochastically independent, then for any event $E \in \mathcal{T}((F_n)_{n \in \mathbb{N}})$, one has either $P(E) = 0$ or $P(E) = 1$. The idea of the proof is to show that such an event $E$ is independent of itself, so that $P(E) = P(E \cap E) = P(E)^2$, which forces $P(E)$ to be 0 or 1.
The statement of the law in terms of random variables is obtained from the latter by taking each $F_n$ to be the σ-algebra generated by the random variable $X_n$. A tail event is then by definition an event which is measurable with respect to the σ-algebra generated by all $X_n$, but which is independent of any finite number of the $X_n$. That is, a tail event is precisely an element of the terminal σ-algebra $\bigcap_{n=1}^{\infty} G_n$.
An invertible measure-preserving transformation on a standard probability space that obeys the zero–one law is called a Kolmogorov automorphism. All Bernoulli automorphisms are Kolmogorov automorphisms, but not vice versa. In percolation theory, the event that an infinite open cluster exists likewise obeys a zero–one law.
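The percolation dichotomy can be glimpsed numerically, though only through a finite-size proxy, since no simulation can exhibit a genuinely infinite cluster. The Python sketch below (the grid size, trial count, and densities are illustrative choices, not canonical values) estimates the chance that an open cluster spans a finite grid at site densities on either side of the square-lattice site-percolation threshold (approximately 0.593); the spanning frequency is near 0 on one side and near 1 on the other, mirroring the zero–one behavior of the infinite-cluster event.

```python
# A finite-size proxy for the percolation zero-one law: estimate the
# probability that an open cluster spans an n x n grid from top to bottom,
# at site densities below and above the critical threshold (~0.593).
import random
from collections import deque

def spans(n, p, rng):
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    q = deque((0, j) for j in range(n) if open_site[0][j])
    for _, j in q:
        seen[0][j] = True
    while q:                               # breadth-first search downward
        i, j = q.popleft()
        if i == n - 1:
            return True                    # an open path reached the bottom
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and open_site[a][b] and not seen[a][b]:
                seen[a][b] = True
                q.append((a, b))
    return False

rng = random.Random(1)
for p in (0.45, 0.75):                     # densities straddling the threshold
    hits = sum(spans(60, p, rng) for _ in range(200))
    print(f"p = {p}: spanning frequency {hits / 200:.2f}")
```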
Let $\{X_n\}_n$ be a sequence of independent random variables. Then the event $\left\{\lim_{n \to \infty} \sum_{k=1}^{n} X_k \text{ exists}\right\}$ is a tail event, so by Kolmogorov's zero–one law it has probability either 0 or 1. Note that independence is required for the zero–one law to hold: without it, consider a sequence that is either $(0, 0, 0, \dots)$ or $(1, 1, 1, \dots)$, each with probability $\frac{1}{2}$. In this case the sum converges with probability $\frac{1}{2}$, which is neither 0 nor 1.
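The failure under dependence can be simulated directly. The sketch below (illustrative code; the seed and trial count are arbitrary) draws the single coin flip that selects between the two constant sequences and records how often the resulting series converges; the empirical frequency settles near $\frac{1}{2}$ rather than at 0 or 1.

```python
# The dependent counterexample: one coin flip fixes every term of the
# sequence, so the convergence event is decided by that single flip and
# has probability 1/2 -- the zero-one law fails without independence.
import random

random.seed(2)
trials = 100_000
converged = 0
for _ in range(trials):
    all_ones = random.random() < 0.5   # a single flip fixes every term
    # partial sums of (1,1,1,...) grow without bound; those of (0,0,0,...)
    # are identically 0, so the series converges exactly on the zero branch
    if not all_ones:
        converged += 1

print("empirical P(sum converges) =", converged / trials)   # close to 0.5
```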