Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function governing the large deviations of the empirical mean of a sequence of iid random variables. A weak version of this result was first shown by Harald Cramér in 1938.
The logarithmic moment generating function (which is the cumulant-generating function) of a random variable is defined as
$$\Lambda(t) = \log \operatorname{E}[\exp(tX_1)].$$
Let $X_1, X_2, \dots$ be a sequence of iid real random variables with finite logarithmic moment generating function, i.e. $\Lambda(t) < \infty$ for all $t \in \mathbb{R}$.
Then the Legendre transform of $\Lambda$,
$$\Lambda^*(x) := \sup_{t \in \mathbb{R}} \bigl( tx - \Lambda(t) \bigr),$$
satisfies
$$\lim_{n \to \infty} \frac{1}{n} \log P\!\left( \sum_{i=1}^{n} X_i \geq nx \right) = -\Lambda^*(x)$$
for all $x > \operatorname{E}[X_1]$.
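For example, if the $X_i$ are standard normal, then
$$\Lambda(t) = \log \operatorname{E}[\exp(tX_1)] = \tfrac{t^2}{2},$$
which is finite for all $t \in \mathbb{R}$, and the supremum in the Legendre transform is attained at $t = x$, giving
$$\Lambda^*(x) = \sup_{t \in \mathbb{R}} \bigl( tx - \tfrac{t^2}{2} \bigr) = \tfrac{x^2}{2}.$$
For every $x > 0 = \operatorname{E}[X_1]$, the theorem then says that $P\bigl(\sum_{i=1}^{n} X_i \geq nx\bigr)$ decays like $e^{-nx^2/2}$ on the exponential scale.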
In the terminology of the theory of large deviations, the result can be reformulated as follows:
If $X_1, X_2, \dots$ is a sequence of iid random variables, then the distributions $\left(\mathcal{L}\bigl(\tfrac{1}{n}\sum_{i=1}^{n} X_i\bigr)\right)_{n \in \mathbb{N}}$ satisfy a large deviation principle with rate function $\Lambda^*$.
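The exponential decay rate can also be checked numerically. The sketch below is an illustrative computation rather than part of the theorem: it assumes iid Bernoulli(1/2) variables, for which $\Lambda^*(x) = x\log(2x) + (1-x)\log(2(1-x))$ for $x \in (\tfrac{1}{2}, 1)$, and compares $-\tfrac{1}{n}\log P\bigl(\sum_{i=1}^{n} X_i \geq nx\bigr)$, computed exactly from the binomial distribution, with $\Lambda^*(x)$ as $n$ grows.

```python
import math

def log_binom_tail(n, k0):
    """log P(Binomial(n, 1/2) >= k0), computed in log space to avoid underflow."""
    log_terms = [
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        + n * math.log(0.5)
        for k in range(k0, n + 1)
    ]
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

def rate(x):
    """Lambda*(x) for Bernoulli(1/2): x*log(2x) + (1-x)*log(2(1-x))."""
    return x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))

x = 0.7  # threshold above the mean E[X_1] = 1/2
for n in (50, 200, 800, 3200):
    k0 = math.ceil(n * x)
    empirical_rate = -log_binom_tail(n, k0) / n
    print(f"n = {n:5d}: -log P(S_n >= nx)/n = {empirical_rate:.4f}, Lambda*(x) = {rate(x):.4f}")
```

As $n$ increases, the computed ratio should decrease toward $\Lambda^*(0.7) \approx 0.0823$, consistent with the limit in the theorem; the remaining gap reflects the subexponential (polynomial in $n$) prefactor that the large deviation principle does not capture.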