A mixed Poisson distribution is a univariate discrete probability distribution in probability theory. It results from assuming that the conditional distribution of a random variable, given the value of the rate parameter, is a Poisson distribution, and that the rate parameter itself is a random variable. Hence it is a special case of a compound probability distribution. Mixed Poisson distributions are used in actuarial mathematics as a general approach to the distribution of the number of claims and are also examined as epidemiological models.[1] They should not be confused with the compound Poisson distribution or the compound Poisson process.[2]
A random variable X follows a mixed Poisson distribution with mixing density π(λ) if its probability distribution is[3]
$$\operatorname{P}(X=k)=\int_{0}^{\infty}\frac{\lambda^{k}}{k!}e^{-\lambda}\,\pi(\lambda)\,d\lambda.$$
If we denote by $q_{\lambda}(k)$ the probability mass function of the Poisson distribution with rate $\lambda$, then
$$\operatorname{P}(X=k)=\int_{0}^{\infty}q_{\lambda}(k)\,\pi(\lambda)\,d\lambda.$$
In the following, let $\mu_{\pi}=\int_{0}^{\infty}\lambda\,\pi(\lambda)\,d\lambda$ be the expected value of the density $\pi(\lambda)$ and $\sigma_{\pi}^{2}=\int_{0}^{\infty}(\lambda-\mu_{\pi})^{2}\,\pi(\lambda)\,d\lambda$ be its variance.
The expected value of the mixed Poisson distribution is
$$\operatorname{E}(X)=\mu_{\pi}.$$
For the variance one gets[3]
$$\operatorname{Var}(X)=\mu_{\pi}+\sigma_{\pi}^{2}.$$
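These two identities are easy to check by simulation: draw λ from the mixing density, then X given λ from a Poisson distribution. A minimal sketch, assuming NumPy is available; the Gamma mixing density and its parameters are an arbitrary illustrative choice, not part of the general result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mixing density: Gamma with shape 3 and scale 2,
# so mu_pi = 3 * 2 = 6 and sigma_pi^2 = 3 * 2**2 = 12.
shape, scale = 3.0, 2.0
mu_pi = shape * scale
var_pi = shape * scale ** 2

n = 1_000_000
lam = rng.gamma(shape, scale, size=n)  # lambda ~ pi
x = rng.poisson(lam)                   # X | lambda ~ Poisson(lambda)

print(x.mean())  # close to mu_pi = 6
print(x.var())   # close to mu_pi + sigma_pi^2 = 18
```

The sample mean should approach $\mu_{\pi}$ and the sample variance $\mu_{\pi}+\sigma_{\pi}^{2}$, i.e. the mixed distribution is overdispersed relative to a plain Poisson distribution with the same mean.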
The skewness can be represented as
$$\operatorname{v}(X)=\bigl(\mu_{\pi}+\sigma_{\pi}^{2}\bigr)^{-3/2}\left[\int_{0}^{\infty}(\lambda-\mu_{\pi})^{3}\,\pi(\lambda)\,d\lambda+3\sigma_{\pi}^{2}+\mu_{\pi}\right],$$
the bracketed term being the third cumulant of $X$ (by the law of total cumulance, it is the sum of the third central moment of $\pi$, $3\sigma_{\pi}^{2}$, and $\mu_{\pi}$).
The characteristic function has the form
$$\varphi_{X}(s)=M_{\pi}(e^{is}-1),$$
where $M_{\pi}$ is the moment-generating function of the density $\pi$.
For the probability generating function, one obtains[3]
$$m_{X}(s)=M_{\pi}(s-1).$$
The moment-generating function of the mixed Poisson distribution is
$$M_{X}(s)=M_{\pi}(e^{s}-1).$$
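When the mixing density has a closed-form moment-generating function, these transform identities can be checked numerically. A sketch assuming NumPy, with an illustrative Gamma mixing density of shape 2 and rate 3, whose MGF is $(\text{rate}/(\text{rate}-t))^{\text{shape}}$ for $t<\text{rate}$:

```python
import numpy as np

rng = np.random.default_rng(1)
shape, rate = 2.0, 3.0  # illustrative Gamma mixing density for lambda

def M_pi(t):
    # MGF of the Gamma(shape, rate) mixing density, valid for t < rate
    return (rate / (rate - t)) ** shape

lam = rng.gamma(shape, 1.0 / rate, size=1_000_000)
x = rng.poisson(lam)

s = 0.7
print((s ** x).mean(), M_pi(s - 1))               # pgf: m_X(s) = M_pi(s - 1)
t = 0.3
print(np.exp(t * x).mean(), M_pi(np.exp(t) - 1))  # mgf: M_X(t) = M_pi(e^t - 1)
```

In each pair the Monte Carlo estimate of $\operatorname{E}[s^{X}]$ or $\operatorname{E}[e^{tX}]$ should agree with the corresponding transform of the mixing density.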
Theorem—Compounding a Poisson distribution with rate parameter distributed according to a gamma distribution yields a negative binomial distribution.[3]
Let $\pi(\lambda)=\frac{(\frac{p}{1-p})^{r}}{\Gamma(r)}\lambda^{r-1}e^{-\frac{p}{1-p}\lambda}$ be the density of a $\Gamma\left(r,\frac{p}{1-p}\right)$-distributed random variable. Then
$$\begin{aligned}\operatorname{P}(X=k)&=\frac{1}{k!}\int_{0}^{\infty}\lambda^{k}e^{-\lambda}\frac{(\frac{p}{1-p})^{r}}{\Gamma(r)}\lambda^{r-1}e^{-\frac{p}{1-p}\lambda}\,d\lambda\\&=\frac{p^{r}(1-p)^{-r}}{\Gamma(r)k!}\int_{0}^{\infty}\lambda^{k+r-1}e^{-\frac{\lambda}{1-p}}\,d\lambda\\&=\frac{p^{r}(1-p)^{-r}}{\Gamma(r)k!}(1-p)^{k+r}\underbrace{\int_{0}^{\infty}\lambda^{k+r-1}e^{-\lambda}\,d\lambda}_{=\Gamma(r+k)}\\&=\frac{\Gamma(r+k)}{\Gamma(r)k!}(1-p)^{k}p^{r}.\end{aligned}$$
Therefore, $X\sim\operatorname{NegB}(r,p)$.
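The theorem can be confirmed by simulation: empirical frequencies of a gamma-mixed Poisson sample should match the negative binomial pmf. A sketch assuming NumPy and SciPy are available, with illustrative parameters $r=4$, $p=0.4$ (note `scipy.stats.nbinom.pmf(k, r, p)` uses exactly the pmf $\frac{\Gamma(r+k)}{\Gamma(r)k!}(1-p)^{k}p^{r}$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r, p = 4.0, 0.4  # illustrative parameters

# lambda ~ Gamma(r, rate = p/(1-p)), i.e. scale = (1-p)/p
lam = rng.gamma(r, (1 - p) / p, size=1_000_000)
x = rng.poisson(lam)

# Empirical frequencies should match the NegB(r, p) pmf
for k in range(6):
    print(k, (x == k).mean(), stats.nbinom.pmf(k, r, p))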
Theorem—Compounding a Poisson distribution with rate parameter distributed according to an exponential distribution yields a geometric distribution.
Let $\pi(\lambda)=\frac{1}{\beta}e^{-\frac{\lambda}{\beta}}$ be the density of an $\operatorname{Exp}\left(\frac{1}{\beta}\right)$-distributed random variable. Using integration by parts $k$ times yields:
$$\begin{aligned}\operatorname{P}(X=k)&=\frac{1}{k!}\int_{0}^{\infty}\lambda^{k}e^{-\lambda}\frac{1}{\beta}e^{-\frac{\lambda}{\beta}}\,d\lambda\\&=\frac{1}{k!\beta}\int_{0}^{\infty}\lambda^{k}e^{-\lambda\left(\frac{1+\beta}{\beta}\right)}\,d\lambda\\&=\frac{1}{k!\beta}\cdot k!\left(\frac{\beta}{1+\beta}\right)^{k}\int_{0}^{\infty}e^{-\lambda\left(\frac{1+\beta}{\beta}\right)}\,d\lambda\\&=\left(\frac{\beta}{1+\beta}\right)^{k}\left(\frac{1}{1+\beta}\right).\end{aligned}$$
Therefore, $X\sim\operatorname{Geo}\left(\frac{1}{1+\beta}\right)$.
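This special case can likewise be checked by simulation. A sketch assuming NumPy, with the illustrative choice $\beta=2$, comparing empirical frequencies against the geometric pmf $q(1-q)^{k}$ with $q=\frac{1}{1+\beta}$ on $k=0,1,2,\dots$:

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 2.0  # mean of the exponential mixing density (illustrative)

lam = rng.exponential(beta, size=1_000_000)  # lambda ~ Exp(1/beta)
x = rng.poisson(lam)

q = 1.0 / (1.0 + beta)  # success probability of the resulting geometric law
# P(X = k) = q * (1 - q)**k for k = 0, 1, 2, ...
for k in range(6):
    print(k, (x == k).mean(), q * (1 - q) ** k)
```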