In probability theory and statistics, the generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions with probability density function

f(x) = \frac{(a/b)^{p/2}}{2 K_p\left(\sqrt{ab}\right)} \, x^{p-1} e^{-(ax + b/x)/2}, \qquad x > 0,

where K_p is a modified Bessel function of the second kind, a > 0, b > 0, and p is a real parameter. It is used extensively in geostatistics, statistical linguistics, finance, etc. This distribution was first proposed by Étienne Halphen.[1][2][3] It was rediscovered and popularised by Ole Barndorff-Nielsen, who called it the generalized inverse Gaussian distribution. Its statistical properties are discussed in Bent Jørgensen's lecture notes.[4]
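A quick numerical sanity check of this density, sketched with SciPy (the helper name `gig_pdf` and the parameter values are ours), confirms that it integrates to 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

def gig_pdf(x, p, a, b):
    """Density of GIG(p, a, b) for a, b > 0 and real p."""
    norm = (a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b)))
    return norm * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)

# The density should integrate to 1 over (0, inf).
total, _ = quad(gig_pdf, 0, np.inf, args=(1.5, 2.0, 3.0))
```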
By setting \theta = \sqrt{ab} and \eta = \sqrt{b/a}, we can alternatively express the GIG distribution as

f(x) = \frac{1}{2\eta K_p(\theta)} \left(\frac{x}{\eta}\right)^{p-1} e^{-\frac{\theta}{2}\left(\frac{x}{\eta} + \frac{\eta}{x}\right)}, \qquad x > 0,

where \theta is the concentration parameter and \eta is the scale parameter.
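This (θ, η) form is, up to naming, the parametrization used by SciPy's `scipy.stats.geninvgauss`: its shape argument plays the role of θ and its `scale` the role of η. A short sketch comparing the two forms (parameter values are arbitrary):

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss

p, a, b = 1.5, 2.0, 3.0
theta, eta = np.sqrt(a * b), np.sqrt(b / a)  # concentration and scale

x = np.linspace(0.1, 5.0, 50)
# Density in the original (p, a, b) parametrization.
manual = ((a / b) ** (p / 2) / (2 * kv(p, theta))
          * x ** (p - 1) * np.exp(-(a * x + b / x) / 2))
# SciPy's geninvgauss(p, c) has density proportional to
# x^(p-1) exp(-c (x + 1/x) / 2), so c = theta and scale = eta.
scipy_pdf = geninvgauss.pdf(x, p, theta, scale=eta)
```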
Barndorff-Nielsen and Halgreen proved that the GIG distribution is infinitely divisible.[5]
The entropy of the generalized inverse Gaussian distribution is given as[citation needed]

H = \frac{1}{2} \log\left(\frac{b}{a}\right) + \log\left(2 K_p\left(\sqrt{ab}\right)\right) - (p-1) \frac{\left[\frac{d}{d\nu} K_\nu\left(\sqrt{ab}\right)\right]_{\nu=p}}{K_p\left(\sqrt{ab}\right)} + \frac{\sqrt{ab}}{2 K_p\left(\sqrt{ab}\right)} \left(K_{p+1}\left(\sqrt{ab}\right) + K_{p-1}\left(\sqrt{ab}\right)\right)

where \left[\frac{d}{d\nu} K_\nu\left(\sqrt{ab}\right)\right]_{\nu=p} is the derivative of the modified Bessel function of the second kind with respect to the order \nu, evaluated at \nu = p.
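The closed form above can be checked numerically: approximate the order derivative of K_\nu by a central finite difference and compare against direct quadrature of -f \log f (a sketch; variable names and parameter values are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

p, a, b = 1.3, 2.0, 3.0
w = np.sqrt(a * b)

# d/dnu K_nu(w) at nu = p, by central finite difference.
h = 1e-6
dK = (kv(p + h, w) - kv(p - h, w)) / (2 * h)

# Closed-form entropy from the expression above.
H_formula = (0.5 * np.log(b / a) + np.log(2 * kv(p, w))
             - (p - 1) * dK / kv(p, w)
             + w / (2 * kv(p, w)) * (kv(p + 1, w) + kv(p - 1, w)))

# Differential entropy by direct numerical integration of -f log f.
log_norm = (p / 2) * np.log(a / b) - np.log(2 * kv(p, w))

def integrand(x):
    logf = log_norm + (p - 1) * np.log(x) - (a * x + b / x) / 2
    return -np.exp(logf) * logf

H_numeric, _ = quad(integrand, 0, np.inf)
```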
The characteristic function of a random variable X \sim \operatorname{GIG}(p, a, b) is given as (for a derivation of the characteristic function, see supplementary materials of [6])

E\left[e^{itX}\right] = \left(\frac{a}{a - 2it}\right)^{p/2} \frac{K_p\left(\sqrt{(a - 2it)b}\right)}{K_p\left(\sqrt{ab}\right)}

for t \in \mathbb{R}, where i denotes the imaginary unit.
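A sketch verifying this closed form against direct numerical integration of E[e^{itX}] over the density (this relies on `scipy.special.kv` accepting a complex argument; parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

p, a, b = 1.5, 2.0, 3.0
w = np.sqrt(a * b)
t = 0.7

def pdf(x):
    return ((a / b) ** (p / 2) / (2 * kv(p, w))
            * x ** (p - 1) * np.exp(-(a * x + b / x) / 2))

# E[e^{itX}] by integrating the real and imaginary parts separately.
re, _ = quad(lambda x: np.cos(t * x) * pdf(x), 0, np.inf)
im, _ = quad(lambda x: np.sin(t * x) * pdf(x), 0, np.inf)
phi_numeric = re + 1j * im

# Closed form; a - 2it stays in the right half-plane, so the
# principal branches of the power and square root apply.
z = a - 2j * t
phi_closed = (a / z) ** (p / 2) * kv(p, np.sqrt(b * z)) / kv(p, w)
```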
The inverse Gaussian and gamma distributions are special cases of the generalized inverse Gaussian distribution for p = −1/2 and b = 0, respectively.[7] Specifically, an inverse Gaussian distribution of the form

f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left(-\frac{\lambda (x - \mu)^2}{2\mu^2 x}\right)

is a GIG with a = \lambda/\mu^2, b = \lambda, and p = -1/2. A gamma distribution of the form

g(x; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x}

is a GIG with a = 2\beta, b = 0, and p = \alpha.
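The inverse Gaussian mapping can be checked against SciPy, whose `invgauss` fixes λ = 1 and recovers general λ through its `scale` parameter (the b = 0 gamma limit is not checked here because `scipy.stats.geninvgauss` requires a strictly positive shape argument):

```python
import numpy as np
from scipy.stats import geninvgauss, invgauss

mu, lam = 1.5, 2.5  # inverse Gaussian parameters (arbitrary)
x = np.linspace(0.1, 6.0, 60)

# IG(mu, lam) via scipy's invgauss, rescaled to general lam.
ig_pdf = invgauss.pdf(x, mu / lam, scale=lam)

# The same density as a GIG with a = lam/mu^2, b = lam, p = -1/2,
# expressed in the (theta, eta) parametrization scipy uses.
a, b, p = lam / mu**2, lam, -0.5
gig_pdf = geninvgauss.pdf(x, p, np.sqrt(a * b), scale=np.sqrt(b / a))
```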
Other special cases include the inverse-gamma distribution, for a = 0.[7]
The GIG distribution is conjugate to the normal distribution when serving as the mixing distribution in a normal variance-mean mixture.[8][9] Let the prior distribution for some hidden variable, say z, be GIG:

P(z \mid a, b, p) = \operatorname{GIG}(z \mid p, a, b)

and let there be T observed data points, X = x_1, \ldots, x_T, with normal likelihood function, conditioned on z:

P(X \mid z, \alpha, \beta) = \prod_{i=1}^{T} N(x_i \mid \alpha + \beta z, z)

where N(x \mid \mu, v) is the normal distribution with mean \mu and variance v. Then the posterior for z, given the data, is also GIG:

P(z \mid X, \alpha, \beta) = \operatorname{GIG}\left(z \,\middle|\, p - \frac{T}{2},\; a + T\beta^2,\; b + S\right)

where S = \sum_{i=1}^{T} (x_i - \alpha)^2.[note 1]
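The conjugacy can be verified numerically: the product of the GIG prior and the normal likelihood should be proportional to the stated GIG posterior in z (a sketch; the data and parameter values below are arbitrary):

```python
import numpy as np
from scipy.stats import geninvgauss, norm

rng = np.random.default_rng(0)

# Prior GIG(p, a, b) on z; observations x_i ~ N(alpha + beta z, z).
p, a, b = 1.0, 2.0, 3.0
alpha, beta = 0.5, 1.2
x = rng.normal(1.0, 0.8, size=10)  # arbitrary data
T, S = len(x), np.sum((x - alpha) ** 2)

def gig_pdf(z, p, a, b):
    return geninvgauss.pdf(z, p, np.sqrt(a * b), scale=np.sqrt(b / a))

z = np.linspace(0.2, 3.0, 40)
prior = gig_pdf(z, p, a, b)
like = np.prod(norm.pdf(x[:, None], loc=alpha + beta * z,
                        scale=np.sqrt(z)), axis=0)
post = gig_pdf(z, p - T / 2, a + T * beta**2, b + S)

# prior * likelihood should equal the posterior up to one constant.
ratio = prior * like / post
```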
The Sichel distribution results when the GIG is used as the mixing distribution for the Poisson parameter λ {\displaystyle \lambda } .[10][11]