Assume the errors \(\varepsilon_i\) are i.i.d. normal with zero mean and variance \(\sigma^2\). Let \(X\) be the matrix with \(i\)th row equal to \(x_i^\mathsf{T}\).
Then the g-prior for \(\beta\) is the multivariate normal distribution with prior mean a hyperparameter \(\beta_0\) and covariance matrix proportional to \(\sigma^2 (X^\mathsf{T} X)^{-1}\), i.e.,
\[
\beta \mid \sigma^2 \sim \mathrm{N}\!\left[\beta_0,\; g \sigma^2 (X^\mathsf{T} X)^{-1}\right],
\]
where g is a positive scalar parameter.
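As a minimal numerical sketch of the prior above, the following constructs the g-prior covariance \(g\sigma^2(X^\mathsf{T}X)^{-1}\) and draws one sample of \(\beta\). The design matrix here is synthetic, and the choices \(\beta_0 = 0\) and \(g = n\) are illustrative conventions, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix (n observations, p predictors) -- illustrative only.
n, p = 50, 3
X = rng.standard_normal((n, p))

# Hyperparameters (beta0 = 0 and g = n are conventional illustrative choices).
sigma2 = 1.0
g = float(n)
beta0 = np.zeros(p)

# Prior covariance: g * sigma^2 * (X^T X)^{-1}
prior_cov = g * sigma2 * np.linalg.inv(X.T @ X)

# Draw one sample of beta from the g-prior.
beta_draw = rng.multivariate_normal(beta0, prior_cov)
print(beta_draw.shape)  # (3,)
```

Note that the prior covariance inherits the correlation structure of the least-squares estimator, which is what makes the g-prior convenient: a single scalar g controls how diffuse the prior is.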
Posterior distribution of beta
The posterior distribution of \(\beta\) is given as
\[
\beta \mid \sigma^2, y \sim \mathrm{N}\!\left[\, q \hat\beta + (1-q) \beta_0,\; q \sigma^2 (X^\mathsf{T} X)^{-1} \right],
\]
where \(q = g/(1+g)\) and
\[
\hat\beta = (X^\mathsf{T} X)^{-1} X^\mathsf{T} y
\]
is the maximum likelihood (least squares) estimator of \(\beta\). The vector of regression coefficients \(\beta\) can be estimated by its posterior mean under the g-prior, i.e., as the weighted average of the maximum likelihood estimator and \(\beta_0\):
\[
\tilde\beta = \frac{g}{1+g}\,\hat\beta + \frac{1}{1+g}\,\beta_0 .
\]
Clearly, as \(g \to \infty\), the posterior mean converges to the maximum likelihood estimator.
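The shrinkage behaviour of the posterior mean can be checked numerically. This is a sketch on synthetic data (the design matrix, true coefficients, and prior mean \(\beta_0 = 0\) are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data (illustrative): y = X beta + noise.
n, p = 50, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_normal(n)

# Least-squares (maximum likelihood) estimator beta_hat.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

beta0 = np.zeros(p)  # prior mean (hyperparameter)

def posterior_mean(g):
    # Weighted average of the MLE and the prior mean under the g-prior:
    # (g/(1+g)) * beta_hat + (1/(1+g)) * beta0
    w = g / (1.0 + g)
    return w * beta_hat + (1.0 - w) * beta0

# As g grows, the posterior mean approaches the MLE.
for g in (1.0, 100.0, 1e6):
    print(g, np.linalg.norm(posterior_mean(g) - beta_hat))
```

The printed distances to \(\hat\beta\) shrink toward zero as g increases, matching the limit stated above.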
Selection of g
Estimation of g is slightly less straightforward than estimation of \(\beta\).
A variety of methods have been proposed, including Bayes and empirical Bayes estimators.[3]
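Two of the choices discussed in that literature can be sketched on synthetic data: the unit-information convention \(g = n\), and a local empirical Bayes estimate of the form \(\hat g = \max(F - 1, 0)\), where \(F\) is the usual F-statistic of the regression. Both the data and the no-intercept form of the F-statistic below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data (illustrative only).
n, p = 100, 4
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, -1.0, 0.5]) + rng.standard_normal(n)

# Choice 1: unit-information prior, g = n.
g_unit = float(n)

# Choice 2: local empirical Bayes estimate, g_hat = max(F - 1, 0),
# using the F-statistic of a no-intercept regression (an assumption here).
beta_hat, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(rss[0])
tss = float(y @ y)
r2 = 1.0 - rss / tss
F = (r2 / p) / ((1.0 - r2) / (n - p))
g_eb = max(F - 1.0, 0.0)

print(g_unit, g_eb)
```

The truncation at zero in the empirical Bayes estimate keeps g nonnegative when the regression explains essentially nothing; fully Bayes treatments instead place a prior on g itself.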
References
Zellner, A. (1986). "On Assessing Prior Distributions and Bayesian Regression Analysis with g Prior Distributions". In Goel, P.; Zellner, A. (eds.). Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti. Studies in Bayesian Econometrics and Statistics. Vol. 6. New York: Elsevier. pp. 233–243. ISBN 978-0-444-87712-3.
Datta, Jyotishka; Ghosh, Jayanta K. (2015). "In Search of Optimal Objective Priors for Model Selection and Estimation". In Upadhyay, Satyanshu Kumar; et al. (eds.). Current Trends in Bayesian Methodology with Applications. CRC Press. pp. 225–243. ISBN 978-1-4822-3511-1.