# Bayesian inference with INLA

*2019-10-10*

# Preamble

The integrated nested Laplace approximation (INLA) is a method for approximate Bayesian inference. In recent years it has established itself as an alternative to other methods, such as Markov chain Monte Carlo, because of its speed and ease of use via the R-INLA package. Although the INLA methodology focuses on models that can be expressed as latent Gaussian Markov random fields (GMRF), this class encompasses a large family of models used in practice.
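As a minimal sketch of what model fitting with the package looks like (assuming the `INLA` package has been installed from its own repository, since it is not on CRAN; the simulated dataset here is purely illustrative):

```r
# Load the INLA package (installed from https://www.r-inla.org, not CRAN)
library(INLA)

# A small simulated dataset: a linear predictor with Gaussian noise
set.seed(1)
d <- data.frame(x = runif(100))
d$y <- 1 + 2 * d$x + rnorm(100, sd = 0.1)

# Fit a Gaussian model; inla() takes a model formula, a likelihood
# family and the data, much like glm()
fit <- inla(y ~ x, family = "gaussian", data = d)

# Posterior summaries of the fixed effects and hyperparameters
summary(fit)
```

The call returns posterior marginals for the model parameters in a fraction of the time an equivalent MCMC run would typically take.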

This book aims to provide an introduction to the INLA method and the
associated `INLA` (or `R-INLA`) package. It starts with a short introduction
to Bayesian inference in order to place INLA in context. Next, an
introduction to the INLA method is given, followed by two simple
examples of using the `INLA` package for the `R` statistical software. Then,
different types of widely used models are described in different chapters.
To mention a few, these include mixed-effects models, multilevel models,
spatial and spatio-temporal models, smoothing methods and survival
analysis.

In addition to describing how to use the `INLA` package for model fitting, some
of the advanced features available are covered as well. These are commonly
employed to build different types of models, as well as to implement new latent
effects and priors within the INLA framework. This is particularly important as
it makes model fitting more flexible. Among the advanced features discussed, it
is worth mentioning models with several likelihoods (to build joint models),
effects shared between different likelihoods and the possibility of embedding
INLA within MCMC algorithms for flexible model fitting.

One idea stressed throughout the book is that INLA can be used as a toolbox and combined with other methods for Bayesian inference. This is particularly interesting when building models that do not fall within the class of latent GMRFs. Two examples are mixture models and the inclusion of imputation mechanisms for missing observations in the latent GMRF.

Another important feature of this book is that a Gitbook version is available
from the book website, and I can only thank my editor John Kimmel and the
publisher for agreeing to this. This has also been possible thanks to the use
of the `rmarkdown` (Xie, Allaire, and Grolemund 2018) and `bookdown`
(Xie 2016) packages. Furthermore, all examples in the book are fully
reproducible, with datasets and `R` code available from the book website.

## Prerequisites

Although Chapter 1 provides a bit of context about Bayesian inference, the book assumes that the reader has a good understanding of it. In particular, a general course on Bayesian inference at the M.Sc. or Ph.D. level would be a good starting point. Kruschke (2015) and McElreath (2016) are two recent books that can be used to learn about Bayesian inference. Carlin and Louis (2008) and Gelman et al. (2013) take a more in-depth approach. Specific references to particular methods described in the book are mentioned wherever necessary.

This book does not assume any knowledge of the integrated nested Laplace
approximation or the `INLA` package. However, the reader may want to check the
nice introduction in Morrison (2017), as well as the examples provided in
Bakka (2019) and J. J. Faraway (2019b).

## Data and code sources

Most data and code sources are from `R` packages and they are cited in the
appropriate chapters. However, some data have been obtained from different
external sources, and these are listed below. We have been as thorough as
possible when acknowledging data and code sources, and we hope that there
are no omissions.

In Chapter 4, the original data about the 1988 election in the United States of America have been obtained from Prof. Andrew Gelman’s website and can be downloaded from http://www.stat.columbia.edu/~gelman/arm/examples/election88.

In Chapter 4, the original data about “stop-and-frisk” in New York have been obtained from Prof. Andrew Gelman’s website at http://www.stat.columbia.edu/~gelman/arm/examples/police.

Current precinct boundaries have been obtained from the City of New York website at https://data.cityofnewyork.us/Public-Safety/Police-Precincts/78dh-3ptz/data.

In Chapter 8, the original New Mexico health dataset has been obtained from the SaTScan website at https://www.satscan.org and completed with the county boundaries available from the US Census Bureau website (https://www.census.gov).

In Chapter 9, the temperature dataset has been obtained from the associated website of Fahrmeir and Kneib (2011) at http://www.smoothingbook.org/.

In Chapter 12, the code for the analysis of the `nhanes2` dataset is available from GitHub at https://github.com/becarioprecario/INLAMCMC_examples.

Some general on-line resources that I have found interesting include Bakka (2019), J. J. Faraway (2019b), Jovanovic (2015) and Morrison (2017).

## Acknowledgements

First of all, I would like to thank Håvard Rue and coauthors for giving us INLA and all its derived works. I have had the chance to discuss many issues about INLA with them throughout the years and they have always been a source of inspiration and new ideas.

My friends and colleagues from the VAlència Bayesian Research group (http://vabar.es) have also provided a nurturing environment for the development of this book. Furthermore, thanks to all my co-authors for coming up with interesting research questions. Some of our joint work is illustrated in this book. I am profoundly indebted to Susie Bayarri, Juan Ferrándiz and Antonio López, with whom I took my first steps in the Bayesian world.

And last but not least, I want to thank John Kimmel, the editor in charge of this book, for his continuous support and patience every time I requested an extension to the previous deadline. He also made sure that early versions of the book were thoroughly reviewed by a number of anonymous reviewers. Thanks to their comments, the book has improved.

This work has also been partly supported by grants PPIC-2014-001-P and SBPLY/17/180501/000491, funded by Consejería de Educación, Cultura y Deportes (Junta de Comunidades de Castilla-La Mancha, Spain) and FEDER, and grant MTM2016-77501-P, funded by Ministerio de Economía y Competitividad (Spain).

Virgilio Gómez-Rubio

La Noguera (Nerpio, Albacete, Spain)


### References

Bakka, Haakon. 2019. “Small Tutorials on INLA.” https://haakonbakka.bitbucket.io/organisedtopics.html.

Carlin, Bradley P., and Thomas A. Louis. 2008. *Bayesian Methods for Data Analysis*. 3rd ed. Boca Raton, FL: Chapman & Hall/CRC Press.

Fahrmeir, Ludwig, and Thomas Kneib. 2011. *Bayesian Smoothing Regression for Longitudinal, Spatial and Event History Data*. New York: Oxford University Press.

Faraway, Julian J. 2019b. “INLA for Linear Mixed Models.” http://www.maths.bath.ac.uk/~jjf23/inla/.

Gelman, Andrew, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. 2013. *Bayesian Data Analysis*. 3rd ed. Boca Raton, FL: Chapman & Hall/CRC Press.

Jovanovic, Mladen. 2015. “R Playbook: Introduction to Mixed-Models. Multilevel Models Playbook.” http://complementarytraining.net/r-playbook-introduction-to-multilevelhierarchical-models/.

Kruschke, John K. 2015. *Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan*. 2nd ed. Amsterdam: Academic Press.

McElreath, Richard. 2016. *Statistical Rethinking: A Bayesian Course with Examples in R and Stan*. Boca Raton, FL: Chapman & Hall/CRC Press.

Morrison, Kathryn. 2017. “A Gentle INLA Tutorial.” https://www.precision-analytics.ca/blog/a-gentle-inla-tutorial/.

Xie, Yihui. 2016. *Bookdown: Authoring Books and Technical Documents with R Markdown*. Boca Raton, FL: Chapman & Hall/CRC Press. https://github.com/rstudio/bookdown.

Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. *R Markdown: The Definitive Guide*. Boca Raton, Florida: Chapman; Hall/CRC. https://bookdown.org/yihui/rmarkdown.