Getting started

This page builds the smallest end-to-end fit: a Poisson regression with an intercept and an iid per-observation random effect, fit by INLA. It exists to show the moving parts, not the modelling — for canonical datasets, see the vignettes.

A toy Poisson + IID model

n = 200 observations with intercept α and an iid random effect u_i ∼ N(0, τ⁻¹):

\[y_i \sim \mathrm{Poisson}(\exp(\alpha + u_i)), \quad u \sim \mathcal{N}(0, \tau^{-1} I).\]

using Random, Distributions, SparseArrays, LinearAlgebra
using GMRFs, LatentGaussianModels

rng = Random.Xoshiro(20260423)
n   = 200

α_true = 0.5
τ_true = 4.0
u_true = (1 / sqrt(τ_true)) .* randn(rng, n)
η_true = α_true .+ u_true
y      = [rand(rng, Poisson(exp(η_true[i]))) for i in 1:n]
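Before fitting, it helps to know what the simulated counts should look like. Marginalising the random effect with the lognormal moment formula gives

\[\mathbb{E}[y_i] = \mathbb{E}\left[e^{\alpha + u_i}\right] = e^{\alpha + 1/(2\tau)} = e^{0.5 + 0.125} \approx 1.87,\]

so mean(y) should land near 1.9, up to sampling noise.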

Build the model

A LatentGaussianModel has three pieces: the likelihood, a tuple of latent components, and a sparse projector A mapping the stacked latent vector x = [α; u] to the linear predictor η = A x.

c_int = Intercept()
c_iid = IID(n; hyperprior = PCPrecision(1.0, 0.01))

A = [sparse(ones(n)) sparse(1.0I, n, n)]   # n × (n+1): a column of ones for α, an identity block for u

ℓ     = PoissonLikelihood()
model = LatentGaussianModel(ℓ, (c_int, c_iid), A)

The hyperprior PCPrecision(U = 1.0, α = 0.01) reads "1% prior probability that the standard deviation σ = τ^(-1/2) exceeds 1". It is the penalised complexity (PC) prior of Simpson et al. (2017) and is the v0.1 default for precisions.
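Concretely, the PC prior with parameters (U, α) is an exponential prior on the standard deviation σ, with rate chosen so that the tail statement above holds (the standard construction in Simpson et al. 2017):

\[\sigma \sim \mathrm{Exp}(\lambda), \qquad \lambda = -\frac{\log \alpha}{U}, \qquad \text{so that } \Pr(\sigma > U) = e^{-\lambda U} = \alpha.\]

With U = 1 and α = 0.01 this gives λ = log(100) ≈ 4.6; the change of variables σ = τ^(-1/2) yields the density π(τ) = (λ/2) τ^(-3/2) exp(-λ τ^(-1/2)) on the precision scale.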

Fit by INLA

res = inla(model, y; int_strategy = :grid)

inla(...) is shorthand for fit(model, y, INLA(); int_strategy = :grid). A dense grid needs k^dim(θ) density evaluations for k points per dimension, so for higher-dimensional θ the default :auto switches to CCD; here dim(θ) = 1 and :grid is fine.

What's in the result

fixed_effects(model, res)        # posterior summary of α
1-element Vector{@NamedTuple{name::String, mean::Float64, sd::Float64, lower::Float64, upper::Float64}}:
 (name = "Intercept[1]", mean = 0.5293492093732829, sd = 0.058560983873523605, lower = 0.41457178998941113, upper = 0.6441266287571547)
hyperparameters(model, res)      # posterior summary of τ
1-element Vector{@NamedTuple{name::String, mean::Float64, sd::Float64, lower::Float64, upper::Float64}}:
 (name = "IID[2]", mean = 2.2309807845495793, sd = 0.6033886261648357, lower = 1.0483608076319597, upper = 3.413600761467199)
log_marginal_likelihood(res)     # for model comparison
-339.8248734899833

That's the loop. Real models add covariates (FixedEffects(X)), swap IID for BYM2(graph; …) or RW1(n), pick a richer likelihood (NegativeBinomialLikelihood, GammaLikelihood, …), and turn on diagnostics (dic, waic, cpo, pit). The Scotland BYM2 vignette walks through that on a real dataset.

Coming from R-INLA?

The model above can also be written as a single @lgm expression — the same constructor call, in formula notation:

using LGMFormula
df = (y = y, idx = collect(1:n))
model = @lgm y ~ 1 + f(idx, IID(n; hyperprior = PCPrecision(1.0, 0.01))) data=df family=PoissonLikelihood()

The macro is sugar over the explicit constructor; it does no numerical work. See the migration guide for the side-by-side R-INLA → @lgm mapping on Scotland BYM2, Tokyo rainfall, and Meuse SPDE.