GMRFs.jl

Sparse Gaussian Markov random fields: graph, precision, sampling, conditioning, log-density. The numerical core of the ecosystem; useful standalone for anyone needing GMRF primitives without the INLA machinery on top.

What's here

  • An abstract type AbstractGMRF plus concrete leaves: IIDGMRF, RW1GMRF, RW2GMRF, AR1GMRF, SeasonalGMRF, BesagGMRF.
  • GMRFGraph — the topological substrate (a thin wrapper around Graphs.jl's SimpleGraph plus the adjacency SparseMatrixCSC).
  • Sparse Cholesky-based sampling, log-density evaluation, marginal variances via SelectedInversion.jl.
  • A FactorCache that exposes LinearSolve.jl's symbolic-factorisation reuse so callers can repeatedly solve Q x = b for varying values of Q with the same sparsity pattern. Hot-path callers (the inner Newton in LatentGaussianModels.fit_laplace!) lean on this.
  • Linear-constraint types (NoConstraint, LinearConstraint) and Freni-Sterrantino-style sum-to-zero constraints for intrinsic CARs on disconnected graphs.

Required methods for new GMRF subtypes

num_nodes(g)           -> Int
precision_matrix(g)    -> AbstractMatrix
constraints(g)         -> AbstractConstraint   # default: NoConstraint()

Sampling and log-density fall out of the precision matrix and constraint set automatically.
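To make the contract concrete, here is a minimal sketch of a new subtype. The `AbstractGMRF`, `NoConstraint`, and method names are stand-ins mirroring the documented interface so the snippet is self-contained; in real use they come from GMRFs.jl, and `MyIID` is a hypothetical example type.

```julia
using SparseArrays, LinearAlgebra

# Stand-ins mirroring the documented contract; in real use these come from GMRFs.
abstract type AbstractGMRF end
struct NoConstraint end
constraints(::AbstractGMRF) = NoConstraint()   # documented default

# A minimal proper GMRF: iid noise with precision τ on n nodes.
struct MyIID <: AbstractGMRF
    n::Int
    τ::Float64
end
num_nodes(g::MyIID) = g.n
precision_matrix(g::MyIID) = sparse(g.τ * I, g.n, g.n)

g = MyIID(5, 2.0)
num_nodes(g)          # 5
precision_matrix(g)   # 5×5 sparse diagonal with 2.0 entries
constraints(g)        # NoConstraint()
```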

API

GMRFs.GMRFsModule
GMRFs

Sparse Gaussian Markov Random Fields for Julia. Standalone-useful and the numerical core of the Julia INLA ecosystem.

See the package README and plans/plan.md for the design document.

source
GMRFs.AR1GMRFType
AR1GMRF{T}(n; ρ, τ)

Stationary AR(1) with correlation ρ ∈ (-1, 1) and marginal precision τ > 0. The joint precision is tridiagonal, full-rank; this is a proper GMRF with Var(x_i) = 1/τ for every i.

Q = (τ / (1 - ρ²)) · S
S_{1,1} = S_{n,n} = 1
S_{i,i} = 1 + ρ²              for interior i
S_{i,i+1} = S_{i+1,i} = -ρ

This matches Rue & Held (2005, Eq. 1.39) and R-INLA's ar1 convention, where τ is the marginal precision of the field.

Note: R-INLA parameterizes via log((1+ρ)/(1−ρ)) internally. This constructor takes ρ directly; the logit-Fisher mapping is applied at the LatentGaussianModels.jl component layer.
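As a sanity check on this convention, the precision can be assembled directly and inverted densely; every marginal variance should equal 1/τ and the lag-1 covariance ρ/τ. This is a stdlib-only sketch, and `ar1_precision` is a local helper, not the package constructor.

```julia
using LinearAlgebra, SparseArrays

# Assemble Q = (τ / (1 - ρ²)) · S for the documented tridiagonal S.
function ar1_precision(n, ρ, τ)
    d = fill(1 + ρ^2, n); d[1] = d[n] = 1.0
    S = spdiagm(-1 => fill(-ρ, n - 1), 0 => d, 1 => fill(-ρ, n - 1))
    return (τ / (1 - ρ^2)) * S
end

Q = ar1_precision(50, 0.7, 2.0)
Σ = inv(Matrix(Q))
extrema(diag(Σ))   # both endpoints ≈ 1/τ = 0.5
```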

source
GMRFs.AbstractConstraintType
AbstractConstraint

Base type for linear constraints on a GMRF. Currently only LinearConstraint (the hard constraint form A x = e) and NoConstraint are implemented.

Sampling and inference on constrained GMRFs use the standard conditioning-by-kriging correction (Rue & Held 2005, §2.3.3); that lives in the sampling module, not here — this file provides only the data types.

source
GMRFs.AbstractGMRFType
AbstractGMRF

A Gaussian Markov random field on a graph. Concrete subtypes are distributions of vectors x ∈ ℝⁿ with precision matrix Q and mean μ (possibly zero).

Required methods:

  • num_nodes(g) — dimension of the field.
  • precision_matrix(g) — the full symmetric precision matrix Q as a SparseMatrixCSC{<:Real,Int}.
  • prior_mean(g) — the mean vector μ. Defaults to zeros(num_nodes(g)).
  • rankdef(g) — rank deficiency of Q (size of the null space). 0 for proper GMRFs, ≥ 1 for intrinsic ones.

Intrinsic GMRFs (e.g. Besag, RW1, RW2) additionally carry constraint metadata; see constraints(::AbstractGMRF).

source
GMRFs.AbstractGMRFGraphType
AbstractGMRFGraph

The topology on which a GMRF lives. A thin wrapper over a Graphs.AbstractGraph that also caches the sparsity pattern of the precision matrix (lower-triangular positions).

Concrete subtypes must implement:

  • graph(g) -> AbstractGraph
  • num_nodes(g) -> Int
  • adjacency_matrix(g) -> SparseMatrixCSC{Bool,Int} (symmetric, zero diagonal, Bool-valued)
source
GMRFs.BesagGMRFType
BesagGMRF(g::AbstractGMRFGraph; τ = 1.0, scale_model = true)

The intrinsic CAR / Besag GMRF on graph g. Precision Q = τ · D · L · D where L is the combinatorial graph Laplacian and D = Diagonal(√c). The per-node vector c is the Sørbye-Rue (2014) geometric-mean scaling constant of the connected component containing each node — Freni-Sterrantino et al. (2018) showed that on disconnected graphs each component needs its own scaling. With scale_model = true (default, matching R-INLA ≥ 17.06) the constants come from per_component_scale_factors; with scale_model = false they are all 1.

rankdef equals the number of connected components of g.

source
GMRFs.BesagGMRFMethod
BesagGMRF(W::AbstractMatrix; τ = 1.0, scale_model = true)

Convenience constructor from an adjacency matrix.

source
GMRFs.FactorCacheType
FactorCache{F}

A reusable sparse Cholesky factor for a fixed sparsity pattern. Construct with FactorCache(Q) on any symmetric-positive-definite sparse matrix; subsequent calls to update!(cache, Q_new) reuse the symbolic factorisation when Q_new has the same sparsity pattern as the original Q.

This is the primary reuse mechanism for the inner Newton loop in the LGM Laplace step, and for the outer θ grid in INLA.
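The underlying mechanism can be seen with the stdlib sparse solver directly: `cholesky!` on an existing CHOLMOD factor redoes only the numeric phase when the sparsity pattern is unchanged, which is the pattern FactorCache packages up. A stdlib-only sketch:

```julia
using SparseArrays, LinearAlgebra

Q1 = spdiagm(0 => fill(2.0, 4), -1 => fill(-1.0, 3), 1 => fill(-1.0, 3))
F = cholesky(Q1)          # symbolic analysis + numeric factorisation
x1 = F \ ones(4)

Q2 = 2Q1                  # new values, same sparsity pattern
cholesky!(F, Q2)          # numeric phase only; symbolic analysis reused
x2 = F \ ones(4)          # solves Q2 * x2 = b, hence x2 == x1 / 2
```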

source
GMRFs.FactorCacheMethod
FactorCache(Q::AbstractMatrix)

Build a Cholesky cache from a symmetric-positive-definite sparse matrix Q. Q is promoted to a Julia-native SparseMatrixCSC internally.

source
GMRFs.GMRFGraphType
GMRFGraph{G<:AbstractGraph} <: AbstractGMRFGraph

Default concrete graph: wraps a Graphs.SimpleGraph plus the adjacency matrix so we do not rebuild it every time we need the sparsity pattern.

source
GMRFs.GMRFGraphMethod
GMRFGraph(W::AbstractMatrix)

Build from an explicit n × n adjacency matrix W. Interprets any nonzero entry as an edge; symmetry is required (checked) and the diagonal is ignored. Loops are not currently supported — we error on a nonzero diagonal to avoid silently downgrading a self-loop to nothing, since gmrflib users sometimes pass a diagonal-loaded matrix by mistake.

source
GMRFs.Generic0GMRFType
Generic0GMRF(R; τ = 1.0, rankdef = 0, scale_model = false)

User-supplied structure matrix R (symmetric, non-negative definite), precision Q = τ · R. rankdef must be supplied since in general we cannot infer it cheaply from R. If scale_model = true, the Sørbye-Rue scaling is applied to R before multiplying by τ.

R-INLA's generic1 is a special case of this with R rescaled so its largest eigenvalue is 1; that transformation lives at the LGM component layer.

source
GMRFs.IIDGMRFType
IIDGMRF{T}(n; τ = one(T))

x ∼ N(0, τ⁻¹ I_n). The precision matrix is τ · I.

source
GMRFs.LinearConstraintType
LinearConstraint{T,M<:AbstractMatrix{T},V<:AbstractVector{T}} <: AbstractConstraint

The hard linear constraint A x = e, with A of shape k × n and e of length k. For sum-to-zero constraints on an intrinsic Besag with r connected components, A has r rows — one indicator row per component.
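For example, on a hypothetical 5-node graph with components {1, 2, 3} and {4, 5}, the two indicator rows and RHS would be:

```julia
A = [1.0 1.0 1.0 0.0 0.0;
     0.0 0.0 0.0 1.0 1.0]
e = zeros(2)

x = [1.0, -1.0, 0.0, 2.0, -2.0]   # sums to zero within each component
A * x == e                         # true: x satisfies the constraint
```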

source
GMRFs.LinearConstraintMethod
LinearConstraint(A::AbstractMatrix)

Homogeneous constraint A x = 0. RHS defaults to zeros(size(A,1)) in the element type of A.

source
GMRFs.NoConstraintType
NoConstraint <: AbstractConstraint

Singleton for "no constraint applied". Returned by default from constraints(::AbstractGMRF) for proper GMRFs.

source
GMRFs.RW1GMRFType
RW1GMRF{T}(n; τ = one(T), cyclic = false)

First-order random walk precision on n nodes. Both the open and cyclic forms have rank deficiency 1; in each case the null space is spanned by the constant vector (a common level shift).

source
GMRFs.RW2GMRFType
RW2GMRF{T}(n; τ = one(T), cyclic = false)

Second-order random walk precision on n nodes. R = D' D where D is the second-difference operator. Rank deficiency 2 (linear null space) in the open case, 1 in the cyclic case.

source
GMRFs.SeasonalGMRFType
SeasonalGMRF{T}(n; period, τ = 1.0)

Intrinsic seasonal-variation model of length n and period s = period. Precision

Q = τ · B' B,     B ∈ ℝ^{(n-s+1) × n},
B_{t,j} = 1 if j ∈ [t, t+s-1] else 0,

so that each s-consecutive sum of the field is penalised:

π(x | τ) ∝ exp(-½ τ Σ_{t=1}^{n-s+1} (x_t + ⋯ + x_{t+s-1})²).

The null space has dimension s - 1: it is spanned by all s-periodic sequences summing to zero within one period. Equivalently, a basis is ε_k = e_k - e_s (repeated with period s) for k = 1, …, s-1, where e_k is the k-th canonical basis vector of ℝ^s; so rankdef = s-1. null_space_basis returns the orthonormalised version of this basis.

The default constraint is a single sum-to-zero row (matching R-INLA's model = "seasonal"); the remaining s - 2 null directions are identified by the likelihood. Matches Rue & Held (2005, §3.4.3).
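The rank deficiency and null space can be checked numerically straight from the definition of B; this stdlib sketch uses n = 12 and s = 4.

```julia
using LinearAlgebra

n, s = 12, 4
B = zeros(n - s + 1, n)
for t in 1:(n - s + 1)
    B[t, t:t+s-1] .= 1.0               # sliding window of s consecutive ones
end
R = B' * B                             # structure matrix (τ = 1)

n - rank(R)                            # rank deficiency = s - 1 = 3
v = repeat([1.0, -1.0, 0.0, 0.0], 3)   # s-periodic, zero-sum per period
norm(R * v)                            # ≈ 0: v lies in the null space
```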

source
GMRFs.SymmetricQType
SymmetricQ{T,M}

Lightweight view that tags a sparse matrix as a symmetric precision matrix. Internally stores the upper triangle (Julia's Symmetric convention) of a SparseMatrixCSC{T}; algebraic operations reconstitute the full symmetric form on demand.

This is the analogue of R-INLA's Qfunc-produced symmetric Q. It is mostly a semantic wrapper — SparseMatrixCSC already stores symmetric matrices fine; this type exists so that functions that require "precision matrix" in their signature do not accept arbitrary sparse matrices by mistake.

source
GMRFs.SymmetricQMethod
SymmetricQ(A::SparseMatrixCSC)

Wrap a sparse matrix as a precision matrix. A must be square and symmetric.

source
Base.:\Method
Base.:\(cache::FactorCache, b::AbstractVecOrMat)

Solve Q x = b using the cached factor.

source
Base.randMethod
Base.rand(rng::AbstractRNG, g::AbstractGMRF) -> Vector{Float64}

Draw one sample x ∼ N(μ, Q⁺) from the GMRF g. For intrinsic models the sample satisfies the canonical per-component sum-to-zero constraint (Freni-Sterrantino et al. 2018). Allocates.

source
Distributions.logpdfMethod
Distributions.logpdf(g::AbstractGMRF, x::AbstractVector; check_constraint = true)

Log-density of a GMRF at point x. For intrinsic GMRFs the density is defined on the non-null subspace {x : V' x = 0}; if check_constraint = true (default) we check that V'x is near zero and throw a PriorConstraintError otherwise. Pass check_constraint = false to get the formal quadratic form without the check.

source
GMRFs._generic_scale_factorMethod

Generic dense reference scale-factor for an arbitrary structure matrix R with known rank deficiency rd. For intrinsic Besag we have the combinatorial-Laplacian null space explicitly (constant-per-component); here we fall back to an eigendecomposition on the dense form.

source
GMRFs.constraintsMethod
constraints(g::AbstractGMRF) -> AbstractConstraint

The default constraint attached to a GMRF. Proper GMRFs return NoConstraint(); intrinsic ones (RW1, RW2, Besag) return a LinearConstraint with one sum-to-zero row per connected component (Freni-Sterrantino et al. 2018).

source
GMRFs.factorMethod
factor(cache::FactorCache)

Return the underlying Cholesky factor.

source
GMRFs.graphFunction
graph(g::AbstractGMRFGraph)

Return the wrapped Graphs.AbstractGraph.

source
GMRFs.marginal_variancesMethod
marginal_variances(g::AbstractGMRF; method = :auto) -> Vector{Float64}

Return the vector of marginal variances for a GMRF. For a proper GMRF this is diag(Q⁻¹); for an intrinsic (rank-deficient) GMRF it is the generalised-inverse diagonal on the non-null subspace, diag(Q⁺) = diag(inv(Q + V V') - V V').

method:

  • :auto (default) — :selinv for proper GMRFs, :dense for intrinsic. The intrinsic path augments Q with the null-space basis, which defeats sparsity, so dense is the honest default there.
  • :selinv — force the sparse Takahashi path. Errors on intrinsic GMRFs (Q is singular).
  • :dense — densify. Correctness oracle only.
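The generalised-inverse identity behind the intrinsic path can be verified densely on a small RW1; here V is the orthonormal constant null vector, and the result coincides with the Moore-Penrose pseudoinverse.

```julia
using LinearAlgebra

L = [1.0 -1.0 0.0; -1.0 2.0 -1.0; 0.0 -1.0 1.0]   # RW1 structure, rankdef 1
V = fill(1 / sqrt(3), 3, 1)                        # orthonormal null basis
Qplus = inv(L + V * V') - V * V'                   # equals pinv(L)
diag(Qplus)                                        # marginal variances on the subspace
```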
source
GMRFs.marginal_variancesMethod
marginal_variances(Q::AbstractSparseMatrix; method = :selinv) -> Vector{Float64}

Return diag(Q⁻¹) for a proper, symmetric positive-definite sparse precision Q.

method:

  • :selinv (default) — Takahashi recursion on the sparse Cholesky factor via SelectedInversion.selinv_diag. Scales to large sparse Q.
  • :dense — densify Q and take a straight inverse. Correctness oracle only; slow.
source
GMRFs.nconnected_componentsMethod
nconnected_components(g::AbstractGMRFGraph)

Number of connected components of the underlying graph. Relevant for constraint generation on disconnected intrinsic models (one sum-to-zero constraint per component, Freni-Sterrantino et al. 2018).

source
GMRFs.null_space_basisFunction
null_space_basis(g::AbstractGMRF) -> Matrix{Float64}

Return an orthonormal basis V of the null space of precision_matrix(g), with shape (n, rankdef(g)). V'V = I. For proper GMRFs the result is zeros(n, 0).

This is a Float64 matrix by default. If a different element type is needed, convert at the call site.

source
GMRFs.num_nodesFunction
num_nodes(g::AbstractGMRFGraph)

Number of latent nodes in the graph (length of any vector sampled from a GMRF on g).

source
GMRFs.per_component_scale_factorsMethod
per_component_scale_factors(g::AbstractGMRFGraph) -> Vector{Float64}

Per-component Sørbye-Rue (2014) geometric-mean scaling constants for the intrinsic Besag GMRF on a (possibly disconnected) graph g. Returns a vector of length K = nconnected_components(g) indexed by component label, where entry k makes the geometric-mean marginal variance equal 1 on the non-null subspace of component k alone.

This is the Freni-Sterrantino et al. (2018) refinement of Sørbye-Rue on disconnected graphs: each component is scaled independently rather than scaling the union with a single constant. R-INLA has applied this correction since 2018; matching it is required for posterior agreement on graphs like Sardinia.
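For a single connected component the constant reduces to the Sørbye-Rue recipe, which can be sketched densely. L here is the Laplacian of a 3-node path; the real function does this per component and via sparse machinery.

```julia
using LinearAlgebra, Statistics

L = [1.0 -1.0 0.0; -1.0 2.0 -1.0; 0.0 -1.0 1.0]   # connected 3-node path
V = fill(1 / sqrt(3), 3, 1)                        # orthonormal null vector
vmarg = diag(inv(L + V * V') - V * V')             # ICAR marginal variances (τ = 1)
c = exp(mean(log.(vmarg)))                         # geometric-mean constant

# after scaling, the geometric-mean marginal variance is 1:
vscaled = diag(inv(c * L + V * V') - V * V')
exp(mean(log.(vscaled)))                           # ≈ 1
```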

source
GMRFs.precision_matrixFunction
precision_matrix(g::AbstractGMRF)

Full symmetric precision matrix Q as a SparseMatrixCSC{<:Real,Int}. Must include both upper and lower triangles so that slicing and linear solves work without additional wrapping.

source
GMRFs.prior_meanMethod
prior_mean(g::AbstractGMRF)

Prior mean vector. Defaults to zero for all currently implemented GMRFs; carried as a function so that future mean-shifted types can override.

source
GMRFs.rankdefMethod
rankdef(g::AbstractGMRF) -> Int

Rank deficiency of Q. 0 for proper GMRFs (IID, AR1), positive for intrinsic ones (1 for connected RW1/Besag, 2 for RW2, equal to the number of connected components for disconnected Besag).

source
GMRFs.scale_factorMethod
scale_factor(g::AbstractGMRFGraph) -> Float64

The Sørbye & Rue (2014) geometric-mean scaling constant for a connected intrinsic Besag GMRF on g. Applied so that, with Q_scaled = c · L, the geometric mean of the marginal variances diag((τ · Q_scaled)⁺) over the non-null subspace equals τ⁻¹.

For disconnected graphs use per_component_scale_factors: the single-scalar scaling is not the correct generalisation (Freni-Sterrantino et al. 2018). On a disconnected g this function returns the geometric mean of all per-component constants — kept for backward compatibility but not what BesagGMRF / BYM / BYM2 use.

The reference implementation is dense. For production use on large graphs, the LGM Phase 3 selected-inversion implementation should be used instead (see plans/defaults-parity.md and ADR-004).

source
GMRFs.scale_modelMethod
scale_model(g::BesagGMRF)

Return a scaled copy. If g.scale_model == true this returns g unchanged; otherwise it returns a new BesagGMRF with scale_model = true. Pure function.

source
GMRFs.sum_to_zero_constraintsMethod
sum_to_zero_constraints(g::AbstractGMRFGraph; T = Float64) -> LinearConstraint

Build the per-component sum-to-zero constraint for a graph with r = nconnected_components(g) components. The resulting A has r rows, one for each component, with 1s in that component's nodes and 0s elsewhere. RHS is zeros(r).

This is the Freni-Sterrantino et al. (2018) correction for intrinsic CAR on disconnected graphs.

source
GMRFs.tabulated_precisionMethod
tabulated_precision(n, qfunc; pattern_edges = Tuple{Int,Int}[])

Build a SymmetricQ from a scalar callback qfunc(i, j) -> value, in the style of gmrflib's GMRFLib_tabulate_Qfunc. qfunc is called with i == j for diagonal entries and once per listed edge (i, j); both off-diagonal positions (i, j) and (j, i) receive the returned value, so the resulting matrix is symmetric.

pattern_edges is a collection of (i, j) pairs with i < j that describe the off-diagonal sparsity pattern. Any pair where qfunc(i, j) == 0 is skipped.

This is a utility for tests and for bespoke precision structures; most concrete GMRFs in this package build their own sparse matrix directly.
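A stdlib-only stand-in for the callback pattern; `tabulate_q` here is a local sketch with the same shape as the documented utility, not the package function itself.

```julia
using SparseArrays, LinearAlgebra

function tabulate_q(n, qfunc, edges)
    Is, Js, Vs = Int[], Int[], Float64[]
    for i in 1:n                        # diagonal entries
        push!(Is, i); push!(Js, i); push!(Vs, qfunc(i, i))
    end
    for (i, j) in edges                 # fill both off-diagonal positions
        v = qfunc(i, j)
        v == 0 && continue
        append!(Is, (i, j)); append!(Js, (j, i)); append!(Vs, (v, v))
    end
    return sparse(Is, Js, Vs, n, n)
end

q(i, j) = i == j ? 2.0 : -1.0
Q = tabulate_q(4, q, [(1, 2), (2, 3), (3, 4)])
issymmetric(Q)   # true
```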

source
GMRFs.update!Method
update!(cache::FactorCache, Q_new::AbstractMatrix)

Re-factorise in place, reusing the symbolic factorisation. Q_new must have the same sparsity pattern as the matrix the cache was constructed from. Returns cache.

On SuiteSparse this runs the numeric-only phase of supernodal Cholesky (cholmod_factorize_p) — the symbolic phase is not repeated.

source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::AbstractGMRFGraph)

Symmetric Bool adjacency matrix of the graph. The (i, j) entry is true iff i and j are distinct and connected by an edge; the diagonal is false. Off the diagonal this coincides with the sparsity pattern of the precision matrix, whose diagonal is always structurally present regardless.

source
Graphs.LinAlg.laplacian_matrixFunction
laplacian_matrix(g::AbstractGMRFGraph)

The combinatorial graph Laplacian L = D - A as a SparseMatrixCSC{Int,Int}, where D is the degree diagonal and A is the adjacency matrix. This is the precision structure matrix for intrinsic Besag / ICAR models (before τ scaling).
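For instance, on a 4-node path graph, built here directly from a sparse adjacency rather than through the graph wrapper:

```julia
using SparseArrays, LinearAlgebra

A = sparse([1, 2, 2, 3, 3, 4], [2, 1, 3, 2, 4, 3], ones(Int, 6), 4, 4)  # path
Lap = spdiagm(0 => vec(sum(A, dims = 2))) - A                           # L = D - A
Lap * ones(4)   # all zeros: constants span the ICAR null space
```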

source
Random.rand!Method
Random.rand!(rng::AbstractRNG, x::AbstractVector, g::AbstractGMRF)

In-place draw. x must have length num_nodes(g). Returns x.

source