GMRFs.jl
Sparse Gaussian Markov random fields: graph, precision, sampling, conditioning, log-density. The numerical core of the ecosystem; useful standalone for anyone needing GMRF primitives without the INLA machinery on top.
What's here
- An abstract type `AbstractGMRF` plus concrete leaves: `IIDGMRF`, `RW1GMRF`, `RW2GMRF`, `AR1GMRF`, `SeasonalGMRF`, `BesagGMRF`.
- `GMRFGraph` — the topological substrate (a thin wrapper around Graphs.jl's `SimpleGraph` plus the adjacency `SparseMatrixCSC`).
- Sparse Cholesky-based sampling, log-density evaluation, and marginal variances via SelectedInversion.jl.
- A `FactorCache` that exposes LinearSolve.jl's symbolic-factorisation reuse so callers can repeatedly solve `Q x = b` for varying values of `Q` with the same sparsity pattern. Hot-path callers (the inner Newton in `LatentGaussianModels.fit_laplace!`) lean on this.
- Linear-constraint types (`NoConstraint`, `LinearConstraint`) and Freni-Sterrantino-style sum-to-zero constraints for intrinsic CARs on disconnected graphs.
Required methods for new GMRF subtypes
`num_nodes(g) -> Int`
`precision_matrix(g) -> AbstractMatrix`
`constraints(g) -> AbstractConstraint` (default: `NoConstraint()`)

Sampling and log-density fall out of the precision matrix and constraint set automatically.
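A minimal sketch of a type satisfying this interface. It is standalone for illustration — the type name is made up, and it defines plain functions rather than extending the package's generics:

```julia
using SparseArrays, LinearAlgebra

# Hypothetical example type (name and fields invented for illustration):
# a scaled-identity field, i.e. the IID model.
struct ScaledIdentityGMRF
    n::Int
    τ::Float64
end

num_nodes(g::ScaledIdentityGMRF)        = g.n
precision_matrix(g::ScaledIdentityGMRF) = g.τ * sparse(1.0I, g.n, g.n)
# constraints(g) is left to the NoConstraint() default for proper GMRFs.
```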
API
GMRFs.GMRFs — Module
`GMRFs`

Sparse Gaussian Markov Random Fields for Julia. Useful standalone, and the numerical core of the Julia INLA ecosystem.
See the package README and plans/plan.md for the design document.
GMRFs.AR1GMRF — Type
`AR1GMRF{T}(n; ρ, τ)`

Stationary AR(1) with correlation ρ ∈ (-1, 1) and marginal precision τ > 0. The joint precision is tridiagonal, full-rank; this is a proper GMRF with Var(x_i) = 1/τ for every i.
Q = (τ / (1 - ρ²)) · S, with
S_{11} = S_{nn} = 1,
S_{ii} = 1 + ρ² for interior i,
S_{i,i+1} = S_{i+1,i} = -ρ.

This matches Rue & Held (2005, Eq. 1.39) and R-INLA's ar1 convention, where τ is the marginal precision of the field.
Note: R-INLA parameterizes via log((1+ρ)/(1−ρ)) internally. This constructor takes ρ directly; the logit-Fisher mapping is applied at the LatentGaussianModels.jl component layer.
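As a sanity check, the formulas above can be assembled directly; a sketch, not the package implementation:

```julia
using SparseArrays, LinearAlgebra

# Build the tridiagonal AR(1) precision from the formula above and check
# Var(x_i) = 1/τ numerically (dense inverse; fine at toy sizes).
function ar1_precision(n, ρ, τ)
    d = fill(1 + ρ^2, n)
    d[1] = d[n] = 1.0
    S = spdiagm(-1 => fill(-ρ, n - 1), 0 => d, 1 => fill(-ρ, n - 1))
    return (τ / (1 - ρ^2)) * S
end

Q = ar1_precision(6, 0.5, 2.0)
Σ = inv(Matrix(Q))
# diag(Σ) is fill(1/2.0, 6): every marginal precision equals τ = 2
```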
GMRFs.AbstractConstraint — Type
`AbstractConstraint`

Base type for linear constraints on a GMRF. Currently only LinearConstraint (the hard constraint form A x = e) and NoConstraint are implemented.
Sampling and inference on constrained GMRFs use the standard conditioning-by-kriging correction (Rue & Held 2005, §2.3.3); that lives in the sampling module, not here — this file provides only the data types.
GMRFs.AbstractGMRF — Type
`AbstractGMRF`

A Gaussian Markov random field on a graph. Concrete subtypes are distributions of vectors x ∈ ℝⁿ with precision matrix Q and mean μ (possibly zero).
Required methods:
- `num_nodes(g)` — dimension of the field.
- `precision_matrix(g)` — the full symmetric precision matrix `Q` as a `SparseMatrixCSC{<:Real,Int}`.
- `prior_mean(g)` — the mean vector `μ`. Defaults to `zeros(num_nodes(g))`.
- `rankdef(g)` — rank deficiency of `Q` (size of the null space). `0` for proper GMRFs, `≥ 1` for intrinsic ones.
Intrinsic GMRFs (e.g. Besag, RW1, RW2) additionally carry constraint metadata; see constraints(::AbstractGMRF).
GMRFs.AbstractGMRFGraph — Type
`AbstractGMRFGraph`

The topology on which a GMRF lives. A thin wrapper over a Graphs.AbstractGraph that also caches the sparsity pattern of the precision matrix (lower-triangular positions).
Concrete subtypes must implement:
- `graph(g) -> AbstractGraph`
- `num_nodes(g) -> Int`
- `adjacency_matrix(g) -> SparseMatrixCSC{Bool,Int}` (symmetric, zero diagonal, Bool-valued)
GMRFs.BesagGMRF — Type
`BesagGMRF(g::AbstractGMRFGraph; τ = 1.0, scale_model = true)`

The intrinsic CAR / Besag GMRF on graph g. Precision Q = τ · D · L · D where L is the combinatorial graph Laplacian and D = Diagonal(√c). The per-node vector c is the Sørbye-Rue (2014) geometric-mean scaling constant of the connected component containing each node — Freni-Sterrantino et al. (2018) showed that on disconnected graphs each component needs its own scaling. With scale_model = true (default, matching R-INLA ≥ 17.06) the constants come from per_component_scale_factors; with scale_model = false they are all 1.
rankdef equals the number of connected components of g.
GMRFs.BesagGMRF — Method
`BesagGMRF(W::AbstractMatrix; τ = 1.0, scale_model = true)`

Convenience constructor from an adjacency matrix.
GMRFs.FactorCache — Type
`FactorCache{F}`

A reusable sparse Cholesky factor for a fixed sparsity pattern. Construct with FactorCache(Q) on any symmetric-positive-definite sparse matrix; subsequent calls to update!(cache, Q_new) reuse the symbolic factorisation when Q_new has the same sparsity pattern as the original Q.
This is the primary reuse mechanism for the inner Newton loop in the LGM Laplace step, and for the outer θ grid in INLA.
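The reuse pattern FactorCache wraps can be illustrated with plain SuiteSparse calls from the stdlib; a sketch, not the package API:

```julia
using SparseArrays, LinearAlgebra

# cholesky! refactorises numerically while keeping the symbolic analysis
# done for the original sparsity pattern.
Q1 = spdiagm(0 => fill(4.0, 5), -1 => fill(-1.0, 4), 1 => fill(-1.0, 4))
F  = cholesky(Q1)            # symbolic + numeric phase
x1 = F \ ones(5)

Q2 = spdiagm(0 => fill(5.0, 5), -1 => fill(-1.0, 4), 1 => fill(-1.0, 4))
cholesky!(F, Q2)             # numeric-only phase; same sparsity pattern
x2 = F \ ones(5)
```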
GMRFs.FactorCache — Method
`FactorCache(Q::AbstractMatrix)`

Build a Cholesky cache from a symmetric-positive-definite sparse matrix Q. Q is promoted to a Julia-native SparseMatrixCSC internally.
GMRFs.GMRFGraph — Type
`GMRFGraph{G<:AbstractGraph} <: AbstractGMRFGraph`

Default concrete graph: wraps a Graphs.SimpleGraph plus the adjacency matrix so we do not rebuild it every time we need the sparsity pattern.
GMRFs.GMRFGraph — Method
`GMRFGraph(W::AbstractMatrix)`

Build from an explicit n × n adjacency matrix W. Interprets any nonzero entry as an edge; symmetry is required (checked) and the diagonal is ignored. Loops are not currently supported — we error on a nonzero diagonal to avoid silently downgrading a self-loop to nothing, since gmrflib users sometimes pass a diagonal-loaded matrix by mistake.
GMRFs.Generic0GMRF — Type
`Generic0GMRF(R; τ = 1.0, rankdef = 0, scale_model = false)`

User-supplied structure matrix R (symmetric, non-negative definite), precision Q = τ · R. rankdef must be supplied since in general we cannot infer it cheaply from R. If scale_model = true, the Sørbye-Rue scaling is applied to R before multiplying by τ.
R-INLA's generic1 is a special case of this with R rescaled so its largest eigenvalue is 1; that transformation lives at the LGM component layer.
GMRFs.IIDGMRF — Type
`IIDGMRF{T}(n; τ = one(T))`

x ∼ N(0, τ⁻¹ I_n). The precision matrix is τ · I.
GMRFs.LinearConstraint — Type
`LinearConstraint{T,M<:AbstractMatrix{T},V<:AbstractVector{T}} <: AbstractConstraint`

The hard linear constraint A x = e, with A of shape k × n and e of length k. For sum-to-zero constraints on an intrinsic Besag with r connected components, A has r rows — one indicator row per component.
GMRFs.LinearConstraint — Method
`LinearConstraint(A::AbstractMatrix)`

Homogeneous constraint A x = 0. RHS defaults to zeros(size(A,1)) in the element type of A.
GMRFs.NoConstraint — Type
`NoConstraint <: AbstractConstraint`

Singleton for "no constraint applied". Returned by default from constraints(::AbstractGMRF) for proper GMRFs.
GMRFs.RW1GMRF — Type
`RW1GMRF{T}(n; τ = one(T), cyclic = false)`

First-order random walk precision on n nodes. Both forms have rank deficiency 1: in the open and cyclic cases alike, the null space is spanned by the constant vector (a constant shift of the field).
GMRFs.RW2GMRF — Type
`RW2GMRF{T}(n; τ = one(T), cyclic = false)`

Second-order random walk precision on n nodes. R = D' D where D is the second-difference operator. Rank deficiency 2 (constant plus linear trend in the null space) in the open case, 1 in the cyclic case.
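The rank claim for the open case is easy to verify directly; a sketch, not the package code:

```julia
using LinearAlgebra

# Open-case RW2 structure matrix R = D'D from the second-difference
# operator, confirming rank deficiency 2.
n = 8
D = zeros(n - 2, n)
for t in 1:n-2
    D[t, t], D[t, t+1], D[t, t+2] = 1.0, -2.0, 1.0
end
R = D' * D
# rank(R) == n - 2; constants and linear trends span the null space
```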
GMRFs.SeasonalGMRF — Type
`SeasonalGMRF{T}(n; period, τ = 1.0)`

Intrinsic seasonal-variation model of length n and period s = period. Precision

Q = τ · B' B, where B ∈ ℝ^{(n-s+1) × n} and B_{t,j} = 1 if j ∈ [t, t+s-1], else 0,

so that each s-consecutive sum of the field is penalised:

π(x | τ) ∝ exp(-½ τ Σ_{t=1}^{n-s+1} (x_t + ⋯ + x_{t+s-1})²).

The null space has dimension s - 1: it is spanned by all s-periodic sequences summing to zero within one period. Equivalently, a basis is ε_k = e_k - e_s (repeated with period s) for k = 1, …, s-1, where e_k is the k-th canonical basis vector of ℝ^s; so rankdef = s-1. null_space_basis returns the orthonormalised version of this basis.
The default constraint is a single sum-to-zero row (matching R-INLA's model = "seasonal"); the remaining s - 2 null directions are identified by the likelihood. Matches Rue & Held (2005, §3.4.3).
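The null-space dimension can be checked numerically; a sketch, not the package code:

```julia
using LinearAlgebra

# n = 7, s = 3: build B and confirm the null-space dimension s - 1 = 2.
n, s = 7, 3
B = zeros(n - s + 1, n)
for t in 1:n-s+1
    B[t, t:t+s-1] .= 1.0   # indicator of the s-window starting at t
end
R = B' * B
# rank(R) == n - (s - 1); e.g. x = [1, -1, 0, 1, -1, 0, 1] is a null vector
```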
GMRFs.SymmetricQ — Type
`SymmetricQ{T,M}`

Lightweight view that tags a sparse matrix as a symmetric precision matrix. Internally stores the upper triangle (Julia's Symmetric convention) of a SparseMatrixCSC{T}; algebraic operations reconstitute the full symmetric form on demand.
This is the analogue of R-INLA's Qfunc-produced symmetric Q. It is mostly a semantic wrapper — SparseMatrixCSC already stores symmetric matrices fine; this type exists so that functions that require "precision matrix" in their signature do not accept arbitrary sparse matrices by mistake.
GMRFs.SymmetricQ — Method
`SymmetricQ(A::SparseMatrixCSC)`

Wrap a sparse matrix as a precision matrix. A must be square and symmetric.
Distributions.logpdf — Method
`Distributions.logpdf(g::AbstractGMRF, x::AbstractVector; check_constraint = true)`

Log-density of a GMRF at point x. For intrinsic GMRFs the density is defined on the non-null subspace {x : V' x = 0}; if check_constraint = true (default) we check that V'x is near zero and throw a PriorConstraintError otherwise. Pass check_constraint = false to get the formal quadratic form without the check.
GMRFs._generic_scale_factor — Method
Generic dense reference scale-factor for an arbitrary structure matrix R with known rank deficiency rd. For intrinsic Besag we have the combinatorial-Laplacian null space explicitly (constant-per-component); here we fall back to an eigendecomposition on the dense form.
GMRFs.connected_component_labels — Method
`connected_component_labels(g::AbstractGMRFGraph) -> Vector{Int}`

Assign each node an integer component label in 1:nconnected_components(g).
GMRFs.constraints — Method
`constraints(g::AbstractGMRF) -> AbstractConstraint`

The default constraint attached to a GMRF. Proper GMRFs return NoConstraint(); intrinsic ones (RW1, RW2, Besag) return a LinearConstraint with one sum-to-zero row per connected component (Freni-Sterrantino et al. 2018).
GMRFs.factor — Method
`factor(cache::FactorCache)`

Return the underlying Cholesky factor.
GMRFs.graph — Function
`graph(g::AbstractGMRFGraph)`

Return the wrapped Graphs.AbstractGraph.
GMRFs.marginal_variances — Method
`marginal_variances(g::AbstractGMRF; method = :auto) -> Vector{Float64}`

Return the vector of marginal variances for a GMRF. For a proper GMRF this is diag(Q⁻¹); for an intrinsic (rank-deficient) GMRF it is the generalised-inverse diagonal on the non-null subspace, diag(Q⁺) = diag(inv(Q + V V') - V V').
method:
- `:auto` (default) — `:selinv` for proper GMRFs, `:dense` for intrinsic. The intrinsic path augments `Q` with the null-space basis, which defeats sparsity, so dense is the honest default there.
- `:selinv` — force the sparse Takahashi path. Errors on intrinsic GMRFs (`Q` is singular).
- `:dense` — densify. Correctness oracle only.
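The generalised-inverse identity quoted above can be verified densely on a small intrinsic example (RW1 structure); a sketch, not the package code:

```julia
using LinearAlgebra

# For an intrinsic Q with orthonormal null-space basis V,
# Q⁺ = inv(Q + V V') - V V'.
n = 5
Q = Matrix(SymTridiagonal(vcat(1.0, fill(2.0, n - 2), 1.0), fill(-1.0, n - 1)))
V = fill(1 / sqrt(n), n, 1)          # constant vector spans the null space
Qplus = inv(Q + V * V') - V * V'
# Qplus equals the Moore-Penrose pseudoinverse pinv(Q);
# diag(Qplus) are the intrinsic marginal variances
```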
GMRFs.marginal_variances — Method
`marginal_variances(Q::AbstractSparseMatrix; method = :selinv) -> Vector{Float64}`

Return diag(Q⁻¹) for a proper, symmetric positive-definite sparse precision Q.
method:
- `:selinv` (default) — Takahashi recursion on the sparse Cholesky factor via `SelectedInversion.selinv_diag`. Scales to large sparse `Q`.
- `:dense` — densify `Q` and take a straight inverse. Correctness oracle only; slow.
GMRFs.nconnected_components — Method
`nconnected_components(g::AbstractGMRFGraph)`

Number of connected components of the underlying graph. Relevant for constraint generation on disconnected intrinsic models (one sum-to-zero constraint per component, Freni-Sterrantino et al. 2018).
GMRFs.null_space_basis — Function
`null_space_basis(g::AbstractGMRF) -> Matrix{Float64}`

Return an orthonormal basis V of the null space of precision_matrix(g), with shape (n, rankdef(g)). V'V = I. For proper GMRFs the result is zeros(n, 0).
This is a Float64 matrix by default. If a different element type is needed, convert at the call site.
GMRFs.num_nodes — Function
`num_nodes(g::AbstractGMRFGraph)`

Number of latent nodes in the graph (length of any vector sampled from a GMRF on g).
GMRFs.per_component_scale_factors — Method
`per_component_scale_factors(g::AbstractGMRFGraph) -> Vector{Float64}`

Per-component Sørbye-Rue (2014) geometric-mean scaling constants for the intrinsic Besag GMRF on a (possibly disconnected) graph g. Returns a vector of length K = nconnected_components(g) indexed by component label, where entry k makes the geometric-mean marginal variance equal 1 on the non-null subspace of component k alone.
This is the Freni-Sterrantino et al. (2018) refinement of Sørbye-Rue on disconnected graphs: each component is scaled independently rather than scaling the union with a single constant. R-INLA has applied this correction since 2018; matching it is required for posterior agreement on graphs like Sardinia.
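For a single connected component, the scaling reduces to one geometric mean over the generalised-inverse diagonal; a dense reference sketch, not the package code:

```julia
using LinearAlgebra

# Path-graph Laplacian of a 3-node connected component.
L = [ 1.0 -1.0  0.0;
     -1.0  2.0 -1.0;
      0.0 -1.0  1.0]
v = diag(pinv(L))                  # reference marginal variances
c = exp(sum(log, v) / length(v))   # geometric mean = the scaling constant
Lscaled = c * L
# geometric mean of diag(pinv(Lscaled)) is now 1, so variances scale as 1/τ
```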
GMRFs.precision_matrix — Function
`precision_matrix(g::AbstractGMRF)`

Full symmetric precision matrix Q as a SparseMatrixCSC{<:Real,Int}. Must include both upper and lower triangles so that slicing and linear solves work without additional wrapping.
GMRFs.prior_mean — Method
`prior_mean(g::AbstractGMRF)`

Prior mean vector. Defaults to zero for all currently implemented GMRFs; carried as a function so that future mean-shifted types can override.
GMRFs.rankdef — Method
`rankdef(g::AbstractGMRF) -> Int`

Rank deficiency of Q. 0 for proper GMRFs (IID, AR1), positive for intrinsic ones (1 for connected RW1/Besag, 2 for RW2, equal to the number of connected components for disconnected Besag).
GMRFs.scale_factor — Method
`scale_factor(g::AbstractGMRFGraph) -> Float64`

The Sørbye & Rue (2014) geometric-mean scaling constant for a connected intrinsic Besag GMRF on g. Applied so that, under Q_scaled = c · L, the geometric mean of the marginal variances diag((τ · Q_scaled)⁺) on the non-null subspace equals τ⁻¹.
For disconnected graphs use per_component_scale_factors: the single-scalar scaling is not the correct generalisation (Freni-Sterrantino et al. 2018). On a disconnected g this function returns the geometric mean of all per-component constants — kept for backward compatibility but not what BesagGMRF / BYM / BYM2 use.
The reference implementation is dense. For production use on large graphs, the LGM Phase 3 selected-inversion implementation should be used instead (see plans/defaults-parity.md and ADR-004).
GMRFs.scale_model — Method
`scale_model(g::BesagGMRF)`

Return a scaled copy. If g.scale_model == true this returns g unchanged; otherwise it returns a new BesagGMRF with scale_model = true. Pure function.
GMRFs.sum_to_zero_constraints — Method
`sum_to_zero_constraints(g::AbstractGMRFGraph; T = Float64) -> LinearConstraint`

Build the per-component sum-to-zero constraint for a graph with r = nconnected_components(g) components. The resulting A has r rows, one for each component, with 1s in that component's nodes and 0s elsewhere. RHS is zeros(r).
This is the Freni-Sterrantino et al. (2018) correction for intrinsic CAR on disconnected graphs.
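The indicator-row construction is a few lines given component labels; a sketch assuming labels are precomputed (e.g. via connected_component_labels):

```julia
# Component labels per node (here: a 2-node and a 3-node component).
labels = [1, 1, 2, 2, 2]
r, n = maximum(labels), length(labels)
A = zeros(r, n)
for (j, k) in enumerate(labels)
    A[k, j] = 1.0   # row k indicates membership in component k
end
# A x = 0 forces each component of x to sum to zero; RHS is zeros(r)
```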
GMRFs.tabulated_precision — Method
`tabulated_precision(n, qfunc; pattern_edges = Tuple{Int,Int}[])`

Build a SymmetricQ from a scalar callback qfunc(i, j) -> value, in the style of gmrflib's GMRFLib_tabulate_Qfunc. qfunc is called on i == j for diagonal entries and on (i, j) for each listed edge (both off-diagonal positions). The resulting matrix is symmetric.
pattern_edges is a collection of (i, j) pairs with i < j that describe the off-diagonal sparsity pattern. Any pair where qfunc(i, j) == 0 is skipped.
This is a utility for tests and for bespoke precision structures; most concrete GMRFs in this package build their own sparse matrix directly.
GMRFs.update! — Method
`update!(cache::FactorCache, Q_new::AbstractMatrix)`

Re-factorise in place, reusing the symbolic factorisation. Q_new must have the same sparsity pattern as the matrix the cache was constructed from. Returns cache.
On SuiteSparse this runs the numeric-only phase of supernodal Cholesky (cholmod_factorize_p) — the symbolic phase is not repeated.
Graphs.LinAlg.adjacency_matrix — Function
`adjacency_matrix(g::AbstractGMRFGraph)`

Symmetric Bool adjacency matrix of the graph. The (i,j) entry is true iff i and j are distinct and connected by an edge. Diagonal is false. This is the precision-matrix sparsity pattern off-diagonal; the diagonal is always present regardless.
Graphs.LinAlg.laplacian_matrix — Function
`laplacian_matrix(g::AbstractGMRFGraph)`

The combinatorial graph Laplacian L = D - A as a SparseMatrixCSC{Int,Int}, where D is the degree diagonal and A is the adjacency matrix. This is the precision structure matrix for intrinsic Besag / ICAR models (before τ scaling).
LinearAlgebra.logdet — Method
`logdet(cache::FactorCache)`

Log-determinant of the factored matrix.
Random.rand! — Method
`Random.rand!(rng::AbstractRNG, x::AbstractVector, g::AbstractGMRF)`

In-place draw. x must have length num_nodes(g). Returns x.
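Zero-mean sampling from a proper GMRF follows the standard Cholesky recipe (Rue & Held 2005, Ch. 2); a stdlib-only sketch, not the package implementation:

```julia
using LinearAlgebra, Random, SparseArrays

# Draw x ~ N(0, Q⁻¹) via CHOLMOD: with UP = U*P from the factorisation,
# Q = (UP)'(UP), so solving (UP) x = z for z ~ N(0, I) gives x with
# precision Q.
function sample_zero_mean(rng::AbstractRNG, Q::SparseMatrixCSC)
    F = cholesky(Q)          # sparse Cholesky with fill-reducing permutation
    z = randn(rng, size(Q, 1))
    return F.UP \ z
end

Q = spdiagm(0 => fill(4.0, 5), -1 => fill(-1.0, 4), 1 => fill(-1.0, 4))
x = sample_zero_mean(MersenneTwister(42), Q)
```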