Estimators

This page introduces the two types of estimators in Microstructure.jl for estimating tissue parameters and quantifying their uncertainties: Markov Chain Monte Carlo (MCMC) sampling and Monte Carlo dropout with neural networks. Both estimators are flexibly parametrized, allowing you to specify sampling options for MCMC and training options for neural networks.

MCMC

MCMC methods aim to generate independent samples from the posterior distributions of tissue parameters given certain MRI measurements. You will need to tune the sampler parameters for a specific biophysical model.

Define a sampler for your model

Microstructure.Sampler - Type
Sampler(
    params::Tuple{Vararg{String}},
    prior_range::Tuple{Vararg{Tuple{Float64, Float64}}}, 
    proposal::Tuple{Vararg{<:Any}},
    paralinks::Tuple{Vararg{Pair{String}}}, 
    nsamples::Int64,
    burnin::Int64,
    thinning::Int64 
)

Return a Sampler object for a biophysical model.

Examples

julia> Sampler(
    params = ("axon.da","axon.dpara","extra.dperp_frac","fracs"),
    prior_range = ((1.0e-7,1.0e-5),(0.01e-9,0.9e-9),(0.0, 1.0),(0.0,1.0)),
    proposal = (Normal(0,0.25e-6), Normal(0,0.025e-9), Normal(0,0.05), MvNormal([0.0025 0 0;0 0.0001 0; 0 0 0.0001])),
    paralinks = ("axon.d0" => "axon.dpara", "extra.dpara" => "axon.dpara"),
    nsamples = 70000,
    burnin = 20000
)
Sampler(("axon.da", "axon.dpara", "extra.dperp_frac", "fracs"), ((1.0e-7, 1.0e-5), (1.0e-11, 9.0e-10), (0.0, 1.0), (0.0, 1.0)), (Normal{Float64}(μ=0.0, σ=2.5e-7), Normal{Float64}(μ=0.0, σ=2.5e-11), Normal{Float64}(μ=0.0, σ=0.05), ZeroMeanFullNormal(
dim: 3
μ: Zeros(3)
Σ: [0.0025 0.0 0.0; 0.0 0.0001 0.0; 0.0 0.0 0.0001]
)
), ("axon.d0" => "axon.dpara", "extra.dpara" => "axon.dpara"), 70000, 20000, 1)

Define a noise model

Microstructure.Noisemodel - Type
Noisemodel(
    logpdf::Function,
    sigma_start::Float64,
    sigma_range::Tuple{Float64,Float64},
    proposal::Distribution
)

Return a Noisemodel object with a logpdf function for calculating the log-likelihood of measurements (choose between logp_gauss and logp_rician), sigma_start as the starting value of the noise level, sigma_range as its prior range, and a proposal distribution for MCMC sampling.

Examples

julia> Noisemodel()
Noisemodel(Microstructure.logp_gauss, 0.01, (0.005, 0.1), Distributions.Normal{Float64}(μ=0.0, σ=0.005))
julia> Noisemodel(logpdf = logp_rician, sigma_start = 0.02, proposal = Normal(0,0.001))
Noisemodel(Microstructure.logp_rician, 0.02, (0.005, 0.1), Normal{Float64}(μ=0.0, σ=0.001))

Run MCMC on your model and data

Microstructure.mcmc! - Function

Method 1 generates perturbations within the function, creates and returns a chain as a Dict, and modifies the final model estimates in place. This method is useful for checking a few voxels, e.g. for assessing the quality of fitting, running chain diagnostics and optimizing the sampler for a model.

mcmc!(
    estimates::BiophysicalModel,
    meas::Vector{Float64},
    protocol::Protocol,
    sampler::Sampler,
    noise::Noisemodel = Noisemodel(),
    rng::Int64 = 1
)
julia> chain = mcmc!(estimates, measurements, protocol, sampler, noise_model, rng)

Method 2 takes a chain and perturbations as input, mutating the chain in place; the chain can then be used to calculate final estimates and uncertainties. This method is intended for processing larger datasets, e.g. whole-brain or slice data. It is used together with multi-threaded processing that pre-allocates space for caching chains, avoiding creating them for each voxel. It also reuses perturbations for faster computation; we usually use a very large number of perturbations (e.g. ~10^4) to densely sample the proposal distributions.

mcmc!(
    chain::Vector{Any},
    estimates::BiophysicalModel,
    meas::Vector{Float64},
    protocol::Protocol,
    sampler::Sampler,
    pertubations::Vector{Vector{Any}},
    noise::Noisemodel = Noisemodel()
)
julia> mcmc!(chain, estimates, meas, protocol, sampler, perturbations, noise_model)

References

For using MCMC in microstructure imaging, here are some recommended references:

Behrens, T.E.J., Woolrich, M.W., Jenkinson, M., Johansen-Berg, H., Nunes, R.G., Clare, S., Matthews, P.M., Brady, J.M., Smith, S.M., 2003. Characterization and Propagation of Uncertainty in Diffusion-Weighted MR Imaging. Magn Reson Med 50, 1077–1088. https://doi.org/10.1002/MRM.10609

Alexander, D.C., 2008. A General Framework for Experiment Design in Diffusion MRI and Its Application in Measuring Direct Tissue-Microstructure Features. Magn Reson Med 60, 439–448. https://doi.org/10.1002/mrm.21646


The function mcmc! runs on a single thread and is suitable for testing sampler parameters and inspecting chains for small datasets. After optimizing the sampler parameters, if you are processing datasets with many voxels, use the threading function for multi-threaded processing. Refer to the multi-threads page for more details.
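
For example, a quick single-voxel quality check with Method 1 might look like the sketch below. It assumes a biophysical model `model`, a measurement vector `measurements` and a `protocol` are already defined for your data, and that the returned chain is a Dict keyed by parameter name (check the keys against your sampler's params).

using Microstructure, Distributions, Statistics

# sampler and noise model as defined above
sampler = Sampler(
    params = ("axon.da", "axon.dpara", "extra.dperp_frac", "fracs"),
    prior_range = ((1.0e-7, 1.0e-5), (0.01e-9, 0.9e-9), (0.0, 1.0), (0.0, 1.0)),
    proposal = (Normal(0, 0.25e-6), Normal(0, 0.025e-9), Normal(0, 0.05),
                MvNormal([0.0025 0 0; 0 0.0001 0; 0 0 0.0001])),
    paralinks = ("axon.d0" => "axon.dpara", "extra.dpara" => "axon.dpara"),
    nsamples = 70000,
    burnin = 20000,
)
noise_model = Noisemodel()

# `model`, `measurements` and `protocol` are assumed to be defined for your data
estimates = deepcopy(model)
chain = mcmc!(estimates, measurements, protocol, sampler, noise_model, 1)

# posterior summary for one parameter (keys are assumed to follow `params`;
# discard burn-in first if the returned chain keeps all samples)
da = chain["axon.da"]
println("axon.da posterior mean = ", mean(da), ", std = ", std(da))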

Neural Networks

This module currently includes simple multi-layer perceptrons (MLPs) and a training data generation function, which allow supervised training of the MLPs on synthesized data drawn from given training parameter distributions.

1. Specify a network model for your task

Microstructure.NetworkArg - Type
NetworkArg(
    model::BiophysicalModel
    protocol::Protocol
    params::Tuple{Vararg{String}}
    prior_range::Tuple{Vararg{Tuple{Float64,Float64}}} # range for priors 
    prior_dist::Tuple{Vararg{<:Any}}
    paralinks::Tuple{Vararg{Pair{String,<:String}}} = ()
    noise_type::String = "Gaussian" # "Rician"    
    sigma_range::Tuple{Float64, Float64}
    sigma_dist::Distribution
    nsamples::Int64
    nin::Int64
    nout::Int64
    hidden_layers::Tuple{Vararg{Int64}}
    dropoutp::Union{<:AbstractFloat, Tuple{Vararg{<:AbstractFloat}}}
    actf::Function
)

Return a NetworkArg object with the necessary parameters to construct a neural network model and generate training samples for a specific biophysical model. A test network architecture and training samples can be automatically determined from the modelling task by using the function

NetworkArg(model, protocol, params, prior_range, prior_dist, paralinks, noisetype, sigma_range, sigma_dist)
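
As an illustrative sketch only, the convenience method above might be called as follows; `model` and `protocol` are assumed to be defined for your task, and the parameter choices, prior distributions and noise settings are placeholders rather than recommended values (check how the package expects prior distributions to be specified, e.g. in physical units or normalised to the prior range).

using Microstructure, Distributions

# `model` and `protocol` are assumed to be defined for your task;
# the prior distributions below are illustrative placeholders only
netarg = NetworkArg(
    model,
    protocol,
    ("axon.da", "axon.dpara", "extra.dperp_frac", "fracs"),
    ((1.0e-7, 1.0e-5), (0.01e-9, 0.9e-9), (0.0, 1.0), (0.0, 1.0)),
    (Uniform(1.0e-7, 1.0e-5), Uniform(0.01e-9, 0.9e-9), Uniform(0.0, 1.0), Uniform(0.0, 1.0)),
    ("axon.d0" => "axon.dpara", "extra.dpara" => "axon.dpara"),
    "Gaussian",
    (0.005, 0.1),
    Uniform(0.005, 0.1),
)
# the network size (nin, nout, hidden_layers, dropoutp) is then determined automatically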

2. Specify training parameters

Microstructure.TrainingArg - Type
TrainingArg(
    batchsize::Int64 
    lossf::Function
    lr::Float64
    epoch::Int64
    tv_split::Float64
    patience::Tuple{Int64,Int64} 
    device::Function
)

Return a TrainingArg object with fields specifying how the network will be trained: batch size; loss function; learning rate; number of epochs; validation/training data split; patience for the training loss plateau and patience for the validation loss to increase. Patience is currently not applied when training and validating on training samples generated from uniform parameter distributions, so training stops when the number of epochs is reached. The patience parameters will be considered in the future when training with real data or with data generated from other distributions.

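A minimal sketch of setting training options, assuming keyword construction is supported and using Flux's mse loss and cpu device as placeholder choices:

using Flux

# illustrative values only; keyword construction of TrainingArg is assumed here
arg = TrainingArg(
    batchsize = 128,
    lossf = Flux.mse,
    lr = 0.001,
    epoch = 100,
    tv_split = 0.2,
    patience = (10, 10),
    device = Flux.cpu,
)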

3. Train a network with specified network and training parameters

Microstructure.training - Function
training(
    arg::TrainingArg, 
    net::NetworkArg, 
    rng_seed::Int
)

Train and return a trained MLP model, a Dict of training logs with the train loss, training data loss and validation data loss for each epoch, and the inputs and labels (training data) the MLP was trained on. This function allows both CPU and GPU training.

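A minimal usage sketch, assuming `arg` and `netarg` were created as in the previous steps and that the return values follow the order listed above:

# trained MLP, training logs, and the synthetic training data the MLP was trained on
mlp, logs, inputs, labels = training(arg, netarg, 1)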

4. Apply trained model to your test data

Microstructure.test - Function
test(mlp::Chain, data::Array{<:AbstractFloat,2}, ntest)

Return probabilistic estimates by applying a trained MLP to test data ntest times with dropout layers on.

test(mlp::Chain, data::Array{<:AbstractFloat,2})

Return deterministic estimates with dropout layers off.

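For example, assuming `data` holds the test measurements with one voxel per column (the usual Flux convention; check this against your data layout):

# probabilistic estimates: 50 stochastic forward passes with dropout layers on
prob_est = test(mlp, data, 50)

# deterministic estimates: a single forward pass with dropout layers off
det_est = test(mlp, data)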

Other useful functions

Generate a task-specific MLP and training samples:

Microstructure.create_mlp - Function
create_mlp(
    ninput::Int, 
    noutput::Int, 
    hiddenlayers::Tuple{Vararg{Int}}, 
    dropoutp::Union{<:AbstractFloat,Tuple{Vararg{<:AbstractFloat}}}
    )

Return an MLP with ninput/noutput as the number of input/output channels and the number of units in each hidden layer specified in hiddenlayers. dropoutp contains the dropout probabilities for the dropout layers; it can be a single value (one dropout layer before the output) or a tuple with the same length as the hidden layers.

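For instance, a hypothetical protocol with 30 measurements and 4 output parameters might use:

# three hidden layers with a single dropout layer before the output
mlp = create_mlp(30, 4, (64, 32, 16), 0.2)

# or one dropout probability per hidden layer
mlp = create_mlp(30, 4, (64, 32, 16), (0.1, 0.1, 0.2))
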
Microstructure.generate_samples - Function
generate_samples(
    model::BiophysicalModel,
    protocol::Protocol,
    params::Tuple{Vararg{String}},
    prior_range::Tuple{Vararg{Tuple{Float64,Float64}}}, 
    prior_dist::Tuple{Vararg{<:Any}},
    nsample::Int,
    paralinks::Union{Pair{String},Tuple{Vararg{Pair{String}}}},
    sigma_range::Tuple{Float64, Float64},
    sigma::Distribution,
    noise_type::String="Gaussian",
    rng_seed,
)

Generate and return training samples for a model using the given priors of tissue parameters, the specified noise model ("Gaussian" or "Rician"), and the noise level.

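A usage sketch following the signature above; `model` and `protocol` are assumed to be in scope, params, prior_range, prior_dist and paralinks are defined as in the NetworkArg sketch earlier, and the returned values are assumed to be the measurement inputs and parameter labels described for training:

using Distributions

# 100_000 samples with Gaussian noise, sigma drawn uniformly from its prior range
inputs, labels = generate_samples(
    model, protocol, params, prior_range, prior_dist,
    100_000, paralinks, (0.005, 0.1), Uniform(0.005, 0.1), "Gaussian", 1,
)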

Train on a given model and training samples:

Microstructure.train_loop! - Function
train_loop!(
    mlp::Chain, 
    arg::TrainingArg, 
    inputs::Array{Float64,2}, 
    labels::Array{Float64,2}
)

Train and update the MLP in place and return a Dict of training logs with the train loss, training data loss and validation data loss for each epoch. This function runs on the CPU, which is sufficiently fast for most cases.

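For example, combining the helpers above (a sketch only; `inputs` and `labels` are assumed to come from generate_samples with measurements and parameters along the first dimension, and `arg` is a TrainingArg):

# size the MLP to the training data, then train it in place on the CPU
mlp = create_mlp(size(inputs, 1), size(labels, 1), (64, 32, 16), 0.2)
logs = train_loop!(mlp, arg, Float64.(inputs), Float64.(labels))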