Signal representation

Towards discarding regularization

Laplace inversion of the diffusion signal typically relies on regularization to guide the search for a suitable solution to the inversion problem [@Provencher:1982 ; @Kroeker:1986 ; @Whittall:1989 ; @Mitchell:2012]. However, the memory requirements of these algorithms make them impractical in high dimensions [@Sun:2005], imposing a need for data compression [@Venkataramanan:2002]. Moreover, the choice of regularization term inherently influences the characteristics of the solution, potentially causing a mismatch between these characteristics and those of the underlying microstructure.

To circumvent these problems, diffusion tensor distribution imaging (DTD) [@Topgaard:2019] explores the space of diffusion tensors using a quasi-genetic approach that relies neither on regularization nor on assumptions regarding the nature of the voxel content (other than those of the \(\mathcal{P}(\mathbf{D})\) description).

Quasi-genetic inversion algorithm

Discretization

To facilitate the numerical inversion of the above signal, the continuous distribution \(\mathcal{P}(\mathbf{D})\) can be approximated by a discrete vector \(\mathbf{w}\) containing the weights of \(\mathit{N}\) distinct diffusion tensors. This allows us to rewrite the above multidimensional Laplace transform as

\[ \mathcal{S}_m = \sum_{n=1}^{N} w_n\, \exp(-\mathbf{b}_m:\mathbf{D}_n) \, , \]

where \(\mathcal{S}_\mathit{m}\) is the \(\mathit{m}\)-th signal amplitude measured with the encoding tensor \(\mathbf{b}_\mathit{m}\), and \(\mathit{w}_\mathit{n}\) is the weight of the \(\mathit{n}\)-th component of the discretized DTD. The weights \(w_n\) are normalized so that \(\sum_{n=1}^{N} w_n = \mathcal{S}_0\), implying that \(w_n/\sum_{n^\prime=1}^{N} w_{n^\prime}\) corresponds to the signal fraction of the \(\mathit{n}\)-th component of \(\mathcal{P}(\mathbf{D})\).
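As a minimal numerical sketch of this discretized forward model (assuming NumPy; the b-tensors, diffusion tensors, and weights below are illustrative, not from the original acquisition), the sum \(\mathcal{S}_m = \sum_n w_n \exp(-\mathbf{b}_m:\mathbf{D}_n)\) can be evaluated with a Frobenius inner product:

```python
import numpy as np

def synthesize_signal(b_tensors, D_tensors, weights):
    """Discrete DTD forward model: S_m = sum_n w_n * exp(-b_m : D_n),
    where ':' denotes the Frobenius (double) inner product."""
    # inner[m, n] = b_m : D_n, computed for all (m, n) pairs at once
    inner = np.einsum('mij,nij->mn', b_tensors, D_tensors)
    return np.exp(-inner) @ weights

# Toy example: linear b-tensors along z and two isotropic components
# with illustrative diffusivities (units of um^2/ms).
b_vals = np.array([0.0, 1.0, 2.0])  # b-values in ms/um^2
b_tensors = np.array([b * np.diag([0.0, 0.0, 1.0]) for b in b_vals])
D_tensors = np.array([2.0 * np.eye(3), 0.5 * np.eye(3)])
weights = np.array([0.7, 0.3])      # normalized so that S_0 = 1

S = synthesize_signal(b_tensors, D_tensors, weights)
```

At \(b = 0\), the synthesized amplitude reduces to \(\sum_n w_n = \mathcal{S}_0\), as expected from the normalization above.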

Non-negative least-squares minimization

Let us denote by \(M\) the number of acquisition points. Since \(\mathit{M}\) is typically larger than the number \(\mathit{N}\) of unknowns, the above equation constitutes an overdetermined system. The estimation of \(\mathbf{w}\) is thus formulated as a non-negative linear least squares (NNLS) problem [@Lawson_book:1974]

\[ \mathbf{w} = \underset{\mathbf{w}^\prime\geq 0}{\mathrm{argmin}} \, \Vert \mathbf{s}-\mathbf{K}\cdot\mathbf{w}^\prime \Vert_2 \, , \]

where \(\Vert \cdot \Vert_2\) denotes the Euclidean norm, \(\mathbf{w}\) is the \(\mathit{N} \times 1\) sought-for probability vector, \(\mathbf{s}\) is the \(\mathit{M} \times 1\) vector containing the signal amplitude measurements, and \(\mathbf{K}\) is the \(\mathit{M} \times \mathit{N}\) matrix whose elements correspond to the various signal decays. While the above non-negativity constraint could be seen as a form of regularization, it merely enforces the fact that the weights \(w_n\) are non-negative numbers.
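This NNLS step can be sketched with SciPy's `nnls` solver; the one-dimensional kernel of mono-exponential decays and the candidate diffusivities below are hypothetical stand-ins for the full tensor-valued kernel \(\mathbf{K}\):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical dictionary: M = 40 b-values, N = 3 candidate diffusivities.
b = np.linspace(0.0, 3.0, 40)             # ms/um^2
d_candidates = np.array([0.3, 1.0, 2.5])  # um^2/ms
K = np.exp(-np.outer(b, d_candidates))    # M x N kernel matrix

# Synthetic signal from known weights, plus a little noise.
w_true = np.array([0.2, 0.5, 0.3])
s = K @ w_true + 0.001 * rng.standard_normal(b.size)

# w = argmin_{w' >= 0} || s - K w' ||_2
w_est, residual = nnls(K, s)
```

The solver returns both the non-negative weight vector and the residual norm \(\Vert \mathbf{s}-\mathbf{K}\cdot\mathbf{w} \Vert_2\), which is useful for comparing candidate component sets.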

Monte Carlo inversion

Overall, the signal inversion is performed using a quasi-genetic process called “Monte Carlo inversion”, introduced in the field of physical chemistry [@Prange:2009] and detailed in previous works [@Topgaard:2019 ; @Reymbaut_accuracy_precision:2020]. Its main steps read as follows:

1. Generate \(\mathit{N}_\mathrm{in}\) random components and fit their weights by NNLS, retaining the components with non-zero weight.
2. Perform \(\mathit{N}_\mathrm{p}\) “proliferation” steps, each merging newly generated random components with the current set before re-fitting the weights and discarding zero-weight components.
3. Perform \(\mathit{N}_\mathrm{m}\) “mutation” steps, each merging randomly perturbed copies of the surviving components with the current set before re-fitting and discarding as above.
4. Retain the \(\mathit{N}_\mathrm{out}\) components carrying the largest weights as the final solution.

Typically, one uses \(\mathit{N}_\mathrm{in} = 200\), \(\mathit{N}_\mathrm{p} = 20\), \(\mathit{N}_\mathrm{m} = 20\) and \(\mathit{N}_\mathrm{out} = 50\). Such a large value of \(\mathit{N}_\mathrm{out}\) ensures that voxel contents with low orientational order remain properly captured (as each relevant sub-voxel diffusion direction should be represented by at least one component).

While the final trimming to \(\mathit{N}_\mathrm{out}\) components might seem restrictive, it concludes a largely random exploration of around \(\mathit{N}_\mathrm{in}(\mathit{N}_\mathrm{p}+\mathit{N}_\mathrm{m})\) components. As an order of magnitude, the typical values above give \(\mathit{N}_\mathrm{in}(\mathit{N}_\mathrm{p}+\mathit{N}_\mathrm{m}) = 8\,000\) explored components.
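The quasi-genetic exploration can be illustrated with a deliberately simplified one-dimensional sketch, using scalar diffusivities instead of full tensors. Parameter names follow the text, but the proliferation and mutation mechanics below are illustrative assumptions rather than the exact implementation of the cited works:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

def fit_weights(b, s, d):
    """NNLS fit of component weights for candidate diffusivities d."""
    K = np.exp(-np.outer(b, d))  # kernel K[m, n] = exp(-b_m * d_n)
    w, _ = nnls(K, s)
    return w

def mc_inversion(b, s, N_in=200, N_p=20, N_m=20, N_out=50):
    # initial random set of components; keep those with non-zero weight
    d = rng.uniform(0.05, 3.0, N_in)
    d = d[fit_weights(b, s, d) > 0]
    # proliferation: merge fresh random candidates with the survivors
    for _ in range(N_p):
        d = np.concatenate([d, rng.uniform(0.05, 3.0, N_in)])
        d = d[fit_weights(b, s, d) > 0]
    # mutation: merge slightly perturbed copies of the survivors
    for _ in range(N_m):
        mutants = rng.choice(d, N_in) * rng.normal(1.0, 0.05, N_in)
        d = np.concatenate([d, mutants])
        d = d[fit_weights(b, s, d) > 0]
    # trim to the N_out highest-weight components
    w = fit_weights(b, s, d)
    keep = np.argsort(w)[::-1][:N_out]
    return d[keep], w[keep]

# Noiseless biexponential test signal with S_0 = 1
b = np.linspace(0.0, 3.0, 64)
s = 0.6 * np.exp(-b * 0.7) + 0.4 * np.exp(-b * 2.2)
d_est, w_est = mc_inversion(b, s)
```

Each proliferation or mutation step explores on the order of \(\mathit{N}_\mathrm{in}\) new candidates, consistent with the \(\mathit{N}_\mathrm{in}(\mathit{N}_\mathrm{p}+\mathit{N}_\mathrm{m})\) count quoted above.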

Bootstrapping

Embracing the inherent ill-conditioning of Laplace inversion problems, the DTD method is usually performed using bootstrapping with replacement [@Efron:1979 ; @Efron:1997 ; @de_Kort:2014] on the acquired data. For each voxel, this results in estimating an ensemble of \(\mathit{N}_\mathrm{b}\) plausible sets of components, also called “bootstrap solutions”, each denoted by \(\{(\mathit{D}_{\parallel,\mathit{n}}, \mathit{D}_{\perp,\mathit{n}} , \theta_\mathit{n}, \phi_\mathit{n}, \mathit{w}_\mathit{n})\}_{1\leq \mathit{n}\leq \mathit{N}_\mathrm{out}}\).
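A sketch of this bootstrap procedure, resampling the \(M\) acquisitions with replacement and re-running the inversion on each draw; a fixed-dictionary NNLS stands in for the full Monte Carlo inversion here, and the acquisition parameters are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical single-voxel acquisition: biexponential signal plus noise.
b = np.linspace(0.0, 3.0, 64)
s = 0.6 * np.exp(-b * 0.7) + 0.4 * np.exp(-b * 2.2)
s = s + 0.01 * rng.standard_normal(b.size)

d_grid = np.linspace(0.05, 3.0, 30)  # fixed diffusivity dictionary
N_b = 100                            # number of bootstrap solutions
solutions = []
for _ in range(N_b):
    idx = rng.integers(0, b.size, b.size)  # resample with replacement
    K = np.exp(-np.outer(b[idx], d_grid))
    w, _ = nnls(K, s[idx])
    solutions.append(w)
solutions = np.array(solutions)

# Spread of the ensemble across bootstraps, e.g. of the estimated S_0
s0_samples = solutions.sum(axis=1)
```

The resulting ensemble of \(\mathit{N}_\mathrm{b}\) plausible solutions can then be used to quantify the per-voxel uncertainty of any metric derived from the components.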